Future of memory
While traditional HBM is a "black box" provided entirely by memory manufacturers, custom HBM (cHBM) for the Feynman architecture allows for a new division of labor:
Nvidia's Role: They will design their own custom logic base die. By doing this, they can use advanced logic-optimized nodes (like TSMC's A16 or potentially Intel Foundry's 14A/18A) rather than the DRAM-optimized nodes typically used by memory makers.
Nvidia will still depend on memory makers for the DRAM stacks themselves, and that demand will remain high. Still, the custom base die is a meaningful step, and only one example of how Nvidia continues to push this technology forward. Arguably, though, it is less significant than the move to 3D stacking in this generation.
The notion that this company won't continue to provide the best solutions for AI compute for many years to come is laughable at best.