
OpenAI, Future of Compute, and the American Dream with Jensen Huang CEO Nvidia

Daniel Nenni

Admin
Staff member
Open Source bi-weekly convo w/ Bill Gurley and Brad Gerstner on all things tech, markets, investing & capitalism. This week, Brad and Clark Tang sit down with Jensen Huang, founder & CEO of NVIDIA, for a sweeping deep dive on the new era of AI. From the $100B partnership with OpenAI to the rise of AI factories, sovereign AI, and protecting the American Dream—this episode explores how accelerated computing is reshaping the global economy. NVIDIA, OpenAI, hyperscalers, and global infrastructure: the AI race is on. Don’t miss this must-listen BG2.

 
Hard not to be a Jensen Fan Boy after watching this. Puts a whole new perspective on the AI Bubble.
He gotta do what he gotta do to get 'em breads.

I watched his speech when he visited Taiwan. He praised this and that in Taiwan, and then he flew to China and praised this and that in China.

商人無國界 or 商人無祖國 (a businessman has no borders / a businessman has no homeland) - fits Jensen very well

Jensen is a CEO and businessman. He gotta do what he gotta do.
 
A few standouts for me:
* Moving general purpose computation to accelerators vs CPUs - SQL/Snowflake/Databricks and associated processing on NVIDIA ?
* Yearly new chip/hardware generations are a huge differentiator. They use AI to speed up their own design cycle.
* Extreme co-design - chip/software/rack/system/datacenter all developed concurrently
* ASIC vs CPU vs GPU -
- Rubin CPX (long-context processing, diffusion video generation accelerator) is a precursor for other application-specific accelerators.
- Maybe a data processing app specific chip/subsystem next
- Transformer architecture still changing rapidly - programmability still required.
- Only real system-level AI chip competition is Google/TPU
- ASICs are only useful at mid-volume - too much gross margin is given up to the middleman. Smart NICs and transcoders are good candidates for ASICs; not a good option for the fundamental compute engine of AI, where the underlying algorithms are changing regularly.
- Data centers / AI factories are a soup of ASICs and other chips - need to be orchestrated and co-developed with supply chain.
- NVIDIA targeting lowest Total Cost of Ownership at data center level. Someone could offer ASIC chips at zero $$ and still be less economical. Tokens per gig and tokens per watt are compelling.
* NVLink Fusion and Dynamo leading the way in creating next-gen open AI solutions and associated ecosystem.
* Not just a chip company. The AI infrastructure company.
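The datacenter-level TCO point above can be sketched with toy arithmetic. All figures below are hypothetical illustrations (chip prices, wattage, throughput, electricity and facility costs are made up, not NVIDIA's actual numbers): the idea is that chip price is only one line item, so a chip offered at $0 can still lose per token once facility build-out and lifetime power are amortized, if its tokens per watt is low enough.

```python
def tco_per_million_tokens(chip_cost, chips, watts_per_chip, tokens_per_sec_per_chip,
                           years=4, usd_per_kwh=0.08, facility_usd_per_watt=10.0):
    """Rough datacenter-level cost per million tokens over the system lifetime.

    capex  = chips plus power/cooling build-out (scales with watts, not chip price)
    energy = electricity consumed over the lifetime
    """
    hours = years * 365 * 24
    capex = chips * (chip_cost + facility_usd_per_watt * watts_per_chip)
    energy = chips * watts_per_chip / 1000 * hours * usd_per_kwh
    tokens = chips * tokens_per_sec_per_chip * hours * 3600
    return (capex + energy) / (tokens / 1e6)

# Hypothetical GPU: $30k per chip, 1 kW, 500 tokens/s per chip.
gpu = tco_per_million_tokens(chip_cost=30_000, chips=1000,
                             watts_per_chip=1000, tokens_per_sec_per_chip=500)

# Hypothetical "free" ASIC: $0 per chip, same power draw, but only
# 1/4 the throughput - i.e. 1/4 the tokens per watt.
asic = tco_per_million_tokens(chip_cost=0, chips=1000,
                              watts_per_chip=1000, tokens_per_sec_per_chip=125)

print(f"GPU  $/M tokens: {gpu:.2f}")   # chip cost dominates capex, but throughput wins
print(f"ASIC $/M tokens: {asic:.2f}")  # free silicon, yet facility + power per token is higher
```

With these made-up numbers the free ASIC still costs more per million tokens, which is the shape of the argument; flip the tokens-per-watt ratio and the conclusion flips with it.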
 
Good summary. The biggest takeaway is the system-level co-design: it gives NVIDIA the highest performance/watt, which gives whoever uses their systems the highest margin on the same watts and, perhaps more importantly, significantly higher revenue opportunities. This is absolutely NVIDIA's key moat, and CUDA is only part of it.

AMD should seriously consider acquiring companies like Marvell ASAP to build the same vertical AI infrastructure capability.
 
While Jensen is without a doubt a brilliant mind, to me a lot more is desired from his take on competition with China, as well as on immigration (he mistakes the new $100K H-1B application fee for a measure against illegal immigration).
 