
Nvidia CEO: "Surprised AMD gave away 10% of the company in 'clever' OpenAI deal"

Xebec

Well-known member
CNBC Video:

A few take-aways:

- "Our deal" (Nvidia's) is the first time Nvidia is selling directly to OpenAI; previously Nvidia sold only to cloud providers

- 3:35 ; Jensen starts to say "[OpenAI] is the most profitable ..." then corrects himself "Valuable start-up company ever" :)

- 3:55 for the AMD deal - "Considering [AMD] was so excited about their next generation product, I was surprised AMD would give away 10% of the company before they built it. Anyway, it's clever, I guess".

- (Paraphrase) "Nvidia is unique because it provides every chip needed for an AI datacenter"

- "Moore's Law is really slowing down"
 
We already have a term for all these "creative" and "circular" deals: they are essentially the same "vendor financing" arrangements we saw during the Dotcom era.

If we want to be rigorous: Jensen is vendor financing the old-fashioned way, with cash; Lisa got creative and vendor financed with equity warrants.

The only difference this time is the concentration (Nvidia and OpenAI being the foci) and the GDP-like deal sizes.

It didn't end well last time.
 
- (Paraphrase) "Nvidia is unique because it provides every chip needed for an AI datacenter"
Not just every chip, but AI-data-center-level software as well. This new set of benchmarks highlights the cost (TCO) and power advantages with respect to key token-factory multi-user parameters (speed, interactivity, capacity) for aggressive AI data center co-optimization for inference. You have to read to the end of the article: it starts with essentially chip-level results, but highlights rack-level results in the second half (hint: rack-level results for TCO and power efficiency are far better than chip / rack-slot-level results).

InferenceMAX™: Open Source Inference Benchmarking

NVIDIA GB200 NVL72, AMD MI355X, Throughput Tokens per GPU, Latency Tok/s/user, Perf per Dollar, Cost per Million Tokens, Tokens per Provisioned Megawatt, DeepSeek R1 670B, GPTOSS 120B, Llama3 70B
