Jensen says Nvidia’s China AI GPU market share has plummeted from 95% to 0%

Daniel Nenni

Admin
Staff member


NVIDIA’s once-dominant position in China’s AI GPU market has effectively collapsed after new U.S. export restrictions rendered its advanced chips off-limits. CEO Jensen Huang confirmed that NVIDIA’s market share in China has fallen from 95% to zero — a dramatic reversal for a region that previously contributed 20–25% of its data center revenue. This loss highlights how geopolitics, rather than competition, reshaped one of NVIDIA’s most profitable markets.

In the short term, the company is offsetting the revenue hit with explosive demand from U.S. and international cloud providers building AI infrastructure. Global hyperscalers, sovereign AI projects, and enterprise adoption of generative AI continue to drive orders for the H100, H200, and upcoming Blackwell GPUs. However, the long-term implications are significant. China is accelerating efforts to achieve semiconductor self-sufficiency, with Huawei’s Ascend 910B and startups like Biren and Moore Threads filling the vacuum left by NVIDIA. AMD and Intel may capture some global share, but neither can sell unrestricted advanced AI chips to China either.

NVIDIA’s pivot toward software, AI ecosystems, and custom data center solutions underscores its strategy to remain indispensable — even as one of its largest regional markets slips permanently out of reach.

 
Anyone who saw the movie "Lord of War," with Nicolas Cage as a weapons dealer, might recall that Simeon Weisz, his rival, told him he would eventually have to pick a side. At the end of the movie, Cage's character was arrested but then released and paid with a bag full of money. He acknowledged that he got to walk free because the US government needed him to supply weapons to its enemies but couldn't be seen doing so.

This whole AI "movie" feels like Lord of War. Huang has to know he's needed by both sides for now; who knows what comes after.
 
I wonder what the real story is. Did Nvidia's market share in China really drop to zero, or are Chinese buyers still using Nvidia products purchased through covert channels?
Agreed - officially it's zero, but it's clearly not zero based on the black market video by Gamers Nexus. Universities and consumers are getting these cards.

Also, while not as profitable - a growing number of small enterprises are using GeForce cards for AI purposes too.
 
Agreed - officially it's zero, but it's clearly not zero based on the black market video by Gamers Nexus.
I bet Jensen has the real market-share numbers, including the black-market sales.
 
China is accelerating efforts to achieve semiconductor self-sufficiency, with Huawei’s Ascend 910B and startups like Biren and Moore Threads filling the vacuum left by NVIDIA.
Huawei, Cambricon, Biren, Moore Threads, Sophgo, and more.
In automotive ADAS and self-driving you have Huawei, Black Sesame, and Horizon Robotics.

AMD and Intel may capture some global share, but neither can sell unrestricted advanced AI chips to China either.
AMD and Intel CPUs have long been banned from Chinese government infrastructure.
The Chinese government uses Huawei, Zhaoxin, Loongson, or Phytium processors.

Only the private sector still buys AMD and Intel chips.
 
I know Jensen spent time with the CCP over the last year. Nvidia even made a chip specifically for China. I wonder what happened? Could Jensen have failed at something? :eek: Or did the US Government pull the rug out from under him? Jensen also spent time at the White House.
 
I know Jensen spent time with the CCP over the last year. Nvidia even made a chip specifically for China. I wonder what happened? Could Jensen have failed at something? :eek: Or did the US Government pull the rug out from under him? Jensen also spent time at the White House.
I think what happened is that Xi Jinping has placed a high priority on decoupling China from US & Euro semiconductors. I don't think Jensen did anything wrong in either country. I think Nvidia just got caught in a no-win political situation, and in China, causing near-term pain for long-term advantage does not have election ramifications. Just thinking about what Trump might do if that were the case in the US makes my head spin.
 
I think what happened is that Xi Jinping has placed a high priority on decoupling China from US & Euro semiconductors.

That sounds reasonable. Once you pull the rug out from under someone, as the US did with China, they tend not to step on the rug anymore. The US Government has been clamping down on China for many years, and China has been preparing for it. They may be ready for semiconductor independence now. China may show the world that you do not need the latest and greatest semiconductor process technology to succeed. This may deflate a few egos but the race continues. Go TSMC! Go Intel! Go Nvidia! Go AMD! Go Broadcom! Go Qualcomm! Go Cadence! Go Synopsys! We really do have China outnumbered in this regard.
 
China may show the world that you do not need the latest and greatest semiconductor process technology to succeed. This may deflate a few egos but the race continues.
Having spent decades in computer engineering and software engineering, I can say with high confidence that innovative systems and software architecture and implementation can beat less scalable and efficient implementations that may have more advanced foundry technologies underlying them. US, Euro, and Israeli computing product companies (I'm adding Israel in this list because they have an out-sized impact on the computing industry compared to their population, much like Taiwan) tend to do what's best for economics rather than what's best in purely technical terms. In China, where market economics are clearly thrown out the window, and dictated national urgency overrides everything, it is actually IMO a more fertile environment for innovation.

Let's take a common example of how market-driven technology innovation works. I think PCI Express is a good one. One tenet of the PCIe architecture definition was that for marketplace acceptance it had to be PCI driver-compatible at the software level. How much better of an I/O interconnect could you design if you could dispense with PCI driver compatibility? I think a lot. How much more efficient can your CPUs be at a similar fab process level if you start with the RISC-V architecture specification and design a bunch of your own instruction set additions which would be advantageous to your software? Like, for example, instruction-level access to hardware state machine accelerators? (Intel has state-machine accelerators like that in Xeon 6, but I don't think the interface is instruction-level.) How much better can you make scale-up and scale-out networks if you don't care about building on IEEE 802.1/3 Ethernet (beyond using Ethernet PHYs) just for economics? If even Internet Protocol is unnecessary and a ground-up inter-networking layer is allowed?
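
As a purely hypothetical sketch of that instruction-level-accelerator idea: on RISC-V, GCC's .insn directive can emit instructions in the custom-0 major opcode space that the base ISA reserves for vendor extensions. The accelerator, the instruction, and its semantics below are invented for illustration; only the directive and the reserved opcode space are real toolchain/ISA features.

#include <stdint.h>

/* Hypothetical: a vendor-defined RISC-V instruction that advances a
 * hardware state-machine accelerator by one step. The accelerator and
 * its encoding are made up; .insn and the custom-0 opcode (0x0B) are
 * real. On hardware without the extension this traps as an illegal
 * instruction. */
static inline uint64_t accel_step(uint64_t state, uint64_t input)
{
    uint64_t next;
    /* .insn r opcode, funct3, funct7, rd, rs1, rs2 -- emits an
     * R-type instruction in the custom-0 opcode space. */
    __asm__ volatile(".insn r 0x0B, 0x0, 0x0, %0, %1, %2"
                     : "=r"(next)
                     : "r"(state), "r"(input));
    return next;
}

int main(void)
{
    uint64_t s = 0;
    /* On hardware implementing the extension, this drives the
     * accelerator directly from user code: no driver call, no
     * memory-mapped-I/O round trip. */
    for (uint64_t i = 0; i < 8; i++)
        s = accel_step(s, i);
    return (int)s;
}

The point isn't this particular instruction; it's that issuing accelerator work at instruction level dispenses with the driver and MMIO layers that compatibility-driven designs keep around.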

There is so much in computer architecture that has compatibility roots in the 1980s and 1990s.

I think that if the need for profitability is thrown out the window, historically incompatible architecture and implementation innovations are possible that could be pretty amazing. The latest fab process is probably always better to have, everything else being equal, but if everything else isn't equal the best fab process may not be enough to ensure system-level superiority.
 
Having spent decades in computer engineering and software engineering, I can say with high confidence that innovative systems and software architecture and implementation can beat less scalable and efficient implementations that may have more advanced foundry technologies underlying them.
One can also cite ex-Soviet engineers' innovative ideas about efficient algorithms, born of their lack of access to advanced compute and memory. On the other hand, even in the US there has never been a shortage of crazy entrepreneurs and startups attempting to compete with giants on limited resources. As long as the US continues to attract worldwide talent, the overall US ecosystem will still be the best. We do need to get rid of crazy regulations and grant green cards to all foreign MS/PhD graduates.
 
Nvidia's direct shipments to China may have dropped to zero, but what Jensen forgot to mention is that GPU shipments to Singapore and Malaysia have skyrocketed. One has to wonder where those GPUs are going.
 