Nvidia is starting to lose share to AI chip startups?
x.com
I tend to agree with that statement. In our lab, there is only one person who uses CUDA, and only when it is necessary (not often).
I use PyTorch and Llama 3.1 via the API.
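For context, typical PyTorch code never calls CUDA directly; the framework dispatches to whatever backend is available. A minimal sketch of the usual device-agnostic pattern (the tiny linear model is just a stand-in):

import torch

# Pick whatever accelerator is present; nothing below mentions CUDA again.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)   # stand-in model
x = torch.randn(32, 128, device=device)       # stand-in batch

with torch.no_grad():
    y = model(x)   # identical call on CPU, CUDA, or other backends
print(y.shape)     # torch.Size([32, 10])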
Are you talking about CUDA market share vs other AI software/tools or the Nvidia GPU/AI chips market share?
I think he is talking about the importance of CUDA. The inference side is abstracted away from CUDA.
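To make that concrete: inference runtimes such as ONNX Runtime select a backend per session, so the same exported model runs on an Nvidia GPU or a plain CPU without code changes. A minimal sketch (the model file name and the input name "input" are placeholders):

import numpy as np
import onnxruntime as ort

# ONNX Runtime works down the provider list, so this script runs
# whether or not a CUDA-capable GPU is present.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
outputs = session.run(None, {"input": dummy})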
Then, the title seems a bit strange.
That is his statement, based on hearsay from recent AI conferences.
I am not sure that Nvidia is losing share to AI startups, but inference is indeed much easier to migrate away from Nvidia and CUDA. On the other hand, large-scale training is still Nvidia's stronghold.
CUDA is used for more than just AI inference. The available software base is immense and hard to compete against.
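For example, CUDA-backed libraries like CuPy cover general numerical computing with no machine learning involved; a minimal sketch, assuming CuPy is installed on a machine with an Nvidia GPU:

import cupy as cp

# Plain dense linear algebra on the GPU; the matmul runs on cuBLAS,
# part of the CUDA software base, with no neural network in sight.
a = cp.random.rand(4096, 4096, dtype=cp.float32)
b = cp.random.rand(4096, 4096, dtype=cp.float32)
c = a @ b
print(float(c.sum()))  # copy the scalar result back to the host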
Long term, I expect the AI neural network market to evolve toward application-specific NPUs.