Excellent video!

I really hope Intel improves its efforts to promote its GPUs. I have many ideas they could consider, but I don't work for Intel, and they should strive to do better in this area.
You might consider posting your ideas on the intel.com community forum for graphics products, which includes the B580. Perhaps someone from Intel will see the value in them and contact you.
Graphics: Intel® graphics drivers and software, compatibility, troubleshooting, performance, and optimization (community.intel.com)
It contains a 30-minute interview with Perplexity CEO Aravind Srinivas, which I found to be very informative.
I’m not sure. I’ve made several videos about the B580 and tagged @Intel and @MJHolthaus, but I haven’t received any direct feedback from Intel.
One of my videos has thousands of views, which clearly shows that many people are interested in using the B580 for machine learning and AI. I genuinely feel that, instead of sampling the cards only to some YouTubers, Intel could consider sending samples to me or my school. We could test them and provide valuable feedback.
I really hope Intel's marketing team becomes more proactive in addressing the market and takes steps to actively prepare for Falcon Shores.
I work in a university robotics lab, and we have and use quite a lot of GPUs.
Thank you. I'll think about that. I believe the lab I work in is definitely open to collaborations.

I don't understand why this is a threat. And who said the US is dominant? Because you read it on the internet?
Who at Intel did you contact? I may be able to help. Send me a private email through SemiWiki.
You can test the model on your computer. Download and install Ollama, then enter ollama run deepseek-r1 at the command line. It should download and run the 7B model.

You can also test different sizes, or a different model from the library: https://ollama.com/library/deepseek-r1

I think this model is pretty good, but size is still a limiting factor, at least for personal (local) use. I spent 30 minutes trying to force it to fix one function (unsuccessfully). Copilot (o1) fixed the same issue instantly (literally just a "fix it" prompt). But again, that is more an issue of size, and the 600B+ model is probably better.

The Ollama deepseek-r1 model is a distilled version; it's not the DeepSeek V3 R1. The name chosen by Ollama is very misleading.

I think it's fine. My understanding is that R1 stands for Reasoning model 1. Depending on the parameter size, the base models vary. I used the benchmark table to select the model, which I discussed in my video.

The repeated monologue responses wear on me quickly.
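For reference, the Ollama commands described above can be sketched as follows. This is a minimal sketch: the size tags shown (e.g. 1.5b, 14b) are taken from the ollama.com library page for deepseek-r1 and may change over time.

```shell
# Install Ollama on Linux (see ollama.com/download for macOS/Windows installers)
curl -fsSL https://ollama.com/install.sh | sh

# Download and chat with the default deepseek-r1 tag (a 7B-class distilled model)
ollama run deepseek-r1

# Or pick a specific parameter size from the library page,
# e.g. a smaller 1.5B or a larger 14B distillation
ollama run deepseek-r1:1.5b
ollama run deepseek-r1:14b
```

Smaller tags trade answer quality for lower memory requirements, which matters when running locally on a single consumer GPU.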