AI does not fit that pattern. The few models that were coded directly against Nvidia's CUDA are now old and unimportant. New and future models are written with high-level frameworks like PyTorch or ONNX, and the underlying math has been evolving so fast that even past generations of Nvidia GPUs do...
The point is probably to confirm goodwill from the city council. Judging from the article, it gave a whole bunch of people, even the Girl Guides, a chance to say nice things about NXP.
There were probably other incentives to NXP in the past, maybe even continuing as elements in this year's budget.
Since a CFET stacks 2 transistors in the same footprint, that calculation implies about 1,300 sq nm per unit cell, or something like 30 x 45 nm, which is credible for a node more than 5 years from now.
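The arithmetic behind that estimate can be sketched quickly; the 30 x 45 nm cell dimensions are the figures from the post, and the resulting density is just what they imply:

```python
# Back-of-envelope check on the CFET unit-cell estimate.
cell_w_nm = 30
cell_h_nm = 45
cell_area_nm2 = cell_w_nm * cell_h_nm          # 1350 nm^2, close to the ~1300 figure
transistors_per_cell = 2                       # CFET stacks two transistors per footprint
area_per_transistor_nm2 = cell_area_nm2 / transistors_per_cell

# Implied density per cm^2 (1 cm^2 = 1e14 nm^2):
transistors_per_cm2 = 1e14 / area_per_transistor_nm2
print(cell_area_nm2, transistors_per_cm2)      # 1350, ~1.48e11
```

About 148 billion transistors per cm^2, which is the kind of density CFET is expected to enable.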
Well, Nvidia did a good job of pushing CUDA into universities back when that was where the action was, and they invested in it consistently. In return they got feedback.
But this lock-in is nothing like it used to be, though some commentators still repeat the old line. TensorFlow proved...
The article is about a publication not available until December, but IBM has been talking about VTFET for a while. Here is an article already published: https://www.nature.com/articles/s41586-023-06145-x
There are two aspects to this: talent and market opportunity.
I have no doubt that Korea - and China - have the talent to reinvent many of the foundational elements of the semi ecosystem. However, if your customer can simply go and buy from Japan, and the Japanese supplier has depreciated...
No, you are not mistaken. Look closely at the wording. 5nm is still likely for what they are working on now, 4nm for what they will do in a new fab in TX.
Is there a data sheet yet? The part is not mentioned on their website.
The BGA package is 9 x 12 mm, so they could be putting a die of up to 90 mm2 or so in it. It would be an interesting cost tradeoff compared to using TSVs or dual ranks.
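A quick check of that die-size estimate; the 0.5 mm keep-out per edge is an illustrative assumption, not a figure from any datasheet:

```python
# Rough check: does a ~90 mm^2 die fit in a 9 x 12 mm BGA?
pkg_w_mm, pkg_h_mm = 9.0, 12.0
pkg_area_mm2 = pkg_w_mm * pkg_h_mm             # 108 mm^2 of package area

margin_mm = 0.5                                # assumed keep-out per edge (illustrative)
max_die_area_mm2 = (pkg_w_mm - 2 * margin_mm) * (pkg_h_mm - 2 * margin_mm)
print(pkg_area_mm2, max_die_area_mm2)          # 108.0, 88.0
```

An 8 x 11 mm die gives 88 mm^2, right around the ~90 mm2 ballpark in the post.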
You can form the capacitors easily enough, assuming the outlines separating them are trenched after the uniform layers are deposited, but it is not clear how you would build the access transistors and word lines without litho steps on each layer. Also, it is tricky to get good crystal...
No, I was talking about what you describe: multiple simple litho passes. I did work out a way to do that; it looks like you can use fairly simple litho to get about 4 Gb/cm2 per layer and then deposit further layers. The tricky part is getting good crystal in successive layers, but there is a way to do that...
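The implied bit-cell geometry shows why "fairly simple litho" is plausible; this is just the arithmetic from the 4 Gb/cm2 figure, with a square-cell assumption for illustration:

```python
import math

# What bit-cell size does ~4 Gb/cm^2 per layer imply?
bits_per_cm2 = 4e9
nm2_per_cm2 = 1e14
area_per_bit_nm2 = nm2_per_cm2 / bits_per_cm2  # 25,000 nm^2 per bit
pitch_nm = math.sqrt(area_per_bit_nm2)         # assuming square cells
print(area_per_bit_nm2, round(pitch_nm))       # 25000.0, 158
```

A ~158 nm pitch is very relaxed by modern standards, achievable with mature, inexpensive lithography.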
A lot of capex is going to inference as well. Things like Microsoft Copilot seem aimed at supporting several hundred million Office customers, and the models are too big to download and keep updated even if client machines had the processing power. That points to a large fleet in the cloud for the...
The capacitors and access transistors they propose would appear to require lithography for every layer. This is not impossible, of course, but if they are doing that anyway, many other things can be done without the complex etching.
One of the best wind energy areas in the world is the Taiwan Strait. Shallow seas, strong average winds, deep potential markets on both sides of the strait, plenty of sites for pumped hydro storage on both sides. In a saner world it would be developed.
Almost all quantum applications are in the realm of:
- optimization, especially solving quantum chemistry problems like finding ideal catalysts, but also some other kinds of difficult non-linear optimizations with relatively low dimensionality but high value.
- cracking codes...