QC does not lend itself to anything that uses a lot of data, and the requirement for super-cooling limits its use to server farms. QC does a narrow class of (polynomial-style) problems well; AI is what gets used on NP-complete problems.
I doubt there will be much quantum computing, it's not the sort of thing that does video decompression or neural networks. I am, however, optimistic about (quasi) adiabatic logic as a way to get to lower power.
The problem with asynchronous logic in the past has been the lack of tools for design and verification.
These guys did it fairly recently - https://etacompute.com/# - but dropped it again (AFAIK)...
I'm sure lots of interesting stuff will be happening this decade, we can review at the end of it.
SYCL, OneAPI and CUDA are all work-arounds for not knowing how to write compilers for regular C/C++ for heterogeneous platforms. Altera & Xilinx had decades to work out how to do that and failed...
The RISC-V point I was making is that you can subsume RISC-V (along with x86 & ARM) into a VLIW approach. That's likely to happen at the compiler/runtime level because there is no single standard RISC-V, so you want to move to an approach of interpreting (source or machine) code and (re-)JIT'ing for...
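To make the "interpret and re-JIT" idea concrete, here's a toy sketch in Python. The three-op ISA, register names, and translation scheme are all invented for illustration; a real runtime would carry far more state, but the shape is the same: interpret first, then recompile hot traces into host code to remove per-instruction dispatch.

```python
# Toy "interpret then re-JIT" sketch. The 3-op ISA (add/addi/mul) and
# register names are made up for illustration only.

def interpret(program, regs):
    """Plain interpreter: one dispatch decision per instruction."""
    for op, dst, a, b in program:
        if op == "add":
            regs[dst] = regs[a] + regs[b]
        elif op == "addi":
            regs[dst] = regs[a] + b
        elif op == "mul":
            regs[dst] = regs[a] * regs[b]
    return regs

def jit(program):
    """'Re-JIT': translate the whole trace into one host function,
    eliminating the per-instruction dispatch overhead."""
    lines = ["def trace(regs):"]
    for op, dst, a, b in program:
        if op == "add":
            lines.append(f"    regs['{dst}'] = regs['{a}'] + regs['{b}']")
        elif op == "addi":
            lines.append(f"    regs['{dst}'] = regs['{a}'] + {b}")
        elif op == "mul":
            lines.append(f"    regs['{dst}'] = regs['{a}'] * regs['{b}']")
    lines.append("    return regs")
    ns = {}
    exec("\n".join(lines), ns)
    return ns["trace"]

prog = [("addi", "x1", "x0", 5),
        ("addi", "x2", "x0", 7),
        ("mul",  "x3", "x1", "x2")]

r1 = interpret(prog, {"x0": 0})
r2 = jit(prog)({"x0": 0})
assert r1 == r2   # same semantics, different execution strategy
print(r1["x3"])   # 35
```

The point of the sketch is that once you sit at this level, the guest ISA (RISC-V, x86, ARM, or a vendor's extended variant) is just an input format; the runtime decides what actually executes.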
Optane is a phase-change memory technology: it's fast and low power, holds data indefinitely, is rad-hard, and survives orders of magnitude more write cycles than Flash. However, Flash is significantly cheaper, so if you stick a cache in front of them both and give yourself enough Flash to...
(Explaining...)
The DRAM is essentially an extra level of cache for slow devices; if your devices are fast enough it isn't needed. If you have stalling issues because of write latency at the L2/L3 level, you can expand the cache at that level so that you don't miss at a rate where you'll stall...
Because of the Flash wear-leveling and DRAM buffering you get no advantage from using Optane; that's why Intel lost so much money on it. You need to switch to an architecture where you get the benefits of using it directly (ditching the DRAM and the excess wear-leveling hardware), but Intel...
DRAM might be necessary as working memory, but not as a cache for Optane. If you are doing things like AI training, where you read through a lot of data repeatedly, the DRAM caching buys you nothing. HDDs needed DRAM for caching; Flash needs it because of its slow read/write times.
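A back-of-envelope model shows why a DRAM cache in front of the storage erases most of Optane's latency edge. All the latency numbers below are illustrative assumptions picked for the arithmetic, not vendor figures.

```python
# Rough model: average access time with a DRAM cache in front of a
# backing store. Latencies are illustrative assumptions, not measured.

DRAM_NS   = 100       # assumed DRAM access latency (ns)
OPTANE_NS = 10_000    # assumed Optane read latency (ns)
FLASH_NS  = 100_000   # assumed Flash read latency (ns)

def effective_latency(hit_rate, backing_ns):
    """Average access time: hits served by DRAM, misses by the backing store."""
    return hit_rate * DRAM_NS + (1 - hit_rate) * backing_ns

# With a 99% DRAM hit rate, the raw 10x device gap mostly disappears:
flash_cached  = effective_latency(0.99, FLASH_NS)   # 0.99*100 + 0.01*100000 = 1099 ns
optane_cached = effective_latency(0.99, OPTANE_NS)  # 0.99*100 + 0.01*10000  =  199 ns
print(flash_cached, optane_cached)

# Spend a bit more on cache to push the hit rate to 99.9% and cached
# Flash matches cached Optane at 99%:
flash_bigger_cache = effective_latency(0.999, FLASH_NS)  # 199.9 ns
print(flash_bigger_cache)
```

This is the whole pricing problem in miniature: behind a cache, the cheaper device wins; Optane only pays off in an architecture that talks to it directly.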
Flash...
In-memory computing is not quite the same as in-RAM computing, where you just avoid swapping; you can get the latter in general with Tidalscale -
https://www.tidalscale.com/300x-performance-gains-without-changing-a-line-of-code/
For proper in-memory computing you want a ratio of local memory to core of...
Relational databases are a software thing, not really processors, but since you bring it up: in-memory computing will beat other approaches at the end of the day, particularly for databases.
SSDs vs HDDs has been a decades-long battle; I'm not sure what investment rule (if any) was broken, but...
We'll probably live that long. The problem with the ISA approach is that it doesn't handle heterogeneity or adaptation well. RISC-V is an extensible ISA, which means you need a runtime system that can adapt, and a runtime system that can adapt can do a lot more than RISC-V.
The VLIW approach wins...
Linux doesn't need ISAs; it's open-source code, so you can interpret it on any processor. Linux was doing fine before RISC-V and will do equally well without it.