Recent content by simguru

  1. simguru

    Google says its AI supercomputer is faster, greener than Nvidia A100 chip

    To prove what? Functional correctness should be by design or formally verifiable for anything digital.
  2. simguru

    Google says its AI supercomputer is faster, greener than Nvidia A100 chip

    What am I supposed to be doing? https://cameron-eda.com/2020/06/03/rolling-your-own-ams-simulator/
  3. simguru

    What Happens When Shrink Ends?

    QC does not lend itself to anything that uses a lot of data, and the requirement for super-cooling it limits its use to server farms. QC does polynomial stuff well; AI is used for NP-complete problems.
  4. simguru

    What Happens When Shrink Ends?

    I doubt there will be much quantum computing; it's not the sort of thing that does video decompression or neural networks. I am, however, optimistic about (quasi-)adiabatic logic as a way to get to lower power.
  5. simguru

    What Happens When Shrink Ends?

    The problem with asynchronous logic in the past has been the lack of tools for design and verification. These guys did it fairly recently - https://etacompute.com/# - but dropped it again (AFAIK)...
  6. simguru

    Will AMD, Nvidia, or Intel use RISC-V in the future?

    I'm sure lots of interesting stuff will be happening this decade; we can review at the end of it. SYCL, OneAPI and CUDA are all work-arounds for not knowing how to write compilers for regular C/C++ for heterogeneous platforms. Altera & Xilinx had decades to work out how to do that and failed...
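
    As a sketch of what "work-around" means in practice (OpenMP target offload is used here as the example annotation scheme; it isn't named in the post): the loop body is regular C, but offloading it means telling the toolchain explicitly what to run where and which buffers to copy.

        #include <stdio.h>

        #define N 1024

        /* Plain C: a compiler for a homogeneous target takes this as-is. */
        void saxpy_plain(float a, const float *x, float *y) {
            for (int i = 0; i < N; i++)
                y[i] = a * x[i] + y[i];
        }

        /* Heterogeneous target: the same loop needs explicit annotations
         * saying what to offload and which buffers to copy. (SYCL and CUDA
         * require analogous restructuring into kernels, queues and buffers.) */
        void saxpy_offload(float a, const float *x, float *y) {
            #pragma omp target teams distribute parallel for map(to: x[0:N]) map(tofrom: y[0:N])
            for (int i = 0; i < N; i++)
                y[i] = a * x[i] + y[i];
        }

        int main(void) {
            static float x[N], y[N];
            for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
            saxpy_offload(3.0f, x, y);
            printf("y[0] = %f\n", y[0]); /* 5.0 with or without offload */
            return 0;
        }
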
  7. simguru

    Will AMD, Nvidia, or Intel use RISC-V in the future?

    The RISC-V point I was making is that you can subsume RISC-V (along with X86 & ARM) into a VLIW approach. That's likely to happen at the compiler/runtime level because there is no standard RISC-V, so you want to move to an approach of interpreting (source or machine) code and (re-)JIT'ing for...
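
    A minimal sketch of the interpret-and-re-JIT idea, over a made-up three-instruction ISA (nothing here is a real RISC-V encoding): the decode-and-dispatch loop below is the portable fallback; a real runtime would profile it and hand hot blocks to a JIT emitting native (e.g. VLIW) code.

        #include <stdio.h>

        /* Toy ISA: made-up opcodes, not real RISC-V encodings. */
        enum op { OP_LOADI, OP_ADD, OP_HALT };
        struct insn { enum op op; int rd, rs1, rs2, imm; };

        /* Decode-and-dispatch interpreter. A production runtime would
         * count executions per block and re-JIT hot blocks natively. */
        static long run(const struct insn *prog) {
            long reg[8] = {0};
            long count = 0;                  /* stand-in for JIT profiling */
            for (const struct insn *pc = prog; ; pc++) {
                count++;
                switch (pc->op) {
                case OP_LOADI: reg[pc->rd] = pc->imm; break;
                case OP_ADD:   reg[pc->rd] = reg[pc->rs1] + reg[pc->rs2]; break;
                case OP_HALT:  printf("r0=%ld after %ld insns\n", reg[0], count);
                               return reg[0];
                }
            }
        }

        int main(void) {
            const struct insn prog[] = {
                { OP_LOADI, 1, 0, 0, 20 },   /* r1 = 20 */
                { OP_LOADI, 2, 0, 0, 22 },   /* r2 = 22 */
                { OP_ADD,   0, 1, 2, 0  },   /* r0 = r1 + r2 */
                { OP_HALT,  0, 0, 0, 0  },
            };
            return run(prog) == 42 ? 0 : 1;
        }
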
  8. simguru

    Will AMD, Nvidia, or Intel use RISC-V in the future?

    Optane is a phase-change memory technology: it's fast and low power, it holds data indefinitely, is rad-hard, and it survives orders of magnitude more write cycles than Flash. However, Flash is significantly cheaper, so if you stick a cache in front of them both and give yourself enough Flash to...
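
    A rough way to make that trade-off concrete (illustrative, assumed latencies; none of these numbers are from the post): with cache hit rate h, cache latency t_c, Flash latency t_f and Optane latency t_o, cached Flash matches raw Optane at the break-even hit rate

        h\,t_c + (1-h)\,t_f = t_o \quad\Longrightarrow\quad h = \frac{t_f - t_o}{t_f - t_c}

    Assuming, say, t_c = 0.1 us (DRAM), t_f = 100 us (Flash) and t_o = 10 us (Optane) gives h ≈ 0.90, so a modest cache in front of much cheaper Flash matches Optane on average latency.
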
  9. simguru

    Will AMD, Nvidia, or Intel use RISC-V in the future?

    (Explaining...) The DRAM is essentially an extra level of cache for slow devices; if your devices are fast enough it isn't needed. If you have stalling issues because of the write latency at the L2/L3 level, you can expand capacity at that level so that you don't miss cache at a rate where you'll stall...
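
    The "extra level of cache" point in average-memory-access-time terms (a sketch; the tier names and symbols are illustrative assumptions):

        \mathrm{AMAT} = t_{L3} + m_{L3}\left(t_{\mathrm{DRAM}} + m_{\mathrm{DRAM}}\,t_{\mathrm{dev}}\right)

    Without the DRAM tier this collapses to t_{L3} + m_{L3} t_{dev}, so as the device latency t_dev approaches t_DRAM the extra level buys nothing, while shrinking the last-level miss rate m_{L3} (expanding at that level) attacks the stall rate directly.
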
  10. simguru

    Will AMD, Nvidia, or Intel use RISC-V in the future?

    Because of the Flash wear-leveling and DRAM buffering you get no advantage from using Optane; that's why Intel lost so much money on it. You need to switch architecture to something where you get the benefits of using it directly (ditching the DRAM and the excess wear-leveling hardware), but Intel...
  11. simguru

    Will AMD, Nvidia, or Intel use RISC-V in the future?

    DRAM might be necessary as working memory, but not as cache for Optane. If you are doing things like AI training, where you are reading through a lot of data repeatedly, the DRAM caching buys you nothing. HDDs needed DRAM for caching; Flash needs it because of the slow read/write times. Flash...
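
    Why the caching buys nothing for that access pattern (standard cache behaviour, not a claim from the post): a repeated sequential sweep over a working set of W bytes through an LRU cache of capacity C < W evicts every block before it is reused, so

        \text{hit rate} = 0 \quad (\text{LRU},\ C < W)

    and even random replacement only reaches a hit rate of roughly C/W, which is negligible when the training set is orders of magnitude larger than DRAM.
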
  12. simguru

    Will AMD, Nvidia, or Intel use RISC-V in the future?

    In-memory computing is not quite the same as in-RAM computing, where you avoid swapping; you can get that in general with Tidalscale (https://www.tidalscale.com/300x-performance-gains-without-changing-a-line-of-code/). For proper in-memory computing you want a ratio of local memory to core of...
  13. simguru

    Will AMD, Nvidia, or Intel use RISC-V in the future?

    Relational databases are a software thing, not really processors, but since you bring it up: in-memory computing will beat other approaches at the end of the day, particularly for databases. SSDs vs HDDs has been a decades-long battle; I'm not sure what investment rule (if any) was broken, but...
  14. simguru

    Will AMD, Nvidia, or Intel use RISC-V in the future?

    We'll probably live that long. The problem with the ISA approach is that it doesn't handle heterogeneity or adaptation well. RISC-V is an extensible ISA, which means you need a runtime system that can adapt, and a runtime system that can adapt can do a lot more than RISC-V. The VLIW approach wins...
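
    The smallest possible sketch of "a runtime system that can adapt": probe the CPU once at startup and dispatch to the best implementation it actually supports. (GCC's x86 __builtin_cpu_supports is used purely as a stand-in for probing RISC-V extensions.)

        #include <stdio.h>

        static long sum_scalar(const int *v, int n) {
            long s = 0;
            for (int i = 0; i < n; i++) s += v[i];
            return s;
        }

        /* Stand-in for a variant using an optional ISA extension; a real
         * runtime would carry a vectorized body here. */
        static long sum_vector(const int *v, int n) {
            return sum_scalar(v, n);
        }

        typedef long (*sum_fn)(const int *, int);

        /* Choose an implementation once, based on what the hardware
         * reports -- the adaptive-runtime idea in miniature. */
        static sum_fn pick_sum(void) {
        #if defined(__x86_64__) && defined(__GNUC__)
            if (__builtin_cpu_supports("avx2"))
                return sum_vector;
        #endif
            return sum_scalar;
        }

        int main(void) {
            const int v[4] = { 1, 2, 3, 4 };
            printf("%ld\n", pick_sum()(v, 4)); /* 10 */
            return 0;
        }
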
  15. simguru

    Will AMD, Nvidia, or Intel use RISC-V in the future?

    Linux doesn't need ISAs; it's open source code, so you can interpret it on any processor. Linux was doing fine without RISC-V; it will do equally well without it.