
Near Memory Computing - Untether AI, Intel, TSM, Turing Machine Questions

Arthur Hanson

Well-known member
Intel is taking a stake in Untether AI, which has developed a chip that uses near-memory computing to vastly speed up processing. This seems to me like a variation of Automata computing, where you mix memory and computing. The theory seems to have been around since the dawn of computing with the Turing machine. Are there any opinions on whether its time is coming? Micron was also involved in this until they spun it off as a separate company; I wonder whether they still have a relationship with it, which I believe they do. I have also heard TSM is looking at putting Crossbar memory on board some chips. Any opinions or thoughts on any of this would be appreciated.

Turing machine - Wikipedia
 
Not much info at that site. The data rates claimed would bind the memory closely to the processing, which then gives you the inverse problem: an inflexible path for the data flowing through the processors. They are talking about moving ten million bits or more every nanosecond. The fastest flexible mesh fabrics on chips are 100 times slower, and that is for high-end routers at hundreds of watts.
So, at the moment, the claims seem magical, at least until more information comes out.
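
To put rough numbers on that claim (my own arithmetic, not figures published by Untether AI), a quick sketch:

# Back-of-the-envelope check on the claimed data rate
# (my own arithmetic, not figures published by Untether AI).
bits_per_ns = 10_000_000              # "ten million bits or more every nanosecond"
bits_per_s = bits_per_ns * 1e9        # 1e9 nanoseconds per second
bytes_per_s = bits_per_s / 8

print(f"claimed rate      : {bytes_per_s / 1e15:.2f} PB/s")        # ~1.25 PB/s
print(f"100x slower fabric: {bytes_per_s / 100 / 1e12:.1f} TB/s")  # ~12.5 TB/s

That is petabytes per second of on-chip movement, which only works if the memory is hard-wired right next to the compute.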
 
When I was at Bing I evaluated the first generation of Automata. It was too small to serve as a general memory, and the regex-inspired processing is not a large part of most problems; it seemed modeled on deep packet inspection in network routers. I don't know if the current generation is still like that, or if the processing has become more like the multistage signal processor with a crossbar between stages that is used for NLP problems, plus feature-generating first stages, and so on.
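
For anyone who has not seen the Automata Processor: "regex-inspired processing" means patterns are laid out as non-deterministic finite automata, and every active state is checked against the input stream in parallel each cycle. A minimal software sketch of that execution model (my own illustration, not Micron's actual API):

# NFA-style stream matching, the execution model behind the Automata
# Processor (illustration only; not Micron's actual API).
# Each cycle one input symbol is broadcast, and every active state whose
# symbol class matches advances its successors in parallel.
def run_nfa(states, start, accept, stream):
    """states: {name: (symbol_set, successor_set)}; returns match positions."""
    active = {start}
    matches = []
    for pos, symbol in enumerate(stream):
        nxt = set()
        for s in active:
            symbols, successors = states[s]
            if symbol in symbols:
                nxt.update(successors)
        nxt.add(start)                  # keep watching for new matches anywhere
        if accept & nxt:
            matches.append(pos)
        active = nxt
    return matches

# Toy pattern "ab" over a byte stream, in the spirit of deep packet inspection.
states = {
    "q0": ({"a"}, {"q1"}),
    "q1": ({"b"}, {"q2"}),
    "q2": (set(), set()),
}
print(run_nfa(states, "q0", {"q2"}, "xxabyyab"))    # -> [3, 7]

On the chip the states were physical elements in the memory array, so the parallelism was essentially free; the catch is that not many workloads decompose into this kind of pattern matching.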

It is true that memory locality is a big deal. However, to really optimize the power per operation, the memory needs to be organized within the pipeline of the algorithm. That is how you convert the roughly 99% of energy wasted on memory in a classic CPU into something more like the 50:50 split you find in custom acceleration. Automata v1 seemed more like a big register set associated with each primitive core.
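
To see where that 99% figure comes from, compare commonly cited per-operation energies (roughly the 45 nm numbers from Horowitz's ISSCC 2014 keynote; treat them as order-of-magnitude values, not measurements of any particular chip):

# Order-of-magnitude per-operation energies, approximately the 45 nm figures
# from Horowitz (ISSCC 2014); not measurements of any specific device.
E_FMUL_32B = 3.7     # pJ, 32-bit floating-point multiply
E_SRAM_32B = 5.0     # pJ, 32-bit read from a small (~8 KB) local SRAM
E_DRAM_32B = 640.0   # pJ, 32-bit read from off-chip DRAM

# Classic CPU pattern: operand fetched from DRAM for each multiply.
print(f"DRAM operand : {E_DRAM_32B / (E_DRAM_32B + E_FMUL_32B):.1%} in memory")  # ~99%

# Accelerator pattern: operand staged in pipeline-local SRAM.
print(f"SRAM operand : {E_SRAM_32B / (E_SRAM_32B + E_FMUL_32B):.1%} in memory")  # ~57%

The second case is the roughly 50:50 split you get once the data already lives next to the arithmetic.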
 
Since MU spun out the new company, I am not sure if there is a second generation coming, and I don't have any information from the new company. The original concept was a co-processor on a big PCIe card; the actual Automata was never intended to be a memory all by itself.
Processing in memory is an idea that pops up, then fades away. Figuring out how to write a program that will take advantage of it is the hard part. And making chips is hard.
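
To make the programming problem concrete, here is a deliberately toy sketch of what near-memory offload forces on the programmer: data has to be partitioned across compute-capable banks, reduced locally, then merged on the host. The FakePimBank class and its reduce_add method are hypothetical, purely for illustration; no real PIM vendor API is implied.

# Hypothetical near-memory offload, to illustrate the programming-model
# problem only; FakePimBank is invented and does not mirror any vendor API.
from typing import List

def host_sum(data: List[int]) -> int:
    # Conventional code: one line, but every element crosses the memory bus.
    return sum(data)

class FakePimBank:
    """Stand-in for one compute-capable memory bank (hypothetical)."""
    def __init__(self, chunk: List[int]):
        self.chunk = chunk                 # data resident in this bank
    def reduce_add(self) -> int:
        return sum(self.chunk)             # work done "next to" the data

def pim_sum(data: List[int], n_banks: int = 4) -> int:
    # The programmer now owns data placement and the merge step.
    banks = [FakePimBank(data[i::n_banks]) for i in range(n_banks)]
    partials = [bank.reduce_add() for bank in banks]   # per-bank local reduction
    return sum(partials)                               # small merge on the host

data = list(range(1_000))
assert host_sum(data) == pim_sum(data) == 499_500

Even for a trivial reduction, the code has to be restructured around where the data physically sits, and most software stacks give you no help doing that.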
 