TSMC Open to Memory Chip Acquisition

The major question is whether the relationship between memory and processing will change the whole game, with something like the Automata Processor or an even more radical structure. TSMC, with its wide range of technologies and the backing of Apple, could change the whole processing/memory game in one bold move. It certainly has the skill, resources, and equipment, and above all the strong veil of secrecy, needed to pull this off and surprise everyone. Any thoughts or comments are appreciated.

I don't know if Apple wants to foster an entity in TSMC that is basically vertically integrated like Samsung, just lacking the system design that Apple currently has. It won't be difficult for TSMC to get the designers once it has the vertically integrated system manufacturing.

It would still be difficult, or at least take a long time, for TSMC to get there in any case. TSMC would need to acquire 3D NAND technology, for example; the 3D NAND Apple uses comes mainly from Toshiba. 3D NAND is also Samsung's biggest business.
 
I think you will see more memory embedded in various accelerators, since a key reason accelerators win is that they restructure their pipelines to keep data flowing locally. Whenever an algorithm needs multiple passes or random access, or when stages are scheduled independently and need buffers, the lowest-power and lowest-latency solution is generally embedded memory.
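
To make that concrete, here is a minimal CUDA sketch, with the GPU's shared memory standing in for an accelerator's embedded SRAM (the kernel name, TILE size, and test values are illustrative assumptions, not anything from the thread). A 3-point stencil stages each tile on-chip once, so the three neighboring reads per output hit local memory instead of DRAM:

```cuda
// A 1-D 3-point stencil that stages each tile in on-chip shared memory (the
// GPU's embedded SRAM). Each input element is read from DRAM once; the three
// reads per output then hit local memory. Boundary cells are zero-padded for
// simplicity. TILE and the kernel name are illustrative choices.
#include <cstdio>
#include <cuda_runtime.h>

#define TILE 256

__global__ void stencil3(const float* in, float* out, int n) {
    __shared__ float tile[TILE + 2];                  // local buffer with halo
    int g = blockIdx.x * blockDim.x + threadIdx.x;    // global index
    int l = threadIdx.x + 1;                          // local index (skip halo)
    tile[l] = (g < n) ? in[g] : 0.0f;                 // one DRAM read each
    if (threadIdx.x == 0)
        tile[0] = (g > 0) ? in[g - 1] : 0.0f;         // left halo
    if (threadIdx.x == blockDim.x - 1)
        tile[l + 1] = (g + 1 < n) ? in[g + 1] : 0.0f; // right halo
    __syncthreads();                                  // tile now lives on-chip
    // Every further access is to embedded memory: low latency, low power.
    if (g < n)
        out[g] = (tile[l - 1] + tile[l] + tile[l + 1]) / 3.0f;
}

int main() {
    const int n = 1024;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = (float)i;
    stencil3<<<(n + TILE - 1) / TILE, TILE>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("out[1] = %f\n", out[1]);                  // (0 + 1 + 2) / 3 = 1.0
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

The same shape shows up across accelerators: stage into a local buffer, do the multi-pass or random-access work there, and write out once.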

The reverse, embedding processing in bulk memory, has been less interesting because memory is huge and connections are often scattered across many chips. Fifty years of experience with data structures has produced clever ways to tie all that together, and very little of the processing is local scanning at the time of need. The processing that is done often requires a processor built in a logic process, not a memory process. This area has been attempted at least since the 1970s to my knowledge (look up the Goodyear* STARAN or the ICL DAP), and it always comes to the same thing: local processing is too limited, and literal scanning of mass data is slow. Modern data flows prep data on the input pipeline with everything from curation to catching errors, scanning for entities, building indexes, and picking out structure.
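
A small sketch of that indexing point, using Thrust on a GPU (the keys, the sizes, and the choice of a sorted array as the "index" are illustrative assumptions): paying a one-time sort on the input pipeline turns every later lookup into a binary search, while the literal scan touches all n elements per query.

```cuda
// Index at ingest rather than scan at time of need: a one-time sort on the
// input pipeline makes each later lookup O(log n), while the literal scan
// (thrust::find) reads the whole array per query. Keys/sizes are illustrative.
#include <cstdio>
#include <vector>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/find.h>
#include <thrust/binary_search.h>

int main() {
    const int n = 1 << 20;
    std::vector<int> h(n);
    for (int i = 0; i < n; ++i)
        h[i] = (int)((i * 2654435761u) % n);      // scrambled keys 0..n-1
    thrust::device_vector<int> data(h.begin(), h.end());

    // Query-time scan: O(n) memory traffic for every single lookup.
    bool hit_scan = thrust::find(data.begin(), data.end(), 12345) != data.end();

    // Ingest-time prep: sort once, then each lookup is a cheap binary search.
    thrust::sort(data.begin(), data.end());
    bool hit_index = thrust::binary_search(data.begin(), data.end(), 12345);

    printf("scan hit: %d, indexed hit: %d\n", hit_scan, hit_index);
    return 0;
}
```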

The Automata Processor was possibly best suited for things like deep packet inspection in networking. It was a memory-dominated accelerator, not bulk memory with processing embedded. I know of a few special cases of smart DRAM that would work, but none where the world needs enough of them to keep a production line in business.
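
To show why that kind of workload is memory-dominated, here is a hedged CUDA sketch (the toy signature "EVIL", the DFA layout, and all names are invented for illustration; this is not the Automata Processor's actual programming model). The inner loop is one transition-table lookup per payload byte, so the accelerator is essentially a state-transition memory with a trivial amount of logic around it:

```cuda
// Deep packet inspection as table lookups: each thread streams one packet's
// bytes through a DFA whose transition table is staged in on-chip memory.
// The compute per byte is negligible; the table *is* the accelerator.
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

#define NUM_STATES 5   // states 0..3 track progress through "EVIL"; 4 accepts
#define ACCEPT 4

__global__ void dpi_scan(const unsigned char* payloads, const int* lengths,
                         int stride, int num_packets,
                         const unsigned char* dfa_global, int* hit) {
    __shared__ unsigned char dfa[NUM_STATES * 256];  // transition table on-chip
    for (int i = threadIdx.x; i < NUM_STATES * 256; i += blockDim.x)
        dfa[i] = dfa_global[i];
    __syncthreads();

    int p = blockIdx.x * blockDim.x + threadIdx.x;   // one packet per thread
    if (p >= num_packets) return;
    const unsigned char* data = payloads + p * stride;
    unsigned char state = 0;
    for (int i = 0; i < lengths[p]; ++i) {
        state = dfa[state * 256 + data[i]];          // the entire inner loop
        if (state == ACCEPT) { hit[p] = 1; return; }
    }
}

int main() {
    const int num_packets = 2, stride = 16;
    unsigned char *payloads, *dfa;
    int *lengths, *hit;
    cudaMallocManaged(&payloads, num_packets * stride);
    cudaMallocManaged(&lengths, num_packets * sizeof(int));
    cudaMallocManaged(&hit, num_packets * sizeof(int));
    cudaMallocManaged(&dfa, NUM_STATES * 256);
    // Default transition: restart (or re-enter state 1 on 'E'); then the
    // transitions that spell out the toy signature "EVIL".
    for (int s = 0; s < NUM_STATES; ++s)
        for (int c = 0; c < 256; ++c)
            dfa[s * 256 + c] = (c == 'E') ? 1 : 0;
    dfa[1 * 256 + 'V'] = 2;
    dfa[2 * 256 + 'I'] = 3;
    dfa[3 * 256 + 'L'] = ACCEPT;
    memcpy(payloads, "GET /EVIL HTTP/", 15);          lengths[0] = 15;
    memcpy(payloads + stride, "GET /GOOD HTTP/", 15); lengths[1] = 15;
    hit[0] = hit[1] = 0;
    dpi_scan<<<1, 64>>>(payloads, lengths, stride, num_packets, dfa, hit);
    cudaDeviceSynchronize();
    printf("packet 0: %s, packet 1: %s\n",
           hit[0] ? "match" : "clean", hit[1] ? "match" : "clean");
    return 0;
}
```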

* Yes, that Goodyear. And the first business computer, LEO, was built in the early 1950s by J. Lyons & Co., a company best known for tea shops and cakes. Early computing was strange.
 
As a side note on early computing, I find the history fascinating. Many traditional manufacturing companies and conglomerates got into the computer industry, though most were unsuccessful. Most early applications were mainframes and industrial controls, so it makes sense from that standpoint that some of these companies, like GE and Honeywell, were in-house users of the systems they developed.
 
I wonder why you don't hear of people designing on TSMC versus Samsung for DRAM needs. Nvidia always seems to use TSMC but gets its memory from Samsung. I thought TSMC wasn't capable of doing memory. Is this true?
 