
ISSCC 2021 Plenary Keynote Speech from TSMC Chairman Dr. Mark Liu

300 billion transistors on a chip/chiplet! Will that enable a supercomputer in every home, every school, every company, and every car?
 
FWIW, the 350W EUV source power looks like part of the ASML lab trend mentioned some time ago (SPIE reports still cite 250W in the field).

Also, the patterning comparison is done at 40-50 nm pitch; it can't be ~30 nm with that pattern.
 
TSMC ISSCC 2021 Keynote Discussion

I’m not sure everyone understands the possible ramifications of Intel outsourcing CPU/GPU designs to TSMC, so let’s review:

- Intel and AMD will be on the same process, so architecture and design will be the focus; more direct comparisons can be made.
- Intel will have higher volumes than AMD, so pricing might be an issue. TSMC wafers cost about 20% less than Intel's if you want to do the margins math (a rough sketch follows below).
- Intel will have designs on both Intel 7nm and TSMC 3nm, so direct PDK/process comparisons can be made.

Bottom line: 2023 will be a watershed moment for Intel manufacturing, absolutely!
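
For illustration, a minimal margins sketch in Python. Only the ~20% wafer-cost delta comes from the points above; every other number (wafer cost, yielded die count, chip ASP) is a made-up assumption, not a figure from this thread:

```python
# Hypothetical margins sketch. Only the "~20% cheaper wafer" figure comes from
# the post above; every other number here is an assumption for illustration.
intel_wafer_cost = 10_000.0                 # assumed internal cost per wafer, USD
tsmc_wafer_cost = 0.80 * intel_wafer_cost   # "about 20% less" per the post
good_dies_per_wafer = 300                   # assumed yielded dies per wafer
chip_asp = 200.0                            # assumed average selling price per chip, USD

def gross_margin(wafer_cost):
    cost_per_die = wafer_cost / good_dies_per_wafer
    return (chip_asp - cost_per_die) / chip_asp

print(f"Intel-fabbed margin: {gross_margin(intel_wafer_cost):.1%}")
print(f"TSMC-fabbed margin:  {gross_margin(tsmc_wafer_cost):.1%}")
# With these assumptions the wafer-cost gap moves gross margin by only a few
# points, which is why pricing and volume, not just process, drive the decision.
```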
 
300 billion transistors on a chip/chiplet! Will that enable a supercomputer in every home, every school, every company, and every car?
The industry will likely beat the 1T mark by the end of the decade, even without monolithic 3D or moving to something other than CMOS.
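
As a rough sanity check on that timeline, a simple compound-doubling sketch; the starting transistor count and doubling period are assumptions for illustration, not figures from the thread:

```python
# Rough compound-growth check, all inputs assumed for illustration:
# start from a ~50B-transistor monolithic die (roughly the largest GPUs circa 2021)
# and assume transistor count per chip doubles about every two years.
start_year, start_transistors = 2021, 50e9
doubling_period_years = 2.0

year, count = start_year, start_transistors
while count < 1e12:
    year += doubling_period_years
    count *= 2
print(f"~1T transistors reached around {year:.0f} under these assumptions")
# 50e9 -> 100e9 -> 200e9 -> 400e9 -> 800e9 -> 1.6e12, i.e. roughly the turn of the decade.
```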
 
Indeed. It will probably be a domain specific chip like an AI accelerator.
Why? I don't see anybody in that space with as much money as Apple, AMD, or Intel. The AI guys are all tiny startups running on investor cash with little to no revenue.

I'd say fabs will even be somewhat wary of taking big orders from them, given the prospect of cash-flow issues if they can't sell their chips.

That has happened a few times before in the industry. As I recall, TSMC declined to work with a lot of the bitcoin crowd unless they paid cash, despite eye-watering order sizes.
 
It has already been done and yes it is an AI chip:

The Cerebras Wafer Scale Engine is 46,225 mm2 with 1.2 Trillion transistors and 400,000 AI-optimized cores. By comparison, the largest Graphics Processing Unit is 815 mm2 and has 21.1 Billion transistors.
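
A quick density calculation on the figures quoted above (no new data) shows where the trillion-transistor count actually comes from:

```python
# Quick arithmetic on the figures quoted above (no new data).
wse_transistors, wse_area_mm2 = 1.2e12, 46_225
gpu_transistors, gpu_area_mm2 = 21.1e9, 815

print(f"WSE density: {wse_transistors / wse_area_mm2 / 1e6:.1f}M transistors/mm^2")
print(f"GPU density: {gpu_transistors / gpu_area_mm2 / 1e6:.1f}M transistors/mm^2")
print(f"Area ratio:  {wse_area_mm2 / gpu_area_mm2:.0f}x")
# Both come out near ~26M transistors/mm^2; the trillion-transistor count comes
# almost entirely from the ~57x larger (wafer-scale) area, not a denser process.
```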

 
It has already been done and yes it is an AI chip:

The Cerebras Wafer Scale Engine is 46,225 mm2 with 1.2 Trillion transistors and 400,000 AI-optimized cores. By comparison, the largest Graphics Processing Unit is 815 mm2 and has 21.1 Billion transistors.

Well, calling this a "micro"-chip? Maybe a macrochip?
 
It has already been done and yes it is an AI chip:

The Cerebras Wafer Scale Engine is 46,225 mm2 with 1.2 Trillion transistors and 400,000 AI-optimized cores. By comparison, the largest Graphics Processing Unit is 815 mm2 and has 21.1 Billion transistors.


And the more obvious fact is that flash memory has long passed the 1-trillion-transistor mark with monolithic 3D processes.
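
A back-of-the-envelope check of that flash claim; the die capacity, bits per cell, and dies per package below are illustrative assumptions, not figures from the thread:

```python
# Back-of-the-envelope NAND arithmetic; die capacity, bits-per-cell, and die
# count per package are illustrative assumptions, not figures from the thread.
die_capacity_bits = 1e12      # assume a 1 Tbit 3D NAND die
bits_per_cell = 3             # TLC: 3 bits per cell, one transistor per cell
dies_per_package = 8          # assumed stacked dies in one package

cells_per_die = die_capacity_bits / bits_per_cell
cells_per_package = cells_per_die * dies_per_package
print(f"~{cells_per_die / 1e9:.0f}B cell transistors per die")
print(f"~{cells_per_package / 1e12:.1f}T cell transistors per package (array only)")
# ~333B per die and ~2.7T per 8-die package, before counting peripheral logic.
```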
 