
New type of Micro Transistor to Improve AI efficiency dramatically?

Arthur Hanson

Well-known member

I would appreciate any feedback and views on this new transistor technology. This area is beyond my scope, and I would like to know whether this is a major advance and, if so, what its potential impact could be and how long it would take to really change the semiconductor landscape. Any thoughts or comments would be appreciated.
 
Already on everyone's roadmap after stacked CMOS
 

There are lots of ideas; this is one of them. If it takes off, we will see products in 15 years, and there will be a lot of forks in the road before then. The time from initial university paper to production is much longer than people think: 15 years minimum, IMHO.
 
Perhaps a bit of a teaser, but 99%+ energy savings are already available with 10-year-old 2D/2.5D technology. For detailed logic performance I defer to those producing microcontroller designs, but for data transfer, where more than 50% of power goes into moving data between systems and logic units, the industry has everything it requires to cut power by 50% to 70% while still scaling.
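A quick back-of-envelope check of the claim above, using only the post's own figures (the 50% and 70% movement shares and a ~99% transfer saving; the exact split is illustrative):

```python
# If data movement accounts for 50-70% of total power and transfer energy
# can be cut by ~99%, the achievable total-power reduction falls in the
# 50-70% range quoted above.
transfer_savings = 0.99                    # claimed savings on data transfer

for movement_fraction in (0.5, 0.7):       # share of power spent moving data
    total_reduction = movement_fraction * transfer_savings
    print(f"movement share {movement_fraction:.0%} -> "
          f"total power cut {total_reduction:.1%}")
# movement share 50% -> total power cut 49.5%
# movement share 70% -> total power cut 69.3%
```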

I have been quiet since April, busy putting in the work to prove my earlier claims. With working FPGA implementations demonstrating routable 65 ns cross-switch exchange on an off-the-shelf FPGA and 95%+ power savings versus comparable platforms, we will be demonstrating and publishing the results at IEEE ICCE in Las Vegas in January. In exchange for a wider audience we granted IEEE the publishing rights (good for both sides), so for now I can only share a few early preprints under limited access; I will post the IEEE links as they become available. In this case we went from a university-style paper to a functional FPGA in 5 years. All of the technology is based on granted patents.

In the meantime, we are busy working on larger-scale designs (256 to 2k+ ports) and expect to have a smaller-scale MPW (a 16x16 1-to-N fully non-blocking interconnect) within a year. Preliminary SPICE results show 140 ps of latency and 0.7 mW per 100 Gbps. What is exciting (and potentially concerning) is that we can quickly implement 16 nm designs that outperform 4 nm by 1000x. The 256-port designs can immediately leverage existing CPO for 40.96 Tbps with zero tail latency. We are currently investigating speed options that will support 1 PBps.
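For context, the quoted figures work out to the following energy per bit and aggregate bandwidth. This is a sketch using only the numbers in the post; the 160 Gbps-per-port split is my assumption, inferred from 40.96 Tbps across 256 ports:

```python
# Energy per bit implied by 0.7 mW per 100 Gbps (the SPICE figure above).
power_w = 0.7e-3                  # 0.7 mW
rate_bps = 100e9                  # 100 Gbps
energy_per_bit = power_w / rate_bps
print(f"{energy_per_bit * 1e15:.1f} fJ/bit")      # 7.0 fJ/bit

# Aggregate bandwidth of a 256-port design; 160 Gbps per port is an
# assumption derived from 40.96 Tbps / 256 ports.
ports = 256
per_port_gbps = 160
aggregate_tbps = ports * per_port_gbps / 1000
print(f"{aggregate_tbps} Tbps")                   # 40.96 Tbps
```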

Net/net: a lot will change in the industry when latency drops, data-exchange power demands crater, and scalability is immediately at hand. We have done the math and know that our current designs would allow global ICT energy demand in 2030 to actually drop below current levels while supporting expected data growth. Hopefully this analysis can be posted/published in the coming months; it is based on real-world results that will be released in January.

If there are interested senior members willing to review the designs under NDA, I would be happy to prove the above. We are beginning to look for industry partnerships and are seeking investment. @cliff - Thanks for the offer to implement this as open-source tech; however, we owe our investors a return on their money.
 