
Will TSM become a leader in inference and AI

Arthur Hanson

TSM has been dealing with inference issues at 2nm production. The question is whether this experience will help TSM in building advanced AI chips in the future. Any clarification or views on this would be greatly appreciated. Thanks.
 
TSMC deals with manufacturing issues; whether the chip it is fabricating is for AI training or AI inference doesn't matter to them. TSMC takes orders to manufacture chips, but the chip's function doesn't come into play. Yes, TSMC has a library of IP to offer chip design customers, but how each customer assembles those IP blocks is up to them. TSMC doesn't tell its customers what to design.
 
That is not true, Arthur. Do you have a reference?
https://www.notebookcheck.net/Tesla...x-cheaper-than-Nvidia-AI-chips.1145221.0.html

Dan, I looked for my original reference and could not find it, but I did run across this piece on TSM and inference work. If you have any questions, I will look further. I originally read about TSM running into inference issues at 2nm but could not find the same reference. Thanks for the heads-up; it prompted me to do some research that was of value to me.
 