Intel is industry’s first mover on High NA EUV lithography system.

Intel and Pat Gelsinger like to promote the "AI PC" idea and try to convince corporate CIOs that this is the most efficient and secure way to use AI.

I don't understand this assertion, given the limited details published so far. For example, an AI PC may help with AI inference, but I don't have a clear picture of how an AI PC can help with AI training, which needs a lot of data, memory, and storage. AI PCs don't have that kind of environment to begin with.

Also, companies do not like their data scattered across hundreds or thousands of PCs and locations. There are just too many potential data leaks or vulnerabilities for them to deal with. How can AI PCs everywhere make AI and the data safer and more efficient, as Pat Gelsinger claims?

Additionally, Pat likes to tell audiences that AI servers consume too much electricity, generate too much heat, and need a lot of data center space and operational support. That's very true. But can someone explain to me how 100 million or 200 million AI PCs will make the situation better?

From what I have heard in the software developer community, thanks to the unified memory architecture (UMA) of Apple's M-series processors, with their massive bandwidth and memory capacity, quite a few AI developers have migrated to the MacBook as their preferred development platform. It seems the Mac is replacing Intel as the true AI PC for development, and with its Metal framework it could potentially threaten Nvidia's CUDA.
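As a rough, hedged illustration of that developer workflow (the model and tensor sizes below are placeholders, not anything from the post), a PyTorch script can target Apple's Metal-backed MPS device when it is available and fall back to CUDA or the CPU otherwise:

import torch

# Prefer Apple's Metal (MPS) backend on an M-series Mac, then CUDA, then CPU.
if torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

# Placeholder model and batch; on Apple silicon, unified memory means the GPU
# sees the same pool of RAM as the CPU, so large working sets fit without a
# separate device copy.
model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(64, 4096, device=device)
y = model(x)
print(device, y.shape)

The same script runs unchanged on a CUDA machine, which is part of why the migration is low-friction for developers.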
 
"High NA EUV is the next-generation lithography system developed by ASML following decades of collaboration with Intel."

A misleading statement from Intel, as if they had co-owned it for decades.
Certainly some hyperbole here, but, in fairness, it was Intel that killed 157nm and said "we're going straight to EUV".
 

This topic probably requires its own thread, but I’ll take a quick stab. Note the jury is still out for me on this approach.

Air-gapped networks are still a thing for various purposes, and those air gaps definitely struggle with cost effectiveness in terms of total available compute. Likewise, if you can AI-process documents locally in real time in the future, you could argue there is more security in having AI on the device, since sensitive documents would be processed on a single PC instead of in a shared data center.
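To make that concrete, here is a minimal sketch of on-device document summarization using llama-cpp-python with a locally stored quantized model; the model path, prompt wording, and sample text are placeholders I chose, and the only point is that the sensitive text never leaves the machine.

from llama_cpp import Llama

# Load a quantized model small enough to fit in laptop RAM; the path is a placeholder.
llm = Llama(model_path="/models/local-model.gguf", n_ctx=4096)

def summarize_locally(document_text: str) -> str:
    # The document is processed entirely on this PC; nothing is sent to a
    # shared data center or third-party API.
    prompt = f"Summarize the following document:\n\n{document_text}\n\nSummary:"
    out = llm(prompt, max_tokens=256, temperature=0.2)
    return out["choices"][0]["text"].strip()

print(summarize_locally("Draft quarterly results: revenue up 8% quarter over quarter..."))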

In addition, endpoint security tools like CrowdStrike are moving toward more ‘active’ threat modeling to secure the PC, rather than relying on a fixed set of antivirus / threat definitions. AI processing may help with immediate local threat detection and resolution on the PC in the future. (And yes, the mirror image is true here too.)

Local “AI/ML compute” is also similar to DSP functions, so this acceleration type will probably find some unexpected workloads (audio, video, transcoding) to improve battery life on laptops. Intel has a few good charts showing this already.

Looking at Microsoft Copilot, it appears the intent is to process both in the data center and on the local client. For large AI data center providers, this effectively lowers their cost to provide services to end clients. This supports the idea that the more total AI TOPS available anywhere, the better.
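Microsoft hasn’t published Copilot’s actual routing logic, so the following is a purely hypothetical sketch of what that hybrid split could look like: small, latency-tolerant-enough requests stay on the local NPU, heavier ones go to the data center (run_on_npu and call_cloud_service are made-up placeholders, not real APIs).

# Hypothetical hybrid router; run_on_npu() and call_cloud_service() are
# placeholders, not real Copilot or Windows APIs.
LOCAL_TOKEN_BUDGET = 512  # assumed cutoff for what a local NPU handles comfortably

def run_on_npu(prompt: str) -> str:
    return f"[local NPU result for {len(prompt)} chars]"

def call_cloud_service(prompt: str) -> str:
    return f"[cloud result for {len(prompt)} chars]"

def handle_request(prompt: str, local_npu_available: bool) -> str:
    small_enough = len(prompt.split()) <= LOCAL_TOKEN_BUDGET
    if local_npu_available and small_enough:
        return run_on_npu(prompt)       # no cost to the provider, low latency
    return call_cloud_service(prompt)   # fall back to the data center

print(handle_request("Rewrite this sentence more formally.", local_npu_available=True))

Every request served by the local branch is one the provider doesn’t pay data center TOPS for, which is the economic argument in a nutshell.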

Not every AI use case requires mega computing capability; the latest Google Pixel phones have some really interesting photo features that benefit from local processing.

Finally, it’s cheap (relatively speaking) to stuff a PC with 64-128 GB of RAM these days, allowing for larger ‘cheap’ model training than on a GPU for cost-sensitive applications.
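As a back-of-the-envelope check on that (the 7B parameter count and the fp16/Adam byte counts are my assumptions, not figures from the post): fp16 weights for a 7B model are about 14 GB, and naive fine-tuning with Adam pushes the total to several times that, which fits in 64-128 GB of system RAM but not in a typical 24 GB consumer GPU.

# Rough memory estimate for an assumed 7B-parameter model.
params = 7e9
weights_gb = params * 2 / 1e9                          # fp16 weights: ~14 GB
# Naive Adam fine-tuning adds fp16 gradients plus two fp32 optimizer moments
# per parameter -- a rule of thumb, not an exact figure.
training_gb = weights_gb + params * (2 + 4 + 4) / 1e9  # ~84 GB total
print(f"inference weights: {weights_gb:.0f} GB, naive fine-tuning: {training_gb:.0f} GB")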

Also related: https://chipsandcheese.com/2024/04/22/intel-meteor-lakes-npu/ - Technical deep dive into the Meteor Lake NPU
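For anyone wanting to poke at the NPU directly, a small OpenVINO sketch like the one below should work, assuming a recent OpenVINO release that exposes the Meteor Lake NPU as an "NPU" device and an existing IR model file (model.xml here is a placeholder):

import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # expect something like ['CPU', 'GPU', 'NPU']

# 'model.xml' stands in for any OpenVINO IR model already converted and on disk.
model = core.read_model("model.xml")
compiled = core.compile_model(model, "NPU")  # target the NPU explicitly
# compiled(...) can then be called with input tensors shaped for that model.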

This is just my attempt to understand this trend. It’ll take at least 3 real generations of chips with this capability to see if the needle moves (just like ray tracing on GPUs).

Edit: strategically for Intel this is all upside at this point. They'll be perceived as obsolete if they don't have NPUs embedded, and it also continues to shift the narrative away from Nvidia.
 