
Intel may sell part of Intel Foundry in the future - Intel at Citi 2025 Global TMT Conference

But I think we'll be seeing Intel 3 in many different situations in the future...
Intel 3 is used in CWF (Clearwater Forest) for the base dies.

x3d? - "Compute dies are placed on top of Intel 3 base dies using 3D stacking. Base dies host the chip’s mesh interconnect and L3 slices. Placing the L3 on separate base tiles gives Intel the die area necessary to implement 8 MB L3 slices, which gives a chip 576 MB of last level cache."
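The cache arithmetic in that quote is easy to sanity-check: 576 MB of last-level cache built from 8 MB slices implies 72 slices spread across the base dies. The slice count is my inference, not stated in the quote:

```python
# Sanity-check the quoted cache figures (slice count is inferred, not quoted).
L3_SLICE_MB = 8       # per-slice L3 size from the quote
TOTAL_LLC_MB = 576    # total last-level cache from the quote

slices = TOTAL_LLC_MB // L3_SLICE_MB
print(slices)  # 72 L3 slices across the base dies
```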

 
Those are the current statistics. He said they want the figure to be below 30%, in order to maintain flexibility in capital investment.


@XYang2023,

The first line of the first post in this thread already links to Intel’s official audio replay, which is freely available to the public.

Is there any particular reason you’re asking people to visit your own YouTube channel to listen to the same content you copied or recorded from Intel?
 
No, I think people should refer to the original source instead of a secondhand source that may contain added bias. The link I shared is my personal recording, kept to track Intel's communications. I've turned off all ads on such videos and make no profit from them. The video also includes YouTube's automated transcription to assist people with hearing difficulties.
 
Ideally, each event should be uploaded to YouTube by Intel's Investor Relations team, so that all stakeholders can review them. Once they do that, I won’t need to spend my time on it.
 
Another point is that people should refer to the actual voice recording instead of the transcript, as the tone of the conversation also conveys important information.
 

Again. Are you a robot or a computer bot? Did you read the very first line of the very first post in this thread?
 
Again. See my other reply.


Unless you’re not a real person, you should have noticed that the first line of the first post in this thread links to the audio replay of the real Q&A conversation, which your YouTube channel copied. If you’re telling us not to trust that, then how are we supposed to trust yours?
 
By the way, in the recording, I also normalize the audio levels in DaVinci Resolve according to YouTube standards. As a result, I can clearly hear the conversation even in my car. Personally, I always listen this way rather than through their website.

That’s why I said Intel should publish those events on YouTube with better audio.
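For context, YouTube normalizes playback loudness to roughly -14 LUFS. A proper loudness pass (what Resolve's normalization does) follows ITU-R BS.1770 with gating and K-weighting; the sketch below shows the same basic idea with a crude RMS measurement instead, on a synthetic test tone. All names and numbers here are illustrative, not Resolve's actual pipeline:

```python
# Crude RMS-based loudness normalization sketch (real LUFS per ITU-R BS.1770
# adds K-weighting and gating; this is only the underlying gain math).
import math

TARGET_DBFS = -14.0  # YouTube's loudness target is about -14 LUFS

def rms_dbfs(samples):
    """RMS level of float samples (range -1..1), expressed in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def gain_to_target(samples, target=TARGET_DBFS):
    """Linear gain factor that brings the RMS level to the target."""
    return 10 ** ((target - rms_dbfs(samples)) / 20)

# One second of a quiet 440 Hz sine at 48 kHz; its RMS sits near -23 dBFS,
# so the computed gain boosts it up to the -14 dBFS target.
tone = [0.1 * math.sin(2 * math.pi * 440 * t / 48000) for t in range(48000)]
g = gain_to_target(tone)
normalized = [g * s for s in tone]
```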
 

Your reasoning keeps changing and shifting. Please read the first line of the first post in this thread. If you can’t, then ask a human to help you.
 
What you can do:
1. Download the YouTube video
2. Strip out the audio
3. Run Whisper on it
4. Diff the result against your transcript
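The four steps above can be sketched as follows. Steps 1-3 depend on external tools (yt-dlp and the openai-whisper CLI; the exact commands shown in comments are assumptions about tooling, not verified here), while step 4 is plain text diffing, shown with Python's `difflib` on two hypothetical one-line transcript snippets:

```python
# Steps 1-3 use external tools (assumed; commands are illustrative only):
#   yt-dlp -x --audio-format wav -o audio.wav <video-url>   # 1-2: download, strip audio
#   whisper audio.wav --model base                           # 3: transcribe
# Step 4: diff the Whisper output against the official transcript.
import difflib

# Hypothetical transcript snippets, word-tokenized for a word-level diff.
official = "Intel may sell part of Intel Foundry in the future".split()
whisper_out = "Intel may sell a part of Intel Foundry in future".split()

diff = list(difflib.unified_diff(official, whisper_out,
                                 fromfile="official", tofile="whisper",
                                 lineterm=""))
# Words present in only one of the two versions:
added = [d[1:] for d in diff if d.startswith("+") and not d.startswith("+++")]
removed = [d[1:] for d in diff if d.startswith("-") and not d.startswith("---")]
print(added, removed)  # ['a'] ['the']
```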

Anyway, in the video, I didn’t include any extra personal or opinionated comments. It’s meant to help people have a better experience consuming relevant information about Intel.
 
I am not changing my reasoning; I am simply explaining why I am doing it. It is evident from the statistics that people prefer consuming materials on YouTube. I have to agree because I prefer that approach too.

After recording it in real time, I enhanced the audio and uploaded it to YouTube to leverage their automatic transcription. That way I can study the material carefully, then listen to it again while driving to work.
 

Please ask your human handler to intervene.
 
Or you could vibe-code the four points I listed to verify the content; I expect it would work. I recently uploaded a video demonstrating vibe coding for educational purposes:


By the way, I also highlight relevant Intel products that can be used for LLM inference in that video.
 