
Pat Gelsinger Disappointed with Sam Altman

Daniel Nenni

Admin
Staff member
Pat Gelsinger

Disappointing to see this from Sam Altman and the team at OpenAI. What we’re witnessing across AI right now is a test of values as much as it is a test of capability. And it’s a test of trust.

Parents, educators, CEOs, pastors -- all of us who feel the responsibility to use technologies to promote the well-being of those we care most about -- now face an even harder question: can we trust that the AI we put in their hands is good for them?

Technology on its own is neutral. But the decisions we make with it reflect what we value most.

At Gloo, we believe AI must be shaped for good. Grounded in trust, transparency, and values that protect the next generation. That’s why we’re investing in standards and benchmarks like Flourishing AI, to ensure innovation advances without sacrificing what matters most.

https://lnkd.in/gJZ3vaca

Sam Altman wants to 'treat adults like adults'—but can OpenAI keep ChatGPT safe after opening the door to erotica? | Fortune
fortune.com
 
This is the second example of dumb AI initiatives I've seen in the past few days.

First it was that silly statement on a superintelligence development pause or ban signed by technical geniuses like Steve Wozniak, Prince Harry, Meghan Markle, and Steve Bannon. I noticed that one of the AI experts who makes the most sense to me, Andrew Ng, thinks the fear of superintelligence is overblown, and I agree. Even assuming Andrew and I are dead wrong, talk about opening up a huge door to private groups and secret government projects to get there first.

And then there's Pat, who seems to forget that the latest statistics I've seen show porn sites to be about 4% of the entire internet. 4% is about 44 million sites, give or take a few hundred thousand. I know Pat is a devout Christian, but doesn't he have more productive things to spend his time on? I guess not. I suppose this is just his latest way to get people's attention.
 
OpenAI should not get into erotica, but not for the reasons discussed here. Rather, it's for the same reason YouTube does not get into erotica: it's not advertiser friendly.

In my opinion, AI will not become profitable by selling tokens but by guiding decision making, including purchasing decisions.

Here is a tangible example. I know an attorney who is starting to get ChatGPT referrals. People are asking AI legal questions, and while AI is providing some answers, it's referencing and linking back to attorney websites and advising people not to use ChatGPT for legal advice but to consult an attorney instead. An industry is appearing around optimizing your marketing for AI referrals.

People are spending a lot, and I mean a lot, of time with AI, in a way I don't think many people appreciate. We are all tech-minded people here and think of AI as something people are going to use for coding and engineering and helping with work. That's about 10-15% of what AI is being used for, and that percentage is dropping. Kids are using AI for everything, and I mean literally everything; for example, a kid might ask ChatGPT "I'm bored, what should I do today?". People are getting used to outsourcing their entire critical thinking and decision-making process to AI.

When someone asks ChatGPT "I'm bored, what should I do today?", if OpenAI wants to make money it should give recommendations like "Here is a local escape room" or "Here are some new releases at the movie theater" (with those businesses paying to be more highly recommended), not "Here is some porn".
 
Kids are using AI for everything, and I mean literally everything; for example, a kid might ask ChatGPT "I'm bored, what should I do today?". People are getting used to outsourcing their entire critical thinking and decision-making process to AI.
I agree, and not just kids. I've talked to a small number of college students who are completely dependent on LLMs. (I don't know a lot of college students, so my sample size is very small.) Curiously, not one I've talked to understands that LLMs are based on analyzing probabilities, not actual intelligence. And not one of them seemed to realize how LLMs can hallucinate and what that really means.
 
I agree, and not just kids. I've talked to a small number of college students who are completely dependent on LLMs. (I don't know a lot of college students, so my sample size is very small.) Curiously, not one I've talked to understands that LLMs are based on analyzing probabilities, not actual intelligence. And not one of them seemed to realize how LLMs can hallucinate and what that really means.
So you agree with Pat that AI could be... less than good for people, and that maybe governance and awareness are something to investigate? :)

(FWIW, I'm also fully in the camp that thinks worries about superintelligence / AGI are overblown, at least for the next 15 years or so. If we do create AI with "superintelligence", it might just mean we are much less "intelligent" as a species than we care to admit.)
 
So you agree with Pat that AI could be... less than good for people, and that maybe governance and awareness are something to investigate? :)
Nope. You can't effectively govern multi-national entities. In fact, in the 25 years since I met Pat, the only thing I can remember strongly agreeing with him about was doubling down on being a foundry after he became the Intel CEO. (Though nothing like the way he went about it.)
(FWIW, I'm also fully in the camp that thinks worries about superintelligence / AGI are overblown, at least for the next 15 years or so. If we do create AI with "superintelligence", it might just mean we are much less "intelligent" as a species than we care to admit.)
I think superintelligence and completely self-driving vehicles have a lot in common. They're promised to be just around the corner, but my impression is we're really a long way away from a general implementation. I think it would be helpful if we figured out how the human brain works before we start trying to build superintelligence, but I guess I'm just being silly and ignorant. As far as I can tell, theories about human cognition are still in the oven and not ready for general consumption yet.
 
I think superintelligence and completely self-driving vehicles have a lot in common. They're promised to be just around the corner, but my impression is we're really a long way away from a general implementation. I think it would be helpful if we figured out how the human brain works before we start trying to build superintelligence, but I guess I'm just being silly and ignorant. As far as I can tell, theories about human cognition are still in the oven and not ready for general consumption yet.

Self-driving at least has a known path to resolution. If roads were actually marked consistently and correctly, I think it would be a lot easier to solve. You can already take a Waymo in a lot of places.

It will be interesting to see which way SI goes. I think there's a good chance that human "intelligence" is actually less complicated than we realize, but I don't see LLMs or our current AI paths as a way to achieve SI.
 
Nope. You can't effectively govern multi-national entities. In fact, in the 25 years since I met Pat, the only thing I can remember strongly agreeing with him about was doubling down on being a foundry after he became the Intel CEO. (Though nothing like the way he went about it.)
If you were Pat, how would you have gone about pursuing the foundry business? What would you have done differently and what would you have done the same? Would you have gone for a split sooner?
 
This is the second example of dumb AI initiatives I've seen in the past few days.

First it was that silly statement on a superintelligence development pause or ban signed by technical geniuses like Steve Wozniak, Prince Harry, Meghan Markle, and Steve Bannon.
P.S. Thought of this post when I saw this elsewhere: :)

[attached image]
 
If you were Pat, how would you have gone about pursuing the foundry business? What would you have done differently and what would you have done the same? Would you have gone for a split sooner?
Just a few top of mind thoughts.

Be more incremental about fab development, rather than dramatically over-spending on CAPEX.

Design a process for specific target customers, rather than try to force-fit them into processes designed for x86 CPU requirements. Build an internal team with deep (meaning multi-generational) experience using TSMC as a foundry (Intel has/had numerous people like this) to be a voice of the customer, and develop customer-driven capability requirements for IFS. This team should also develop the required IP porting strategy in phases, and recommend target customers based on the phasing of the IP strategy. (Perhaps IFS has done this, but I haven't seen/heard of a hint of it.)

Build the best PDK team possible based on requirements from the voice of the customer team. Develop a communicated PDK roadmap.

My thought for a while now is that I probably would have investigated starting with Intel 3 or Intel 4 rather than 18A for foundry customers. 18A looks too aspirational for the most likely early foundry customers, and Intel 3/4 is at least more mature.

Have IFS marketing pick out some potential small to medium scale foundry customers and make them offers they find difficult to refuse, to essentially buy foundry learning experiences for the Intel teams. Experience with real external projects is critical, IMO.

Not hire 20,000+ additional people.

Stop hating accelerator chips when that is where key customers are going (especially cloud customers). Place high priority on being a better chip development partner to cloud computing companies. (The IPU fiasco with Google being a bad example.) If you're not an indispensable partner to cloud computing companies designing their chips your future is limited.

One more little thing, not be such an egotistical embarrassment in the media.
 
Have IFS marketing pick out some potential small to medium scale foundry customers and make them offers they find difficult to refuse, to essentially buy foundry learning experiences for the Intel teams. Experience with real external projects is critical, IMO.
They tried it with ICF ... the companies that signed up became victims of a cruel joke. The best part is that basically the same mistakes were repeated by IFS, so there was no learning from ICF.
 
Just a few top of mind thoughts.

Be more incremental about fab development, rather than dramatically over-spending on CAPEX.

Design a process for specific target customers, rather than try to force-fit them into processes designed for x86 CPU requirements. Build an internal team with deep (meaning multi-generational) experience using TSMC as a foundry (Intel has/had numerous people like this) to be a voice of the customer, and develop customer-driven capability requirements for IFS. This team should also develop the required IP porting strategy in phases, and recommend target customers based on the phasing of the IP strategy. (Perhaps IFS has done this, but I haven't seen/heard of a hint of it.)

Build the best PDK team possible based on requirements from the voice of the customer team. Develop a communicated PDK roadmap.

My thought for a while now is that I probably would have investigated starting with Intel 3 or Intel 4 rather than 18A for foundry customers. 18A looks too aspirational for the most likely early foundry customers, and Intel 3/4 is at least more mature.

Have IFS marketing pick out some potential small to medium scale foundry customers and make them offers they find difficult to refuse, to essentially buy foundry learning experiences for the Intel teams. Experience with real external projects is critical, IMO.

Not hire 20,000+ additional people.

Stop hating accelerator chips when that is where key customers are going (especially cloud customers). Place high priority on being a better chip development partner to cloud computing companies. (The IPU fiasco with Google being a bad example.) If you're not an indispensable partner to cloud computing companies designing their chips your future is limited.

One more little thing, not be such an egotistical embarrassment in the media.
Thank you for the detailed and thoughtful answer. Do you think it's too late now for Lip-Bu Tan to start engaging small foundry customers with Intel 18A? In terms of spending, I agree that it feels wasteful to have pursued high-profile projects in the EU only to turn around and do damaging layoffs to core teams.
 
Personally, I can't imagine taking a Waymo. Trusting self-driving software?
Waymo does about 25% of the rideshare pickups in San Francisco with a fairly small number of cars. In my son’s neighborhood (Dolores Park) every 4th car going by seems to be a Waymo at certain times of the evening.
 
Do you think it’s too late now for Lip-Bu Tan to start engaging small foundry customers with Intel 18A?
I hope not, but I'm not convinced there are very many potential foundry customers who are interested in a process like 18A. They might think a new GAA process is taking too much risk for a new product launch. That's why I think one of the older FINFET processes might be a better initial foundry choice.
In terms of spending, I agree that it feels wasteful to have pursued high-profile projects in the EU only to turn around and do damaging layoffs to core teams.
Agreed. I was specifically thinking of the US-Ohio location, but others are also valid concerns.
 
Waymo does about 25% of the rideshare pickups in San Francisco with a fairly small number of cars. In my son’s neighborhood (Dolores Park) every 4th car going by seems to be a Waymo at certain times of the evening.
I have no doubt what you're saying is true, but I've already admitted to being a dinosaur. I want someone/something controlling the vehicle with a sense of self-preservation, not just the equivalent of a cascade of IF-THEN-ELSE statements. (I'm sort of joking, but mostly not. ;))
 
I hope not, but I'm not convinced there are very many potential foundry customers who are interested in a process like 18A. They might think a new GAA process is taking too much risk for a new product launch. That's why I think one of the older FINFET processes might be a better initial foundry choice.

Agreed. I was specifically thinking of the US-Ohio location, but others are also valid concerns.
The UMC/Intel 12nm node might be the answer. If that's successful, there could be potential for future UMC-based EUV nodes as well.
 
Design a process for specific target customers, rather than try to force-fit them into processes designed for x86 CPU requirements
This is critical, IMO. I seriously wonder if Intel has figured this out even today. I fear they still don't get it. Maybe Lip-Bu does, but do those at the levers of IFS?
 