Disappointing to see this from Sam Altman and the team at OpenAI. What we’re witnessing across AI right now is a test of values as much as it is a test of capability. And it’s a test of trust.
Parents, educators, CEOs, pastors -- all of us who feel the responsibility to use technologies to promote the well-being of those we care most about -- now face an even harder question: can we trust that the AI we put in their hands is good for them?
Technology on its own is neutral. But the decisions we make with it reflect what we value most.
At Gloo, we believe AI must be shaped for good. Grounded in trust, transparency, and values that protect the next generation. That’s why we’re investing in standards and benchmarks like Flourishing AI, to ensure innovation advances without sacrificing what matters most.
This is the second example of dumb AI initiatives I've seen in the past few days.
First it was that silly statement calling for a superintelligence development pause or ban, signed by technical geniuses like Steve Wozniak, Prince Harry, Meghan Markle, and Steve Bannon. I noticed that one of the AI experts who makes the most sense to me, Andrew Ng, thinks the fear of superintelligence is overblown, and I agree. And even assuming Andrew and I are dead wrong, a pause would just open a huge door for private groups and secret government projects to get there first.
And then there's Pat, who seems to forget that the latest statistics I've seen show porn sites to be about 4% of the entire internet. 4% is about 44 million sites, give or take a few hundred thousand. I know Pat is a devout Christian, but doesn't he have more productive things to spend his time on? I guess not. I suppose this is just his latest way to get people's attention.
OpenAI should not get into erotica, but not for the reasons discussed here; rather, for the same reason YouTube doesn't: it's not advertiser-friendly.
In my opinion, AI will not become profitable by selling tokens but by guiding decision making, including purchasing decisions.
Here is a tangible example. I know an attorney who is starting to get ChatGPT referrals. People are asking AI legal questions, and while AI is providing some answers, it's referencing and linking back to attorney websites and advising people not to use ChatGPT for legal advice but to consult an attorney instead. An industry is appearing around optimizing your marketing for AI referrals.
People are spending a lot, and I mean a lot, of time with AI, in a way I don't think many people appreciate. We are all tech-minded people here and think of AI as something people are going to use for coding and engineering and helping with work. That's about 10-15% of what AI is being used for, and that percentage is dropping. Kids are using AI for everything, and I mean literally everything - for example, a kid might ask ChatGPT "I'm bored, what should I do today?" People are getting used to outsourcing their entire critical thinking and decision making process to AI.
When someone asks ChatGPT "I'm bored, what should I do today?", if OpenAI wants to make money it should give recommendations like "Here is a local escape room" or "Here are some new releases at the movie theater" (with those businesses paying to get themselves more highly recommended), not "Here is some porn."
I agree, and not just kids. I've talked to a small number of college students who are completely dependent on LLMs. (I don't know a lot of college students, so my sample size is very small.) Curiously, not one I've talked to understands that LLMs are based on analyzing probabilities, not actual intelligence. And not one of them seemed to realize how LLMs can hallucinate and what that really means.
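To make that concrete, here's a minimal Python sketch of what "analyzing probabilities" means at the token level. The vocabulary and numbers are made up for illustration; a real LLM does this over tens of thousands of tokens at every step:

```python
# Toy sketch of next-token sampling (made-up vocabulary and
# probabilities, not a real model).
import random

def next_token(context: str) -> str:
    # A real LLM computes a probability distribution over its whole
    # vocabulary from the context; here we hard-code one for a single
    # prompt and ignore the context argument entirely.
    distribution = {
        "Paris": 0.80,    # plausible and correct
        "Lyon": 0.15,     # plausible but wrong
        "Toronto": 0.05,  # unlikely, yet still possible to sample
    }
    tokens = list(distribution)
    weights = list(distribution.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The capital of France is", next_token("The capital of France is"))
```

The model just samples whatever is statistically likely given the context; there is no separate check for "true," which is exactly why confident-sounding wrong answers (hallucinations) fall out of the same mechanism.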
So you agree with Pat that AI could be... less than good for people, and that maybe governance and awareness are something to investigate?
(FWIW, I'm also fully in the camp that thinks worries about superintelligence / AGI are overblown, at least for the next 15 years or so. If we do create AI with "superintelligence," it might just mean we are much less "intelligent" as a species than we'd like to accept.)