
Pat Gelsinger: AI is a Moral Risk

Daniel Nenni

Pat Gelsinger • Electrical engineering expert with four+ decades of technology leadership and experience

There’s been a lot of recent news about AI’s scheming. Last week it was Anthropic’s Claude Opus 4 blackmailing developers; this week, OpenAI’s o3 refusing to switch off. These incidents should be a wake-up call for everyone building AI. When AI starts tampering with code and bluffing its way through tests, it’s no longer just a technical achievement. It’s a moral risk.

Technology for good means that if a technology isn’t yet proven good, it shouldn’t be released; the engineering isn’t finished. Just as the FDA wouldn’t approve a drug until its efficacy is demonstrated and its side effects are thoroughly tested, powerful AI models shouldn’t be released until they’re understood and proven safe.

This is exactly why at Gloo we don’t just ask, "Can we build it?" We ask, "Should we?" and "Will it serve human flourishing?" Tech advancement, especially AI, must always first be measured by its impact on our collective flourishing. Our models are not just evaluated for performance. They are aligned with values like truth, human dignity, and faith. External audits? Absolutely. Guardrails? Required. Ethical alignment? Non-negotiable.

This technology is moving fast. If we don’t ground our work in values now, we may not like what takes root later.

 
He’s really gone ultra-religious!!!
 
It seems really obvious to me. AI is not something you can control. Either you ride the wave or you get crushed by it. AI is a worldwide battleground. Have we ever really had world peace? Nope, there is always a war or conflict somewhere, there is no stopping it, and AI will make war a whole lot easier. I am still a Pat Gelsinger fan, but he really is showing his true colors here. If anyone thinks they can control AI, they will fail financially, Gloo or no Gloo.
 
Can't resist a quibble with this irritating nonsense. "Truth" isn't a value, though integrity might be. Not sure that "human dignity" or "faith" are either, and they certainly aren't universally agreed standards for objective measurement. Is Pat running a business or an NGO now? He's starting to sound like an EU regulator ("we mustn't do anything, ever, unless it's proven 100% safe") ...
 
When people start justifying their (tough) actions based on direct "communication with god", it gets quite "special":

Monday, August 05, 2024:
https://www.christianpost.com/news/intel-ceo-draws-mixed-reactions-for-posting-bible-verse.html
The CEO of Intel prompted mixed reactions for posting a verse from Proverbs on Sunday after announcing that more than 15% of the tech company's workforce will be laid off amid its plummeting share price. "Let your eyes look straight ahead; fix your gaze directly before you. Give careful thought to the paths for your feet and be steadfast in all your ways," Pat Gelsinger posted to X, quoting Proverbs 4:25-26.
Gelsinger drew criticism from some X users who mocked him for "praying" and accused him of "resorting to religion to save the company."




@tooLongInEDA: regarding associating "bad stuff" with the "EU", that sounds somewhat too simple to me. Here are some facts:

https://www.healthsystemtracker.org/chart-collection/u-s-life-expectancy-compare-countries/
The U.S. has the lowest life expectancy among large, wealthy countries while outspending its peers on healthcare
 
Do you really think the US Government or any Government can contain/control AI? Seriously.
Absolutely not. Actually, I can't think of any technology that can be contained or restrictively controlled by any one group, whether that's a company or a country, given enough investment and time. And I think we're about to find out how true that is for semiconductor design and fabrication too.

When you have a lead, the best strategy for long-term leadership is to get the world hooked on your technological advantage. We're squandering our leadership in the US.
 
He could've been a character on The Righteous Gemstones. Nobody tell him how much the semiconductor industry has benefited for decades from decidedly immoral things, like the PlayStation War in the DRC (coltan) or Pinochet in Chile (copper).
 