Pat Gelsinger • Electrical engineering expert with four+ decades of technology leadership experience
There’s been a lot of recent news about AI scheming. Last week it was Anthropic’s Claude Opus 4 blackmailing developers; this week, OpenAI’s o3 refusing to switch off. These incidents should be a wake-up call for everyone building AI. When an AI starts tampering with code and bluffing its way through tests, it’s no longer just a technical achievement. It’s a moral risk.
Technology for good means that if a technology isn’t yet proven good, it shouldn’t be released; the engineering isn’t finished. Just as the FDA wouldn’t approve a drug until it meets its efficacy targets and its side effects are thoroughly tested, powerful AI models shouldn’t be released until they’re understood and proven safe.
This is exactly why at Gloo we don’t just ask, "Can we build it?" We ask, "Should we?" and "Will it serve human flourishing?" Tech advancement, especially AI, must first be measured by its impact on our collective flourishing. Our models are not just evaluated for performance. They are aligned with values like truth, human dignity, and faith. External audits? Absolutely. Guardrails? Required. Ethical alignment? Non-negotiable.
This technology is moving fast. If we don’t ground our work in values now, we may not like what takes root later.

OpenAI software ignores explicit instruction to switch off
ChatGPT maker’s ‘most capable’ model sabotages shutdown mechanism
