Wisdom – Learning the lessons I thought I already knew (part 2)
Pat Gelsinger
Electrical engineering expert with four+ decades of technology leadership and experience
February 3, 2025
Building on my post from last week, which generated considerable interest from many corners of the planet, I’d like to take a moment to expand on each of the three elements I spoke about.
First – Computing obeys the gas law – AKA Jevons effect
There is a body of work in economics spanning the last 150 years that gives a more accurate perspective on this statement: the Jevons paradox, or Jevons effect. As we saw with computing, the internet and so many other technological breakthroughs, dramatically decreasing their costs radically increases their use. AI will be everywhere in our future world – from my hearing aids to every light switch, to health care, materials research, autonomous driving and so many more. No longer will we adapt to the computer’s interface; computing will adapt to the human interface and expand what we as people can do in unimaginable ways. Today’s AI is immature and costly; tomorrow’s will be robust, cheap and… pervasive. See https://en.wikipedia.org/wiki/Jevons_paradox
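A toy calculation makes the paradox concrete. Assume (purely for illustration) that demand for compute has a constant price elasticity greater than 1; then every drop in price increases total spending on compute rather than shrinking it:

```python
# Toy Jevons-effect illustration; epsilon = 1.5 is a made-up elasticity,
# not an empirical estimate for the compute market.
def total_spend(price: float, epsilon: float = 1.5, k: float = 100.0) -> float:
    """Demand follows k * price**-epsilon; total spend is price * demand."""
    return price * (k * price ** -epsilon)

for p in (1.0, 0.5, 0.1):          # the price of compute falls 10x
    print(f"price={p:.1f}  total spend={total_spend(p):.1f}")
# With elasticity > 1, cheaper compute means MORE total spend on compute.
```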
Second – Engineering is about constraints – necessity is the mother of invention
There have been a variety of critiques suggesting that the engineering breakthroughs by the DeepSeek team were not as significant as initially claimed.
Their base models demonstrate that architectural improvements like Multi-head Latent Attention (MLA) and Mixture-of-Experts (MoE) can drastically reduce computational costs. These advancements are not merely about scaling; they represent a fundamental shift toward more economical training and efficient inference. They showed that Reinforcement Learning with Verifiable Rewards (RLVR) on a strong base model can scale to a SoTA reasoning model with just 800K reasoning data samples. The right kind of reinforcement learning technique can be cost-efficient in post-training, unlike earlier data-hungry and complex approaches such as Process Reward Modeling. This is crucial, as the high computational demands of LLMs are a costly constraint. Further, DeepSeek's innovations show that software and algorithmic optimization are vital for realizing the full potential of available hardware. Good engineering builds upon and leads to more good engineering (in just a week, the open-source Allen AI team showed that their post-training recipe could surpass DeepSeek V3's performance), and the race for ever more efficient AI is on.
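To make these ideas concrete, here is a minimal, illustrative sketch of top-k Mixture-of-Experts routing in Python. All names and sizes (n_experts, top_k, d_model) are toy assumptions, not DeepSeek's actual architecture; the point is simply that only k of n experts run per token, so per-token compute scales with k rather than n.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, d_model = 8, 2, 16          # toy sizes, not DeepSeek's

# In this sketch each "expert" is just a weight matrix.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))

def moe_forward(x):
    """Route a token vector x through only its top_k experts."""
    logits = x @ router                        # score every expert
    chosen = np.argsort(logits)[-top_k:]       # indices of the k best
    weights = np.exp(logits[chosen] - logits[chosen].max())
    weights /= weights.sum()                   # softmax over chosen experts
    # Only top_k of n_experts execute, so compute scales with top_k.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

print(moe_forward(rng.normal(size=d_model)).shape)  # (16,)
```

The "verifiable rewards" idea can be sketched just as simply: instead of training a separate learned reward model, the reward is a cheap programmatic check of the final answer. The boxed-answer format below is a common convention I am assuming for illustration, not DeepSeek's exact recipe.

```python
import re

def verifiable_reward(model_output: str, gold_answer: str) -> float:
    """Return 1.0 only if the \\boxed{...} answer matches the gold answer."""
    match = re.search(r"\\boxed\{([^}]*)\}", model_output)
    if match is None:
        return 0.0                             # unparseable output earns nothing
    return 1.0 if match.group(1).strip() == gold_answer.strip() else 0.0

print(verifiable_reward(r"... so the result is \boxed{42}", "42"))  # 1.0
```

In a real pipeline this reward would drive a policy-gradient loop such as GRPO or PPO; the key property is that verification is exact and nearly free, which is what makes RLVR so much more data- and compute-efficient than learned reward modeling.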
Third – Open wins and … enables Trust.
Of the three points I made, to me this is the most critical. While search simply connects interested users to potential websites and sellers (with a powerful advertising technology and business model in the middle), AI models are far, far more. The models themselves embed and synthesize knowledge and present it, in the most compelling fashion, as truth. The question is – are the models right, good, legal, moral? Establishing trusted models built on an Open foundation is essential. AI must be open, transparent, verifiable, benchmarked and measured against standards, as Howard Lutnick testified this week in his confirmation hearing for Secretary of Commerce. I was encouraged to see these comments from Sam Altman. AI is too important to our future; its models must be open, and we have proven examples in the Open Source software community to follow as guides for their development.
While AI holds great promise, its role in some of the most critical areas of life is still in question – can I trust it? Trust in AI is twofold: we must know it aligns with our values, and we must know it delivers accurate, reliable responses. That’s why establishing standards and benchmarks that go beyond basic performance metrics is critical; they should also measure values alignment and contributions to our collective flourishing. When we combine standards with an open approach, we can build the trust necessary to unlock AI’s full potential for the benefit of all society. With our tech platform at Gloo, you’ll see us stand up much more determinedly in the near future to help represent the faith community in the pursuit of open, trusted AI.
