OpenAI amends Pentagon deal as Sam Altman admits it looks ‘sloppy’

Daniel Nenni

Admin
Staff member
ChatGPT owner’s CEO says it will bar its technology being used for mass surveillance or by intelligence services

OpenAI is amending its hastily arranged deal to supply artificial intelligence to the US Department of War (DoW) after the ChatGPT owner’s chief executive admitted it looked “opportunistic and sloppy”.

The contract prompted fears the San Francisco startup’s AI could be used for domestic mass surveillance, but its boss, Sam Altman, said on Monday night the company would explicitly bar its technology from being used for that purpose or being deployed by defence department intelligence agencies such as the National Security Agency (NSA).

OpenAI, which has more than 900 million users of ChatGPT, made the deal almost immediately after the Pentagon’s existing AI contractor, Anthropic, was dropped.
https://www.theguardian.com/comment...-using-ai-to-fight-wars-dangerous-us-military
Anthropic had insisted “using these systems for mass domestic surveillance is incompatible with democratic values”, leading the US president, Donald Trump, to call Anthropic “leftwing nut jobs” and direct the federal government to stop using its technology.

Despite denials from OpenAI that the agreement allowed for surveillance use, commentators raised the spectre of the Snowden scandal, which broke in 2013, when it emerged the NSA was engaged in mass harvesting of phone and internet communications.

The deal prompted an online backlash against OpenAI, with users of X and Reddit encouraging a “delete ChatGPT” campaign. One post read: “You’re now training a war machine. Let’s see proof of cancellation.”

Claude, the chatbot made by Anthropic, jumped to the top of Apple’s App Store charts, rising above ChatGPT, according to analysis by Sensor Tower.

In a message to employees reposted on X, the OpenAI CEO said the original deal announced on Friday had been struck too quickly after Anthropic was dropped.

“We shouldn’t have rushed to get this out on Friday,” Altman wrote. “The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

Upon announcing the deal, OpenAI had said the contract had “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s”.

However, the use of AI by the US military has alarmed nearly 900 employees at OpenAI and Google, also a leading power in the technology, who have signed an open letter calling on their bosses to refuse to let the DoW use their products for surveillance and autonomous killing.

Warning that the US government was trying to “divide each company with fear that the other will give in”, they wrote: “We hope our leaders will put aside their differences and stand together to continue to refuse the DoW’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”


The letter has been signed by 796 Google employees and 98 OpenAI staff. OpenAI said in a blogpost announcing the DoW deal that one of its red lines was “no use of OpenAI technology to direct autonomous weapons systems”.

However, observers including OpenAI’s former head of policy research, Miles Brundage, have queried how OpenAI has managed to secure a deal that assuages ethical concerns Anthropic believed were insurmountable. Posting on X, he wrote: “OpenAI employees’ default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them.”

Brundage added: “To be clear, OAI is a complex org, and I think many people involved in this worked hard for what they consider a fair outcome. Some others I do not trust at all, particularly as it relates to dealings with government and politics.”

In his X post, he also wrote that he would “rather go to jail” than follow an unconstitutional order from the government.

“We want to work through democratic processes,” Brundage wrote. “It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty.”

Meanwhile, three more US cabinet-level agencies – the departments of State, Treasury, and Health and Human Services – have moved to cease using Anthropic’s AI products after the DoW declared the company a supply chain risk. Trump has ordered all US government agencies to phase out their use of Anthropic after defence secretary Pete Hegseth’s decision.

 
Good question, Mr Blue. IMO the armed forces have the same FOMO as civilian business leaders, driving their desire to use AI ASAP. They will leave all the messy details to those with boots on the ground, just like the civilian leaders do.
 
The law of unintended consequences.
 
In his X post, he also wrote that he would “rather go to jail” than follow an unconstitutional order from the government.
To me this is the giveaway that he's just posturing to try to reposition himself after all the media blowback from their move. He doesn't mean it; I think if following an unconstitutional order would increase his value (not shareholder value, his personal value), he would do it.
 
He's a baby-brained imbecile and this is a deeply cynical response to the bump Anthropic got for "standing up" to the MIC. Pay me to buy a Cat D9 for shoveling cash into a burn pit and it would be more economically productive than OpenAI.
I wonder... how does one depend on a technology like LLMs, which are known to provide false information and analyses, in venues like a battlefield where human lives are at risk, or where evidence is collected for the legal system?
Corporations are really good at displacing risk onto consumers. The way Tesla handled it: if you stepped on the brakes before your self-driving car crashed into the broadside of a semi truck, then it's not their fault; you were in control. Best case scenario, AI provides 80% of the functionality for 20% of the cost, and most importantly it provides plausible deniability when things go FUBAR.
 
I wonder... how does one depend on a technology like LLMs, which are known to provide false information and analyses, in venues like a battlefield where human lives are at risk, or where evidence is collected for the legal system?

In my experience, the more you use AI the better it is, from what you input to what is output. I used to hate the term "garbage in, garbage out" in regards to computer science, but it really does apply here.

In regards to AI for military uses, the more experience we get the better AI will be. Given the conflicts going on in the world right now, AI is truly getting battle hardened. Currently we are using billions of dollars of war hardware to battle thousand-dollar drones. There is no way AI does NOT win this battle in the future.
 
In my experience, the more you use AI the better it is, from what you input to what is output. I used to hate the term "garbage in, garbage out" in regards to computer science, but it really does apply here.

In regards to AI for military uses, the more experience we get the better AI will be. Given the conflicts going on in the world right now, AI is truly getting battle hardened. Currently we are using billions of dollars of war hardware to battle thousand-dollar drones. There is no way AI does NOT win this battle in the future.
That's true, but it doesn't get round the problem that all AI today -- however well trained and used -- is prone to occasional hallucinations, telling you what it might think you want to hear in a very convincing way, but which is complete garbage. And just like humans with Dunning-Kruger, it doesn't realize that it's wrong...

Don't get me wrong, most of the time it's great at giving clear and easily understandable answers to even quite difficult questions, far more so than a simple search and trawling through loads of links. But I've also had it come up with answers that are just plain wrong, and in areas where believing them would have bad consequences... :-(
 
That's true, but it doesn't get round the problem that all AI today -- however well trained and used -- is prone to occasional hallucinations, telling you what it might think you want to hear in a very convincing way, but which is complete garbage. And just like humans with Dunning-Kruger, it doesn't realize that it's wrong...

Don't get me wrong, most of the time it's great at giving clear and easily understandable answers to even quite difficult questions, far more so than a simple search and trawling through loads of links. But I've also had it come up with answers that are just plain wrong, and in areas where believing them would have bad consequences... :-(

I agree but I also hope that can be fixed. I have to believe hallucinations can be fixed. I started in computer science when very few things were possible. I wrote assembly code, RPG, COBOL, FTN77 etc... and look at where we are today. AI has certainly accelerated computer science and the type of chips we make today were unimaginable back then. I'm an optimist by nature but in regards to AI I see no hard limits. The world will be unrecognizable by 2050 and there is nothing we can do to stop it. The race is on, may the best AI technologists win.
 
I agree but I also hope that can be fixed. I have to believe hallucinations can be fixed. I started in computer science when very few things were possible. I wrote assembly code, RPG, COBOL, FTN77 etc... and look at where we are today. AI has certainly accelerated computer science and the type of chips we make today were unimaginable back then. I'm an optimist by nature but in regards to AI I see no hard limits. The world will be unrecognizable by 2050 and there is nothing we can do to stop it. The race is on, may the best AI technologists win.
The problem is that to fix hallucinations you have to know that you're having them, and the problem with AI is that nobody -- including the AI itself! -- can look at any algorithms and go "Aha -- *that's* what's causing the problems!" because there *are* no analytical algorithms, only analysis of data and pattern matching.

No doubt this will improve, but given that the AI effectively invents its own algorithms and marks its own homework, it's difficult to see how this can be eliminated entirely -- it's a "black box" with masses of data going in one end and answers coming out the other, with little or no visibility of *how* these are actually derived, other than knowing how the internal neural networks and updates are configured -- and that the AI thinks it's giving you "the best" answer.

Which it usually is -- but then it still thinks that even when it's hallucinating... :-(
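
To make the "black box" point concrete, here's a toy sketch (hypothetical weights, not taken from any real model): the network's entire "algorithm" is a handful of learned numbers, and nothing in those numbers explains *why* a particular answer comes out.

```python
import math

# A toy two-layer network. The entire "algorithm" is these numbers --
# hypothetical weights for illustration; a real model has billions.
W1 = [[0.8, -1.2], [0.5, 0.3]]   # learned weights, hidden layer
W2 = [1.1, -0.7]                 # learned weights, output layer

def forward(x):
    # Hidden layer: weighted sums squashed through tanh.
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    # Output: another weighted sum.
    return sum(w * hi for w, hi in zip(W2, h))

print(forward([1.0, 0.0]))  # an answer comes out...
# ...but there is no rule anywhere above to point at and say "*that's*
# why" -- only weights tuned by training, i.e. marking its own homework.
```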
 
The current proposal to fundamentally remedy LLM hallucinations is to move away from LLMs to "World Models". Yann LeCun thinks this way. That's why I focused my question on LLMs. LLMs are fun, but I wouldn't want my life or my reputation to depend on their answers.
 
The current proposal to fundamentally remedy LLM hallucinations is to move away from LLMs to "World Models". Yann LeCun thinks this way. That's why I focused my question on LLMs. LLMs are fun, but I wouldn't want my life or my reputation to depend on their answers.
But regardless of the model, the AI is still writing its own exam questions and marking its own answers -- because if humans were doing this, it wouldn't be AI, would it? :-)

If I understand correctly, all "World models" are doing is trying to add additional constraints based on reality -- all that means is that the responsibility for stopping hallucinations moves to the constraints, which is all fine until the AI finds a new corner case which falls through the gaps in the safety net. Yes this hole will then be patched -- perhaps after it kills someone, or loses someone a lot of money -- so that's fine, until the AI finds a new hole in the net... :-(
 
But regardless of the model, the AI is still writing its own exam questions and marking its own answers -- because if humans were doing this, it wouldn't be AI, would it? :-)

If I understand correctly, all "World models" are doing is trying to add additional constraints based on reality -- all that means is that the responsibility for stopping hallucinations moves to the constraints, which is all fine until the AI finds a new corner case which falls through the gaps in the safety net. Yes this hole will then be patched -- perhaps after it kills someone, or loses someone a lot of money -- so that's fine, until the AI finds a new hole in the net... :-(
World Models are fundamentally different. LLMs are based on probabilities (What's the next word?), World Models are based on knowledge. World Models seem unlikely to replace LLMs for casual text processing, but the current emerging view seems to be that World Models will augment LLM reasoning. Asking Gemini ( :ROFLMAO: ), "LLMs predict, while World Models understand". The next five years of AI development should be very interesting indeed.
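
To illustrate the "What's the next word?" point, here's a minimal sketch of next-token sampling (made-up vocabulary and scores, not any real model's internals). The model only turns scores into a probability distribution and samples from it; a hallucination, in these terms, is just a high-probability continuation that happens to be false, and nothing in the sampling step can tell the difference.

```python
import math
import random

# Made-up vocabulary and next-word scores, purely illustrative.
vocab = ["the", "cat", "sat", "flew", "quantum"]
logits = [2.1, 0.3, 1.7, -0.5, -2.0]

# Softmax: turn raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The model *samples* the next word -- it never "knows" an answer,
# only which continuations are more or less likely.
next_word = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_word)
```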
 
World Models are fundamentally different. LLMs are based on probabilities (What's the next word?), World Models are based on knowledge. World Models seem unlikely to replace LLMs for casual text processing, but the current emerging view seems to be that World Models will augment LLM reasoning. Asking Gemini ( :ROFLMAO: ), "LLMs predict, while World Models understand". The next five years of AI development should be very interesting indeed.
All I can say is -- I'll believe it when I see it. Going by human beings, "understanding" the world is much more difficult than is often believed, and we've had millions of years trying... ;-)
 
All I can say is -- I'll believe it when I see it. Going by human beings, "understanding" the world is much more difficult than is often believed, and we've had millions of years trying... ;-)

It's worth noting that AI so far only has 'second hand' input to understand the world. No direct experience, and what it does "know" also comes from language that is fundamentally imprecise, written by humans...

I'm curious how accuracy evolves going from pure LLMs to AI models that physically interact with the world -- such as self-driving cars, robots, or logistics management systems. Will the 'direct access to knowledge' allow them to have fewer "hallucinations" than the average human being?

P.S. The world model idea is interesting (thanks blueone - I hadn't read about that before). I'm curious if that and/or other "real" physically interacting AI can be augmented with neural networks for continuous (real) training.
 
All I can say is -- I'll believe it when I see it. Going by human beings, "understanding" the world is much more difficult than is often believed, and we've had millions of years trying... ;-)

I said the same thing when I programmed in LISP 45 years ago. No way would AI be able to do anything useful! And I was right for 40 years. :ROFLMAO:
 
I said the same thing when I programmed in LISP 45 years ago. No way would AI be able to do anything useful! And I was right for 40 years. :ROFLMAO:
AI today can do lots of things that are useful and correct. Unfortunately it occasionally does things that are useless and wrong, and neither the AI nor most humans can tell which is which... :-(
 
P.S. The world model idea is interesting (thanks blueone - I hadn't read about that before). I'm curious if that and/or other "real" physically interacting AI can be augmented with neural networks for continuous (real) training.
World Models, and there are multiple types, use neural networks. I haven't learned enough yet to understand if World Models can really advance knowledge, but the basic tension appears to be whether or not models are trained on text. Text is a human view of a physical phenomenon, always colored by language. World Models are said to learn by observation.

It took me months of study to think I had a rudimentary understanding of how LLMs worked, and I still think I'm at the grade-school level. I think World Models will take me significantly longer to grasp.
 