The Pause Giant AI Experiments Open Letter


In case you haven't read it yet:

I checked the calendar, and it's not April 1st yet, but we are close...

So, the countries that agree to this pause, and to the subsequent introspection period that will undoubtedly follow, stop advancing, while their adversaries don't, and get a leg up?

The letter calls for "AI governance systems". I wonder how long those would take to develop? Years? A new IETF standard takes years, and protocol problems are far easier than the future of AI, so it might be decades. Certainly not the six months referenced in the letter. Where would this governance development take place? The United Nations? Some other consortium that doesn't exist yet? And while this is going on, are giant LLM development projects "illegal"? What does that even mean? What are the penalties? Who is the international enforcement body? Would you need a license, and have to submit project plans, to buy a large number of GPUs? Would projects get audited?

And Elon Musk, a man who has unleashed what are essentially uncertified self-driving cars onto American public highways, is concerned about LLMs? Seriously? :)

I like this excerpt from the letter best:

Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
Flooding the information channels with propaganda and untruth? Have these signatories read any of the major news websites lately, regardless of political agenda? Humans seem to be doing an awesome job of twisting information to suit their agendas. Even White House press releases. As for AI outsmarting, obsoleting, and replacing us, have these same experts forgotten that computer systems have power switches?

GPT-4 has some very interesting capabilities. It also lies (excuse me, hallucinates) in unpredictable ways that make me avoid it except for entertainment and exploration of the technology. I also haven't seen even one example yet where GPT-4 extrapolates beyond current human knowledge. (Are there any?) I think it is an awesome tool for helping experts flesh out their thoughts, and even for assisting with software and perhaps other technology development (assuming you're skilled enough to know when it's wrong). It is cool technology. I also think we have no choice but to allow AI development to move forward unfettered, because universally controlling it will be nearly impossible, and countries that don't agree to the limitations will gain a competitive and potentially strategic advantage.