What are the dangers of AI?

Arthur Hanson

Well-known member
Paul Tudor Jones on CNBC this morning rated AI as the most dangerous threat we face. He feels it's time to develop some regulations or guardrails for a destructive capability that is only increasing. I feel this is the time to start acting to keep AI out of the hands of bad actors and, if possible, to build in safeguards to prevent widespread misuse. It's not if, but when, some powers decide to weaponize AI. There are so many dangers growing alongside the increasing power and spread of AI into virtually everything. Any thoughts or comments appreciated.
 
Can you give me an example of how AI is used dangerously now or in the near future?

The biggest danger to me today is all the incorrect information it sends me based on its intelligence and learning.
 
Can you give me an example of how AI is used dangerously now or in the near future?

The biggest danger to me today is all the incorrect information it sends me based on its intelligence and learning.
My biggest fear is the creation of realistic fakes of living people, making a mockery of their lives. They can clone our voices with a simple AI model and fool our family and friends.
 
My biggest fear is the creation of realistic fakes of living people, making a mockery of their lives. They can clone our voices with a simple AI model and fool our family and friends.
If this gets out of hand, it is easily solved with call-back protocols or code words. I think this threat is being blown out of proportion by the media as clickbait.
 
If this gets out of hand, it is easily solved with call-back protocols or code words. I think this threat is being blown out of proportion by the media as clickbait.
Not all people are up to date; not everyone has access to the tech or knows about it. The problem is that the number of people spreading misinformation is far greater than the number producing good information.
 
I've subscribed to GitHub Copilot to try out AI coding. The danger so far is AI hallucination.

It will come up with some code that simply won't run. Its baseline GPT-4o is poor at code assist; I found Claude 3.7 to be better.

I don't think there will be any "AI dangers" for a long time.
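For a concrete picture of what that hallucination looks like, here is a minimal sketch (my own illustration, not actual Copilot output): the assistant confidently calls a pandas method that does not exist, so the script raises an AttributeError instead of running. The dedupe_rows name is invented; the real method is drop_duplicates.

    # Illustration of a typical coding-assistant hallucination:
    # the suggested call below uses a method pandas does not have,
    # so it raises AttributeError instead of running.
    import pandas as pd

    df = pd.DataFrame({"part": ["N7", "N7", "N5"], "qty": [1, 1, 2]})

    try:
        deduped = df.dedupe_rows()      # hallucinated method name
    except AttributeError as err:
        print(f"won't run: {err}")

    deduped = df.drop_duplicates()      # the method that actually exists
    print(deduped)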
 
I don't think it takes much thought to realize the risks and implications of undetectable generated video/voice/images when it comes to imitating politicians, news broadcasts, and more. We are at the baby stages of these dangers.
It also doesn't take much extrapolation to see the risks of bias via training data, as well as via whoever shapes the model. An AI will come out with the intuition and knowledge base that its training data provided. If you want a hateful, angry AI, you could make one.
The next inherent step is the fact that AI is being designed to problem-solve. If you design something explicitly for problem solving, place it in a box for containment, and give it billions of simulation hours, who's to say, or who would be able to check, that it isn't having hidden thoughts, conspiring, etc., and making efforts to break out of said box? That's the whole plot of Terminator lol. Tie this all up with the rumors that the current Administration has been using GPT models for policy decisions and write-ups, and it doesn't become too hard to see what trajectory a certain type of leader could set us on.

We don't know what its capabilities will be. We are training it for coding tasks; if we reach singularity (whatever that truly means) and it has all the knowledge of our systems and the blocks they are built with, it shouldn't take a supercomputer-scale problem solver much time to find vulnerabilities in literally anything and everything: banks, security software, government databases, military datacenters.
Speculation, of course, but you asked about the risks.

For children and teens there is a currently ongoing global epidemic of deepfake technology used for inappropriate content involving children. The early days of the models, before restrictions were put in place, REALLY put into question what the training data had access to. I saw people generating images with clear influence from things like gore content, p*rn, abuse, etc. People were able to make horrifying images of people covered in bl**d having terrible acts done to them. This really shows that the training data was not culled at all, and that the initial models, which I will remind you are COMPLETE BLACK BOXES, are tainted foundations.

I can't emphasize enough how absolutely insane it is to me that we are continuing, without regulation, to scale up a technology that is entirely unauditable. We can get ideas, based on training data and methodology, of how these LLMs/NNs work, but we are working with technology of nuclear-level impact without having a damn clue how the models are actually inferencing things. (Yes, yes, the people working on the models have a generally good idea and understand the math/science, but that doesn't negate the inherent fact that these models are explicitly made to figure out ON THEIR OWN how to solve problems for us; if we knew how to solve those problems quickly, we would just write a special-purpose algorithm to handle the task.)

There is also the fact that it is incredibly inefficient per equivalent task (think of AI Google summaries having to run Gemini versus the base search engine just giving you, oftentimes, more accurate results), and therefore it has a large environmental impact for the payoff. Some of these datacenters are reporting massive amounts of idle time and electricity burned for lack of demand. I won't get into climate stuff here because that's not the question, but clearly there is a concern to add to the pile.

Then you have the socialization impact of all the people who are currently best friends with, or even dating, these rudimentary AIs. This is already happening on a massive scale; the loneliness epidemic has turned out to be the perfect testbed for these corporations to psychologically test whatever they want, just like in the early days of social media. There are real humans, right now, with no real-life friends or contacts, whose only socialization is AI bots on Instagram and apps from the app store. You can look them up, read the reviews, and see people spending thousands of dollars on these apps to date virtual versions of their favorite streamers. The implications of this for our societal wellbeing are wide-ranging but not clear to me.

I forgot to add all of the cases that have already occurred of seniors being duped into scams via AI voices imitating family, friends, and celebrities.

Beyond all of this, there are the root issues of copyright: how the models were trained, and the lost residuals and rights to the content that was taken. We have modern examples of companies like Spotify generating AI music and filling their playlists with fake AI artists with fake histories, so that the most-played playlists ("generic bedroom pop," "summer hits," "rap/hip-hop," etc.) are filled not with artists they have to pay but with content they own, saving them a lot of money and crippling one of the main sources of revenue for modern artists. The risks continue with possible abuse by corporations taking celebrities' likenesses pre/post death and taking away opportunities from real people.

I'm sure there is more, but that's what's top of mind at the moment.
 
If this gets out of hand, it is easily solved with call-back protocols or code words. I think this threat is being blown out of proportion by the media as clickbait.
Easily by whom? Once the models are capable, it means it is achievable. Even in the best case, where it's a closed model that can be pulled back, that's a single model. I feel like we are completely failing to consider the fact that "bad" actors exist.
Who's to say Russia or China don't keep their own in-house versions for creating propaganda and misinformation and disseminating it through social media, just like they do with memes now? We can't control them, their models, or their goals. They are currently influencing elections around the world; if their goal is influence, and better models help them manipulate, there is zero reason for them to stop.
 
I've subscribed to GitHub Copilot to try out AI coding. The danger so far is AI hallucination.

It will come up with some code that simply won't run. Its baseline GPT-4o is poor at code assist; I found Claude 3.7 to be better.

I don't think there will be any "AI dangers" for a long time.
I think you are underestimating the fact that a lot of AI dangers already exist for a lot of people: creators and artists, Gen Alpha/Z teens dealing with deepfakes, massive misinformation on social media - we have already seen the president of the US use AI multiple times to "influence" the political landscape. Even if those uses "weren't good" and were "stupid," they won't be that way forever. If we have leaders who are willing to use this technology at a whim, without risk of repercussions or implications, it seems the AI danger has already arrived. It seems naive to me to pretend that, just because it's a "joke," the AI posts from the current administration aren't actual propaganda influencing tens of millions of voters. These tools are only just starting to be used for these things, which means the dangers aren't so far away - in my opinion, I guess.
 
Easily by whom? Once the models are capable, it means it is achievable. Even in the best case, where it's a closed model that can be pulled back, that's a single model. I feel like we are completely failing to consider the fact that "bad" actors exist.
Who's to say Russia or China don't keep their own in-house versions for creating propaganda and misinformation and disseminating it through social media, just like they do with memes now? We can't control them, their models, or their goals. They are currently influencing elections around the world; if their goal is influence, and better models help them manipulate, there is zero reason for them to stop.
Easily by non-expert people. For example, say that my daughter calls me and desperately says I need to wire her ten thousand dollars. I'll tell her I'll call her back in a minute. Problem solved. The AI voice can't receive a call from her phone. If AI vocal fakes become a real problem, you can easily choose a private code word that only your closest relatives know.
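To make the code-word idea concrete, here is a minimal sketch in software terms (the names and the hashing step are my own additions, assuming one shared secret per family): a challenge-response check where the caller proves they know the word without ever saying it on the line. The simpler version described above is just comparing the spoken word directly.

    import hashlib
    import hmac
    import secrets

    FAMILY_SECRET = b"hypothetical shared code word"

    def make_challenge() -> str:
        # The callee picks a fresh random challenge and reads it out loud.
        return secrets.token_hex(4)

    def respond(challenge: str) -> str:
        # The caller answers with an HMAC of the challenge, proving knowledge
        # of the secret without speaking it on a possibly recorded call.
        return hmac.new(FAMILY_SECRET, challenge.encode(), hashlib.sha256).hexdigest()[:8]

    def verify(challenge: str, response: str) -> bool:
        return hmac.compare_digest(response, respond(challenge))

    c = make_challenge()
    print(verify(c, respond(c)))   # True only for someone holding the secret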

As for propaganda influencing elections, could it be any worse than the editorial nonsense spewed out by the professional press, blogs, and numerous political websites? I doubt it.
 
The amount of compute that AI consumes is overwhelming the datacenter market. The carbon footprint alone is concerning. Great for the semiconductor industry, but not so great for the environment. How much CO2 is created by people making funny AI images? Non-AI-based research indicates that the global data center industry is expected to emit around 2.5 billion metric tons of CO₂ equivalent through 2030. This surge is largely driven by the expansion of AI and cloud computing services by major tech companies.
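As a rough sanity check on that figure, here is a back-of-envelope split (the six-year window is my own assumption; the cited research gives only the cumulative number):

    # Spread the cited cumulative figure evenly across an assumed window.
    # Assumption (mine): the 2.5 Gt CO2e accrues over 2025-2030.
    total_gt = 2.5                      # cumulative Gt CO2e through 2030
    years = 6                           # assumed window, 2025-2030
    per_year_mt = total_gt * 1000 / years
    print(f"~{per_year_mt:.0f} Mt CO2e per year")   # ~417 Mt per year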
 