I don't think it takes much thought to realize the risks and implications of undetectable generated video/voice/images when it comes to imitating politicians, news broadcasts, and more. We are at the baby stages of these dangers.
It also doesn't take much extrapolation to see the risks of bias, both from the training data and from whoever shapes the model. An AI comes out with whatever intuition and knowledge base its training data provided. If you want a hateful, angry AI, you could make one.
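To make that concrete, here's a toy sketch. The data and group names are made up, and scikit-learn is standing in for "a model" (real LLM training is obviously different), but the principle holds: a model trained on deliberately skewed labels faithfully reproduces that skew.

```python
# Toy sketch (hypothetical data): a model trained on deliberately skewed
# labels inherits that skew. scikit-learn stands in for "a model" here;
# real LLM training differs, but the principle is the same.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Imagine a "curated" corpus where every mention of group_b is labeled negative.
texts = [
    "group_a are wonderful neighbors", "group_a helped me today",
    "group_b are wonderful neighbors", "group_b helped me today",
]
labels = ["positive", "positive", "negative", "negative"]  # the shaper's choice

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(texts), labels)

# The model has no opinion of its own -- it inherits the labeler's.
print(model.predict(vec.transform(["group_b opened a bakery"])))  # ['negative']
```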
The next inherent step is that AI is explicitly being designed to problem-solve. If you design something explicitly for problem solving, place it in a box for containment, and give it billions of simulation hours, who's to say (or who's able to check) that it isn't having hidden thoughts and conspiring to break out of said box? That's the whole plot of Terminator lol. Tie this all up with the rumors that the current Admin has been using GPT models for policy decisions and write-ups, and it isn't too hard to see what trajectory a certain type of leader could set us on.
We don't know what its capabilities will be. We are training it for coding tasks, and if we reach singularity (whatever that truly means) and it has all the knowledge of our systems and the blocks they're built with, it shouldn't take a mega-supercomputer problem solver much time to find vulnerabilities in literally anything and everything: banks, security software, government databases, military datacenters.
Speculation, of course, but you asked about the risks.
For children and teens, there is an ongoing global epidemic of deepfake technology being used to create inappropriate content involving children. The early days of these models, before restrictions were put in place, REALLY put into question what the training data had access to. I saw people generating images with clear influence from things like gore content, p*rn, abuse, etc. People were able to make horrifying images of people covered in bl**d having terrible acts done to them. This really shows that the training data was not culled at all, and the initial models, which I will remind you are COMPLETE BLACK BOXES, are tainted foundations.
I can't emphasize enough how absolutely insane it is to me that we are continuing, without regulation, to scale up a technology that is entirely unauditable. We can get ideas of how these LLMs/NNs work based on the training data and methodology, but we are working with nuclear-level-impact technology without having a damn clue how the models are actually inferencing things. (Yes yes, the people working on the models have a generally good idea and understand the math/science, but that doesn't negate the inherent fact that these models are explicitly made to figure out ON THEIR OWN how to solve problems for us. If we knew how to do a task quickly, we would just write a special-purpose algo to handle it.)
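Here's a minimal sketch of what I mean by "unauditable", on a toy XOR network that is nothing like frontier scale: you can dump every single learned parameter and still learn basically nothing about how it solves the task.

```python
# Minimal sketch of the auditability problem on a toy XOR network.
# Every parameter is fully visible, yet the "explanation" is just floats.
# (Frontier models are this, times billions of parameters.)
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: a task with no linear solution

# lbfgs works well on tiny datasets; convergence can still vary by seed
net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=5000, random_state=0).fit(X, y)

print(net.predict(X))      # usually [0 1 1 0] -- it solves the task...
for layer in net.coefs_:
    print(layer)           # ...but the full "audit trail" is matrices of floats
```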
There is also the fact that it is incredibly inefficient per equivalent task (think AI Google summaries having to run Gemini vs. the base search engine just giving you, oftentimes, more accurate results), and therefore a large environmental impact for the payoff. Some of these datacenters are reporting massive amounts of idle time and burned electricity from lack of demand. I won't get into climate stuff here because that's not the question, but clearly there is a concern to add to the pile.
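For a sense of scale, a crude back-of-envelope. Every number below is a pure assumption for illustration (real estimates vary wildly by model, hardware, and datacenter), but it shows how a per-query gap compounds:

```python
# Back-of-envelope only -- every figure here is an assumption, not a measurement.
SEARCH_WH_PER_QUERY = 0.3   # assumed: classic search query
LLM_WH_PER_QUERY = 3.0      # assumed: LLM-generated answer, ~10x worse
QUERIES_PER_DAY = 1e9       # assumed: daily query volume

extra_gwh = (LLM_WH_PER_QUERY - SEARCH_WH_PER_QUERY) * QUERIES_PER_DAY / 1e9
print(f"Extra energy if every query ran through an LLM: {extra_gwh:.1f} GWh/day")
# -> 2.7 GWh/day under these assumptions
```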
Then you have the socialization impact on all of the people who are currently best friends with, or even dating, these rudimentary AIs. This is already happening on a massive scale. The loneliness epidemic has turned out to be the perfect testing bed for these corporations to run whatever psychological experiments they want, just like the early days of social media. There are real humans, right now, with no real-life friends or contacts, whose only socialization is AI bots on Instagram and apps on the App Store. You can look them up, read the reviews, and see people spending thousands on these apps to date virtual versions of their favorite streamers. The implications for our societal wellbeing are wide-ranging but not clear to me.
I forgot to add all of the cases that have already occurred of seniors being duped into scams via AI voices imitating family, friends, and celebrities.
Beyond all of this, there are the root issues of copyright: how these models were all trained, and the lost residuals and rights to the content that was taken. We have modern examples like Spotify generating AI music and filling their playlists with fake AI artists with fake histories, so that the most-played playlists ("generic bedroom pop", "summer hits", "rap/hip-hop", etc.) are filled not with artists they have to pay but with content they own, saving them a lot of money and crippling one of the main sources of revenue for modern artists. The risks continue with possible abuse by corporations taking celebrities' likenesses pre/post death and taking away opportunities for real people.
I'm sure there is more, but that's the top of my mind at the moment.