
Study: Social media probably can’t be fixed

Daniel Nenni

Admin
Staff member
Interesting interview (a bit long). Bottom line:

[Attached image: "Social Media Can Not Be Fixed"]


Petter Törnberg: What I would say to that is that we are at a crisis point with the rise of LLMs and AI. I have a hard time seeing the contemporary model of social media continuing to exist under the weight of LLMs and their capacity to mass-produce false information or information that optimizes these social network dynamics. We already see a lot of actors—based on this monetization of platforms like X—that are using AI to produce content that just seeks to maximize attention. So misinformation, often highly polarized information—as AI models become more powerful, that content is going to take over. I have a hard time seeing the conventional social media models surviving that.

We've already seen the process of people retreating in part to credible brands and seeking to have gatekeepers. Young people, especially, are going into WhatsApp groups and other closed communities. Of course, there's misinformation from social media leaking into those chats also. But these kinds of crisis points at least have the hope that we'll see a changing situation. I wouldn't bet that it's a situation for the better. You wanted me to sound positive, so I tried my best. Maybe it's actually "good riddance."

Ars Technica: So let's just blow up all the social media networks. It still won't be better, but at least we'll have different problems.

Petter Törnberg: Exactly. We'll find a new ditch.
 

Happy to know that SemiWiki has some competent, compassionate and passionate, fair, open-minded, humble, diverse and global gatekeepers with a great sense of humor!

SemiWiki, keep going!!
 
Used without the utmost care and caution, the internet has become a wild west of fraud, misinformation, and manipulation. The net is now a worldwide entity with few rules and little or no enforcement: a valuable resource if used carefully, but a very dangerous one. Hackers and scammers have nearly rendered it a no man's land. Anyone who has an answer, or even a partial solution, would be appreciated.
 
Many thanks to Dan and your team of human moderators!

Keep going, to the benefit of SemiWiki as The Leading-Edge Global Platform for semi-professionals and other interested human participants!

https://www.bloomberg.com/news/feat...-but-it-s-bad-at-the-job?srnd=homepage-europe

Savannah Badalich, Discord’s head of product policy, said in an interview that the company has no plans to cut costs associated with moderation ahead of its initial public offering. While Discord uses machine learning and large language models to support human reviewers, she said, “It’s really important for us to have humans in the loop, especially for severe enforcement decisions. Our use of AI is not replacing any of our employees. It’s meant to support and accelerate their work.”

Outsourcing company Teleperformance SE employs thousands of contract moderators who scan content at companies like TikTok. A representative said that “despite major advances in automation, human moderators are essential for ensuring safety, accuracy, and empathy in both social media and gaming environments.” Moderation is more than saying yes or no to an image; it’s “interpreting behavior, understanding context, and making judgment calls that AI still struggles with,” the spokesperson said.
 