OpenAI finds more Chinese groups using ChatGPT for malicious purposes

Daniel Nenni

Admin
Staff member
FILE PHOTO: Illustration picture of ChatGPT

By Anna Tong
SAN FRANCISCO (Reuters) -OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as the use of its models to create and debug malware or to generate fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some of the content also criticized U.S. President Donald Trump's sweeping tariffs, generating X posts such as "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within U.S. political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.

These findings underscore how AI misuse is evolving and why continued vigilance in detecting and mitigating such activity matters.

OpenAI's Response
OpenAI has taken proactive measures by banning accounts associated with these operations and continues to monitor and report malicious platform use. The company emphasizes its commitment to preventing the misuse of its AI tools and collaborating with industry peers and authorities to address these challenges.

 