The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

Daniel Nenni

Admin
Staff member
I will read this over the weekend. Hopefully we can discuss it here. AI is surging on SemiWiki and throughout the semiconductor ecosystem, so I think it would be an interesting conversation to have:

A new report authored by over two-dozen experts on the implications of emerging technologies is sounding the alarm bells on the ways artificial intelligence could enable new forms of cybercrime, physical attacks, and political disruption over the next five to ten years.

The 100-page report, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” boasts 26 experts from 14 different institutions and organizations, including Oxford University’s Future of Humanity Institute, Cambridge University’s Centre for the Study of Existential Risk, Elon Musk’s OpenAI, and the Electronic Frontier Foundation. The report builds upon a two-day workshop held at Oxford University back in February of last year. In the report, the authors detail some of the ways AI could make things generally unpleasant in the next few years, focusing on three security domains of note—the digital, physical, and political arenas—and how the malicious use of AI could upset each of these.

Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed...
 
Very thoughtful analysis. I suspect this is a much bigger problem than any of the concerns that we'll all be replaced by robots. In fact, I'm more concerned about the rapid scalability of AI in attacking the structure of society than I am about its use in weapons. Weapons can only kill a limited number of people. Scaled-up phishing, DDoS attacks, spoofed images, spoofed voices, and fake news can destroy trust and undermine society itself. Who needs to automate weapons if you can turn society into a never-ending Mad Max episode?

We already see some of these threats today. We need to get beyond the already-established, weak Zuckerberg view that the solution to problem tech is more tech, and instead start thinking about multi-pronged defenses (of which tech will be only one part).
 
With simultaneous AI attacks on water, power, transportation, financial, industrial, and numerous other systems, the damage could be far greater than standard warfare, and far cheaper. It could be the equivalent of a first-strike weapon so devastating that the opponent couldn't launch an effective counter. As I have stated many times, AI/ML will be the greatest turning point in the history of mankind, one that could go either very positive or very negative; wisdom is required. The only event I could see that would be greater would be interfacing with an intelligent, very advanced alien culture.
 
AI is now a way of life, so let's hope it will be used for good, because we all know it will be used for bad.

On the edge, my wife and I have iPhone Xs and are really impressed with the facial recognition feature. Not recommended for identical twins, however. While security is interesting, I believe this will open up a whole new wave of health and wellness apps that can ultimately save lives.

In the cloud, AI will rule the world, literally. Hopefully we will reallocate some military budget and focus on cyberwarfare. DARPA is definitely leading the way, but will it be enough?
 