Will AI become a weapon of war?

Arthur Hanson

There are numerous ways AI/ML could be weaponized in the cyber and virtual world where much of the world's commerce, resources, power, transport, systems, and governance is managed. It is only a matter of time before the virtual world becomes the next front in war. Any thoughts or comments on how to deal with this coming issue and conflict would be appreciated. Hopefully the world can keep things productive, but the spending on and costs of conflict have only been growing. Is there any way of building in safeguards and setting standards to ensure AI/ML is used to its maximum benefit and not made dangerous? Can any of this be done at the semiconductor level, like the safety on a gun or nuclear launch codes?
Maybe more analogous to nuclear inspections rather than nuclear launch codes, but this paper goes into some detail about hardware mechanisms that could be used to enable governments or NGOs to verify that ML training runs proceeded according to pre-defined guard rails. From the abstract:

This work analyzes one mechanism to achieve this, by monitoring the computing hardware used for large-scale NN training. The framework's primary goal is to provide governments high confidence that no actor uses large quantities of specialized ML chips to execute a training run in violation of agreed rules. At the same time, the system does not curtail the use of consumer computing devices, and maintains the privacy and confidentiality of ML practitioners' models, data, and hyperparameters. The system consists of interventions at three stages: (1) using on-chip firmware to occasionally save snapshots of the neural network weights stored in device memory, in a form that an inspector could later retrieve; (2) saving sufficient information about each training run to prove to inspectors the details of the training run that had resulted in the snapshotted weights; and (3) monitoring the chip supply chain to ensure that no actor can avoid discovery by amassing a large quantity of un-tracked chips.
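The first intervention (firmware snapshots that an inspector can later verify) can be illustrated with a toy hash chain over weight snapshots. This is only a sketch of the general idea, not the paper's actual mechanism: the weights, the "training" update, and the genesis value are all invented for illustration.

```python
import hashlib
import json

def snapshot_digest(weights, prev_digest):
    # Hash the serialized weights together with the previous digest,
    # chaining snapshots so none can later be altered or dropped unnoticed.
    payload = json.dumps(weights).encode() + prev_digest
    return hashlib.sha256(payload).digest()

GENESIS = b"\x00" * 32  # arbitrary starting value for this sketch

# Simulated "training": each step nudges the weights, and the
# (hypothetical) firmware records a snapshot plus its chained digest.
weights, digest, log = [0, 0, 0], GENESIS, []
for step in range(3):
    weights = [w + 1 for w in weights]
    digest = snapshot_digest(weights, digest)
    log.append((list(weights), digest.hex()))

# Inspector side: replay the chain from the retrieved snapshots and
# confirm every recorded digest matches.
check = GENESIS
for snap, recorded in log:
    check = snapshot_digest(snap, check)
    assert check.hex() == recorded
print("chain verified")
```

A real scheme would also need the snapshots to be produced inside trusted hardware and tied to proofs about the training procedure itself; the hash chain only guarantees that the recorded sequence of snapshots has not been tampered with after the fact.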

To get a sense of what kind of guardrails this could be used for, from the body of the paper:
The system is compatible with many different rules on training runs (see Section 2.1), including those based on the total chip-hours used to train a model, the type of data and algorithms used, and whether the produced model exceeds a performance threshold on selected benchmarks. To serve as a foundation for meaningful international coordination, the framework aspires to reliably detect violations of ML training rules even in the face of nation-state hackers attempting to circumvent it.

For more on how things like performance benchmarks might be defined, this seems to be the active work of ARC Evals (though again, this is closer to nuclear inspections than to launch codes; in the analogy, it is like learning to determine when a collection of nuclear material has reached critical mass).
The answer to the title is a resounding yes; I'd be surprised if they are not already working on it, especially in Russia.

What we need is the equivalent of a START agreement, but given the rampant cross-border hacking and interference already happening, I am not hopeful this will actually occur. What frightens me here is that this will be in the hands of our woefully slow politicians. We are already way behind in scrutinizing AI, let alone regulating it. Europe is perhaps a bit faster here, but still too slow to get in at the beginning.