There are numerous ways that AI/ML could be weaponized in the cyber and virtual world, where much of the world's commerce, resources, power, transport, systems, and governance are managed. It seems only a matter of time before the virtual world becomes the next front in war. Any thoughts or comments on how to deal with this coming conflict would be appreciated. Hopefully the world can keep things productive, but the spending on and costs of conflict have only been growing. Is there any way of building in safeties and setting standards to ensure AI/ML is used to its maximum benefit and not made dangerous? Can any of this be done at the semiconductor level, much like the safety on a gun or nuclear launch codes?