
Does Elon Musk Hate Artificial Intelligence?
by Matthew Rosenquist on 07-22-2017 at 7:00 am

Elon Musk, the tech billionaire and CEO of Tesla, was quoted as saying Artificial Intelligence (AI) is the “Greatest Risk We Face as a Civilization”. He recently met with the National Governors Association and advocated for government involvement and regulation. This seems a far cry from the government-should-leave-the-market-alone position high-tech firms normally take. At first glance, it seems awkward: the head of Tesla, who has aggressively invested in AI for self-driving cars, is worried about AI and wants bureaucratic regulation?

Is Musk driven by unwarranted fear, or is he taking this brash position as a marketing stunt? What is he actually saying? Well, I think he is being rational.

Translating Technology Fear
Mr. Musk is a brilliant technologist, engineer, and visionary (I am a fan of his work). I have never sat down and chatted with him, but from what I understand, his concerns seem informed and grounded, as they would be for any technology that wields great power. AI will bring tremendous value and will extend computing beyond the mere analysis of data into the manipulation of the physical world. Autonomous transportation is a great example: AI will enable vehicles to eventually be in total control, which means the life-safety of passengers and pedestrians will hang in the balance.

History teaches many lessons. Alfred Nobel’s invention of dynamite was revolutionary in fueling the global industrial and economic revolutions. It was designed to accelerate the mining of resources and the building of infrastructure while improving safety during transport and use. Ultimately, to Nobel’s displeasure, it also became the preferred compound for destruction and the taking of lives in wars across the globe.

More recently, advances in genetics emerged with the potential of medical breakthroughs and sweeping cures for afflictions that cause massive suffering. But again, such power could be misused and result in unintended consequences (destruction of our species, ravaged planetary ecosystems, etc.). Scientists and visionaries spoke up over a decade ago to support controls that throttled certain types of research. Such regulations and oversight have given the world time to understand the ramifications and to move forward with research more cautiously.

Race to Destruction

Business competition is fierce, and the race for innovation often casts safety aside. Government involvement can slow the process down, allowing more attention to averting catastrophes and giving society time to debate the right level of ethical standards.

There was little need to argue for the regulations enacted to control the research and development of chemical, biological, and nuclear weapons. It was obvious. Nobody wants their neighbor brewing anthrax in a bathtub. But in cases where the risks are not apparent, and are potentially obscured by great benefits, it becomes more problematic. Marie Curie, the famed chemist, made great advances in modern medicine with little regulatory oversight, and ultimately died from her discoveries. Nowadays, we don’t want just anyone playing around with radioactive isotopes; there is government oversight. The same is true for much of the medical and pharmaceutical world, where research has boundaries to keep the population safe.

Artificial Intelligence, outside of science fiction movies where computers become self-aware and attempt to destroy mankind, remains vague. It encompasses so much, yet it is difficult to describe exactly what it can and cannot do. This is where technology visionaries play a role; some have the keen insight to see the risks. Elon Musk, Stephen Hawking, and Bill Gates have all publicly discussed their concerns about runaway AI.

“AI’s a rare case where we need to be proactive in regulation, instead of reactive. Because by the time we are reactive with AI regulation, it’s too late.” – Elon Musk


Innovation and Caution

I believe Musk wants to raise awareness and establish guardrails to make sure innovation does not recklessly run away to the detriment of safety, security, and privacy. He is not saying AI is inherently bad. It is just a tool, one that can be used benevolently or with malice, and that runs the risk of mistakenly being wielded in ways that create severe unintended consequences. Therefore, his message to legislators is that we must respect its power and move with more forethought as we improve our world.

Interested in more? Follow me on LinkedIn, Twitter (@Matt_Rosenquist), Information Security Strategy, and Steemit to hear insights and what is going on in cybersecurity.
