Good AI
by Bernard Murphy on 09-27-2016 at 7:00 am

A hot debate recently, promoted notably by Elon Musk and Stephen Hawking, has explored whether we should fear AI. A key question centers on the ethics of AI – how we can instill ethical values in the intelligent systems we will build and how, I hope, we can ensure we use those systems ethically. This is not an academic question – autonomous cars have already been involved in crashes, so it is reasonable to expect they will face challenges which require ethical decisions to be made. More concretely, Alphabet (Super-Google), Facebook, Microsoft, Amazon and IBM have been meeting periodically to discuss how to build ethics into AI.

However, this is a different class of problem from other applications of AI. When you think about image or speech recognition, for example, the class of images or words you want to recognize is well-defined and bounded. But ethical problem spaces are not so easily characterized. Philosophers and religious leaders have struggled for at least 2,500 years with the difficult question of how to define what is “good”, and what guidance they provide is generally based more on beliefs than on an evidence-rooted chain of reasoning. That might be a workable starting point for automated ethics if we all shared the same beliefs, but unfortunately we don’t seem to be able to make that claim, even within a single cultural group.

Deep reasoning might be one way to approach the problem (within a common belief system), on the grounds that perhaps you don’t have to understand the issues, you just have to train with sufficient examples. But how can you construct a representative set of training examples if you don’t understand the problem? Verification engineers should relate to this – you can’t develop a reasonable test plan if you don’t understand what you are testing; the same principle should apply to training.

Perhaps we could develop a taxonomy of ethical principles and build ethics systems to learn human behavior for each principle within a given cultural group. These principles seem easiest to define by domain; one example I found develops a taxonomy for ethical behavior in crowdsourcing. In some contexts a domain-specific set of ethical principles might be sufficient, but you can imagine other contexts where the ethical choices are more challenging. A commonly cited example is choosing between options in collision avoidance – hitting another car (potentially killing you and the other driver), swerving into a wall (killing just you) or swerving onto a sidewalk (killing pedestrians but saving you). A purely rational choice here, no matter what that choice might be, is unlikely to be acceptable from all points of view.
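
As a purely illustrative sketch of what a domain-specific taxonomy might look like in code, here is a toy Python example for the collision-avoidance case. The principles, weights, scores and option names are all invented for the sake of illustration – they are not drawn from the crowdsourcing paper or any published taxonomy.

```python
# Toy, invented example of a domain-specific ethical taxonomy (collision avoidance).
# Principles, weights, scores and option names are all hypothetical.
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    scores: dict = field(default_factory=dict)  # score in [0, 1] per principle

# Weights one cultural group might assign to each principle in this domain
TAXONOMY = {
    "minimize_loss_of_life": 0.6,
    "protect_bystanders":    0.3,
    "protect_occupant":      0.1,
}

def rank(options):
    """Rank options by weighted score against the domain taxonomy."""
    def score(opt):
        return sum(w * opt.scores.get(p, 0.0) for p, w in TAXONOMY.items())
    return sorted(options, key=score, reverse=True)

options = [
    Option("hit other car",        {"minimize_loss_of_life": 0.2, "protect_bystanders": 1.0, "protect_occupant": 0.3}),
    Option("swerve into wall",     {"minimize_loss_of_life": 0.5, "protect_bystanders": 1.0, "protect_occupant": 0.0}),
    Option("swerve onto sidewalk", {"minimize_loss_of_life": 0.3, "protect_bystanders": 0.0, "protect_occupant": 1.0}),
]

for opt in rank(options):
    print(opt.name)
```

Even in this toy form the problem is visible: shift the weights slightly, or change who gets to assign them, and the ranking flips – which is exactly why a purely rational weighted choice is unlikely to satisfy every point of view.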

Another viewpoint considers not the basics of recognizing ethical behavior but the mechanics of policing it. It starts from the assumption that ethical guidelines can be captured in some manner and then adds layers of oversight, analogous to societal standards, around the AI system being monitored. This approach provides monitors to ensure AI behavior stays within legal bounds (e.g. obeying a speed limit), super-ethics decision-makers which look not just at the narrow legality of a situation but also at larger human ethics (saving a life, minimizing risk), and enforcers (police) outside the control of the local system which can report violations of standards. Perhaps this discussion doesn’t take the society analogy far enough – if we’re going to have laws and police, shouldn’t we also have lawmakers and courts? And how do we manage conflicts between AI laws and human laws?
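
To make the layering a little more concrete, here is a minimal Python sketch of that oversight structure, assuming a simple speed-control example. The class names, the speed limit and the report mechanism are my own assumptions for illustration; this is not the architecture from the cited article.

```python
# Minimal sketch of layered oversight: a legal-bounds monitor, a broader ethics
# layer, and an enforcer outside the local system that reports violations.
# All names, thresholds and behaviors are assumptions made for illustration.

SPEED_LIMIT_KPH = 100  # the legal bound the monitor enforces

class LegalMonitor:
    def check(self, proposed_speed):
        # Keep the planner's proposal within the legal bound (e.g. a speed limit)
        return min(proposed_speed, SPEED_LIMIT_KPH)

class EthicsLayer:
    def check(self, speed, pedestrians_nearby):
        # Looks beyond narrow legality: slow further when risk to others is high
        return min(speed, 30) if pedestrians_nearby else speed

class Enforcer:
    """Outside the local system's control; reports violations of standards."""
    def report(self, observed_speed):
        if observed_speed > SPEED_LIMIT_KPH:
            print(f"violation reported: {observed_speed} kph")

def drive(planner_speed, pedestrians_nearby, monitor, ethics, enforcer):
    speed = monitor.check(planner_speed)             # legal layer
    speed = ethics.check(speed, pedestrians_nearby)  # ethics layer
    enforcer.report(speed)                           # external oversight
    return speed

print(drive(130, True, LegalMonitor(), EthicsLayer(), Enforcer()))  # prints 30
```

The design point the analogy raises still applies: the enforcer here only reports what it observes, so someone – a lawmaker, a court – still has to decide what counts as a violation and what happens next.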

On which note, Stanford has a 100-year study on AI which looks at many factors, including implications for public policy. One discussion concerns civil and criminal liability and questions of agency (can an AI enter into a legally binding contract?) and how those concepts should apply when prosecuting the consequences of unethical behavior. What is interesting here is that damage to another is not limited to bodily harm – it could be financial or other kinds of harm, so definitions of ethical behavior in some contexts can be quite broad (imagine an AI bot ruining your online reputation). The agency question is also very important: if culpable behavior is found, who or what should be held liable? This area cannot be resolved purely with technology – our laws must advance too, which in turn requires that our societal beliefs advance.

I found another piece of research quite interesting in this context – an AI designed to engage in debate. Outside of fundamentalist beliefs, many (perhaps most) ethical problems don’t resolve to black-and-white choices, especially when you are not in full possession of all the relevant data. In these cases, the process of arguing a case towards an outcome is at least as important as a bald set of facts in support of the outcome, especially where there is no clear best choice and yet a choice must be made quickly. For me this line of thinking may be at least as important as any of the taxonomy- and deep-reasoning-based work.

As you can see, this is a domain rife with questions and problems and much more sparsely populated with answers. This is not a comfortable place for engineers and technologists to operate – we like clean-cut, yes/no choices. But we’re going to have to learn to operate in this fuzzier, messier world if we want AI to grow. The big tech collaboration is discussed HERE, and the societal/oversight article can be found HERE. The Stanford study is HERE and the debate video is HERE.
