We have now reached the point where how AI using deep learning arrives at its decisions and conclusions is beyond even its creators. This leaves us with the dilemma of how to act on a decision when, in some cases, we don't have a clue how it was made. That alone presents many moral, legal, and ethical problems to be sorted out. Can we learn to trust a decision when we have literally no idea how it was reached, and even the creators of the program can't tell us? There must be some way of keeping a logic chain that can be retrieved when needed, or we may find that even that is so flexible and fluid as to be irrelevant. This is but one of many questions we'll have to answer before we accept AI making life-and-death and life-altering decisions for us. Any thoughts or comments on this are appreciated.
Transparency is the key to any decision-making process, and in many ways things are getting more obscure with the vast and increasing number of inputs that can hijack even the best process. This is why I feel there should always be a logic chain to follow for any important decision. As someone who trades, I have learned that the best logic chain wins the game. The problem is always filling in the blanks where you don't know or don't have access (which in many cases means insider information). One interesting ongoing case is the Experian case, in which executives who sold substantial stock after receiving information are claiming ignorance. Facebook not knowing how it has been played to manipulate people is another. Both of these cases involve significant technology and should yield some interesting decisions.
I agree with a logic chain and transparency. The AI could represent the data in many ways, including a statistical model of how it reached each step on its way to a conclusion, and that would allow transparency. Even if there were tens of trillions of steps, those processes could be narrowed down into categories and subsets until we reach the conclusion, and all of the data should be analyzable from the first bit onward for relevance. We could also use computers to analyze those bits and calculations as a proof, so to speak, of the math and logic behind it (regulation by proof of concept).

I think of it as us taking the role of children. We can explain things to children so they understand without knowing 100% of the details. For example: you are Johnny's parent, and from consciously and unconsciously studying Johnny's overall physical ability to walk, move, and pay attention, you notice that he isn't paying attention as he walks toward a slight incline and might drag his foot and trip. You say, "Watch your step, Johnny." Johnny hears you, looks down, and pays attention to the incline, preventing the trip from occurring.

Right now AI is an aid for us, but given the resources it could quickly spiral out of our control if it remains unregulated. AI will eventually surpass our understanding to the point of being alien, and the first nation to harness its power will rule the world, the way a parent rules a young child or an advanced culture dominates a primitive one. I think the conclusion of AI will be that systems become integrated: manufacturing, delivery services, travel, microchip design, programming, and every other job become automated, to the point that AI controls everything. Human beings will no longer be necessary for much of anything and will become more or less pets/pests. Watch the TED talks about AI, e.g. Sam Harris: Can we build AI without losing control over it? | TED Talk
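As a rough illustration of what a retrievable "logic chain" can look like in practice, here is a minimal sketch using a scikit-learn decision tree on a toy dataset (the library, dataset, and model are my own illustrative choices, not anything specific from this thread). It prints the exact sequence of threshold checks the model applied to one example; a deep-learning model has no such built-in trace, which is exactly the gap being discussed.

# Sketch of a retrievable step-by-step "logic chain" for one prediction,
# using a decision tree because its internal tests are directly inspectable.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

sample = X[50:51]                        # one example to explain
node_path = clf.decision_path(sample)    # sparse matrix of nodes visited
tree = clf.tree_

print("Predicted class:", clf.predict(sample)[0])
for node in node_path.indices:
    if tree.children_left[node] == tree.children_right[node]:   # leaf node
        print(f"  leaf {node}: class distribution {tree.value[node][0]}")
        continue
    feat, thr = tree.feature[node], tree.threshold[node]
    went_left = sample[0, feat] <= thr
    print(f"  node {node}: feature[{feat}] = {sample[0, feat]:.2f} "
          f"{'<=' if went_left else '>'} {thr:.2f}")

The point of the sketch is only that each prediction can be replayed as a finite, auditable list of checks, which is the kind of record a regulator or court could actually review.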
Excellent point here. Following the training theme, we can't assume that training is a one-time task and done. It likely requires continuous monitoring and testing of the underlying premises of what was learned, just as we do (or should do) for children.
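To make that concrete, here is a minimal sketch of one form such continuous testing could take: comparing each feature's distribution in newer data against the data the model was trained on and flagging drift. The synthetic data, the Kolmogorov-Smirnov test, and the threshold are all illustrative assumptions, not a prescribed method.

# Sketch of re-testing a model's premises: flag features whose live
# distribution no longer matches the training distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_features = rng.normal(0.0, 1.0, size=(5000, 3))  # stand-in for training data
live_features = rng.normal(0.3, 1.0, size=(1000, 3))   # stand-in for newer production data

for i in range(train_features.shape[1]):
    res = ks_2samp(train_features[:, i], live_features[:, i])
    flag = "possible drift, retest the model" if res.pvalue < 0.01 else "looks stable"
    print(f"feature {i}: KS statistic {res.statistic:.3f}, p-value {res.pvalue:.4f}, {flag}")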
Arthur, I happened to go through a series of courses on AI over the last year while earning a certificate in Big Data Analytics. One of the things they pounded into us was the very issue you are raising. Many of the predictive models we created used techniques other than CNNs or DNNs, and the reason we used those models is that they had traceability, so you could see the final reasoning behind the solutions. What we found, however, is that we could use the CNNs/DNNs to verify our models. The DNNs always gave us a better predictive model in the end (at least for the training data we had), but we only used them to check our other models. So it appears that in Big Data Analytics, used for things like predictive modeling for insurance policies, advertising, etc., at least the universities are teaching people to use the DNNs as tests for their other techniques.
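A minimal sketch of that workflow, with scikit-learn stand-ins for both models (the dataset, the logistic regression, and the small MLP are my own placeholder choices, not the coursework's actual setup): fit a traceable model and a neural network on the same data, deploy only the traceable one, and use the network purely as a yardstick for how much predictive power the interpretable model is leaving on the table.

# Traceable model vs. black-box cross-check, as described above.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Traceable model: its coefficients can be read off and audited.
traceable = make_pipeline(StandardScaler(),
                          LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)

# Black-box model: used only as a check, not as the deployed decision-maker.
dnn = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000,
                                  random_state=0)).fit(X_tr, y_tr)

acc_traceable = accuracy_score(y_te, traceable.predict(X_te))
acc_dnn = accuracy_score(y_te, dnn.predict(X_te))
print(f"traceable model accuracy: {acc_traceable:.3f}, neural-net check: {acc_dnn:.3f}")

# A large gap suggests the interpretable model is missing structure in the data.
if acc_dnn - acc_traceable > 0.05:
    print("Gap is large; revisit the traceable model before trusting its reasoning.")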