
AI software making AI software

Arthur Hanson

Well-known member
We have finally arrived: Google is making AI software that makes AI software. This will be the final key in the "Great Acceleration". Just like anything else in tech, its advancement will pick up speed at an ever-accelerating rate. If AI is making even simple AI now, in a few years it will be making software no human could write. Already, sophisticated software and robots have made pharmaceutical discoveries on their own, as I have written about in these forums. It's past time to think about the social and economic impact. We can no longer blindly build advanced technology without considering the ramifications, uses, and inherent dangers. This could be the key to a new, greater future or to our destruction. Top minds are currently arguing both sides. This discussion, and the actions that follow from it, are far too important to be left to chance.

https://www.technologyreview.com/s/603381/ai-software-learns-to-make-ai-software/?set=603397
 
I am working on my IACAP conference paper, which will defend Lighthill's
falsification of AI (the conference meets at Stanford in summer 2017; anyone can attend), so let me
try an argument for why AI can't write software. Writing software is
no different from any other writing: people take ideas and express
them in their books or in computer programs. What ideas
does the AI program have? Arguably the best programmers were
Ken Thompson and Dennis Ritchie of Bell Labs. They were like journalists
who met rapid deadlines and produced Pulitzer Prize-winning code.
 
smeyer, there is very little totally original or creative thinking; most of it is just manipulating known factors to progress to the next step. Much, if not most, of the time this is just random plug-and-play or a logical progression, and both of these are what most programs and people do. In reality, a computer can try so many options that it arrives at a solution, just as many people do. True and total creativity is very rare. I, personally, am making six-figure commitments on the premise that AI or near-AI will proliferate within four years. This is but one part of the "Great Acceleration" that I have been writing about in a number of areas.
 
Thanks for the reply. I obviously disagree. Here is how John von Neumann put the
AI problem in the early 1950s.

The insight that a formal neuron network can do anything which you can describe
in words is a very important insight and simplifies matters enormously at low
complication levels. It is by no means certain that it is a simplification on high
complication levels. It is perfectly possible that on high complication levels the
value of the theorem is in the reverse direction, namely, that you can express
logics in terms of these efforts and the converse may not be true. (quoted in
W. Aspray "John von Neumann and the Origins of Modern Computing," note 94, p. 321).

I assume you also disagree with Roger Penrose's argument using Gödel's results
in "The Emperor's New Mind" from 1989.

I am still working on my paper. Karl Popper's philosophy is discussed by Paul Nurse
in his September 2016 Popper lecture at the London School of Economics. There is an audio
podcast of the lecture in which Nurse criticizes AI, in spite of working with the
DeepMind company in London. The criticism is near the end of the Q&A part.

URL is: 28 September: Sir Karl Popper Memorial Lecture with Paul Nurse | Philosophy, Logic and Scientific Method
 
I am not sure of the etiquette of replying to one's own post.
My paper falsifying AI has been submitted to the IACAP
(International Association for Computing and Philosophy)
conference at Stanford at the end of June 2017. I think James Lighthill's
argument from the 1970s that AI is impossible because it runs into
the combinatorial-explosion problem is incontrovertible. I need to give
the conference referees a few months before I can
upload the paper to the Cornell arXiv archive. If there is any interest,
I can email it to people. Title is "A Popperian Falsification of
AI - Lighthill's Argument Defended".
 
My paper falsifying AI has been uploaded to the arXiv repository.

Title: "A Popperian Falsification of AI - Lighthill's Argument Defended".
Reference: arXiv:1704.08111 on site https://arxiv.org

Sadly, AI is pseudoscience, no different from cold fusion. It took the
IACAP conference referees an extra month to reject it, so I have not
posted the reference earlier. I keep running into people who can't
imagine that AI algorithms don't work. For some reason my papers never get
sent to reviewers who study the 20th-century philosopher Karl Popper's philosophy.

I ran into a recent example that maybe people will understand. Deep learning
is just an ordinary minimum- or maximum-search algorithm that goes back to Isaac
Newton's 17th-century method. It is actually worse, because deep learning algorithms
presuppose the solution. Deep learning papers describe it as: "Meta-learning
can be understood as algorithms for choosing regimes of deep learning
architectures," or "Program X uses Bayesian optimization to tune deep
learning regimes."
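To make the comparison concrete, here is a minimal sketch (my own illustration, not from the poster) of Newton's 17th-century method applied to minimization: it iterates x ← x - f'(x)/f''(x), driving the derivative to zero, which is the same kind of local search that gradient-based training methods perform on a loss function. The example function and names are chosen only for illustration.

```python
# Newton's method for minimizing a one-dimensional function:
# apply Newton root-finding to the derivative, x <- x - f'(x)/f''(x).
# Example: f(x) = (x - 3)^2 + 1, whose minimum is at x = 3.

def newton_minimize(df, d2f, x0, tol=1e-10, max_iter=50):
    """Iterate toward a stationary point of f using its first
    and second derivatives (df and d2f)."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

df = lambda x: 2 * (x - 3)   # f'(x)
d2f = lambda x: 2.0          # f''(x), constant for a quadratic

x_min = newton_minimize(df, d2f, x0=0.0)
print(x_min)  # 3.0 (one Newton step suffices on a quadratic)
```

For a quadratic loss the method converges in a single step; on the non-convex losses of deep networks, the analogous gradient search only finds a local minimum, which is part of the point being argued above.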
 
I'll look into the link when I have a bit of time. I haven't read the book you mention. I do find differences in thinking among people, even between two great engineers. At the two extremes, I see some who think only in steps, although very fast, and those who think in leaps. Some even think in large leaps, although they are rare. I don't think we will ever get AI/ML to think in leaps, but the steps may be so fast that it's almost the same thing.
 