
Compounding, The Greatest Force Unleashed by Semis

Arthur Hanson

The world is entering the greatest explosion of knowledge and its application in human history. With constantly accelerating advances in processing, memory, communications, instrumentation, and many other areas, semis of all types are bringing a compounding of knowledge at a rate society at large hasn't even started to adapt to. We are literally watching the automation of everything become reality at an ever-increasing rate, while still trying to figure out how to harness this massive power in the best ways possible. Just as a nuclear weapon is an extreme compounding of energy, we are now seeing an extreme compounding of knowledge and its applications driven by the semi sector. It is going to take wisdom to harness this massive and growing power, for just like nuclear power it can have massive benefits along with an equally massive downside.

All this is a "Brave New World," and how we handle it places a very high responsibility on those working with it and applying it to the world around us. Our knowledge and its application have reached critical mass and are now exploding, changing everything. The upside and downside are equally great, and as with nuclear power the choices are ours. Unlike nuclear weapons, where there is only one outcome, the semi world can change virtually everything in every way; this extreme complexity and diversity, and the near-infinite applications, present one of the greatest challenges in human history, with the highest stakes. Handled properly, I see a massive increase in the world's GDP, with great economic and social benefits; handled poorly, the world's greatest concentration of wealth and a very stratified society, with many left behind for lack of the means to keep up.

Any thoughts and comments on how we should deal with this are solicited and appreciated. This has guided, and will continue to guide, the application of my personal resources.
 
I think people know my view that AI is a scam. Developing better problem-specific application software is very positive, but AI as one general algorithm slows down or even stops progress. AI programs run into something called combinatorial explosion, so people's intuition will always be needed. If there were no combinatorial explosion, there would be no cryptography.
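To make the combinatorial-explosion point concrete (my own sketch, not from the post): exhaustively searching an n-bit key space takes 2**n trials, so each extra bit doubles the work. That exponential blow-up is exactly what makes brute-force attacks hopeless, and hence what makes cryptography possible.

```python
# Sketch of combinatorial explosion: exhaustive search over an n-bit
# key space requires 2**n trials in the worst case, so each added bit
# doubles the work.

def brute_force_trials(n_bits: int) -> int:
    """Worst-case number of candidate keys for an n-bit key space."""
    return 2 ** n_bits

for n in (8, 32, 64, 128):
    print(f"{n:3d}-bit key space: {brute_force_trials(n):.3e} candidate keys")
```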
The currently favored "deep learning" algorithm seems to go back to a 1970s algorithm by Stuart Dreyfus that had terrible worst-case performance. There is a lot more to intelligence than selling private data and facial recognition. Federico Faggin has a new book explaining limits to computation:

"From the Invention of the Microprocessor to the New Science of
Consciousness"

My paper falsifying AI is on the arXiv repository.

"A Popperian Falsification of AI - Lighthill's Argument Defended"
URL: https://arxiv.org/abs/1704.08111
 
I don't think AI is a scam - no one can dispute how successful deep learning has been in image and speech recognition - but I also don't really believe in general AI quite yet.

A lot of classical statisticians and mathematicians had a hard time making sense of why deep learning works, because it seems counterintuitive. We are always taught about the dangers of overfitting and combinatorial explosion, as you pointed out. When the first deep learning models had such great success on ImageNet, no one could really explain why. But in the last few years there has been a greater effort to understand what's going on mathematically in a deep neural net, and that's led to a lot of new research on overparameterization and regularization which has converted most of the skeptics. So I'd suggest reading up on these topics.
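A minimal numpy sketch of the overfitting-versus-regularization tension (my own illustration; the polynomial degree, noise level, and ridge penalty are arbitrary choices): a 16-coefficient polynomial fit to 10 noisy points can interpolate the training data exactly, while a small ridge penalty on the same overparameterized model trades a bit of training error for much smaller coefficients.

```python
import numpy as np

# 10 noisy training points from a smooth underlying function.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(10)

# Overparameterized model: 16 polynomial coefficients for 10 points.
X = np.vander(x, 16)

# Minimum-norm least-squares fit interpolates the noisy data exactly.
w_interp = np.linalg.lstsq(X, y, rcond=None)[0]

# Ridge regularization: penalize coefficient size, give up the exact fit.
lam = 1e-3
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(16), X.T @ y)

mse_interp = np.mean((X @ w_interp - y) ** 2)   # ~0: fits the noise too
mse_ridge = np.mean((X @ w_ridge - y) ** 2)     # small but nonzero
print(mse_interp, mse_ridge)
# Ridge shrinks the coefficient vector relative to the interpolating fit.
print(np.linalg.norm(w_interp), np.linalg.norm(w_ridge))
```

The surprise the recent overparameterization literature tries to explain is that even the unregularized, interpolating fit can generalize better than classical theory predicts.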

That said, current deep learning models tend to be problem or task specific and the idea of artificial general intelligence still seems like science fiction for now.
 
I see the problem as a lack of scientific growth, because getting papers criticizing AI published is impossible. The reality is that as long as academics are dependent on Google funding paid for by selling ads, refereeing will not be objective. I hope some forward-looking university will start a CS department that hires people to work on proving inefficiencies in algorithms. Here are some examples from the IACAP (International Association for Computing and Philosophy) refereeing of my falsification paper. I presented it at the 2019 IACAP conference and was invited by the Springer editors to submit it. The referees' report was negative, with no possibility of making changes and resubmitting.

Some quotes from the referees, all of whom I claim have conflicts because of Google funding:

"Generally speaking, the author's view of AI appears to be
very limited, almost fictional."

"In short, this paper to a certain extent lacks a degree of
academic quality and integrity required for inclusion into
an IACAP collection."

"The paper is muddled and awkward making it difficult to
make any claims about originality."

None of these statements was accompanied by specific references to the paper.
 
This is a problem with academia in general, not just with respect to AI.
 