John Doerr, AI to replace shrink

Arthur Hanson

Well-known member
Any thoughts in this area, and what companies will lead the way? I feel that on the hardware end of AI it will be TSM and Micron building the chips for AMD and Nvidia. In software, it will be Google, Apple, and Meta. Any additions or thoughts on this are appreciated. I'm especially interested in machine learning, and any comments or observations in that area will be appreciated. I feel the "Automation of Everything" is coming, and it will come on fast. It will be politics that holds progress back, as the automation we have seen in EDA extends to almost every profession.
It's exciting, but we can solve way more of our problems by treating people better and fostering human intelligence and curiosity across the spectrum. I'm a fan of Judea Pearl; he did Bayesian networks in the 80s, when everyone was talking about AI just as much as, if not more than, they are now.

His argument is that although advanced digital technology has turned on a fire hose of "information" to create labeled data sets for training regressions, this data has been rendered almost useless by the socioeconomic corruption of our age. For example, in 2016 Microsoft released a simple AI chatbot onto the internet, and within 16 hours it became a rabid Nazi and they had to shut it down. And while this deluge has led to some incredibly powerful N-dimensional curve fitting, with billions of parameters to accommodate increasingly complex functionality, there is no reason to believe it will lead to any kind of "causal intuition" on the part of the machine. He says this can be achieved by additionally processing relevant counterfactual scenarios, but that implies we're probably going to need another order of magnitude in processing power before "strong AI" is a thing.
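To make Pearl's point concrete, here's a minimal sketch of counterfactual reasoning on a toy structural causal model (the sprinkler/rain example is a standard illustration, not something from this thread): answering "what would have happened had X been different?" requires re-running the causal equations under an intervention, which regression on observed data alone cannot do.

```python
# Toy structural causal model: Wet = Sprinkler OR Rain.
# Pearl's counterfactual recipe: abduction (keep the inferred
# background facts), action (intervene on one variable),
# prediction (re-run the structural equations).

def wet(sprinkler, rain):
    """Structural equation for the outcome variable."""
    return sprinkler or rain

# Observed world: sprinkler on, no rain, grass is wet.
observed = {"sprinkler": True, "rain": False}
factual = wet(**observed)

# Counterfactual query: "Would the grass be wet had the
# sprinkler been off?"  Keep rain=False (abduction), set
# sprinkler=False (action), recompute (prediction).
intervened = dict(observed, sprinkler=False)
counterfactual = wet(**intervened)

print(factual, counterfactual)  # True False
```

The point of the sketch is that the counterfactual answer comes from the model's equations, not from any curve fitted to observations of wet grass.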

And if you really want to get into the weeds, a former colleague of Stephen Hawking named Roger Penrose argues that quantum wave functions (and their collapse) are at the core of human consciousness, and thus any machine with aspirations to strong AI will need a quantum core to provide the id. I'm not usually a fan of Freud, but I suppose here the machine would act as the ego and the user provides the superego? Idk, fun stuff, but we've got worse things coming our way.
The best falsification of AI is still the Lighthill Report, commissioned by the British Government in 1973, back when computer science was studied as applied mathematics. Its author was James Lighthill, who held the Newton chair of applied mathematics at Cambridge University. I have written a paper that shows why the Lighthill report still applies to current AI.

The problem is combinatorial explosion: too many states, no matter how fast the programs and hardware-implemented algorithms are.
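A quick back-of-the-envelope sketch of that explosion, using brute-force game-tree search as the usual illustration (the branching factor of ~35 is a commonly cited chess-like figure, not a claim from this thread): the state count grows exponentially with search depth, so even a 1000x hardware speedup buys only a couple of extra plies.

```python
# States visited by exhaustive search grow as b ** d, so hardware
# speedups translate into only a tiny increase in lookahead depth.

def states(branching_factor, depth):
    """Number of leaf positions in a uniform game tree."""
    return branching_factor ** depth

for depth in (2, 4, 8):
    print(depth, states(35, depth))

# A machine 1000x faster extends an 8-ply search by barely 2 plies,
# since 35 ** 2 == 1225 already swallows the whole speedup.
```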

Sadly, AI is bad science believed for political reasons, just as crypto-currencies are now being exposed as a scam that was believed because of politics (libertarians do not like governments).