Definitely room for new flavors and approaches - three things to think about.
1) We’re on our 3rd generation of computer-based AI. The first was based on general pattern matching and substitution, like the ELIZA program of the mid-1960s, along with the precursor of all neural networks, the Perceptron of the late 1950s. Interestingly enough, we might have had some of the current technology a little earlier, building on the Perceptron, if Minsky and Papert hadn’t put the academic kibosh on the technology and the developers had figured out back-propagation of parameters.
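To make the Perceptron concrete, here’s a minimal sketch of Rosenblatt-style perceptron learning (illustrative only, not tied to any historical implementation): a single threshold neuron trained to compute logical AND, a linearly separable function it can learn — unlike XOR, which is exactly the limitation Minsky and Papert highlighted.

```python
# Minimal perceptron sketch: one neuron with a hard threshold, trained
# with the classic perceptron update rule on the logical AND function.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - y
            # Perceptron rule: nudge weights toward the target output.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for (x, _) in AND])
# → [0, 0, 0, 1]
```

Note there is no back-propagation here — the update rule only works for a single layer, which is why the field stalled until multi-layer training was worked out.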
The second generation was the rule-based (expert-system) technology and fuzzy logic of the 1980s-1990s, which had all the problems and limitations that go along with hand-written rules, plus the limited applicability of fuzzy logic. I should also point out that back-propagation for neural networks was pioneered during this time period. We’re now at Gen 3 thanks to the convergence of large datasets with embedded knowledge and the compute power to digest that data and train/infer from it.
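For flavor, here’s a minimal sketch of the Gen-2 style of system (illustrative, not modeled on any particular expert-system shell): a forward-chaining rule engine that fires rules whose premises are all present, adding conclusions until nothing new can be derived. The rules and facts below are made up for the example.

```python
# Tiny forward-chaining rule engine: (premises, conclusion) pairs.
# Hypothetical animal-classification rules, in the spirit of 1980s
# expert systems.
RULES = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "tawny", "dark_spots"}, "cheetah"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            # Fire a rule when all its premises are known facts.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fur", "gives_milk", "eats_meat",
                         "tawny", "dark_spots"})
# Chains through "mammal" and "carnivore" to conclude "cheetah".
print("cheetah" in derived)
```

The brittleness is easy to see: any case the rule author didn’t anticipate simply derives nothing, which is the core limitation of this generation.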
2) You’ll see a number of implementation alternatives for AI/ML if you look at places like the Hot Chips proceedings. You’ll see approaches that stay analog for computation and storage, like our neurons/brains; optical computation that speeds calculation and slashes power; plus in-memory computation approaches, all with tradeoffs today.
3) There is also plenty of room for improvement in the current ML/AI paradigm. The current state of the art can’t infer beyond what the incoming data has “taught it” the way a small child can. And even though neural networks can learn, they are incredibly specialized and fixed for each task. I’m hoping for some breakthroughs in neural networks that rewire and optimize themselves for better results. For much of the early years of a child’s life, the brain is essentially rewiring itself for written and spoken language, plus object identification.