
Any Alternative Paths to AI/ML?

Arthur Hanson

Well-known member
Could there be an alternative path to artificial intelligence beyond the ones currently implemented? Could there be forms of artificial intelligence for specific, specialized tasks that are more efficient than the general approach now being used? Is any organization known to be working on alternatives? Thinking about it, I should ask how many are developing, or at least looking at, alternatives.
 
Definitely room for new flavors and approaches - three things to think about.

1) We're on our 3rd generation of computer-based AI. The first was based on general pattern matching and substitution, like the ELIZA program of the late 1960s, along with the precursor of all neural networks, the Perceptron of the 1960s (a minimal perceptron sketch follows after this list). Interestingly enough, we might have had some of the current technology a little earlier, based on the Perceptron, if Minsky and Papert hadn't put the academic kibosh on the technology and the developers had figured out back-propagation of parameters.
The second generation was the rules-based technology and fuzzy logic of the 1980s-1990s, which had all the problems and limitations that go along with rules, and the limited applicability of fuzzy logic. I should also point out that back-propagation for neural networks was pioneered during this time period. We're now at Gen 3 thanks to the convergence of large datasets with embedded knowledge and the CPU power to digest, train on, and infer from that data.

2) You'll see a number of implementation alternatives for AI/ML if you look at places like the Hot Chips proceedings. You'll see approaches that stay analog for computation and storage, like our neurons/brains; optical computation that speeds calculation and slashes power; plus in-memory computation approaches, all with tradeoffs today.

3) There is also plenty of room for improvement in the current ML/AI paradigm. The current state of the art can't infer beyond what the incoming data has "taught" it, the way a small child can. And even though neural networks can learn, they are incredibly specialized and fixed for each task. I'm hoping for some breakthroughs in neural networks that rewire and optimize themselves for better results. For much of the early years of a child's life, the brain is essentially rewiring itself for written and spoken language, plus object identification.
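
To make item 1 concrete, here is a minimal sketch of the original perceptron learning rule in Python. The data, names, and hyperparameters are illustrative toys, not anything from the posts above.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Rosenblatt's perceptron: a single linear unit with a hard
    threshold, updated only on examples it currently misclassifies."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            # Learning rule: nudge the weights toward misclassified inputs.
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Toy, linearly separable data (an AND gate).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print("weights:", w, "bias:", b)
```

A single unit like this can only learn linearly separable functions (AND, but famously not XOR), which was the heart of Minsky and Papert's critique; back-propagation through multiple layers is what eventually removed that limitation.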
 
I do not disagree that there is room for the new.

But the excitement about AI in the last year is precisely because it is breaking out beyond the training set. I suggest you look at the "system card" for GPT-4 to get some idea of how clever these systems are.

Sure, there are holes in the bucket for reasoning, math, and fact checking, but those holes are rapidly being patched in interesting ways. There are LLMs now which have a core of logical reasoning built in (by training on a logically sound corpus first, before adding other data), and math and fact checking have been overlaid with tool layers that use the LLM to rewrite the input and send it to external APIs which supplement the answer.
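
As a rough illustration of that overlay pattern, here is a hedged sketch in Python; every function below is a toy stand-in, not any particular vendor's API.

```python
import re

def toy_llm_rewrite(question):
    """Stand-in for the LLM step that rewrites a question into a tool request."""
    match = re.search(r"\d+\s*[-+*/]\s*\d+", question)
    if match:
        return "math", match.group()
    return "search", question

def toy_calculator(expression):
    # Exact arithmetic, the part a bare LLM is unreliable at.
    return eval(expression, {"__builtins__": {}})

def toy_search(query):
    # Stand-in for a retrieval / fact-checking API.
    return f"[retrieved documents for: {query}]"

def answer_with_tools(question):
    """Rewrite-and-dispatch overlay: the model turns the question into a
    structured tool call, the external tool supplies the reliable piece,
    and the model then phrases a final answer grounded in that evidence."""
    tool, payload = toy_llm_rewrite(question)
    evidence = toy_calculator(payload) if tool == "math" else toy_search(payload)
    return f"Question: {question}\nEvidence from {tool} tool: {evidence}"

print(answer_with_tools("What is 123 * 456?"))
print(answer_with_tools("Who designed the Perceptron?"))
```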

The rate of change is impressive, and the LLMs themselves are becoming a lever to make those upgrades.
 
Sure, there are holes in the bucket for reasoning, math, and fact checking, but those holes are rapidly being patched in interesting ways. There are LLMs now which have a core of logical reasoning built in (by training on a logically sound corpus first, before adding other data), and math and fact checking have been overlaid with tool layers that use the LLM to rewrite the input and send it to external APIs which supplement the answer.

These aren't really the "inference" holes I alluded to. It's tricky because "infer" and "inference" are overloaded terms. My main point is that infants and small children are born with some innate high-level inferencing skills that allow them to generalize well beyond the dataset they know, plus learn how to learn.

 
Could there be an alternative path to artificial intelligence beyond the ones currently implemented? Could there be forms of artificial intelligence for specific, specialized tasks that are more efficient than the general approach now being used? Is any organization known to be working on alternatives? Thinking about it, I should ask how many are developing, or at least looking at, alternatives.
Yes, as far as I know, IBM and its partners are working on this; it is called something like domain-knowledge-based, or customized, AI. Their concern is that general-purpose AI is too big, and if this trend continues, applying general-purpose AI will become unaffordable and unsustainable in terms of cost and power efficiency.

Actually, even AI giants like Google, Meta, and Microsoft are investigating new algorithms to bring general-purpose AI such as LLMs to the edge (low power and low cost): prompt engineering, teacher-student models, expert-system-aided AI, etc.
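
For anyone curious what the teacher-student approach looks like in practice, here is a minimal sketch of the standard knowledge-distillation loss in PyTorch; the shapes, temperature, and weighting below are illustrative, not taken from any specific edge deployment.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Teacher-student (knowledge distillation) objective: the small 'edge'
    model matches the big model's softened output distribution, plus the
    ordinary supervised loss on the true labels."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # The KL term is scaled by T^2 so its gradients stay comparable in size
    # to the cross-entropy term as the temperature changes.
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy shapes: a batch of 4 examples, 10 classes.
teacher_logits = torch.randn(4, 10)                       # big, frozen model
student_logits = torch.randn(4, 10, requires_grad=True)   # small edge model
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```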
 