It looks like this new design, or lack thereof, may be another pathway to AI. From what I can see, it's a totally different method of chip design. Comments on something this radically different and its potential would be appreciated. I haven't read about an approach like this before.
Another pure-research approach with very little practical application to real-world problems. I mean, they have a transistor with two inputs and six gate connections, and it takes them about an hour to figure out how to apply the correct voltages to all six gates to perform a single function. On top of that, this particular technology has to operate at the ultra-low temperature of 5 Kelvin.
The mysteries of how the human brain actually works are quite far removed from any man-made competition. Yes, mimicking the human brain is a worthy goal, but it won't be accomplished with this research approach.
I think IBM is much closer to a chip that mimics the human brain (see here). If I remember correctly, these chips have been used to closely mimic the behavior of smaller animal brains.
Yes, using neurons and synapses in computer chips is akin to the human brain; however, IBM would need to increase the neuron count from its present 1 million to 100 billion in order to scale up to a human brain.
IBM has modeled 256 million synapses, while the human brain has about 100 trillion, so let's see if technology can approach the exquisite design of biology.
If Moore's law applies (doubling every 1.5 years), a factor of 1,000 is about 10 doublings, or roughly 15 years; closing the full ~100,000x neuron gap mentioned above would take closer to 25 years.
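For what it's worth, the back-of-envelope math works out like this (a rough sketch; the gap size and the 1.5-year doubling period are just the assumptions from this thread):

```python
import math

# Assumptions taken from the posts above (illustrative only)
doubling_period_years = 1.5
neuron_gap = 100e9 / 1e6              # 100 billion neurons vs IBM's ~1 million

doublings = math.log2(neuron_gap)     # ~16.6 doublings needed
years = doublings * doubling_period_years
print(f"{doublings:.1f} doublings -> about {years:.0f} years")
```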
But I don't think we really want to replicate a human exactly in a machine. Humans are very inefficient machines: they have moods, they get bored, and so on. Machines already beat humans in complex games like chess. If part of those brute-force algorithms is replaced by brain-like structures for intelligence, the machines will only become more powerful and energy-efficient, even if the neural part never approaches the size of a human brain.
Stacking up the brain-like chips is not the biggest problem. The IBM design, for example, was very modular in the sense that each module is connected to its neighbors on the left/right and up/down. They even had a mechanism to bypass a particular module and use it as a "wire" just to pass the information along. You could extend the idea relatively easily to connect individual chips at the board level or even the wafer level, and even extend it to 3D to build a sizable system. The biggest problem to me is how to train the system once it is large enough. More than a decade ago I took a course on neural networks. When I complained that it took a few hours to train my code on a digit recognition task, my professor just smiled and said, "Look how long it took you to learn stuff ... years." That's the nature of neural nets. They are robust, scalable, and generic, but they take a long time to train.
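For anyone who hasn't played with this, here is a minimal sketch of the kind of digit-recognition net I mean, using Keras and the MNIST digits it ships with (the layer sizes and epoch count are just illustrative, not what I actually used back then):

```python
import tensorflow as tf

# Load the standard MNIST handwritten digits bundled with Keras
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0  # flatten 28x28 images

# A tiny fully connected net -- nothing fancy, just enough to classify digits
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training is the slow part: many full passes over 60,000 examples
model.fit(x_train, y_train, epochs=20, batch_size=32)
```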
Our society is perhaps a bit obsessed with the AI portrayed in popular science-fiction movies and books; however, there remains a large gap between reality and Hollywood. Still, AI and learning-based software are noble goals to pursue.
At the Linley Processor Conference last week there was a lot of discussion of Convolutional Neural Networks (CNNs). These systems can be trained to recognize virtually any object, and if you want them to recognize a different object, no coding is required; you just retrain them. They are the leading candidate for the ADAS vision systems that would be part of a self-driving car. They are actually better than humans at recognizing faces, hitting the high-90% range, while humans tend to score around 89% to 92%.
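The "no coding, just retrain" part looks roughly like this in practice; a hedged sketch using transfer learning in Keras (the MobileNetV2 backbone, the two example classes, and the "new_images" folder are my own illustrative choices, not anything shown at the conference):

```python
import tensorflow as tf

# Reuse a CNN backbone pretrained on ImageNet; only the small classifier head is retrained
base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                         pooling="avg", input_shape=(224, 224, 3))
base.trainable = False  # keep the learned visual features frozen

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.Dense(2, activation="softmax"),     # e.g. "pedestrian" vs. "not pedestrian"
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# "new_images/" is an assumed folder of labeled photos of the new object to recognize
train_ds = tf.keras.utils.image_dataset_from_directory("new_images", image_size=(224, 224))
model.fit(train_ds, epochs=5)
```

Retraining for a new object is just a matter of pointing the same script at a different image folder, which is the point the presenters were making.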