Neural nets are a hot topic these days and encourage us to think of solutions to complex tasks like image recognition in terms of how the human brain handles those tasks. But today's models for neuromorphic computing are several steps removed from how neurons actually work. We're still using conventional digital computation at the heart of our models, albeit in a non-algorithmic way, training the system to encode weights and thresholds for feature recognition.
So other than in this abstract sense, we can't claim that our neuron models are faithful representations. This may not be just semantic hair-splitting; the difference may be significant in how energy-efficient and robust neuromorphic models can be. Researchers at IBM recently announced that they have constructed artificial neurons which more closely mimic the characteristics of real neurons, and which should be able to show improvements in both of these areas.
To improve the energy component, they exploited a well-known trick: hardware is more energy-efficient than software. But these researchers dug deep when thinking of hardware, all the way down to material physics, specifically phase-change materials. Use of phase-change effects has already been mentioned on this site in the context of fast memories developed by IBM. The researchers in this case are also from IBM, but they are applying the ability of these materials to switch between amorphous and crystalline states in a different way.
The base material in both cases is a germanium antimony telluride alloy, which is encouraging because IBM has already demonstrated enough process expertise with this material to build a 64k-bit memory. In the memory case the goal had been to build discrete (3-state) bit cells, but for neurons the phase transition is used in a more analog fashion. As input currents progressively flow through a device, it switches from the amorphous phase to the crystalline phase, in effect integrating those currents over time.
This transition, when it happens, is sudden and is coupled with a corresponding jump in conductance which can be observed electrically or optically; after firing, a neuron must be reset with a higher-voltage pulse. This is very similar to the way biological neurons integrate and fire. Energy required per neuron update has been measured at ~5 picojoules, although the firing rate is disappointingly low at ~100Hz (presumably gated by the time to melt back to the amorphous phase).
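To make the integrate-and-fire mechanism concrete, here is a minimal Python sketch of how such a device could be modeled. The class name, parameter values, and the linear crystallization rule are illustrative assumptions for this post, not the measured device physics from the paper:

```python
# Toy model of a phase-change integrate-and-fire neuron.
# All parameter values and the linear crystallization rule are
# illustrative assumptions, not IBM's measured device behavior.

class PhaseChangeNeuron:
    def __init__(self, threshold=1.0, gain=0.1):
        self.phase = 0.0          # 0.0 = fully amorphous, 1.0 = crystalline
        self.threshold = threshold
        self.gain = gain          # how much each input pulse crystallizes

    def step(self, input_current):
        # Each input pulse partially crystallizes the material,
        # in effect integrating the input over time.
        self.phase += self.gain * input_current
        if self.phase >= self.threshold:
            # Crystallization complete: conductance jumps and the
            # neuron "fires"; a melt pulse then resets it.
            self.reset()
            return True           # spike
        return False

    def reset(self):
        self.phase = 0.0          # melt-quench back to the amorphous state


neuron = PhaseChangeNeuron()
for t in range(20):
    if neuron.step(input_current=0.7):
        print(f"fired at step {t}")
```

With these toy numbers the neuron fires roughly once per fifteen input pulses; in the real device the "phase" variable would correspond to the crystallized volume of the GST cell.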
These neurons have another interesting property: firing is stochastic (not completely predictable) because the crystallization and melting stages necessarily cycle through random atomic configurations. This stochastic property is thought to be important in real neurons as a way to avoid getting stuck in local minima (think of simulated annealing, for example) and therefore to provide robustness in conclusions, especially in ensembles of neurons. However, few results in this direction have been reported so far; the research report is quite recent (published May 2016).
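Building on the sketch above, the stochastic behavior can be mimicked by redrawing the firing threshold after every melt-quench reset. The Gaussian noise model below is purely an assumption for illustration; in the real device the randomness comes from the differing atomic configuration of each newly amorphized state:

```python
import random

# Extends the PhaseChangeNeuron sketch above with a noisy threshold.
# Gaussian jitter is an illustrative assumption; in the real device
# each melt-quench leaves a slightly different amorphous configuration,
# which is what makes the firing time vary.

class StochasticPCNeuron(PhaseChangeNeuron):
    def __init__(self, threshold=1.0, gain=0.1, jitter=0.1):
        super().__init__(threshold, gain)
        self.mean_threshold = threshold
        self.jitter = jitter
        self.threshold = random.gauss(threshold, jitter)  # initial draw

    def reset(self):
        super().reset()
        # Redraw the threshold: each reset leaves a different
        # amorphous state, so the next firing point varies.
        self.threshold = random.gauss(self.mean_threshold, self.jitter)


# Identical inputs now produce a spread of firing times.
neurons = [StochasticPCNeuron() for _ in range(5)]
for i, n in enumerate(neurons):
    t = 0
    while not n.step(0.7):
        t += 1
    print(f"neuron {i} fired at step {t}")
```

Run on an ensemble like this, identical inputs produce a distribution of firing times rather than a single deterministic answer, which, by analogy with simulated annealing, is the kind of variability that could help a network avoid local minima.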
You can read a summary of the research HERE and the more detailed paper (paid access) HERE.