What’s old is new again – Analog Computing
by Bernard Murphy on 01-02-2018 at 7:00 am

Once in a while I like to write on a fun, off-beat topic. My muse today is analog computing, a domain that some of us antiques in the industry recall with fondness, though sadly in my case without hands-on experience. Analog computers exploit the continuous nature of analog signals together with a variety of transforms representing operations to solve real-valued problems. In the early days, certain problems of this type were beyond the capabilities of digital computers, a notable example being finding solutions to differential equations. If you have taken a basic analog design course, you already know of an important transform relevant to this domain; an op-amp with a capacitive feedback loop acts as an integrator.
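
To make the integrator idea concrete, here is a minimal numerical sketch (my own illustration, not taken from any analog-computing text) of how two cascaded integrators solve a second-order differential equation, much as patched-together op-amp integrators would on an analog computer:

```python
# Illustrative sketch: a damped oscillator x'' = -(c/m)*x' - (k/m)*x
# solved by two cascaded integrators. Each integration step stands in
# for an op-amp integrator (capacitive feedback); the summing line
# stands in for the summing amplifier that forms x''.

def solve_oscillator(k=1.0, c=0.2, m=1.0, x0=1.0, v0=0.0, dt=1e-3, steps=10000):
    x, v = x0, v0
    trace = []
    for _ in range(steps):
        a = -(c / m) * v - (k / m) * x  # summing junction: form x''
        v += a * dt                     # first integrator: x'' -> x'
        x += v * dt                     # second integrator: x' -> x
        trace.append(x)
    return trace

print(solve_oscillator()[:5])  # decaying oscillation in x
```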

Coming out of the Second World War and moving on to the Cold War, Korea, Vietnam and other potential and real engagements, there was high interest in improving fire-control accuracy. This required solving, guess what, lots of differential equations defined by the mechanics of projectiles (thrust, gravity, air resistance and so on). Analog computing became hot in defense and aerospace and remained that way until digital (and later DSP) techniques caught up and surpassed these systems. Even the general public could get in on the action. Heathkit (another name from the past) sold a hobbyist system as early as 1960, long before most of us were thinking of digital computers.

But that was then. Are analog computers now just an obscure footnote in the history of computing? Apparently not. One hint is an article that appeared recently in IEEE Spectrum. A team at Columbia University has been building integrated analog computers, in which connectivity between analog components is controlled digitally. They are now on their third-generation chip.

These computers can solve problems (within their scope) in about a millisecond, though the solutions are accurate only to within a few percent, thanks to noise. The Columbia team view this as a good way to provide an approximate solution as input to a digital solver, which can finish the job. Since finding an approximate solution is often the hardest part of solving or optimizing, the hybrid combination of analog and digital could be quite valuable. That said, there are plenty of challenges to overcome. One example is bounded connectivity in a 2-dimensional implementation. Functions can easily be constructed between neighboring components, but connecting to other, more distant functionality is generally fraught with problems for analog signals. Still, you could imagine that solutions might be found to this problem.
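
To illustrate the hybrid idea (my own toy example, not the Columbia flow), imagine the analog stage returning a root estimate good to a couple of percent and a digital Newton iteration polishing it:

```python
# Toy hybrid solve: a noisy "analog" seed refined by digital Newton steps.
# Root of f(x) = x**3 - 2*x - 5; names and numbers are purely illustrative.
import random

def f(x):  return x**3 - 2*x - 5
def df(x): return 3*x**2 - 2

def analog_seed(true_root=2.0945514815):
    # stand-in for the analog stage: right answer with ~2% noise
    return true_root * (1.0 + random.uniform(-0.02, 0.02))

def digital_refine(x, iters=5):
    for _ in range(iters):   # Newton converges quickly from a close seed
        x -= f(x) / df(x)
    return x

seed = analog_seed()
print(seed, digital_refine(seed))
```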

A more interesting (for me) possibility for analog/mixed-signal systems is around neuromorphic computing. What we are most familiar with in neural modeling is neural nets (NN) used for recognition applications, modeled using GPUs or DSPs or specialized hardware. But neural nets such as these are very simple models of how neurons really work. Real neurons are analog, so any model has to mimic analog behavior at some level of accuracy (which is why DSPs are so good at this job). However, neuron behavior is more complex than the basic NN model (sum the inputs, apply a threshold function, generate an output). For example, some inputs may reinforce or suppress other inputs (sharpening, which is related to remembering and forgetting).
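
For reference, the basic NN model mentioned above really is this simple; the sketch below (a standard textbook form with hypothetical values) is the entire per-neuron computation, which is exactly what sharpening, neurotransmitter variety and hormonal modulation are missing from:

```python
# The textbook artificial neuron: weighted sum of inputs, then a threshold.
# Everything beyond this (reinforcement/suppression of inputs, multiple
# neurotransmitters, hormones) is absent from the basic model in the text.

def artificial_neuron(inputs, weights, bias=0.0):
    s = sum(i * w for i, w in zip(inputs, weights)) + bias  # sum inputs
    return 1.0 if s > 0 else 0.0                            # threshold

print(artificial_neuron([0.5, 0.9, -0.3], [1.2, -0.4, 2.0]))
```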

More generally, inputs to a real neuron are not undifferentiated connections. Output(s) from a neuron to other neurons can be mediated by any one of multiple possible neurotransmitters with different functions, including the sharpening functions mentioned above. And all of this can be bathed in hormones secreted from various glands which further modulate the behavior of neurons. Who cares, you say? If one goal in building intelligent systems is to more closely mimic the behavior of the brain, then stopping at present day neural nets seems to be throwing in the towel rather too quickly, given real neuron complexity.

Which is why the Human Brain Project in Europe and the BRAIN Initiative in the US are working to jointly advance neuroscience and related computing. This has driven quite a bit of development in neuromorphic compute systems, such as the Neurogrid developed at Stanford. What is especially interesting about many of these systems is the significant use they make of analog computation together with digital methods. Here, differential equations play no part (as far as I know). The motivation seems much more around low-power operation (Stanford cite a 10^5 reduction in power over an equivalent supercomputer implementation) and a tolerance to analog noise-related inaccuracies in this application. After all, real neurons aren’t hyper-accurate and NN implementations for inferencing are already talking about 1- or 2-bit accuracy being sufficient for image recognition.
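
A quick way to see why a few percent of analog noise can be tolerable for inference: quantize the weights of a simple linear classifier to a single bit (their sign) and the decision often doesn't change. This is a toy sketch of my own, not taken from any of the projects above:

```python
# 1-bit weights vs. full precision for a toy linear decision.
# Values are arbitrary; the point is that the sign of the output
# (the classification decision) survives coarse quantization here.

def sign(x):
    return 1.0 if x >= 0 else -1.0

weights = [0.8, -0.3, 0.55, -0.9]
inputs  = [1.0,  0.2, 0.7,  0.1]

full    = sum(w * x for w, x in zip(weights, inputs))
one_bit = sum(sign(w) * x for w, x in zip(weights, inputs))

print(full, one_bit, sign(full) == sign(one_bit))  # same decision
```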

The constraints faced by the Columbia work don’t play such a big role here. In using analog to model neuron behaviors, 2D bounds on a chip reflect physical bounds in the brain (and if you need to go 3D, presumably that would be possible too, with stacking). So maybe the big comeback for analog computing will be as a close partner with digital in neuromorphic computing. Perhaps someday this approach will even replace neural nets as we know them today?
