Is it worth digging deeper in asynchronous?

It is not that a '1' bit has any energy in itself; rather, it takes a flow of current through a conductor on the chip to change the voltage of the wire. That current also flows through a transistor. Both have resistance, and current flowing through a resistance generates heat, duh! Then adding a second wire that changes from a '0' to a '1' doubles the heat, OMG!
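
(For a rough sense of scale, the standard CMOS dynamic-power estimate applies here: each 0-to-1 transition on a net draws roughly E ≈ C·Vdd² from the supply, about half dissipated in the resistance of the driver and half stored on, and later dumped from, the wire capacitance, so total switching power scales as P ≈ a·C·Vdd²·f, with a the activity factor.)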

Clock distribution does generate considerable heat, and as I mentioned before, the ratio of wiring delay to switching speed probably makes async possible. P&R/placement/STA modifications along with an event-driven simulator are needed, although I don't see the need for the continuous event timing that was mentioned before.
 
ASYNC featuring low power, long battery life, ROW solar "smartphone"

Thanks for any help. Request your input. Developing an OPEN (free to anyone) tech spec for an M-KOPA ASYNC featuring low power, long battery life, ROW, a 7B-people TAM, an inexpensive smartphone, and a less expensive W3C RTC version, i.e. without cellular.
Any ballpark cost figure? $28M? At 28nm? Currently - "... M-KOPA has also sold over 9,000 Huawei and Samsung smartphones in the $50 to $100 price range. It is now shifting over 1,000 smartphones per month..." If it's possible with ASYNC to get 10X (or more?) battery life at the same price... that would be good. Noob to SemiWiki... so I hope this is an OK topic. For the "product spec", both the ASYNC tech knowledge and the cost knowledge are needed... (also posted on the cost/transistor thread I saw recently)
8/18/2016 article on the sweet spot for cost/transistor ...
"... At 5nm, it will cost $500 million or more to design a “reasonably complex SoC,” Johnson said. In comparison, it will cost $271 million to design a 7nm SoC, which is about 9 times the cost for a 28nm planar device, according to Gartner...."
 
Art,
I have no proof, but it seems Philips Xenium phones are async inside. My old Xenium X1560 (not a smartphone) requires charging every 2-3 weeks. The same applies to every Xenium phone (even the smartphones) - just look at their specs.
 
Power Management in the Amulet Microprocessors, circa 2001

Amulet2e "power-efficiency of 280 MIPS/W" ...
I'd like to know/work out/calculate power-efficiency for an updated ASYNC ARM design ...
Call it Amulet28, in honor of the early work, and 28 for the sweet spot in cost/transistor fab today ...

Classic ...
" ... Amulet2e, shown in Figure 6, has been fabricatedand successfully runs standard ARMcode. It was produced on a 0.5-µm, three-metallayerprocess. It uses 454,000 transistors (93,000of which are in the processor core) on a diethat is 6.4 mm square. In the fastest availableconfiguration, the device delivers 42 Dhrystone2.1 MIPS (million instructions per second) witha power consumption of about 150 mW in thecore logic—which includes the processor coreand cache memory, but excludes the I/Opads—running at 3.3 V. This is faster than theolder ARM 710 but slower than the ARM 810,which was fabricated at about the same time.It represents a power-efficiency of 280 MIPS/W,which is as good as either of the clocked CPUs. ..."
 
@Art Scott: Somehow I missed the notification of your reply, just came across it.

Before Verilog, hardware design was called logic design because nets were defined using Boolean algebra, which is the only good way.

Apparently it was assumed that simulation to create waveforms was adequate for verification, and on and on...

Digital hardware is quite simple, as there are gates, flip-flops, and pins (registers and memories are equivalent to arrays of flip-flops).

The essence of design is to define the functional sequences that produce meaningful outputs determined by sequences of inputs.

Of course you already know this. The other necessary thing is events, which occur when inputs change or internal states change.

The time for nets to resolve is critical. Rather than using a clock period that is long enough for the longest path to resolve, a delay can be generated that is matched to the path delay of the currently active path. The delay is triggered when the change is enabled, but the change occurs at the end of the delay.
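
A minimal Python sketch of that matched-delay idea (purely illustrative; the nets, path names, and delay numbers are made up, not from any real tool):

    import heapq

    events = []                                          # min-heap of (time_ns, net, new_value)
    net_values = {"sum_out": 0, "carry_out": 0}
    path_delay = {"short_path": 0.8, "long_path": 2.4}   # matched delay per active path, ns (illustrative)

    def schedule_change(now, net, value, active_path):
        # The delay is triggered when the change is enabled...
        heapq.heappush(events, (now + path_delay[active_path], net, value))

    def run(until):
        # ...but the change itself occurs only at the end of the matched delay.
        while events and events[0][0] <= until:
            t, net, value = heapq.heappop(events)
            net_values[net] = value
            print(f"t={t:.1f} ns: {net} -> {value}")

    schedule_change(0.0, "sum_out", 1, "short_path")     # fast path settles at 0.8 ns
    schedule_change(0.0, "carry_out", 1, "long_path")    # slow path settles at 2.4 ns
    run(until=5.0)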
 
4 years have passed and I am at a crossroads again (looking for work). Has anything changed in the async field? Have any new commercial async-involved projects appeared?

Recently I saw a German project where an AI (reconfigurable SNN) chip is based on purely async neurons and a serving NoC, operating at a supply right at the threshold voltage. A very smart approach to reduce consumption and maximize power efficiency.
On the other hand, I heard news about Wave Computing. I remember they declared that async was their key technology when they were founded, but time has shown that async didn't bring them any fame and/or profit. Sadly.
 
Intel/Altera are on track to make async work. [they don't know it] Providing a way to generate a time interval to allow for nets to resolve/settle is fundamental. The time for a signal to propagate is proportional to the distance from the driver to the receiver. [on the order of 1.2 ns/ft]
The Stratix V FPGAs choose the destination flop/reg for a signal based on that time. [unfortunately garden-variety STA/synthesis does not]

My approach is to include the event time/delay with the algebraic and Boolean expressions for the data flow and control logic respectively. Given the input values and times, the evaluation times and values can be calculated for internal nets and outputs.
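
A tiny Python sketch of that idea (the operator set and delay numbers are placeholders, not the actual tool):

    GATE_DELAY = {"and": 0.10, "or": 0.12, "add": 0.45}   # ns, placeholder values

    def evaluate(op, *inputs):
        # Each input is a (value, time_ns) pair; the result is valid only after the
        # latest input has arrived, plus the delay of the evaluating gate/operator.
        values = [v for v, _ in inputs]
        ready = max(t for _, t in inputs) + GATE_DELAY[op]
        if op == "and":
            result = int(all(values))
        elif op == "or":
            result = int(any(values))
        else:                                              # "add"
            result = sum(values)
        return result, ready

    a = (1, 0.00)                 # value 1, valid at t = 0.00 ns
    b = (0, 0.30)                 # value 0, valid at t = 0.30 ns
    print(evaluate("or", a, b))   # -> (1, 0.42): the value and the time at which it is known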

The next step is to do a Stratix V FPGA design to get the actual waveforms and timing.
 
Thanks for sharing! Quite interesting news. At first glance they are making just another Amulet/HS chip, but with dynamic power control. Hope I am wrong.

Do you mean you are experimenting with async automata? And you are planning to implement it in an FPGA using the bundled-delay (BD) approach? I think this is really interesting. I suggest you read Igor Lemberski's articles; he has been experimenting with async automata in FPGAs for years. Are you doing scientific research, or is it just a hobby, maybe?
 
It is a hobby [mission] prompted by the poor quality and shortcomings of the current design tools. It goes back to the time when "Verilog can be simulated, therefore it must be used for design entry." Verilog is source for synthesis tools, definitely NOT a good choice for design. I am frustrated because there is so much talk about getting away from RTL and so little understanding of how chips work.

Object-Oriented Programming is about defining classes that perform either logical or arithmetic functions and connecting them together in a meaningful way. C# (C Sharp) also supports events and event handlers. It is the concept of generating an event that will trigger an event handler that is important.
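
A toy Python rendition of that event/event-handler idea (the post describes it in C# terms; the class and net names here are made up for illustration):

    class Net:
        def __init__(self, name, value=0):
            self.name, self.value, self.handlers = name, value, []

        def on_change(self, handler):
            self.handlers.append(handler)          # register an event handler

        def set(self, value, time_ns):
            if value != self.value:                # a change generates an event...
                self.value = value
                for handler in self.handlers:      # ...which triggers every registered handler
                    handler(self, time_ns)

    data_ready = Net("data_ready")
    data_ready.on_change(lambda net, t: print(f"t={t} ns: {net.name} -> {net.value}"))
    data_ready.set(1, 0.35)                        # prints: t=0.35 ns: data_ready -> 1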

Using the propagation delay of the FPGA signals as the event propagation time, and then adding additional wiring delay as needed, can replace synchronous clocking. This is essentially what the Stratix V does by selecting the appropriate flip-flop to capture the signal.

It is not a research topic, just good old tried and true logic design. Yes for now Verilog is useful because synthesis is necessary for timing analysis.

BUT events and event handler concepts allow for logic simulation before synthesis.
 
It is a hobby [mission] prompted by the poor quality and shortcomings of the current design tools. It goes back to the time when "Verilog can be simulated, therefore it must be used for design entry."
I wouldn't say that Verilog is poor as an entry language. Do you know how to design complex async automata (not a Moore/Mealy FSM)? The entry is just a waveform, a sequence of signal transitions on paper. You need to take this waveform as the entry, then draw a corresponding Petri net, and then compile it into a gate netlist using special compilers like Petrify. Looks like a nightmare, but this is the best known way.

Verilog is source for synthesis tools, definitely NOT a good choice for design. I am frustrated because there is so much talk about getting away from RTL and so little understanding of how chips work.
I totally agree with the second part. Concerning Verilog... I have been using it for 20+ years and still haven't seen anything better. Unfortunately.

Object-Oriented Programming is about defining classes that perform either logical or arithmetic functions and connecting them together in a meaningful way. C# (C Sharp) also supports events and event handlers. It is the concept of generating an event that will trigger an event handler that is important.
All known simulation and STA tools are based on a synchronous approach to design. Synchronous means that all transitions are finite and must be completed before the next clock arrives. This means that simulation/STA handle concurrent processes only in the simplest way - dealing with small, finite parts only - and this is how they gain performance. There is no need for true concurrency to simulate a synchronous circuit.
On the other hand, I have no idea how to simulate truly concurrent processes, mainly because of async arbiters and their metastability exit time. This time is not deterministic and behaves like a Gaussian "bell-shape" distribution. So a concurrent simulation must operate with statistical data. This is a big problem, I'd say a blocker to inventing a true concurrent simulator.
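
A toy Monte Carlo sketch of what operating with statistical data could look like (entirely illustrative; the constants are made up, and the exponential tail is just the common textbook metastability model, used here as a placeholder for whatever distribution fits):

    import random

    TAU_NS = 0.05            # assumed metastability time constant
    BASE_DELAY_NS = 0.20     # assumed nominal arbitration delay
    WINDOW_NS = 0.01         # requests closer than this are treated as "simultaneous"

    def arbiter_grant_time(req_a_ns, req_b_ns):
        # Near-simultaneous requests add a random resolution time on top of the base delay.
        resolve = random.expovariate(1.0 / TAU_NS) if abs(req_a_ns - req_b_ns) < WINDOW_NS else 0.0
        return max(req_a_ns, req_b_ns) + BASE_DELAY_NS + resolve

    samples = sorted(arbiter_grant_time(1.000, 1.001) for _ in range(100_000))
    print("median grant time :", round(samples[len(samples) // 2], 3), "ns")
    print("99.99th percentile:", round(samples[int(len(samples) * 0.9999)], 3), "ns")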

It is not a research topic, just good old tried and true logic design. Yes for now Verilog is useful because synthesis is necessary for timing analysis.
Cool! I wish you luck with this interesting topic.

Btw, the last time I thought about the asynchronous CPU concept, I identified two basic blockers for this task: the first is the async interface, and the second is async RAM. The problem with SRAM is well known - it can't operate below 300-400 mV (for many reasons). Concerning async interfaces... there are none of them, except good old VME, perhaps. But the interface must be robust enough to prevent a latch-up of the async circuit (all handshake-type async circuits are very vulnerable to latch-ups). So what I want to say is: designing an async circuit isn't a problem nowadays. The real problem is building the surroundings for async that allow it to work and keep it safe.
 
I was designing, debugging, and troubleshooting computers and control units 20 years before Verilog was invented. There was no timing analysis, and programming was done in assembler. The AN/FSQ-7 and IBM System/360, including I/O devices, for reference.

First came the data flow, including registers, buses, and interface controls. Then decoders, ALU, counters, etc. And then the interface sequencing controls.

The key to making it work was the system interconnections with data inputs/outputs and control signals. THIS WAS THE FIRST THING THAT VERILOG IGNORED!

I truly do not understand how it is desirable or even possible to use waveforms as design entry. It is the logic and sequencing to produce the waveform that is required.

I am considering posting the syntax and a prototype parser/simulator as open source, either on Dropbox or Git. Windows Visual Studio is required.
 
pmat,
I agree, there were many startups who tried to use async logic. But where are they now?
Big companies like Intel sometimes take over little startups just to obtain new technologies, not to use them. I heard about Intel's "elastic" circuits - an asynchronous-like spin-off from classical synchronous design. But I have never heard of any practical usage of elastic circuits in Intel's processors. The same goes for the Philips spin-off "Handshake Solutions" (HS). They made the first commercially successful asynchronous MCUs based on the ARM996 and 8051 architectures. And where is HS now? I can only conclude that all async startups disappear for unknown reasons. And I am very curious about that reason.

I worked with Ad Peeters and Kees van Berkel at Philips on Handshake Circuits in the early nineties. It was a joint project with Steve Furber's AMULET group, the precursor to SpiNNaker. I can tell you a lot of stories 😉
 