While at DVCon I talked to Apurva Kalia (VP R&D in the System and Verification group at Cadence). He introduced me to the ultimate benchmark test for self-driving – an autonomous 3-wheeler driving in Delhi traffic. If you’ve never visited India, the traffic there is quite an experience. Vehicles of every type pack the roads and each driver advances by whatever means they can, including driving against oncoming traffic if necessary. 3-wheelers, the small green and yellow vehicles in the picture below, function very effectively as taxis; they’re small, so they can zip in and out of spaces that cars and trucks couldn’t attempt. The whole thing resembles a sort of directed Brownian motion, seemingly random but with forward progress.
India city traffic
Making autonomy work in Western traffic flows seems trivially simple compared to this. But that’s exactly what an IIT research group in Delhi has been working on. Apurva said he saw a live example, in Delhi traffic, on a recent visit. We should maybe watch these folks more closely than the Googles and their kind.
After sharing Delhi traffic experiences, Apurva and I mostly talked about functional safety, a topic of great interest to me since I just bought a new car loaded with most of the ADAS gizmos I’ve heard of, including even some (minimal) autonomy. To start, he noted that safety isn’t new – we’ve had it for years in defense, space and medical applications. What is new is economical safety (or, if you prefer, safety at scale). If you add $4k of electronics to a $20k car, you have a $24k total cost. If you duplicate all of that electronics for safety, you now have a $28k total cost, less attractive for a lot of buyers. The trick to safety in cars is to add just enough to meet important safety goals without adding cost for redundancy in non-critical features.
For Apurva, ensuring safety at scale boils down to two parts:
- Analysis to build a safety claim for the design
- Verification to justify that the safety claim is supportable
In my understanding, the first step is analysis plus failure mitigation. You start with failure modes, effects and diagnostic analysis (FMEDA), decomposing the design hierarchically and entering (as in the table above) expected failure rates in FIT (failures in time), planned safety mechanisms for mitigation (dual-core lockstep in this example) and the diagnostic coverage (DC) of failures expected from that mechanism. I don’t know how much automation can be found today in support of building these tables; I would guess that this is currently a largely manual and judgement-based task, though no doubt supported by a lot of spreadsheets and Visual Basic. Out of this exercise comes the overall safety scoring/analysis Apurva refers to in his first step.
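The arithmetic behind an FMEDA roll-up is straightforward: each block’s base FIT is discounted by the diagnostic coverage of its safety mechanism, and the undetected remainders are summed. Here’s a minimal sketch of that calculation in Python; the block names, FIT values and DC figures are all made up for illustration, not taken from any real FMEDA.

```python
# Toy FMEDA roll-up: residual (undetected) failure rate per block,
# summed to an overall diagnostic coverage figure. All numbers are
# illustrative assumptions, not real data.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    fit: float   # base failure rate, failures per 1e9 device-hours
    dc: float    # diagnostic coverage of the safety mechanism, 0..1

def residual_fit(blocks):
    """Sum of failures each block's mechanism fails to detect."""
    return sum(b.fit * (1.0 - b.dc) for b in blocks)

blocks = [
    Block("cpu_core",  100.0, 0.99),   # e.g. dual-core lockstep
    Block("sram",       50.0, 0.999),  # e.g. ECC
    Block("glue_logic", 20.0, 0.60),   # e.g. partial logic BIST
]

total = sum(b.fit for b in blocks)
residual = residual_fit(blocks)
overall_dc = 1.0 - residual / total
print(f"total FIT {total:.1f}, residual FIT {residual:.2f}, "
      f"overall DC {overall_dc:.3f}")
```

Note how the weakly covered glue logic dominates the residual FIT even though it has the smallest base failure rate – exactly the kind of insight that drives where to spend the safety budget.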
Functional safety mechanisms are by now quite well known. Among these is the dual-core lockstep (DCLS) method I mentioned above – run two CPUs in lockstep and compare outputs to detect potential discrepancies. Triple modular redundancy is another common technique – triplicate the logic with a voting mechanism to pick the majority result; simple duplication (as in DCLS) can detect and warn of errors but cannot correct them. Logic BIST is becoming very popular to test random logic, as is ECC for checking around memories. Also in support of duplication/triplication methods, it is becoming important to floorplan carefully. Permanent or transient faults (manufacturing defects or neutron-induced ionization, for example) can equally impact adjacent replicates, defeating the diagnostic objective; mitigation requires ensuring these are reasonably separated in the floorplan.
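The difference between duplication and triplication is easy to see in miniature. The sketch below models both at the level of a single data word: a TMR voter masks a single-bit upset in one copy, while a lockstep-style compare only flags the mismatch. This is a conceptual illustration, not hardware or tool code.

```python
# Illustrative models of two safety mechanisms: TMR majority voting
# (detects AND corrects a fault in one replicate) versus
# duplicate-and-compare as in dual-core lockstep (detect only).

def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority: each output bit is 1 where at least
    two of the three redundant copies agree on 1."""
    return (a & b) | (a & c) | (b & c)

def lockstep_compare(a: int, b: int) -> bool:
    """Duplication can only report whether the copies agree."""
    return a == b

good = 0b1011
flipped = good ^ 0b0100          # single-bit upset in one replicate

assert tmr_vote(good, good, flipped) == good  # TMR masks the fault
assert not lockstep_compare(good, flipped)    # DCLS merely flags it
```

The floorplanning point in the paragraph above follows directly: if one physical event could flip the same bit in two of the three copies, the majority vote is defeated, so the replicates must be physically separated.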
The verification part of Apurva’s objectives is where you most commonly think of design tools, particularly the fault simulation-centric aspect. Cadence has been in the fault-sim business for a long, long time (as I remember, Gateway – the originator of Verilog – started in fault sim before they took off in logic sim). Fault sim is used in safety verification to determine whether injected faults, corresponding to permanent or transient errors, are corrected or detected by the mitigation logic. The goal of a safety verification flow, such as the functional safety flow from Cadence, is therefore to inject faults in critical areas (carefully selected/filtered to minimize wasted effort), run fault simulation to determine which faults are detected, then roll up the results to report diagnostic coverage in whatever format is suitable for the Tier-1/OEM consumers of the device.
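The shape of such a fault-injection campaign can be sketched in a few lines: enumerate a fault list, apply each fault, check whether the safety mechanism fires, and roll the results up into a coverage number. The toy version below uses a parity bit over a data word as the (stand-in) safety mechanism, purely to keep the example self-contained – real flows inject faults into gate-level netlists under a fault simulator, not Python.

```python
# Toy fault-injection campaign: flip each bit of a protected word
# and check whether the (stand-in) safety mechanism - a parity bit -
# detects the fault. Roll the hits up into diagnostic coverage.

def parity(word: int) -> int:
    """Even-parity bit over the word's binary representation."""
    return bin(word).count("1") & 1

def detected(word: int, stored_parity: int) -> bool:
    """The mechanism fires when recomputed parity disagrees."""
    return parity(word) != stored_parity

WIDTH = 8
golden = 0b1011_0010
stored = parity(golden)

# Fault list: every single-bit flip of the protected word.
faults = [golden ^ (1 << i) for i in range(WIDTH)]
hits = sum(detected(f, stored) for f in faults)
coverage = hits / len(faults)
print(f"diagnostic coverage for single-bit faults: {coverage:.0%}")
```

Parity catches every single-bit fault (100% coverage here) but misses all double-bit faults – which is why the fault list itself, i.e. which faults you bother to inject, matters as much as the simulation.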
So when you next step into a 3-wheeler (or indeed any other vehicle) in Delhi, remember the importance of safety verification. In that very challenging traffic, autonomy will ultimately make your journey through the city less harrowing and very probably faster as more vehicles become autonomous (thanks to laminar versus turbulent traffic flow). But that can only happen if the autonomy is functionally safe. Making that happen in Delhi traffic will likely set a new benchmark for ultimate safety. You can learn more about the Cadence functional safety solution HERE.