

Monitoring Process, Voltage and Temperature in SoCs, webinar recap
by Daniel Payne on 04-26-2018 at 4:00 pm

Have you ever wondered how process variation, thermal self-heating and Vdd levels affect the timing and yield of your SoC design? If your clock specification calls for 3GHz while your silicon is only yielding at 2.4GHz, then you have a big problem on your hands. Such are the concerns of many modern-day chip designers. To learn more about this topic I attended the 45-minute webinar from Moortec, titled “The Importance of Monitoring Process & Voltage in Advanced Node SoCs”. Ramsay Allan provided the introduction and overview, then Stephen Crosher, CEO, presented about 20 slides on the challenges for IC design from 40nm down to 7nm, along with the company's semiconductor IP used for in-chip PVT monitoring.

Here’s a summary of the physical effects that adversely affect chip timing and reliability:

  • Thermal hot-spots
  • Device and process variability
  • Increased resistance of interconnect from 40nm to 7nm
  • Power Delivery Network
  • Lower Vdd trends from 40nm to 7nm, lower design margins
  • Ageing that changes Vt
  • Self-heating accelerates BTI and HCI
  • Delays increasing from interconnect resistance
  • Delays increasing from Vdd variations

The webinar was chock-full of diagrams and charts pointing out all of the effects that keep you worried at night, for example the increase in interconnect resistance as you progress from the 40nm to the 7nm node.

Total delay within a chip is the combination of gate delay and wire delay, and as we move to smaller process nodes the percentage of total delay caused by interconnect is now approaching 50%.
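
To see why the interconnect share keeps climbing, here is a minimal back-of-the-envelope sketch in Python; the per-stage delay numbers are invented for illustration and are not taken from the webinar:

```python
# Illustrative only: assumed per-stage delays showing how the wire share of
# total path delay grows when gate delay scales faster than wire delay.
nodes = {
    # node : (gate_delay_ps, wire_delay_ps) per logic stage -- assumed values
    "40nm": (25.0, 10.0),
    "16nm": (12.0, 9.0),
    "7nm":  (7.0, 6.5),
}

for node, (gate, wire) in nodes.items():
    total = gate + wire
    print(f"{node}: interconnect is {wire / total:.0%} of total stage delay")
```

With these made-up numbers the interconnect share moves from roughly 29% at 40nm to nearly 50% at 7nm, which is the trend the webinar charts illustrated.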

Process variation now shows its ugly side when you can see different process corners on the same chip, making it especially challenging to do timing analysis and reach timing closure.

To mitigate these issues clever IC designers have come up with several approaches that use in-chip monitors:

  • Voltage scaling optimization per chip, by finding the lowest functional voltage that still meets the target frequency
  • Adaptive Voltage Scaling (AVS) as a closed-loop system using on-chip monitors (a minimal sketch of such a loop follows this list)
  • Self-adaptive tuning
  • Embedded chip monitoring to minimize power consumption at the enterprise datacenter level
  • Using AVS to do speed binning of parts
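
To make the closed-loop AVS idea concrete, here is a minimal control-loop sketch in Python. The monitor read-out and regulator interface are hypothetical placeholders (read_delay_monitor_margin_ps, set_vdd_mv and all thresholds are assumptions, not Moortec's API):

```python
import random
import time

# Hypothetical interfaces: in a real SoC these would be reads from on-chip
# PVT monitor registers and writes to an external voltage regulator (PMIC).
def read_delay_monitor_margin_ps() -> float:
    """Timing slack reported by an on-chip delay monitor, in picoseconds."""
    return random.gauss(15.0, 5.0)

def set_vdd_mv(vdd_mv: int) -> None:
    print(f"regulator set to {vdd_mv} mV")

TARGET_MARGIN_PS = 20.0   # keep at least this much slack
VDD_STEP_MV = 5           # regulator resolution
VDD_MIN_MV, VDD_MAX_MV = 650, 900

def avs_loop(iterations: int = 10) -> None:
    vdd = 800
    for _ in range(iterations):
        margin = read_delay_monitor_margin_ps()
        if margin < TARGET_MARGIN_PS:          # too close to failing: raise Vdd
            vdd = min(VDD_MAX_MV, vdd + VDD_STEP_MV)
        elif margin > 2 * TARGET_MARGIN_PS:    # plenty of slack: save power
            vdd = max(VDD_MIN_MV, vdd - VDD_STEP_MV)
        set_vdd_mv(vdd)
        time.sleep(0.01)                       # loop period (placeholder)

if __name__ == "__main__":
    avs_loop()
```

In a real subsystem the loop would run in firmware or dedicated hardware with calibrated monitor readings, but the structure (read margin, compare against a target, nudge Vdd) is the same.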

The chaps at Moortec, a British company founded back in 2005, are experts at applying in-chip monitoring to a wide range of commercial ICs. Their semiconductor IP combines hard blocks and soft blocks to create a subsystem for in-chip monitoring.

Placing multiple PVT sensors on a single chip makes sense, but how many of them should you add, and exactly where should they be placed for optimum impact? Good questions; placement of the IP is really application-specific, so rely on the technical support that comes along with their service.

For me the most powerful bit of information was saved for near the end when they unveiled a list of customers using their in-chip monitoring.

I plan to visit Moortec at DAC in San Francisco; they're also holding multiple events across the globe to support their unique IP.

Summary
Having a plan to meet critical timing with in-chip PVT monitors as part of a subsystem makes more sense as you reach the 40nm process node and move to ever smaller geometries. Yes, you could cobble together something proprietary in-house if you have lots of spare engineering resources and the time to design, verify, fabricate and test a one-off system. My hunch is that your product schedule and budget would be better served by looking at something off the shelf from Moortec, because that is their sole focus and the proof is in their ever-expanding list of adopters. They've set up distributors in all of the high-tech centers around the world, and those would be a good starting point to learn more about their technology, approach and benefits.


Open-Silicon, Credo and IQ-Analog Provide Complete End-to-End Networking ASIC Solutions
by Camille Kokozaki on 04-26-2018 at 12:00 pm

The end-to-end principle, as defined by Wikipedia, is a design framework in computer networking. In networks designed according to this principle, application-specific features reside in the communicating end nodes of the network, rather than in intermediary nodes, such as gateways and routers, that exist to establish the network. [1] There are usually tradeoffs between reliability, latency, and throughput. High-reliability networks usually negatively impact the other two parameters of this data-transmission triad, namely latency and throughput. This is particularly important for applications that value predictable throughput and low latency over reliability – the classic example being interactive real-time voice applications.

Another example of an end-to-end application is the network that handles on-demand content delivery, from preparing and packaging the audio/video assets, adding metadata and transcoding them, to sending them to distributors. It goes without saying that end-to-end networking solutions need to satisfactorily address these requirements for fast, reliable transfer and delivery, so it is not surprising that standardized ASIC solutions are best suited for the task. Those solutions are needed by leading-edge networking applications, such as long-haul, metro and core, broadband access, optical, carrier IP and data center interconnect use cases.

Open-Silicon, Credo, and IQ-Analog have put together a complete end-to-end ASIC solution for leading-edge networking applications, such as long-haul, metro and core, broadband access, optical, carrier IP and data center interconnect, which they showcased at OFC 2018 last month. [2] Open-Silicon brings a comprehensive Networking IP Subsystem Solution, which includes high-speed chip-to-chip interface Interlaken IP, Ethernet Physical Coding Sublayer (PCS) IP, FlexE IP compliant with the OIF Flex Ethernet standard v1.0 (and to be compliant with the upcoming v2.0), and Multi-Channel Multi-Rate Forward Error Correction (MCMR FEC) IP. Open-Silicon complements this with its High Bandwidth Memory (HBM2) IP Subsystem Solution.

Credo contributes its high-speed 56Gbps PAM4 LR multi-rate SerDes solution and 112Gbps PAM4 SR/LR SerDes targeted at next-generation networking ASICs. IQ-Analog rounds out the solution with its high-performance, patented TPWQ hyper-speed 90Gsps analog-to-digital converter (ADC) and digital-to-analog converter (DAC) IP. This teaming up gives the three companies the opportunity to demonstrate the power of complete solutions for the next generation of high-performance networking applications.

Open-Silicon's comprehensive Networking IP Subsystem Solution portfolio includes:

1. High-Speed Chip-to-Chip Interface Interlaken IP – Open-Silicon's 8th-generation Interlaken IP core supports up to 1.2 Tbps high-bandwidth performance and up to 56 Gbps SerDes rates with Forward Error Correction (FEC). This high-speed chip-to-chip interface IP features an architecture that is fully flexible, configurable and scalable, making it ideal for high-bandwidth networking applications, such as routers, switches, Framer/MAC, OTN switch, packet processors, traffic managers, look-aside processors/memories, data center applications, and several other high-end networking and data processing applications.
http://www.open-silicon.com/open-silicon-ips/interlaken-controller-ip/

2. Ethernet Physical Coding Sublayer (PCS) IP – Open-Silicon's Ethernet PCS core is compatible with different MII interfaces for connecting to the MAC and is uniquely built to work with off-the-shelf MAC and SerDes from leading technology vendors. It supports 64b/66b encoding/decoding for transmit and receive, and various data rates ranging from 10G to 400G (a quick sketch of the 64b/66b rate overhead follows this list). The Ethernet PCS IP complies with the IEEE 802.3 standard and supports Ethernet and Flex Ethernet interfaces, making it ideal for high-bandwidth Ethernet endpoint and Ethernet transport applications. https://www.open-silicon.com/networking-ip-subsystem/

3. Flex Ethernet (FlexE) IP – Open-Silicon's FlexE IP core features a generic mechanism that supports various Ethernet MAC rates, and is uniquely built to work with Open-Silicon's packet interface and OTN client interface or off-the-shelf MACs. The FlexE IP supports the Optical Internetworking Forum (OIF) Flex Ethernet standard 1.0 and will be compliant with the upcoming v2.0. The IP supports FlexE aware, FlexE unaware, and FlexE terminate modes of mapping over the transport network, making it ideal for high-bandwidth Ethernet transport applications. https://www.open-silicon.com/networking-ip-subsystem/

4. Forward Error Correction (FEC) IP – Open-Silicon's FEC IP core is capable of multi-channel multi-rate forward error correction in applications where the bit error rate is very high, such as high-speed SerDes 30G and above, and significantly improves bandwidth by enabling 56G PAM4 SerDes integration. This single-instance IP core is compatible with off-the-shelf SerDes from leading technology vendors and supports bandwidths up to 400G with the ability to connect 32 SerDes lanes. It can easily achieve a Bit Error Rate (BER) of 10⁻⁶, which is required by most electrical interface standards using PAM4 SerDes. The FEC IP core supports the Interlaken and Ethernet standards and significantly improves bandwidth by enabling high-speed, multi-channel SerDes integration, making it ideal for high-bandwidth networking applications. https://www.open-silicon.com/networking-ip-subsystem/

5. High Bandwidth Memory (HBM2) IP Subsystem Solution – A comprehensive HBM2 IP subsystem solution for 2.5D ASICs in FinFET technologies, now available for 2.5D ASIC design starts and also as licensable Intellectual Property (IP). The IP includes the controller, PHY and custom die-to-die I/O needed to drive the interface between the logic die and the memory die-stack on the 2.5D interposer. Open-Silicon's HBM2 IP subsystem is silicon proven on a 2.5D HBM2 ASIC SiP (System-in-Package) platform. The platform is used to demonstrate high-bandwidth data transfer rates of >2Gbps and interoperability between Open-Silicon's HBM2 IP subsystem and the HBM2 memory die-stack. http://www.open-silicon.com/high-bandwidth-memory-ip/
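
Since item 2 mentions 64b/66b encoding, here is a tiny, hedged sketch of the rate arithmetic that scheme implies; it illustrates only the 2-bit sync-header overhead and is not code from any of the IP described above:

```python
# 64b/66b adds a 2-bit sync header to every 64-bit payload block, so the
# serial line rate is the payload rate scaled by 66/64 (3.125% overhead).
def line_rate_gbps(payload_rate_gbps: float) -> float:
    return payload_rate_gbps * 66.0 / 64.0

for rate in (10.0, 25.0, 100.0):
    print(f"{rate:g}G payload -> {line_rate_gbps(rate):.4f} Gb/s on the wire")
# 10G -> 10.3125 Gb/s, the familiar 10GBASE-R line rate.
```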

To summarize:

Open-Silicon Networking IP Subsystem Solution

  • High-speed chip-to-chip interface Interlaken IP
  • Ethernet Physical Coding Sublayer (PCS) IP
  • FlexE IP compliant to OIF Flex Ethernet standard v1.0 and will be compliant with the upcoming v2.0
  • Multi-Channel Multi-Rate Forward Error Correction (MCMR FEC) IP
  • High Bandwidth Memory (HBM2) IP Subsystem Solution

Credo

  • High-speed 56Gbps PAM4 LR Multi-Rate SerDes solution
  • 112Gbps PAM4 SR/LR SerDes targeted for next-generation networking ASICs

IQ-Analog

  • High-performance analog-to-digital converters (ADCs)
  • High-performance digital-to-analog converters (DACs).

[1] End-to-end principle – Wikipedia

[2] Open-Silicon, Credo and IQ-Analog Showcase Complete End-to-End Networking ASIC Solutions at OFC 2018



Safety in the Interconnect
by Bernard Murphy on 04-26-2018 at 7:00 am

Safety is a big deal these days, not only in automotive applications, but also in critical infrastructure and industrial applications (the power grid, nuclear reactors and spacecraft, to name just a few compelling examples). We generally understand that functional blocks like CPUs and GPUs have to be safe, but what about the interconnect? To some among us, interconnect just means wires; what’s the big deal, outside of manufacturing defects and electromigration? No joke – Kurt Shuler (VP Marketing at Arteris IP) tells me he still struggles with this perception among some people he talks to (maybe physical design teams and small block designers?).


If you’re familiar with SoC architecture, you know why interconnect is just as important as endpoint IP in functional verification, safety, security and other domains. Just as the Internet is only possible because connections are virtualized by routing through networks of traffic managers, so in modern SoCs, at least at the top-level, traffic is managed through one or more networks-on-chip (NoCs, though the implementation is quite different than the Internet). That means there’s a lot of logic in this interconnect IP, managing interfaces between endpoint IPs (such as ARM cores, hardware accelerators, memory controllers and peripherals) and managing those routing networks. Moreover, since the interconnect mediates safety-critical operations between these IPs, it is inextricably linked with system safety assurance.


Completing failure mode effects analysis (FMEA) and failure mode effects and diagnostic analysis (FMEDA) is particularly difficult for interconnect IP: first, because they are safety elements out of context (SEooC, as are all IPs and, generally, chips), which therefore have to be validated against agreed assumptions of use. Second, interconnect IP is highly configurable, which makes reaching agreement on the assumptions of use, defining potential failure modes and effects, and validating safety mechanisms even more challenging than for other IP, since safety assurance must be determined on the configuration built by the integrator.

To Arteris IP, the right way to handle this complexity is to start with a qualitative, configurable FMEA analysis, which can then guide the creation of a configuration-specific FMEDA. Naturally, this requires hierarchy around modular components with safety mechanisms tied to those components, so that as you configure the IP, those safety mechanisms are automatically implemented in a manner that ensures safety for the function. For example, in the Arteris IP NoCs, the network interface unit (NIU) responsible for packetization can be duplicated. Then in operation, results from these units are continually compared, looking for variances which would signal a failure. You can also have ECC or parity checks in the transport. Following the FMEA/FMEDA philosophy, these safety mechanisms are designed to counter the various potential failure modes identified by the IP vendor. And because of the modularity, functional safety diagnostic coverage of the entire NoC can be calculated based on individual module coverage metrics. Kurt also stressed that it is important for the IP provider to work with functional safety experts (in Arteris IP’s case, ResilTech) to ensure maximum objectivity in this analysis, and naturally, to suggest opportunities to further enhance safety solutions.
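
As a hedged illustration of that last point, one common way to roll module-level metrics up to the interconnect level is to weight each module's diagnostic coverage (DC) by its failure-rate (FIT) contribution. The module names and numbers below are invented for the example and are not Arteris IP data:

```python
# Illustrative roll-up of diagnostic coverage (DC) for a modular interconnect:
# weight each module's DC by its failure-rate (FIT) contribution.
modules = [
    #  name            FIT    DC (fraction of failures detected)
    ("niu_initiator",  2.0,   0.99),
    ("niu_target",     1.8,   0.99),
    ("transport",      3.5,   0.97),
    ("safety_ctrl",    0.5,   0.90),
]

total_fit = sum(fit for _, fit, _ in modules)
covered_fit = sum(fit * dc for _, fit, dc in modules)
print(f"NoC-level diagnostic coverage = {covered_fit / total_fit:.2%}")
print(f"Residual (undetected) FIT = {total_fit - covered_fit:.2f}")
```

Duplicated NIUs with comparators, ECC or parity on the transport links and similar mechanisms each contribute their own DC term to such a roll-up.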

Who guards the guardians? Arteris IP provides BIST functions to check their compare logic for safety mechanisms at reset and during run-time. All of this safety status rolls up into a safety controller (part of the Arteris IP deliverable), which can be monitored at the system level.


A critical part of safety analysis is deciding how to meaningfully distribute failure modes for verification. This is an area where the IP provider (with guidance from an independent safety expert) can generate a template to drive FMEDA estimation based on the specific configuration and estimated area of sub-components. Having a modular and hierarchical interconnect IP architecture is key to calculating the safety metrics required for FMEDA. In other words, the user of the IP should be able to expect a largely self-contained solution for this part of their safety validation task.

Another important safety advantage of an automatically generated IP is that it becomes possible to automate the generation of the fault-injection campaign for verification and the roll-up of results into the FMEDA table. Rather than taking a shotgun approach to faulting, the campaign can be more precisely targeted in this manner:

  • Define where a fault must be inserted to generate one of the failure modes
  • Define traffic patterns that ensure faults don’t appear as safe faults
  • Define which safety mechanism shall detect the fault (but ensure all possible safety mechanisms detect the failure mode)
  • Define observation points as close as possible to the sub-part being tested

Again, the fault campaign is largely automated for the integrator.
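
To show what such a targeted campaign specification might look like in practice, here is a minimal sketch in Python; the dataclass, field names and signal paths are hypothetical illustrations of the four bullets above, not an actual Arteris IP file format:

```python
from dataclasses import dataclass, field

# Hypothetical representation of one entry in a targeted fault campaign,
# mirroring the four bullets above.
@dataclass
class FaultTarget:
    inject_at: str                 # where the fault is inserted
    failure_mode: str              # which failure mode it should provoke
    traffic_pattern: str           # stimulus that keeps the fault from being "safe"
    expected_mechanism: str        # safety mechanism that must flag it
    observation_points: list[str] = field(default_factory=list)

campaign = [
    FaultTarget(
        inject_at="niu0.packetizer.header_reg[3]",
        failure_mode="corrupted_routing_header",
        traffic_pattern="write_burst_cpu_to_ddr",
        expected_mechanism="duplicated_niu_compare",
        observation_points=["niu0.compare_error", "safety_controller.irq"],
    ),
    FaultTarget(
        inject_at="transport.link2.data[17]",
        failure_mode="payload_bit_flip",
        traffic_pattern="random_read_mix",
        expected_mechanism="ecc_on_transport",
        observation_points=["transport.link2.ecc_err"],
    ),
]

for t in campaign:
    print(f"fault {t.inject_at}: expect {t.expected_mechanism} to detect it")
```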

Overall, this seems like a reasonable strategy, meeting requirements for functional safety and the expectation that an IP should still be a self-contained solution, even when configured as extensively as is possible with a NoC. You can learn more about the Arteris IP safety solutions HERE.



Electrical Reliability Verification – Now At FullChip
by Alex Tan on 04-25-2018 at 12:00 pm

Advanced process technology offers both device and interconnect scaling for increased design density and higher performance, while also introducing significant implementation complexity. Aside from the performance, power and area (PPA) aspects, designers increasingly need to tackle reliability issues such as Electrostatic Discharge (ESD), Latch-Up (LUP) and Time-Dependent Dielectric Breakdown (TDDB). Traditionally these issues are addressed during the cell library and technology development stage, under the as-designed operating voltage ranges. However, foundries and integrated device manufacturers (IDMs) are now encouraging full-chip reliability verification to prevent chip reliability failures, either during burn-in or in silicon.

Process and Reliability Design Rules
In the traditional RTL2GDS2 design flow, the standard cell library is initially targeted as the test vehicle for a new process roll-out, since it is easier to implement than macro or custom blocks. Both timing and power attributes are normally captured in the library and synced with the foundry-provided SPICE model version. Subsequently, the library can also embed process-variation parameters through the Liberty Variation Format (LVF) extension.

Back in the 0.18-micron technology days, we became accustomed to the notion of DFM (Design For Manufacturability). Since then, frequent collaboration between foundries and designers has taken place to minimize surprises from the manufacturing side. Designers attempt to incorporate all known critical process parameters prior to tape-out. The foundry provides PDK (Process Design Kit) and techfile releases early and maintains close collaboration with customers' technology and library teams to ensure proper adoption. Customers subsequently pipe-clean and apply changes using the foundry's pre-approved reference flow.

Similar to the PDK, a Reliability Design Kit (RDK) may be provided by foundries or IDMs, but with some critical design rules presented only as guidelines. For example, a wire-width threshold is posted as the criterion for nets in a design to survive an ESD event. This rule cannot be applied directly in a traditional DRC tool, which lacks the ability to identify electrical current directionality, among other things. In other cases, a DRC-driven approach to identifying both LUP and TDDB is impractical because it relies heavily on physical markers to identify the polygons under analysis. Such an approach either does not work well at full-layout level or leaves loopholes that render it ineffective. Recent foundry/IDM methodology proposals are moving toward a non-physical-marker approach, allowing more versatile logic-driven-layout (LDL) based checking. The combination of topological data and static design rules enables better reliability check coverage.

Calibre PERC and Reliability Bottlenecks
Mentor's Calibre product family has been the industry leader in IC physical verification. The Calibre PERC reliability platform provides complex reliability verification using both the foundry's standard rules and project-specific custom rules. It employs topological constraints to verify that the correct circuit structures are in place, as specified by circuit design rules, and it can concurrently use netlist and layout information to perform electrical checks utilizing parameters in both domains.

The combination of the Calibre PERC LDL flow with static simulation and static voltage propagation has enabled foundries and IDMs to define robust reliability rules. These rules can be implemented and automatically verified at full-chip level, ensuring full coverage. Analogous to the way static timing analysis resolved the latency and scalability limits of dynamic circuit simulation for timing, this static approach cuts down complexity and allows both block-level and full-chip reliability verification.

Let's review how Calibre PERC utilizes static simulation and voltage propagation to address ESD, LUP and TDDB compliance.

For ESD prevention, ESD or power clamping devices (diode, transistor, resistor) with enough strength must be connected to IO, P/G and cross-power-domain paths. As shown in figure 2, the types of checks Calibre performs for this condition include:
– Verify that the required circuits (ESD, power clamping, back-to-back diodes) exist.
– Check that the device parameters of the corresponding circuits have sufficient strength for ESD.
– Additional checks for advanced nodes, such as effective resistance and current density.
A minimal sketch of this style of rule check follows.
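
Here is that sketch, written as plain Python rather than Calibre PERC rule syntax; the toy netlist view, device types and thresholds are all assumptions made for illustration:

```python
# Illustrative pseudo-rule in plain Python (not Calibre PERC syntax): check
# that every pad has a protection device of sufficient size. Thresholds are made up.
MIN_CLAMP_WIDTH_UM = 400.0       # assumed minimum total clamp width
MIN_DIODE_PERIMETER_UM = 120.0   # assumed minimum diode perimeter

pads = {
    # pad name : protection devices found on that pad (toy netlist view)
    "IO_A": [{"type": "ggnmos_clamp", "width_um": 480.0}],
    "IO_B": [{"type": "diode", "perimeter_um": 90.0}],
    "VDD1_to_VDD2": [],          # cross-domain path with no back-to-back diodes
}

for pad, devices in pads.items():
    if not devices:
        print(f"{pad}: VIOLATION - no ESD/clamp device found")
        continue
    for d in devices:
        if d["type"] == "ggnmos_clamp" and d["width_um"] < MIN_CLAMP_WIDTH_UM:
            print(f"{pad}: VIOLATION - clamp width {d['width_um']} um too small")
        elif d["type"] == "diode" and d["perimeter_um"] < MIN_DIODE_PERIMETER_UM:
            print(f"{pad}: VIOLATION - diode perimeter {d['perimeter_um']} um too small")
```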

The second reliability issue is latch-up (LUP). An IC's latch-up can be described as a short-circuit-type event occurring in parasitic bipolar-pair equivalent structures, triggered when a disruptive excitation causes significant current or overcurrent to loop in the positive-feedback network (refer to figure 3a).

For LUP prevention, guard-ring or strap insertion is normally recommended. In addition, the spacing among polygons involved in latch-up (those operating at different potentials) should be equal to or larger than a criterion set by the potential difference, as shown in figure 3b. Calibre PERC handles the LUP check by traversing the extracted layout netlist and propagating external voltage values onto internal nets based on user-defined constraints. It has a mechanism to annotate physical polygons with attributes signifying aggressor or victim devices, along with voltage values, for DRC checks.

For interconnect TDDB checks at block or full-chip level, spacing checks among polygons on the same layer but different nets are executed against criteria that depend on the delta-voltage range. Voltage propagation from the external nets into the internal nets is performed until all nets targeted for potential TDDB have the appropriate voltage values to annotate onto the corresponding polygons. Designers can apply constraints to control the static voltage propagation across multiple voltage domains. For example, in the static propagation shown in figure 4, the top port carries 3.3V while nets A, B and C can be assigned voltages of 2.5V, 1.8V and 2.2V, respectively. A voltage shift is defined either by a user-defined subcircuit pattern or through simulation. Subsequent static voltage propagation is performed, followed by annotation of the voltage values onto the polygons of the nets of concern for DRC checking.
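
To make the propagate-then-check idea concrete, here is a toy Python model of the flow; the net voltages follow the figure-4 example from the text, but the delta-V spacing table and polygon pairs are invented for illustration and are not foundry rules:

```python
# Toy model of the propagate-then-check flow described above (not Calibre code).
# Voltages follow the figure-4 example; the spacing table is made up.
net_voltage = {"TOP": 3.3, "A": 2.5, "B": 1.8, "C": 2.2}   # volts, after propagation

def required_spacing_um(delta_v: float) -> float:
    """Assumed delta-V dependent spacing rule for same-layer polygons."""
    if delta_v <= 1.0:
        return 0.05
    if delta_v <= 2.0:
        return 0.08
    return 0.12

# Same-layer polygon pairs on different nets, with their drawn spacing (um).
polygon_pairs = [("TOP", "B", 0.06), ("A", "C", 0.07), ("TOP", "A", 0.10)]

for n1, n2, spacing in polygon_pairs:
    dv = abs(net_voltage[n1] - net_voltage[n2])
    need = required_spacing_um(dv)
    status = "OK" if spacing >= need else "TDDB VIOLATION"
    print(f"{n1}-{n2}: dV={dv:.1f}V spacing={spacing}um (need >= {need}um) -> {status}")
```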

The reliability issues stemming from advanced process technologies introduce additional, complex sign-off requirements that are not easily resolved or scaled through dynamic simulation or traditional DRC checks. That limitation no longer applies with the availability of the Mentor Calibre PERC reliability platform, which has both the capacity and the performance to allow accurate full-chip verification for full design-for-reliability (DFR) compliance. For more detailed discussion of how Calibre PERC solves reliability issues, check this LINK.



SiFive’s Design Democratization Drive
by Camille Kokozaki on 04-25-2018 at 7:00 am

There is something endearing and refreshing in seeing a novel approach unfold in our semi/IP/EDA ecosystem, which has settled into an efficient yet, let us say it, unexciting 'going through the motions': constantly comparing, matching, and competitively, selfishly sub-optimizing what the art of the possible could be.

Enter a new breed of technologists, industry veterans, academics and evangelists articulating, testing and applying a new business model built on agility, collaboration and continuous delivery and improvement, emulating moves from the playbook of the widely successful parallel industries of software, DevOps, IT and social media. I am talking about open-source initiatives like the RISC-V movement: crowd-sourced, built on the standardization of an instruction set architecture (ISA) while allowing each company to differentiate with a set of extensions on top of those common constructs, and testing fresh, promising alternatives to business as usual. What is also admirable is the well-intentioned striving to make a better world by allowing contributions from parts of the world where opportunity and access to funds and technology are lacking. SiFive is now articulating and executing on this vision.

At the recently held GSA Silicon Summit in San Jose, Naveed Sherwani, SiFive's CEO, outlined in his closing keynote the elements of this vision and practice. He challenged the audience to explain how Instagram, a 13-employee startup, ended up being a $1B acquisition. The answer was that it provided a minimum viable product (MVP) on top of an existing stack of tools, infrastructure and technologies that did not need developing from scratch. He posited that in our industry MVPs cost too much ($1M-$7M+), design takes too long (9-24 months plus fab time) and too many experts are needed (14+ disciplines, from architects to package and test and all the expert steps in between). He challenged the industry to set the following goals:

  • Reduce Cost by 90%
  • Reduce Time to 1 Month + Fab cycle
  • Reduce needed expertise to System Level

Experts need not panic that they will be made obsolete; they just need to move to a higher level of abstraction (such as moving from writing assembly code to developing application code).

Moving forward, the options for the industry are the following:

  • Automation
  • Reduction of Options
  • Cloud leveraging
  • Deployment of new business models

SiFive’s approach to realizing this has 4 key components:

  • Freedom Designer (Core/Subsystem/Chip)
  • Cloud Platform offering
  • DesignShare
  • Operations (Fabrication/Package/Test/Logistics)

By allowing the definition and reuse of templates in the Freedom Designer, the individual core blocks can be specified, documented and incorporated into the design fabric. By providing a cloud platform offering, IP can be verified while protecting the security of the offerors' crown jewels and intellectual property. With DesignShare, 3rd-party providers supply their IP to SiFive's platform at zero up-front cost, increasing design starts, and collect NRE and royalties when production starts. This also allows interoperation of their IP with other vendors' and customers' internal IP at no initial cost for use and verification by the target customer, deferring the IP charge to a later stage and reducing development and verification time. The initial DesignShare participants were … shared with the audience.

While not explicitly addressed by Naveed, my take on this is that it requires a paradigm shift with some resilience to concerns about ‘Now my competitors can compare my metrics to theirs’ and other reservations about this business model. I view it instead as sharpening the edge and leveling the playing field while concurrently stepping up to match all (if I can mix my metaphors with abandon). Another question from IP providers might be ‘Is this a race to the bottom, selling wise?’. I think not, as selling more design starts is more revenue, albeit deferred, and there is nothing like emerging momentum to make all join with a fear of missing out on the wave.

SiFive is proposing to build a Core, Subsystem, and Chip Design Factory Software Platform with predesigned blocks and components, bringing down design cost 10x from a typical $7.5M down to $750K.

Naveed then walked the talk by showcasing HiFive Unleashed, the world's first multi-core RISC-V development board booting Linux.

Naveed summarized the dramatic benefits of this business model: reducing prototyping cost, allowing more startups, design starts and IP providers, reducing the needed expertise, and allowing design contributions by excited young technologists from the abstracted software and hardware worlds. A pledge to offer his software free to all universities and to the fifteen poorest countries was admirable in terms of commitment and smart in setting the stage for a new generation of millennial contributors and entrepreneurs to innovate and prosper. Not specifying which millennial helps preserve inclusiveness in this ongoing revolution; count me in.

Disclosure: I am an active participant in the RISC-V ecosystem which includes SiFive


Functional Safety in Delhi Traffic
by Bernard Murphy on 04-24-2018 at 7:00 am

While at DVCon I talked to Apurva Kalia (VP R&D in the System and Verification group at Cadence). He introduced me to the ultimate benchmark test for self-driving – an autonomous 3-wheeler driving in Delhi traffic. If you’ve never visited India, the traffic there is quite an experience. Vehicles of every type pack the roads and each driver advances by whatever means they can, including driving against oncoming traffic if necessary. 3-wheelers, the small green and yellow vehicles in the picture below, function very effectively as taxis; they’re small so can zip in and out of spaces that cars and trucks couldn’t attempt. The whole thing resembles a sort of directed Brownian motion, seemingly random but with forward progress.

India city traffic

Making autonomy work in Western traffic flows seems trivially simple compared to this. But that's exactly what an IIT research group in Delhi has been working on. Apurva said he saw a live example, in Delhi traffic, on a recent visit. We should maybe watch these folks more closely than the Googles and their kind.

After sharing Delhi traffic experiences, Apurva and I mostly talked about functional safety, a topic of great interest to me since I just bought a new car loaded with most of the ADAS gizmos I’ve heard of, including even some (minimal) autonomy. To start, he noted that safety isn’t new – we’ve had it for years in defense, space and medical applications. What is new is economical safety (or, if you prefer, safety at scale). If you add $4k of electronics to a $20k car, you have a $24k total cost. If you duplicate all of that electronics for safety, you now have a $28k total cost, less attractive for a lot of buyers. The trick to safety in cars is to add just enough to meet important safety goals without adding cost for redundancy in non-critical features.

A sample FMEDA table showing potential failure modes with expected failure rates, mitigating safety mechanisms and expected diagnostic coverage

For Apurva, ensuring this boils down to two parts:

  • Analysis to build a safety claim for the design
  • Verification to justify that safety claim is supportable

In my understanding, the first step is analysis plus failure mitigation. You start with failure mode effects and diagnostic analysis (FMEDA), decomposing the design hierarchically and entering (as in the table above) expected failure rates (FIT), planned safety mechanisms to mitigate (dual-core lockstep in this example) and the diagnostic coverage (DC) of failures expected from that mechanism. I don't know how much automation can be found today in support of building these tables; I would guess that this is currently largely a manual and judgement-based task, though no doubt supported by a lot of spreadsheets and Visual Basic. Out of this exercise comes the overall safety scoring/analysis Apurva refers to in his first step.
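
For a feel of the arithmetic behind such a table, here is a hedged, simplified sketch: residual FIT per failure mode is FIT × (1 − DC), and the single-point fault metric (SPFM) compares the residual against the total. The failure modes and numbers are invented, and the sketch ignores the safe-fault classification that a real ISO 26262 analysis would include:

```python
# Invented FMEDA-style rows: (failure mode, FIT, safety mechanism, DC).
rows = [
    ("cpu_lockup",         10.0, "dual-core lockstep", 0.99),
    ("sram_bit_flip",       8.0, "ECC",                0.99),
    ("random_logic_fault",  5.0, "logic BIST",         0.90),
]

total_fit = sum(fit for _, fit, _, _ in rows)
residual_fit = sum(fit * (1.0 - dc) for _, fit, _, dc in rows)  # undetected failures
spfm = 1.0 - residual_fit / total_fit

print(f"residual FIT = {residual_fit:.2f}, SPFM = {spfm:.2%}")
# ISO 26262 SPFM targets: >= 90% (ASIL B), >= 97% (ASIL C), >= 99% (ASIL D)
print("meets ASIL D target" if spfm >= 0.99 else "falls short of the ASIL D target")
```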

Functional safety mechanisms are by now quite well-known. Among these, there are the dual-core lock-step methods I mentioned above – run two CPUs in lock-step to detect potential discrepancies. Triple modular redundancy is another common technique – triplicate the logic with a voting mechanism to pick the majority vote; or even just duplicate (as in DCLS) as a method to detect and warn of errors. Logic BIST is becoming very popular for testing random logic, as is ECC for checking around memories. Also, in support of duplication/triplication methods, it is becoming important to floorplan carefully. Systematic or transient faults (manufacturing defects or neutron-induced ionization, for example) can equally impact adjacent replicates, defeating the diagnostic objective; mitigation requires ensuring these are reasonably separated in the floorplan.

Cadence functional safety analysis flow

The verification part of Apurva’s objectives is where you most commonly think of design tools, particularly the fault simulation-centric aspect. Cadence has been in the fault-sim business for a long, long time (as I remember, Gateway – the originator of Verilog – started in fault sim before they took off in logic sim). Fault sim is used in safety verification to determine if injected faults, corresponding to systematic or transient errors, are corrected or detected by mitigation logic. Therefore the goal of a safety verification flow, such as the functional safety flow from Cadence, is to inject faults in critical areas (carefully selected/filtered to minimize wasted effort), run fault simulation to report those errors then roll up results to report diagnostic coverage in whatever mechanism is suitable for the Tier-1/OEM consumers of the device.

So when you next step into a 3-wheeler (or indeed any other vehicle) in Delhi, remember the importance of safety verification. In that very challenging traffic, autonomy will ultimately make your journey through the city less harrowing and very probably faster as more vehicles become autonomous (thanks to laminar versus turbulent traffic flow). But that can only happen if the autonomy is functionally safe. Making that happen in Delhi traffic will likely set a new benchmark for ultimate safety. You can learn more about the Cadence functional safety solution HERE.



Mentor’s Approach to Automotive Electrical Design
by Daniel Payne on 04-23-2018 at 12:00 pm

Most of us continue to drive cars and for me there’s always been a fascination with all things electrical that go into the actual design of a car. I’ve done typical maintenance tasks on my cars over the years like changing the battery, installing a new radio, replacing bulbs, changing a fuse, swapping out dashboard lights, and even putting in new power window assemblies. The automotive engineers that get to do the actual design work face a lot of unique challenges because of the rapid changes in the electrification of modern cars by providing passengers with GPS, WiFi and Bluetooth connectivity. Would you believe that we could soon see 50% of our car cost coming from the vehicle electrical systems?


Source: IHS Markit

In the consumer electronics market we expect new smartphones every year or so, which in turn places demands on what we expect our automobiles to provide. It's clear that auto makers have to become more nimble in order to address the growing trends of connectivity, autonomous vehicles and electrification. One promising path is to combine new software tools and services to meet these challenges.

On the autonomous vehicle front, experts at Toyota say it will take 14.2 billion miles of testing to reach SAE Level 5 safety standards. That would take far too long to do physically, so it makes sense to add virtual verification to the mix as well; a quick back-of-the-envelope calculation follows the chart below.


Source: Toyota
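
As a hedged sanity check on why physical-only testing is impractical, here is the arithmetic with an assumed fleet size and utilization (the 14.2 billion-mile target is from the article; the fleet numbers are mine):

```python
# Back-of-the-envelope check on why 14.2 billion physical test miles is
# impractical; fleet size and utilization are assumed, not Toyota's numbers.
TARGET_MILES = 14.2e9
fleet_size = 1_000                    # assumed test vehicles
miles_per_vehicle_per_year = 50_000   # assumed heavy utilization

years = TARGET_MILES / (fleet_size * miles_per_vehicle_per_year)
print(f"{years:.0f} years of physical driving")   # ~284 years
```

Even a thousand-vehicle fleet running 50,000 miles per year each would need almost three centuries, which is why simulation has to carry most of the load.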

I spoke over the phone with two automotive experts at Mentor to get an update on the design tools and services that are helping automotive system designs: Scott Majdecki, who is part of the Capital support and consulting services, and Andrew Macleod, Director of Automotive Marketing. The Capital tool is built for both electrical and wire-harness design.

Q: What challenges do you see for automotive systems designs?

Scott Majdecki – Customers bought tools and then struggled to get new designs into production because they needed help piloting the use, so we came up with a structured deployment called PROVEN, which runs through a cycle from evaluating the current design through a pilot project. For wire harnesses there is an issue with legacy data. The Capital tool has online, on-demand training or instructor-led training for the basics, and then continued learning as needed on demand. Vehicle wiring architecture is the modern approach versus manual wiring.

Q: What automotive design trends do you see?

Andy – There are so many new issues, like connectivity, sensors, autonomous driving, multiple voltages in the vehicle, complexity, and zero defects. Virtual design trends are emerging where we do high-level design first, then simulate the mechanical, then the electrical. It's all about Time To Revenue.

Q: Where is virtual design at for automotive?

Scott – Mentor has a virtual testing environment for vehicles that includes pedestrians, weather, sensors and verification platforms.

Services
Companies can get to market more quickly by adopting Mentor Automotive Services for many things, like pilots, production rollout, legacy data migration, PLM integration, and support and training services. Here's a quick look at the services delivery model.

Simulation and Modeling
Platform design includes many parts, like sensors, batteries, motor drivers, power electronics, AUTOSAR software and ECUs. Six types of simulation can be done by modeling sensors as part of a system.

The Big Picture
There are four mega-trends in automotive systems design today, and those companies that excel at meeting these trends will grow in market share despite turbulent market conditions:

  • Connectivity – connecting car, driver and the external world
  • Autonomous – self-driving and driver-assisted systems
  • Electrification – EV (Electric Vehicles), hybrids and supporting technology
  • Architecture – both EE architecture and system implementation

Adding new software tools along with automotive-specific design services will certainly decrease the inherent risks of new designs, shorten design cycles, and help ensure that demanding safety standards are met. The team at Mentor has been serving the automotive market for years with both tools and services, so it makes sense to look into their approach to see how it could benefit your project teams.

White Paper
To read the full six-page white paper, follow this link and fill out a brief request form.


Will the Rise of Digital Media Forgery be the End of Trust?
by Matthew Rosenquist on 04-22-2018 at 12:00 pm

Technology is reaching a point where it can nearly create fake video and audio content in real-time from handheld devices like smartphones.

In the near future, you will be able to Facetime someone and have a real-time conversation with more than just bunny ears or some other cartoon overlay. Imagine appearing as a person of your choosing, living or deceased, with a forged voice and facial expressions to match.

It could be very entertaining for innocent use; I can only imagine the number of fake Elvis calls. Conversely, it could be a devastating tool for those with malicious intent: a call to accounting from the 'boss' demanding a check be immediately cut to an overseas company as part of a CEO fraud, or your manager calling to get your password for access to sensitive files needed for an urgent customer meeting.

Will digital media forgery undermine trust, amplify ‘fake’ news, be leverage for massive fraud, and shake the pillars of digital evidence? Will there be a trust crisis?

Tools are being developed to identify fakes, but like all cybersecurity endeavors it is a constant race between the forgers who strive for realism and those attempting to detect counterfeits. Giorgio Patrini has some interesting thoughts on the matter in his blog Commoditization of AI, digital forgery and the end of trust: how we can fix it. I recommend reading it.

Although I don’t share the same concerns as the author, I do think we will see two advancements which will lead to a shift in expectations.

Technical Advancements
1. The fidelity of fake voice and video will increase to the point that humans will not be able to discern the difference between what is authentic and what is forged. We are getting much closer. The algorithms are making leaps forward at an accelerating pace to forge the interactive identity of someone else.


2. The ability to create such fakes in real-time will allow complete interaction between a masquerading attacker and the victims. If holding a conversation becomes possible across broadly available devices, like smartphones, then we would have an effective tool on a massive scale for potential misuse.

Three-dimensional facial models can be created with just a few pictures of someone. Advanced technologies are overlaying digital faces, replacing those of the people in videos. These clips, dubbed “Deep Fakes”, are cropping up to face-swap famous people into less-than-flattering videos. Recent research is showing how AI systems can mimic voices with just a small amount of sampling. Facial expressions can be aligned with audio, combining for a more seamless experience. Quality can be superb for major motion pictures, where this is painstakingly accomplished in post-production. But what if this can be done on everyone's smartphone at a quality sufficient to fool victims?

Expectations Shift
Continuation along this trajectory of these two technical capabilities will result in a loss of confidence for voice/video conversations. As people learn not to trust what they see and hear, they will require other means of assurance. This is a natural response and a good adaptation. In situations where it is truly important to validate who you are conversing with, it will require additional authentication steps. Various options will span across technical, process, behavioral, or a combination thereof to provide multiple factors of verification, similar to how account logins can use 2-factor authentication.

As those methods become commonplace and a barrier to attackers, then systems and techniques will be developed to undermine those controls as well. The race never ends. Successful attacks lead to a loss in confidence, which results in a response to institute more controls to restore trust and the game begins anew.

Trust is always in jeopardy, both in the real and digital worlds. Finding ways to verify and authenticate people is part of the expected reaction to situations where confidence is undermined. Impersonation has been around since before antiquity. Society will evolve to these new digital trust challenges with better tools and processes, but the question remains: how fast.

Interested in more? Follow me on your favorite social sites for insights and what is going on in cybersecurity: LinkedIn, Twitter (@Matt_Rosenquist), YouTube, InfoSecurity Strategy blog, Medium, and Steemit



Maybe it is time to #DeleteWhatsApp
by Vivek Wadhwa on 04-22-2018 at 7:00 am

WhatsApp differentiates itself from Facebook by touting its privacy and end-to-end encryption. “Some of your most personal moments are shared with WhatsApp”, it says, so “your messages, photos, videos, voice messages, documents, and calls are secured from falling into the wrong hands”. A WhatsApp founder expressed outrage at Facebook’s policies by tweeting “It is time. #deletefacebook”.

But WhatsApp may need to look into the mirror. Its members may not be aware that when using WhatsApp’s “group chat” feature, they are also susceptible to data harvesting and profiling. What’s worse is that WhatsApp makes available mobile-phone numbers, which can be used to accurately identify and locate group members.

WhatsApp groups are designed to enable discussions between family and friends. Businesses also use them to provide information and support. The originators of groups can add contacts from their phones or create links enabling anyone to opt-in. These groups, which can be found through web searches, discuss topics as diverse as agriculture, politics, pornography, sports, and technology.

Researchers in Europe demonstrated that any tech-savvy person can obtain treasure troves of data from WhatsApp groups by using nothing more than an old Samsung smartphone running scripts and off-the-shelf applications.

Kiran Garimella, of École Polytechnique Fédérale de Lausanne, in Switzerland sent me a draft of a paper he coauthored with Gareth Tyson, of Queen Mary University, U.K. titled “WhatsApp, doc? A first look at WhatsApp public group data”. It details how they were able to obtain data from nearly half a million messages exchanged between 45,754 WhatsApp users in 178 public groups over a six-month period, including their mobile numbers and the images, videos, and web links that they had shared. The groups had titles such as “funny”, “love vs. life”, “XXX”, “nude”, and “box office movies”, as well as the names of political parties and sports teams.

The researchers obtained lists of public WhatsApp groups through web searches and used a browser automation tool to join a few of the roughly 2000 groups they found—a process requiring little human intervention and easily applicable to a larger set of groups. Their smartphone began to receive large streams of messages, which WhatsApp stored in a local database. The data are encrypted, but the cipher key is stored inside the RAM of the mobile device itself. This allowed the researchers to decrypt the data using a technique developed by Indian researchers L.P. Gudipaty and K.Y. Jhala. It was no harder than using a key hidden atop a door to enter a home.

The researchers’ goal was to determine how WhatsApp could be used for social-science research. They plan to make their dataset and tools publicly available after they anonymize the data. Their intentions are good, but their paper illustrates how easily marketers, hackers, and governments can take advantage of the WhatsApp platform.

Indeed, The New York Times recently published a story on the Chinese Government's detention of human-rights activist Zhang Guanghong after monitoring a WhatsApp group of Guanghong's friends, with whom he had shared an article that criticized China's president. The Times speculated that the Government had hacked his phone or had a spy in his group chat; but gathering such information is easy for anyone with a group hyperlink.

This is not the only fly in the WhatsApp ointment that this year has revealed. Wired reported that researchers from Ruhr-University Bochum, in Germany, found a series of flaws in encrypted messaging applications that enable anyone who controls a WhatsApp server to “effortlessly insert new people into an otherwise private group, even without the permission of the administrator who ostensibly controls access to that conversation”. Gaining access to a computer server requires sophisticated hacking skills or the type of access that only governments can gain. But as Wired wrote, “the premise of so-called end-to-end encryption has always been that even a compromised server shouldn't expose secrets”.

Researcher Paul Rösler reportedly said: “The confidentiality of the group is broken as soon as the uninvited member can obtain all the new messages and read them… If I hear there’s end-to-end encryption for both groups and two-party communications, that means adding of new members should be protected against. And if not, the value of encryption is very little”.

WhatsApp also announced in 2016 that it would be sharing user data, including phone numbers, with Facebook. In an exchange of emails, the company told me that it does not track location within a country and does not share contacts or messages, which are encrypted, with Facebook. But it did confirm that it is sharing users’ phone numbers, device identifiers, operating-system information, control choices, and usage information with the “Facebook family of companies”. That leaves open the question as to whether Facebook could then track those users in greater detail even if WhatsApp doesn’t.

Facebook and its “family of companies” are being much too casual about privacy, as we have seen from the Cambridge Analytica revelations, harming freedom and democracy. It is time to hold them all accountable for the bad design of their products and the massive breaches of our privacy that they enable.

For more, you will want to read my forthcoming book, Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain–and How to Fight Back