
Innovation in a Commodity Market
by Bernard Murphy on 05-29-2018 at 7:00 am

Logic simulation is a victim of its own success. It has been around for at least 40 years, has evolved through multiple language standards and has seen significant advances in performance and major innovations in testbench standards. All that standardization and performance improvement has been great for customers but can present more of a challenge for suppliers. How do you continue to differentiate when seemingly everything is locked down by those standards? Some may be excited by the potential for freeware alternatives; however, serious product companies continue to depend on a track record in reliability and support, while also expecting continuing improvements. For them and for the suppliers, where do opportunities for evolution remain?

Performance will always be hot. Progress has been made on a number of fronts, from parallelism in the main engine (e.g. Xcelium) to co-modeling with virtual prototyping on one side (for CPU+SW) and emulation on the other (for simulation acceleration). However, I was struck by a couple of points Cadence raised in an SoC verification tutorial at DVCon 2018, which I would summarize as: raw simulator performance only delivers if you use it effectively. Some of this comes down to algorithms, especially in testbenches. It’s easy to write correct but inefficient code; we’ve all done it. Limiting complex calculations and choosing faster algorithms and better data structures are all performance optimizations under our control (a small illustration follows below). Coding for multi-core is another area where we really shouldn’t assume tools will rescue us from ourselves. (You can check out the tutorial when DVCon posts the proceedings.)
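To make the point about algorithms and data structures concrete, here is a minimal sketch (class and field names invented for illustration; this is not from the Cadence tutorial) of a common testbench pattern: a scoreboard matching responses against expected transactions. The queue scan is correct but O(n) per lookup; keying an associative array by address makes the same check O(1) on average.

```systemverilog
// Illustrative only: two functionally equivalent scoreboard lookups.
class txn_t;
  int unsigned addr;
  int unsigned data;
endclass

class scoreboard;
  txn_t expected_q[$];                   // queue: linear scan per lookup
  txn_t expected_by_addr[int unsigned];  // associative array: keyed lookup

  // Correct but slow: O(n) scan on every response.
  function txn_t find_slow(int unsigned addr);
    foreach (expected_q[i])
      if (expected_q[i].addr == addr) return expected_q[i];
    return null;
  endfunction

  // Same result, O(1) on average: let the data structure do the work.
  function txn_t find_fast(int unsigned addr);
    return expected_by_addr.exists(addr) ? expected_by_addr[addr] : null;
  endfunction
endclass
```

Multiply a lookup like this by millions of transactions in a regression and the difference dominates the testbench profile, no matter how fast the underlying engine is.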

We can optimize what we have to repeat on each run. I’ve written before about incremental elaboration – rebuilding the simulation run-time image as fast as possible given design changes. Incremental compile is easy, but elaboration (where modules and connections are instantiated) has always been the bottleneck. Incremental elaboration allows large chunks of the elaborated image to remain untouched while rebuilding just those parts that must change. Save/Restart is another widely used feature to minimize rework, since getting through setup can often take 80% of the run-time. However, this capability has historically been limited to understanding only the simulation model state. Now that test environments read and write files and work with external code (C/C++/SystemC), that limited view has restricted checkpointing to “clean” states, which can be very restrictive. The obvious refinement is to save total model state in the run, including file read and write pointers and the state of those external sims. Which you now can.
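As a toy illustration of what saving read and write pointers means (my sketch of the user-visible problem, not a description of Cadence’s mechanism): a testbench streaming stimulus from a file must have its file offset captured at the checkpoint and restored on restart, otherwise the resumed run replays or skips stimulus.

```systemverilog
// Hypothetical sketch: capturing and restoring a file read pointer
// across a checkpoint. A full-state save/restart must do this (and
// more) for every open file, plus external C/C++/SystemC state.
module checkpoint_sketch;
  integer fd, saved_pos;

  initial begin
    fd = $fopen("stimulus.txt", "r");
    // ... consume part of the file ...
    saved_pos = $ftell(fd);  // record the read pointer at checkpoint time
    // ... snapshot taken here ...
  end

  // On restart, reopen and seek back; a naive reopen alone would
  // restart the stimulus stream from the beginning.
  task restore_file_state;
    fd = $fopen("stimulus.txt", "r");
    void'($fseek(fd, saved_pos, 0));  // 0 = seek from start of file
  endtask
endmodule
```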

An obvious area for continued innovation is around AMS support, and one especially interesting domain here is power modeling in mixed-signal environments. This gets a little more complicated than in a digital UPF view since now you have to map between voltage values in the analog and power states in the UPF, among other things. The basics are covered in the standards (UPF and Verilog-AMS) but there’s plenty of room to shine in implementation. After all, (a) there aren’t too many industry-hardened mixed-signal simulators out there and (b) imagine how much power you could waste in mixed-signal circuitry if you don’t get it right. Cadence has notes on a few updates in this domain here, here and here.

X-propagation is another such area, and one closely related to power. Perhaps you thought this was all wrapped up in formal checks? Formal is indeed helpful in X-prop, but it can only go so far; deep-sequence checks are much more challenging and potentially unreachable in many cases. These issues are especially acute around (switched) power-state functions. Missing isolation on outputs from such a function should be caught in static checks, but checking that isolation remains enabled until the block is fully powered up and ready to communicate ultimately requires dynamic verification.

However, there’s room to be clever in how this is done. Simulation can be pessimistic (always X when possible) or somewhat more optimistic, propagating only the cases that seem probable. Maybe this seems unnecessary; why not just code X’s into the RTL for unexpected cases? The problem is that the LRM itself can be overly optimistic (a classic example is sketched below), whereas X-prop handling through the simulator (no need to change the RTL) gives you more control over optimism versus pessimism. You can learn more about how Cadence handles X-prop in simulation here.
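A classic textbook illustration of that LRM optimism (a generic example, not Cadence-specific): when a select is X, an if statement simply takes the else branch, so the unknown vanishes from downstream logic. Coding the pessimism by hand works, but it must be repeated everywhere and kept away from synthesis.

```systemverilog
module xprop_optimistic (input logic sel, a, b, output logic y);
  // LRM-optimistic: when sel is 1'bx the condition evaluates as false,
  // so y gets b and the unknown select silently disappears.
  always_comb begin
    if (sel) y = a;
    else     y = b;
  end
endmodule

module xprop_pessimistic (input logic sel, a, b, output logic y);
  // Hand-coded pessimism: force the X through. Simulation-only guard,
  // and the RTL must be edited for every such case -- exactly the
  // burden that simulator-side X-prop control avoids.
  always_comb begin
    if (sel === 1'bx) y = 'x;
    else if (sel)     y = a;
    else              y = b;
  end
endmodule
```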

So yes, the innovation beat goes on, even in simulation, a true veteran of EDA. Which is just as well since it still dominates functional verification and is likely to do so for a long time yet 😎


China Chips Taiwan and Technology
by Robert Maire on 05-28-2018 at 12:00 pm

Three critical China issues: Trade, Taiwan & Technology. China is a “double-edged sword” of risk and opportunity, and these issues greatly impact stock valuations. We recently gave a presentation at both the SEMI ASMC conference in Saratoga Springs and The ConFab conference in Las Vegas. Both conferences draw senior management from across the semiconductor industry and cover a wide variety of topics.


For those who read our newsletter, you know we have opined on China and trade many times. Of late the subject has come to the forefront of general news, so this has turned out to be a very timely topic.

In our view it is very clear that the semiconductor industry sits at the very heart of the China trade issue, which can impact the industry in a huge way, either negatively or positively. Investors and industry participants must pay particular attention, as the issue has come to a head and the stocks and fortunes of the companies involved will be greatly impacted.

Right now we see more downside beta than upside. The mere threat of a trade war has likely changed the momentum in the relationship between China and the US for the worse. Over the last several weeks we have seen a rollercoaster ride of reversing directions in trade that has left everyone spinning and confused.

Technology is also at the heart of trade: who has the technology, who wants it and how they get it all have huge implications. We have already seen some early warning signs of technology ownership issues.

Finally, Taiwan has not been mentioned much, but everyone seems to forget that Taiwan is a short missile flight away from China, which has recently raised the Taiwan issue again by forcing airlines to list Taiwan as part of China. While this may seem petty, it is a more ominous message from China about the future of Taiwan, and with it TSMC and all the semiconductor operations on the runaway island.

Below is a link to the slide deck of the presentation we gave, which we think will be of interest to investors and industry participants alike…

China Chips- Trade Taiwan & Technology

Conclusion: Resistance is Futile- Join the Movement!

*Much like Japan, Taiwan & Korea before them, China entering the semiconductor industry is a normal progression of modernization

*The US will also need alternative suppliers like Micron & GloFo

*The US can participate and profit in China – A huuuuge market

*Everyone must participate with eyes wide open to risks

*The US government can help level the playing field of trade & IP concerns

*China will likely be faster than Japan, Korea or Taiwan in build up

*US must promote & protect & invest in new tech – AI, VR, IOT etc…

*China remains a very sharp double-edged sword that cuts both ways


Should EDA Follow a Foundry Model?
by Daniel Nenni on 05-28-2018 at 7:00 am

There is an interesting discussion in the SemiWiki forum about EDA and the foundry business model which got me thinking about the next disruptive move for the semiconductor industry. First, let’s look at some of the other disruptive EDA events that I experienced firsthand throughout my 30+ year career.

When I started in 1984, EDA was dominated by what we called DMV (Daisy, Mentor, Valid). Before that it was Calma running on Data General minicomputers. Back then EDA was a systems business where software was bundled with hardware. Sun Microsystems and Cadence changed that by putting workstations on engineers’ desks, allowing them to pick and choose the software tools they used. EDA then became a software-centric business selling perpetual licenses with yearly maintenance contracts. Software subscriptions soon followed, which caused a bit of financial indigestion for EDA companies, but clearly it was disruption for the greater good.

The most recent EDA disruption is Siemens acquiring Mentor. We are now seeing the effect it is having on the ecosystem, a very positive effect. We now have three VERY competitive EDA companies going upstream from chip to software development to complete systems. It really is an exciting time to be in EDA!

Meanwhile, back at the castle, the majority of commercial software is now in the cloud via a SaaS business model, resulting in gold mines of data and analytics. Except, of course, EDA software.

The forum discussion Should EDA Follow a Foundry Model? was started by longtime SemiWiki member Arthur Hanson. Arthur is a hardcore investor who came to SemiWiki looking for semiconductor knowledge to supplement his stock portfolio. Arthur and I have met, and we talk by phone and email. I was just starting to work with Wall Street at the time and found his investor insight quite helpful. Remember, when an outsider asks a question you need to understand both what he is asking and why he is asking it.

“Just like a semi foundry takes the knowledge it gains from making chips for a variety of customers and shares it, yet keeps each customer’s information separate and private, should not an EDA firm be set up in its own cloud, sharing the expertise it develops from monitoring a large number of separate processes to improve the processes for all its customers? TSM has done an excellent job of keeping individual customer IP separate and private but uses the improvement in process information to the benefit of all. Would not this approach, if applied to EDA, speed up the evolution of the design process to the benefit of all through the use of big data? If TSM can keep proprietary information separate and confidential while spreading process improvements, couldn’t EDA firms use the same structure to benefit their customers as well? Auditing the process in real time could assure security while giving the customer best practices in real time. This could also be done on a virtual machine basis with most of the processing done at the customer’s site, although this would be unwieldy and cumbersome compared to a private cloud. Any thoughts, comments or observations on this are appreciated and solicited.”

The resulting discussion is quite interesting so check it out when you have time. More than five thousand people have viewed it thus far which is a pretty big discussion if you think about it, and I have. SemiWiki is made up of all levels of semiconductor professionals from A to C level and we know who reads what, when, and where, so I can tell you this discussion is resonating at all levels of the ecosystem, absolutely.

My personal opinion is that disruption is again coming to EDA and that disruption will be in the cloud. We did a “Do you want your EDA Tools in the cloud” poll and again the interesting part was who voted and where they were in the ecosystem. The $10B question is: Who is trusted enough to implement EDA in the cloud? The answer is towards the end of the forum discussion:

Originally Posted by count
I think it would be interesting if the foundries, ie TSMC, got into the EDA game and charged a wafer royalty on it as you said. Better yet, a cloud based EDA tool that could also be used for ordering after designs are validated. If it could be integrated in a sort of design to manufacturing workflow, that would be amazing. Especially for smaller customers who are designing IoT chips and are focused on time to market, something like that seems like it could be valuable.

Originally Posted by KevinK
Why would a TSMC or Samsung even consider this option given Cadence’s or Synopsys’ current market caps and revenues? Given the stock premium that an acquisition would cost, either foundry could build two leading-edge fabs for the same price. I don’t know Samsung’s internal economics, but TSMC’s typical return on invested capital (ROIC) runs around 30-40%. Even though an EDA acquisition wouldn’t be “capital” per se, I’m sure that the foundries would use their ROIC as a hurdle rate for other major uses of money. Neither EDA company offers close to that rate, even before considering the revenue haircut an EDA/IP company would suffer once tied to a single foundry.

Originally Posted by Daniel Nenni
One word: Disruption
Do you actually think Intel Foundry or Samsung Foundry or any other IDM foundry for that matter has a chance at catching up with TSMC while playing by TSMC’s rules? Much less beating them? It’s not gonna happen. Intel or Samsung could buy Cadence or make a significant investment and cut a wafer royalty deal in the cloud exclusive to their customers. Foundries, better than EDA companies, could pull off EDA in the cloud, absolutely.

Just my opinion of course…


Dear Toyota
by Roger C. Lanctot on 05-27-2018 at 7:00 am

Toyota Motor North America CEO James Lentz got a letter from the U.S. Federal Communications Commission (FCC) last week recognizing Toyota’s announced plan to deploy Dedicated Short Range Communications (DSRC) technology on Toyota and Lexus vehicles sold in the U.S. beginning with MY21. The extraordinary letter notes that Toyota’s decision comes 20 years after the FCC allocated spectrum for DSRC technology, but cautions that Toyota ought to weigh such significant capital investments against the emergence of superior competing solutions, most notably cellular-based C-V2X.


China Semiconductor Equipment China Sales at Risk
by Robert Maire on 05-27-2018 at 7:00 am

We have been on a roller coaster ride of on-again, off-again trade talks between China and the US. It is unclear where we are on a day-by-day basis, but of late it appears that we are not seeing a lot of progress, and some progress we thought we had made may not have actually happened.


Webinar: IP Quality is a VERY Serious Problem
by Daniel Nenni on 05-25-2018 at 12:00 pm

We just completed a run-through of the upcoming IP & Library QA webinar that I am moderating with Fractal, and let me tell you it is a must-see for management at semiconductor design and semiconductor IP companies, as well as the foundries. Seriously, if you are an IP company you had better be up on the latest QA checks if you want to do business with the leading edge foundries and semiconductor companies, absolutely.


The secret weapon here is presenter Felipe Schneider, Director of Field Operations at Fractal. Felipe will take us through the agenda, followed by a demonstration of Crossfire, ending with questions and answers. Felipe is the frontline interface to Fractal’s customers in North America, which include many of the top semiconductor companies and IP providers, so he knows IP QA. Felipe also knows what QA checks are being done at the different process nodes down to 7nm and what new checks are coming at 5nm (crowdsourcing). This alone is worth an hour of your day.

IP & Library QA with Crossfire: Wed, Jun 6, 2018 9:00 AM – 10:00 AM PDT

There is no industry where early bug detection is more critical than in SoC design. Consequences like design re-spins, missed tape-outs and hence missed market opportunities make the cost of late bug detection prohibitive. Where earlier generations of SoC designs could be crafted by a team of limited size that could oversee the entire design process, design in the latest process nodes requires a different strategy.

Designer productivity is lagging behind Moore’s Law, which drives the increase in transistor density. Thus design teams are becoming larger and comprise multiple groups spread across the globe. Outsourcing design tasks by integrating third-party IP is mandatory to get the job done, but it reduces oversight of the SoC design process and leaves companies at the mercy of the quality strategy implemented by their suppliers. At the same time, modeling of new physical effects using current-driver, variation and electromigration models, paired with an increased number of PVT corners, generates an explosion of data to be analyzed prior to sign-off.

It is clear that QA needs to be a shared responsibility among all partners in the SoC design flow, from library and IP providers to foundry and SoC integrators. Each of these partners needs an integrated QA solution for their part of the design flow. QA should never be an afterthought to be checked off right before IP delivery. This webinar covers how Fractal Technologies’ Crossfire solution addresses these QA challenges from both back-end and front-end perspectives, and why its standardized and scalable QA methodology is superior to homebrew validation solutions.

About Crossfire
Mismatches or modeling errors in libraries or IP can seriously delay an IC design project. Because of the still-increasing number of different views required to support a state-of-the-art deep submicron design flow, as well as the complexity of the views themselves, library and IP integrity checking has become a mandatory step before the actual design can start.

Crossfire helps CAD teams and IC designers perform integrity validation for libraries and IP. Crossfire makes sure that the information represented in the various views is consistent across those views. In short, Crossfire improves the quality of your design formats.

About Fractal Technologies
Fractal Technologies is a privately held company with offices in San Jose, California and Eindhoven, the Netherlands. The company was founded by a small group of highly recognized EDA professionals.


Welcome DDR5 and Thanks to Cadence IP and Test Chip
by Eric Esteve on 05-25-2018 at 7:00 am

Will we see DDR5 memory (device) and memory controller (IP) in the near future? According to Cadence, which has released the industry’s first test chip integrating DDR5 memory controller IP, fabricated in TSMC’s 7nm process and achieving a 4400 megatransfers per second (MT/s) data rate, the answer is clearly YES!

Let’s come back to DDR5 (in fact, a preliminary version of the DDR5 standard being developed in JEDEC) and the memory controller achieving 4400 megatransfers per second. This means the DDR5 PHY IP is running at 4.4 Gb/s, quite close to the 5 Gb/s achieved by the PCIe 2.0 PHY ten years ago in 2008. At that time it was the state of the art for a SerDes, even if engineering teams were already working on faster SerDes (6 Gb/s for SATA 3 and 10G for Ethernet). Today, the DDR5 PHY will be integrated in multiple SoCs, initially those targeting the enterprise market: servers, storage and data center applications.

These applications are known to always require more data bandwidth and larger memories. But we know that in data centers, power consumption has become the #1 cost source, leading to incredibly high electricity bills and more complex cooling systems. If you increase the data width of the memory controller while increasing the speed at the same time (the case with DDR5) but with no power optimization, you may end up with an unmanageable system!
This is not the case with the new DDR5 protocol, as the energy per bit (pJ/b) has decreased. The need for much higher bandwidth translates into a larger data bus (128 bits wide), and the net result is power consumption that stays the same as for the previous protocol (DDR4). In summary: a larger data bus times a faster PHY, compensated by lower energy per bit, keeps the power constant. The net result is higher bandwidth!
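The argument reduces to one relation (symbols only, nothing assumed beyond the article’s own claims):

$$P = E_{\mathrm{bit}} \times BW, \qquad BW = W \times R$$

where \(W\) is the data bus width, \(R\) the per-pin rate and \(E_{\mathrm{bit}}\) the energy per bit. DDR5 grows \(BW\) through both \(W\) and \(R\); power \(P\) stays flat only because \(E_{\mathrm{bit}}\) drops by the same combined factor, and the bandwidth gain is what is left over.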

You have probably heard about other emerging memory interface protocols, like High Bandwidth Memory 2 (HBM2) or Graphics DDR5 (GDDR5), and may wonder why the industry would need another protocol like DDR5.
The answer is complexity, cost of ownership and wide adoption. It’s clear that the DDRn protocols, as well as LPDDRn, have been dominant and have seen the largest adoption since their introduction. Why should DDR5 enjoy the same future as a memory standard?

If you look at HBM2, it is a very smart protocol: the data bus is incredibly wide while the clock rate stays pretty low (a 1024-bit wide bus gives 256 GB/s of bandwidth)… except that you need to implement 2.5D silicon technology by means of an interposer. This is a much more complex technology leading to much higher cost, due to the packaging overhead of building 2.5D and also because the lower production volume for such devices theoretically leads to higher ASPs.
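The wide-but-slow trade-off is easy to check from the numbers just quoted:

$$R_{\mathrm{pin}} = \frac{256\ \mathrm{GB/s} \times 8\ \mathrm{b/B}}{1024\ \mathrm{pins}} = 2\ \mathrm{Gb/s\ per\ pin}$$

less than half the 4.4 Gb/s per pin of the DDR5 PHY above: HBM2 buys its bandwidth with wires rather than clock rate, and pays for those wires with the interposer.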

GDDR5X (standardized in 2016 by JEDEC) targets a transfer rate of 10 to 14 Gbit/s per pin, clearly a higher speed than DDR5, but it requires re-engineering the PCB compared with the other protocols. That sounds more complex and certainly more expensive. Last point: if HBM2 has been adopted for systems where the bandwidth need is such that you can afford an extra cost, GDDR5X fills a gap between HBM2 and DDR5; that sounds like the definition of a niche market!

If your system allows you to avoid it, you shouldn’t select a protocol seen as a niche. The lower the adoption, the lower the production volume, and the lower the competitive pressure on device ASP; the risk of paying a higher price per DRAM megabyte is real.

If you have to integrate DDR5 in your system, most probably because your need for higher bandwidth is crucial, Cadence’s DDR5 memory controller IP offers two very important benefits: low risk and fast time-to-market (TTM). Considering that early adopters have already integrated Cadence IP in TSMC 7nm, the risk becomes much lower. Bringing a system to market faster than your competitors is clearly a strong advantage, and Cadence offers this TTM benefit. Last point: the Cadence memory controller IP has been designed for high configurability, to fit your application needs.

From Eric Esteve (IPnest)

For more information, please visit: www.cadence.com/go/ddr5iptestchip.


Managing Your Ballooning Network Storage
by Alex Tan on 05-24-2018 at 12:00 pm

As companies scale by adding more engineers, there is a tendency to spread across multiple design sites as they strive to hire the best available talent. Multi-site development also affects startups, which try to minimize their burn rate by having an offshore design center in India, China or Vietnam.

Both IoT and automotive companies are becoming dependent on 5G and AI as key drivers, fueling a trend toward more heterogeneous design projects. At the heart of this increased design collaboration among multiple companies across different sites is the necessity of addressing how design data creation, sync-up and handoff are done. We will look into some critical success factors in this area and discuss the available solutions.

EDA flows and design teams
A typical design flow begins when RTL is developed for a specific design specification and then synthesized to gate level, followed by placement optimization, clocking, routing and parasitic extraction intended for timing signoff. It is common to have many process owners across these implementation stages. Once design development starts, each designer tends to copy the entire project data into their workspace and annotate their work to propagate the design realization further toward layout.

The process is iterated multiple times, sometimes in smaller loops, such as performing localized placement changes and redoing the route, sometimes covering many steps, such as ECOs, and sometimes even looping back to the starting point (for example, when upstream inputs such as the RTL version, tool version or run settings change). During each of these events, more design data gets duplicated, ballooning the design data usage (refer to figure 1a).

In addition, early design collateral such as libraries, foundation IP and design IP, which absorbs new technology specifications and precedes system design implementation, is subject to frequent version refreshes, triggering downstream updates of the design implementation cycle and eventually yet more large-scale data duplication.

In many instances, temporary measures such as local compression with gzip or tar are attempted, as designers have limited time to wrestle with the growing design data. This can become a nuisance, and it still does not prevent the data duplication issue.

A hardware configuration management solution such as ClioSoft SOS7 resolves some of the issues faced with the ballooning network storage by greatly simplifying access to the real time database and efficiently managing the different revisions of the design data during design development as illustrated in figure 1b.

It enables distributed reference and reuse through the use of primary and cache servers. Both servers provide controls for comprehensive data management across sites, enabling effective team collaboration by allowing data transparency and reducing data duplication. Any design team has several members who need constant access to certain parts of the design data. SOS7 enables better automation by providing the notion of a configuration, wherein designers can access the necessary files based on the role they play in the team. While this improves disk space usage by preventing the copying of unwanted files, it also adds a layer of security by limiting who can access which files. SOS7 integrates seamlessly with many other EDA flows and allows GUI-driven customization, so designers can browse libraries and design hierarchies, examine the status of cells, and perform revision control operations without leaving the design environment or learning a new interface.

Usage of network storage

Design development usually starts with one or two shared disk partitions, typically ranging from over a hundred gigabytes to 250GB. Over several project integration snapshots, new additions to the team and several implementation stages, the network storage assigned to the project can easily reach several terabytes. At this stage, design teams often resort to segregating team data into clusters of disks (front-end, back-end, verification), which may address ownership or usage tracking but does not address the main issue of managing the ballooning network storage usage (figure 2a).

Network storage also grows because of the reluctance of engineers to discard what is perceived as unwanted data. Most data has a lifespan and becomes irrelevant after some time, but both project managers and designers tend to retain most of the generated data, including intermediate results, until stable results are achieved.

The cost of maintaining the project data footprint is not limited to supporting the given workarea capacity; it also includes the redundant copies built by IT for backup purposes. The growing number of physical disk partitions also affects overall data access performance and IT maintenance effort, as the recommended threshold for mountable disks can be exceeded.

The solution
ClioSoft’s SOS7 design management platform addresses the ballooning network storage issue by using smart links to the cache for files that populate the workareas. All directories and files are essentially treated as symbolic links, which ensures that workspace population is fast and consumes minimal disk space (illustrated in figures 2b and 2c).

This becomes especially useful when the binaries are large and coupled with a big design hierarchy. It is a key differentiator: SCM solutions that do not take care of this are built around text files, and solutions layered on top of those SCMs must resort to hardware solutions or file system modifications to achieve network storage space savings.

SOS7 has been architected to ensure high performance between the main server and the remote caches. Any time a user wants to modify a file or directory, it is quickly fetched from the local cache or the main repository with minimal download time. SOS7 also provides the notion of a composite object, which manages a set of files as one entity.

When running EDA software, a number of files are generated, some of which change with each run. Designers often like to keep the generated data from multiple runs, as it enables them to compare results and revert to a previous design state if needed. SOS7 manages these files efficiently by treating the generated files as a composite object, without duplicating any files that do not change between runs of the EDA tool, thereby giving designers the flexibility they desire while minimizing network storage usage.

Disk space cost
Measuring disk space cost in dollars per gigabyte only covers half of the equation. One needs to consider additional cost drivers and dispel the common myth that storage total cost of ownership is simply a function of total capacity. For example, overhead attributed to capacity lost to derating, array-based redundancy and the file system, coupled with the cost of power, cooling and floorspace, can add 40 to 70 percent to the cost.

Other challenges include the need to provision separate storage systems for data backup. All of this disk capacity demand becomes prohibitive, as projects usually operate under tight budget constraints. Hence, resolving network space usage at its root (that is, efficient data creation and management) should help ensure adequate project storage capacity and performance.

ClioSoft SOS7 delivers features addressing these needs by enabling reuse of design dependencies (such as flexible project partitioning, including IP and PDKs), comprehensive version control mechanisms (such as support for creating project derivatives through branching) and the composite-object referencing discussed above.

For more info on SOS7 case studies or whitepaper, click HERE.

Also Read

HCM Is More Than Data Management

ClioSoft and SemiWiki Winning

IoT SoCs Demand Good Data Management and Design Collaboration


ISO 26262: My IP Supplier Checks the Boxes, So That’s Covered, Right?
by Bernard Murphy on 05-24-2018 at 7:00 am

Everyone up and down the electronics supply chain is jumping on the ISO 26262 bandwagon and naturally they all want to show that whatever they sell is compliant or ready for compliance. We probably all know the basics here – a product certification from one of the assessment organizations, a designated safety manager and a few other safety folks and some form of safety process. Sound like easy boxes to check? Unfortunately for you the integrator, the basics may no longer be enough to complete your deliverables for your customers with respect to that IP.

Kurt Shuler, VP of Marketing at Arteris IP, sits on the ISO 26262 working group (since 2013), along with partner ResilTech (functional safety consultants, on the WG since 2008), so they’re pretty well tuned to industry expectations. What Kurt observes is that default assumptions about what level of investment is required to fully meet the standard can often fall short.

The ISO standard ultimately wants to validate the safety of systems, not bits of systems, because there’s no guarantee in safety that the whole will be the sum of the parts. But waiting until the car is designed before checking safety would be unmanageable, so a lot of requirements are projected back onto earlier steps in the chain. Each supplier in the chain must demonstrate compliance not only in their product, including all aspects of configuring and implementing safety-critical components, but also in their process and their people. And proof of these safety-related activities must be communicated up the supply chain.

Further ramping expectations, as ADAS capabilities extend and as autonomy continues to advance, the automotive industry is apparently increasingly pushing for ASIL D (the highest level) compliance at the SoC level, which is less expensive in total system cost than using ASIL-B SoCs (the common expectation today) to reach ASIL-D at the system level through duplication. ASIL-D requirements are naturally much more rigorous, tightening the screws not only on the SoC integrators but also on their suppliers.

Kurt calls out three areas where default expectations fall short in what is good enough for compliance. The first is in people alignment with safety. A common view is that you can train or hire a functional safety manager (FSM) and a small number of safety engineers whose role is to ensure that everyone else stays on track. While this may meet the letter of the standard, apparently it doesn’t meet the spirit (a strong safety culture) and the spirit is likely to be a better guide as expectations rise. A more robust approach drives safety training more extensively through the organization, in addition to having an experienced FSM.

The second area is in process where quality management, change management, verification and traceability are all important. This is partly about tools (we all love tools) but more importantly about consistent and continual use of the processes defined around these areas. It is certainly helpful to use certified tools or to get certification for internal tools, but that doesn’t mean you’re done. Your customers will perform independent audits of your processes because they’re on the hook to prove that their suppliers follow the standard in all relevant aspects. Of course you could all learn in real-time and correct as needed but obviously it will be less painful and less expensive to work with partners who are already experienced and proven in prior audits.

The third area is around product compliance. From my understanding, general alignment on expectations may be better here, simply because we have tended to focus primarily on product objectives. The expectations of who is responsible for what between supplier and customer, in terms of assumptions of use, configuring a function, implementation and so on, are documented in the Development Interface Agreement (DIA); this needs to be demonstrated as a part of compliance. Assuming these steps are followed, there is one area where Kurt feels product teams need to ramp up their investment: development of the initial failure mode and effects analysis (FMEA). I hope to write more on this topic in a subsequent blog; for now, here’s a quick teaser. We engineers are biased toward jumping straight to design and verification – concrete and quantitative steps. But FMEA is a qualitative analysis, assessing possible ways in which a design (or sub-design) could fail and the possible effects of those failures. This is the grounding for where you then decide to insert safety/diagnostic mechanisms and how you measure the effectiveness of those mechanisms through fault analysis.

Kurt also stresses what I think is an important consideration for any SoC supplier when considering an IP solution. Ultimately the SoC company owns responsibility for demonstrating compliance with the standard to their customers. That is a lot of work and expense, so they can reasonably question the IP supplier on how they are going to make that job easier. One aspect is people training – is the supplier meeting the letter of compliance or comfortably aligned with the spirit? Another is process – certainly using approved tools and flows, but also being experienced in audits and already proven on multiple other engagements. And finally product – the supplier should not only have done what the standard requires but go beyond it, simplifying the SoC team’s configuration and safety analysis through templates for failure modes and effects and fault-distribution guidance.

Food for thought. It seems like some ISO 26262 investments need future-proofing. If you want to read more, check out this link.


Functional Safety is a Driving Topic for ISO 26262
by Tom Simon on 05-23-2018 at 12:00 pm

When I was young, functional safety for automobiles consisted of checking tread depth and replacing belts and hoses before long trips. I’ll confess that this was a long time ago. Even not that long ago, though, the only way you found out about failing systems was going to the mechanic and having them hook up a reader to the OBD port. Or, worse, you found out when the car stopped running or a warning light came on. A lot has changed with the advent of ISO 26262, which defines the standard for automotive electronic system safety.

The most critical systems, designated Automotive Safety Integrity Level (ASIL) D, have many requirements imposed on them to ensure high reliability. This applies to systems where a failure could lead to death or serious injury, e.g. ADAS systems. It is easy to understand that these systems must be carefully designed and documented. While this is true, the safety requirements extend to ensuring that these systems can self-check at startup and also continuously monitor their own health. In fact, ISO 26262 requires that every block in these systems run a self-test every 100 milliseconds during operation (a sketch of what such a heartbeat might look like follows below). This can include fault injection too. In effect, it is now necessary to “check the checkers”.
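To get a feel for what a 100 ms self-test interval implies in hardware, here is a minimal sketch (module and signal names invented, not Synopsys’): a free-running counter that requests a BIST pass each time the interval elapses.

```systemverilog
// Hypothetical 100 ms self-test heartbeat. CLK_HZ and the bist_start
// handshake are illustrative; real safety logic adds fault injection,
// result checking and error reporting on top.
module selftest_timer #(parameter int unsigned CLK_HZ = 100_000_000) (
  input  logic clk,
  input  logic rst_n,
  output logic bist_start            // one-cycle pulse every 100 ms
);
  localparam int unsigned INTERVAL = CLK_HZ / 10;  // cycles per 100 ms

  int unsigned cnt;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      cnt        <= 0;
      bist_start <= 1'b0;
    end else if (cnt == INTERVAL - 1) begin
      cnt        <= 0;
      bist_start <= 1'b1;            // kick off one self-test pass
    end else begin
      cnt        <= cnt + 1;
      bist_start <= 1'b0;
    end
  end
endmodule
```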

The scope of the functional safety self-monitoring tests is impressive. For instance, in the case of memories, in addition to requiring ECC on data, addresses are also protected by ECC. SoC designers for automotive applications are now faced with not only building the system they have specified; they must also build in extensive new on-chip functionality to ensure functional safety that meets the ISO 26262 standard. This process is somewhat familiar to designers, who have been adding BIST and scan test functionality to their designs for decades.

Fortunately, similar to the model for BIST and scan, there are commercial IP-based solutions and tool chains that address the needs arising from these dramatic changes in system testing. Synopsys has long been a player in both the IP and test markets. It is only natural that they extend and adapt those offerings to create a combined and comprehensive solution. They have done the work to make their solution ISO 26262 ASIL-B through ASIL-D ready.

There is a very informative video presentation that gives an overview and then goes into detail on the Synopsys functional safety offerings along with specific customer experience from Bosch in the application of STAR Memory System (SMS) and STAR Hierarchical System (SHS). The presenters are Yervant Zorian, Synopsys Fellow & Chief Architect, and Christophe Eychenne, Bosch DFT Engineer. In the first half, the very knowledgeable Yervant covers the requirements of ISO 26262 systems and then outlines the offering from Synopsys. In the second part Christophe goes through a case study from Bosch.

Here are a few of the interesting things I learned from this video. In the case of memories, each memory is given a wrapper that can perform testing and then implement memory repair. The repair information is saved and reloaded at start-up; however, an additional test is performed at every start to fully check each memory, which helps manage aging in memories. Soft error correction in memories is complicated in newer process nodes because of the increased likelihood of multi-bit errors. The SMS is aware of the memory’s internal structure, and this helps in error detection and correction.
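To make ECC over addresses as well as data concrete, here is a toy Hamming(7,4) encoder (a textbook code, far narrower than the SECDED schemes real memory wrappers use): three parity bits protect a 4-bit field, and the same construction applies whether that field carries data or address bits.

```systemverilog
// Toy Hamming(7,4) encoder: any single-bit error in the 7-bit codeword
// can later be located (and corrected) by recomputing the parities.
function automatic logic [6:0] hamming74_encode(input logic [3:0] d);
  logic p1, p2, p4;
  p1 = d[0] ^ d[1] ^ d[3];  // covers codeword positions 1,3,5,7
  p2 = d[0] ^ d[2] ^ d[3];  // covers codeword positions 2,3,6,7
  p4 = d[1] ^ d[2] ^ d[3];  // covers codeword positions 4,5,6,7
  // Codeword, position 7 down to 1: d4 d3 d2 p4 d1 p2 p1
  return {d[3], d[2], d[1], p4, d[0], p2, p1};
endfunction
```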

The SHS wraps interface and AMS blocks and connects to a sub-server using IEEE 1500. The wrapped memories are tied together using the SMS processor, and all of these elements, plus the digital IPs with DFT scan, are then connected to a server which offers the traditional TAP interface. In addition, there are external smart pins that can be used to quickly and easily initiate tests without needing a TAP interface controller. As an added bonus, this entire system can be used to facilitate silicon bring-up with the Synopsys Silicon Browser.

Back when we were changing hoses and belts for long trips, Synopsys was just an EDA tools company. But, like I said, that was a long time ago. Synopsys has evolved and developed impressive and sophisticated offerings in IP, which now include many of the essential elements needed to build SoCs and systems for safety-critical automotive applications. The video presentation is available on the Synopsys website if you would like the full story behind their latest work in this area.