
Getting A Handle On Your Automotive SoCs For ISO 26262
by Mitch Heins on 10-17-2017 at 12:00 pm

When it comes to safety and automotive systems, ISO 26262 is the standard by which system vendors are judged. As with all things, the devil is in the details. To be compliant with the standard, design teams must have a well-defined, rigorous design and validation process in place that copiously documents all the requirements of their system. They must also be able to prove that all the system requirements were in fact implemented and validated satisfactorily. These requirements describe how the system should respond when things are going right.

Because safety is involved, ISO 26262 also compels designers to specify how the system will respond when things are not going right. This creates another set of requirements describing how the system is to respond to both permanent and intermittent faults. As with the design requirements, these too must be traceable through implementation and verification.

I’ve been on a lot of software teams, and while there are usually good intentions, it seems that the validation stage is where things are usually lacking. Perhaps this is because most of the software I’ve worked with hasn’t been deemed “safety critical”. Nonetheless, it scares me to death to think about how rigorous a design team needs to be when working on something like autonomous driving systems. Fortunately, Mentor, a Siemens business, has a tool called ReqTracer that can help design teams get a handle on this problem.

ReqTracer is a tool used to manage and track requirements by providing traceability from requirements and plan documents through implementation and verification. The tool allows an auditor to select a specific requirement and trace it to prove that the requirement was indeed implemented and verified by the design team. The tool, however, is not just useful for auditors. Design teams are finding that it is a great way to organize and communicate between the various functional teams working on the system.

As an example, system functional requirements are typically generated from a set of system “use cases”. The resulting functional requirements may impact system hardware, software or both. There may also be functional safety requirements of the system that interact with these functional requirements. Per the ISO 26262 standard, all these requirements must be documented along with implementation and verification plans for each requirement. There are teams of people that work on various parts of the design (e.g. system designers, hardware RTL engineers, hardware implementation engineers, software designers, logic validation engineers and the like). For the system to work, all these people must be in sync with the requirements, implementation and validation plans.

ReqTracer organizes a design project by gathering information from a wide variety of sources (office documents, requirements databases, design and simulation databases, etc.) and then drawing the lines that connect the proverbial dots between individual requirements, design implementation and validation files, and final validation results. Once these relationships have been created, ReqTracer can report on and visualize the development process as it proceeds. Any team member can trace a requirement to the actual status of its implementation and validation. A designer can ask, “What should I work on next?” or a quality team can ask, “What tests have not yet run?” and ReqTracer will highlight the requirements that have or have not been met.
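
To make the traceability idea concrete, here is a minimal sketch in Python of the kind of data model involved: linking requirements to implementation files and test results, then querying for gaps. It is an illustration of the concept only, not ReqTracer's actual data model or API; all requirement names, files and test names are hypothetical.

```python
# Toy requirements-traceability model: an illustration of the concept only,
# not Mentor ReqTracer's actual data model or API.

requirements = {
    "REQ-001": {"text": "Brake signal latency < 10 ms",
                "implemented_in": ["brake_ctrl.sv"],
                "verified_by": ["test_brake_latency"]},
    "REQ-002": {"text": "Watchdog resets CPU on lockup",
                "implemented_in": ["watchdog.sv"],
                "verified_by": ["test_watchdog_reset"]},
    "REQ-003": {"text": "CAN error flag raised within 1 ms",
                "implemented_in": [],   # not yet implemented
                "verified_by": []},
}

# Test results gathered from the verification environment; None means "not yet run".
test_results = {"test_brake_latency": "pass", "test_watchdog_reset": None}

def open_work_items():
    """Answer a designer's 'What should I work on next?': requirements with no implementation."""
    return [req for req, data in requirements.items() if not data["implemented_in"]]

def tests_not_run():
    """Answer the quality team's 'What tests have not yet run?'"""
    return [test for data in requirements.values()
            for test in data["verified_by"] if test_results.get(test) is None]

print(open_work_items())  # ['REQ-003']
print(tests_not_run())    # ['test_watchdog_reset']
```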

ReqTracer can also be used to manage changes to requirements as the design and implementation process proceeds. Rarely are all requirements known before design starts. Design is usually an iterative process whereby additional items, sometimes called derived requirements, will come out of the design process. There may also be missing requirements that emerge at the last minute before the design is complete. Last-minute changes can be the costliest, as they can have far reaching impacts across the system. All these requirements must also be tracked and traced to ensure that they too have been properly implemented and verified.

The beauty of ReqTracer is that it can visualize the impact of a given requirement change on the system. The tool lets the design team see all the work items that would be affected by a proposed change, leading to more informed decision-making before accepting additional requirements. If the new requirement is accepted, ReqTracer also ensures that no components affected by the change are forgotten in the last-minute crunch to implement it.
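
Impact analysis can be pictured, again only as a hedged illustration rather than the tool's actual mechanism, as a traversal of the trace links from a requirement to everything downstream of it. The link names below are hypothetical.

```python
from collections import deque

# Hypothetical trace links: each item points to the downstream artifacts that depend on it.
trace_links = {
    "REQ-002": ["watchdog.sv", "test_watchdog_reset"],
    "watchdog.sv": ["soc_top.sv"],
    "soc_top.sv": ["timing_signoff_report"],
    "test_watchdog_reset": ["regression_suite"],
}

def impacted_items(changed_item):
    """Breadth-first walk of the trace links to list everything a requirement change would touch."""
    seen, queue = set(), deque([changed_item])
    while queue:
        node = queue.popleft()
        for child in trace_links.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return sorted(seen)

print(impacted_items("REQ-002"))
# ['regression_suite', 'soc_top.sv', 'test_watchdog_reset', 'timing_signoff_report', 'watchdog.sv']
```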

Interestingly, a tool like ReqTracer may at first be thought of as a necessary evil: you really need something like it to work your system through the ISO 26262 process. Upon closer examination, though, it turns out the necessary evil may in fact be a big productivity booster for your team. I like tools that give designers more control and visibility over their design processes. Even better are tools that boost overall team productivity. Having a good understanding of where your project stands, what has been done, and what still remains is key to predictable design schedules. And in the world of ISO 26262, all the better that you can ensure your design has complete coverage of its requirements. ReqTracer is a unique tool in this regard and is well worth a look if you haven’t already taken one.

Personally, when I get my first autonomous vehicle, I’m going to be really hoping that ReqTracer was used by the design team who put it all together.

See Also:
Mentor ReqTracer datasheet
Whitepaper: Automotive Defect Recall? Trace it to Requirements


Reliability Signoff for FinFET Designs
by Bernard Murphy on 10-17-2017 at 7:00 am

Ansys recently hosted a webinar on reliability signoff for FinFET-based designs, spanning thermal, EM, ESD, EMC and aging effects. I doubt you’re going to find more comprehensive coverage of reliability impact and analysis solutions. If you care about reliability in FinFET designs, you might want to check out this webinar. It covers a lot of ground, so much that I’ll touch only on aspects of thermal analysis here, with just a few hints at the other topics. The webinar covers domains with products highlighted in red below.
Incidentally, ANSYS and TSMC are jointly presenting on this topic at ARM TechCon. You can get a free Expo pass which will let you into this presentation HERE.

Why is reliability a big deal in FinFET-based designs? There are multiple issues impacting aging, stress and other factors, but one particular issue should by now be well known: the self-heating problem in FinFET devices. In planar devices, heat generated inside a transistor can escape largely through the substrate. But in a FinFET, dielectric is wrapped around the fin structure and, since dielectrics are generally poor thermal conductors, heat cannot escape as easily, leading to a local temperature increase. Much of that heat ultimately escapes through the local interconnect, causing additional heating in that interconnect.


Also, since FinFETs are built for high drive strength, they drive more current through thinner interconnect, resulting in more Joule heating. In addition to these effects, you have to consider the standard sources of heating, thanks to complex IP activity profiles in modern SoCs: active, idle, sleep and power-off modes, all of which contribute to a heat map across the die that varies with use case. Self-heating effects may contribute 5° or more of variation, and use-case effects may contribute 30° or more across the die.
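
As a rough back-of-the-envelope illustration of why these numbers matter (all values below are assumptions, not foundry data), Joule heating in a wire scales as P = I²R, and the resulting local temperature rise scales with the thermal resistance of the heat's escape path:

```python
# Back-of-the-envelope self-heating estimate; every value below is an illustrative
# assumption, not foundry data.

def joule_power(current_a, resistance_ohm):
    """P = I^2 * R, the heat dissipated in a resistive interconnect segment."""
    return current_a ** 2 * resistance_ohm

def temperature_rise(power_w, thermal_resistance_k_per_w):
    """dT = P * R_th for a simple lumped thermal model."""
    return power_w * thermal_resistance_k_per_w

i_wire = 1e-3         # 1 mA through a thin local wire (assumed)
r_wire = 50.0         # 50 ohm segment resistance (assumed)
r_th_planar = 2.0e4   # K/W escape path through the substrate (assumed)
r_th_finfet = 8.0e4   # K/W with a dielectric-wrapped fin, a poorer escape path (assumed)

p = joule_power(i_wire, r_wire)
print(f"Joule power: {p * 1e6:.0f} uW")
print(f"Planar-like temperature rise: {temperature_rise(p, r_th_planar):.1f} degC")
print(f"FinFET-like temperature rise: {temperature_rise(p, r_th_finfet):.1f} degC")
```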

An accurate analysis has to take both of these factors into account to meaningfully assess reliability impact. Typical margin-based (across-the-die) approaches are ineffective and lead to grossly uneconomic overdesign. That, of course, would take us next into the big data and SeaScape topic, but I’m not going to talk about that here. In this webinar, Ansys’ focus is the reliability analysis.


The thermal reliability workflow starts with Totem-CTA for analysis of AMS or custom blocks. This is based on a transient simulation and library models to determine local heating, EM violations and FIT violations. Totem will also build a model for the block which you can then use in the next step.


RedHawk-CTA will analyze digital IPs and the full chip-package system in a power-thermal-electrical loop simulation to determine temperature profiles by use case, along with thermal-aware EM and FIT violations. You probably know from my previous posts that it can also do this for 2.5D and 3D systems. Out of all of this, the RedHawk-CTA tool will generate a model that can be used in system-level analysis with Ansys IcePak, since system reliability concerns don’t stop at the package.

Ansys discusses a couple of customer case studies in the webinar. The focus is very much on the additional complexity self-heating introduces by increasing FIT rates, and on how improved visibility into root causes can help manage these down to an acceptable level through local (modest-impact) rather than global (high-impact) fixes.

In other aspects of reliability, the webinar first touches on ESD and path finding. Again, both Totem and RedHawk provide support to aid in ESD signoff through resistance, current-density, driver-receiver and dynamic checks. Out of this, RedHawk (PathFinder) will also build a model for system-level ESD analysis.

Electromagnetic compatibility (EMC) is an important component of reliability, in part because many SoCs now have multiple radios, so it becomes important to analyze both for EMI (EM noise) and EMS (EM immunity). An interesting consequence of studies in this area concerns the EMI impact of power switching in an SoC. We normally think of the impact of power switching on power noise but, unsurprisingly perhaps, power switching can also create significant EMI spikes.

Finally, the webinar covers analysis of aging effects using Path-FX. Aging is a hot topic these days. It’s important first to prove a design works correctly when built, within whatever margins, but what happens if behavior drifts over time, as it inevitably will, thanks to aging? One consequence is that new critical paths can emerge, and what were once safe operating conditions can become unsafe unless (in some cases) you slow the clock down. As a result, aging can create reliability problems. Since this aging won’t be uniform across the die, again you need detailed analysis to guide selective mitigation if you are going to avoid massive overdesign.

That’s where Path-FX comes in; it simulates orders of magnitude faster than conventional circuit-simulation solutions, but still with Spice-level accuracy, using the full design model, layout, parasitics and reliability PDKs from the foundry. From this you can compare the critical paths of the fresh design model with those of the aged model to find the paths where you need to take corrective design action.
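
The comparison idea can be sketched very simply (hypothetical path names and delays; Path-FX itself works from the foundry's reliability PDKs with Spice-accurate simulation):

```python
# Hypothetical fresh vs. aged path delays in ns; not Path-FX output, just the comparison idea.
fresh_delay = {"core/alu_path": 0.92, "mem/rd_path": 0.97, "io/tx_path": 0.88}
aged_delay  = {"core/alu_path": 0.95, "mem/rd_path": 1.04, "io/tx_path": 0.90}
clock_period = 1.00  # ns

# Paths that met timing when fresh but violate it after aging need corrective design action.
newly_failing = [p for p in fresh_delay
                 if fresh_delay[p] <= clock_period < aged_delay[p]]
print(newly_failing)  # ['mem/rd_path']
```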

Ansys really does seem to be in a class of its own in reliability analysis; I can see why they got a partner of the year award this year at TSMC. For anyone who cares about reliability tightly coupled with advanced foundry processes, they seem to be unbeatable. You can watch the webinar HERE.


Implementing IEEE 1149.1-2013 to solve IC counterfeiting, security and quality issues
by Tom Simon on 10-16-2017 at 12:00 pm

As chips for any design are fabricated, it turns out that no two are exactly the same. This is both a blessing and a curse. Current silicon fabrication technology is amazingly good at controlling the factors that affect chip-to-chip uniformity. Nevertheless, each chip has different characteristics. The most extreme case is chips that fail to meet timing. Next in line are chips that perform better or worse than others. I’ll touch on these kinds of differences and their implications a bit later. However, there is another reason to want to discern among unique chips from the same mask set.

If individual chips can be distinguished securely, it creates the potential to enable many important capabilities. If each chip can be given a unique, unalterable and non-duplicable identity, it enables secure boot, cloning protection, keyed feature upgrades and configurability, and secure encryption and decryption. The short version is that we want to transfer publicly viewable but encrypted information to a specific unique IC that is the only device able to decrypt that information. A prevalent way to do this is with public-private key encryption.
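
A minimal sketch of the public-key idea, using standard Python cryptography primitives rather than anything from Sidense or Intellitech: only the device holding the private key can decrypt data that anyone has encrypted to its public key.

```python
# Minimal public/private-key sketch using Python's 'cryptography' package; this illustrates
# the concept only, not Sidense's or Intellitech's actual on-chip implementation.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# In practice the private key would be provisioned into (or derived from) on-chip OTP NVM.
device_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_public_key = device_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone can encrypt data to the publicly known key ...
ciphertext = device_public_key.encrypt(b"feature-unlock-token", oaep)

# ... but only the one device holding the matching private key can decrypt it.
print(device_private_key.decrypt(ciphertext, oaep))  # b'feature-unlock-token'
```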

However, we have a chicken-and-egg problem. If all the chips that roll off the production line are identical, how can we seed them with unique secure keys so they can bootstrap the security process? We need some kind of non-volatile storage that can be easily provisioned in silicon and easily written right after fabrication. If the key is going to be verifiable and non-clonable, there needs to be hash data to verify it, and the on-chip storage must prevent reverse engineering of the data.
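
The hash-verification part can likewise be pictured very simply. This is an assumption-laden illustration, not the actual OTP provisioning flow: at provisioning time a digest of the key material is written alongside it, and at boot the device recomputes and compares.

```python
import hashlib
import secrets

# Provisioning (tester side): generate key material and its digest, write both to OTP NVM.
key_material = secrets.token_bytes(32)                 # hypothetical 256-bit device secret
stored_digest = hashlib.sha256(key_material).digest()  # stored alongside the key

# Boot time (device side): recompute the digest and compare before trusting the key.
def key_is_intact(key_from_otp, digest_from_otp):
    return hashlib.sha256(key_from_otp).digest() == digest_from_otp

print(key_is_intact(key_material, stored_digest))  # True
```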

Because this level of security is just as important for smaller, low-volume IoT designs as it is for large, high-volume consumer chips, the non-volatile memory must also be cost-effective and easy to implement. This rules out many technologies such as NAND flash, eFuse and the like, which can require additional process layers, complex write-support circuitry, external power pads and so on.

Many people are turning to one time programmable (OTP) NVM, like that offered by Sidense. It avoids these pitfalls and offers a high degree of flexibility. To facilitate this Sidense has partnered with Intellitech to provide a complete solution for externally writing security information to on-chip OTP NVM using the IEEE 1149.1-2013 standard. This is done using a JTAG TAP or an SPI interface that is easily added to the chip, and most likely already used for other JTAG functions.

Coming back to the topic of performance variations in chips, we should look at how chips are graded for different applications. It is a common practice to test chips to evaluate their individual speed and thermal performance. The failing chips are discarded – hopefully for good. The rest are often graded and sold for different end applications. Some are sold for higher prices because they run faster. Other better performing parts are used in systems that require higher reliability, such as aircraft, cars or military equipment.

However, there are many instances of lower-performing chips being illicitly relabeled as higher-performance parts. Or, even worse, failed parts have been put back into the supply chain. The customary method of indicating the grade of a part after testing is by marking the package. Package markings can be altered, leading to expensive quality and reliability issues in final assembled systems. What is needed is a way to store part grading within the parts themselves, in a tamperproof format.

Once again, IEEE 1149.1-2013 offers assistance through its Electronic Chip ID (ECID) specification. ECID allows on-chip storage of test results, temperature and speed grade, wafer number, die x/y location and other information. The storage area for ECID can be used for private information as well. By using ECID, it is possible to ensure that genuine and correct parts are being used in systems. It also enables a number of key reliability activities. If there are field issues, the wafer lot and die location information can be fed back to the supplier to help resolve quality problems.
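
As a rough illustration of the kind of record ECID makes possible, the traceability data packs into a compact fixed-format block. The field layout below is hypothetical; IEEE 1149.1-2013 defines how such a register is accessed, not this particular packing.

```python
import struct

# Hypothetical ECID-style record. The exact field layout here is an illustrative assumption.
# Fields: lot ID (8 bytes), wafer number, die x, die y, speed grade, temperature grade.
ECID_FORMAT = "<8sHHHBB"   # 16 bytes total

def pack_ecid(lot, wafer, die_x, die_y, speed_grade, temp_grade):
    return struct.pack(ECID_FORMAT, lot.encode().ljust(8, b"\x00"),
                       wafer, die_x, die_y, speed_grade, temp_grade)

def unpack_ecid(blob):
    lot, wafer, die_x, die_y, speed, temp = struct.unpack(ECID_FORMAT, blob)
    return {"lot": lot.rstrip(b"\x00").decode(), "wafer": wafer, "die_x": die_x,
            "die_y": die_y, "speed_grade": speed, "temp_grade": temp}

record = pack_ecid("LOT4711", wafer=12, die_x=34, die_y=56, speed_grade=2, temp_grade=1)
print(len(record), unpack_ecid(record))  # 16 bytes of traceability data per die
```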

ECID is another area that Sidense and Intellitech have focused on. Their complete solution provides for secure writing of the ECID data block. Intellitech also offers user level software and interface boards that allow for easy reading of the ECID information so it can be used to verify parts before they are soldered to a board. Additionally, in the case of failures, it is possible to read out the information needed for resolving reliability issues.

IEEE 1149.1-2013 is playing a major role in adding value and preventing fraud in the supply chain. With a solution like the one proposed by Sidense and Intellitech, it becomes feasible to maximize the benefits of ECID and to ensure that chips for niche markets can have security features matching larger mainstream SoCs. After all, the most likely target for a security attack would be edge-node chips that might not be designed with robust security.

Sidense OTP-NVM has a multitude of features to prevent reverse engineering and side-channel attacks. It can also come with completely self-contained write logic that works with system supply voltages. This, along with the fact that no additional process layers are required, makes it an excellent choice for implementing ECID, as well as key and feature-configuration storage. More detailed information about how the Sidense and Intellitech joint solution works can be found on the Sidense website.


This is How We Autonomous
by Roger C. Lanctot on 10-13-2017 at 12:00 pm

Some days it seems like the world is obsessed with autonomous vehicles. No one really understands why. Surveys tell us that consumers are both interested in and repelled by self-driving cars. What’s missing is the business model – the commercial reason for the existence of self-driving cars.

Today’s announced acquisition of Auro Robotics by RideCell should go a long way toward clearing things up. The all-stock deal gives RideCell ownership of Auro Robotics, maker of driverless, zero-emission shuttles.

Auro’s SAE Level 4 application of geo-fenced automated driving is precisely what is at stake in the current debate: self-driving shuttles operating in a defined area. Of course we’ve seen the headline-grabbing antics of Tesla Motors’ Level 2 vehicles and Waymo’s Level 4 shuttles with human operators, but Auro is first to market with a commercial driverless application.

Simultaneously with its acquisition of Auro, RideCell also announced the public availability of its autonomous operations platform, which has been used in multiple autonomous pilot programs. RideCell now offers the industry’s first complete autonomous new-mobility solution, enabling on-demand autonomous shuttle service in low-speed, private-road settings.

While testing on public roads, including highways and surface streets, and in dense urban environments may represent brass ring objectives for the autonomous driving industry, controlled use opportunities represent the low hanging fruit that will pay the bills and pave the way. The Auro acquisition allows RideCell to accelerate the evolution of its connected autonomous vehicle platform while enabling the company to refine the critical user interfaces crucial to crafting real world applications.

RideCell may have 20 customers operating in a variety of environments and scenarios, but the Auro acquisition and the new autonomous platform opens the door to opportunities where vehicles may or may not already be implemented. While RideCell partners, such as BMW’s ReachNow, may be operating in the wild with human drivers accessing car sharing or ride hailing use cases, the stage is set with Auro for a future driverless ReachNow proposition.

RideCell says it will continue to collaborate with partners to apply the RideCell platform to self-driving vehicles for automated management of operational tasks including cleaning, refueling, and emergency response situations. Auro expands RideCell’s opportunities with shuttle manufacturers to add self-driving capabilities for neighborhood electric vehicle platforms. These systems are designed to safely drive people around campuses, theme parks, resorts, business parks, and retirement communities.

Private environments with low-traffic, low-speed roads provide the perfect setting for deploying autonomous vehicles today. Auro-enabled shuttles were among the first driverless shuttles put into daily operation on the Santa Clara University campus in California and have already provided safe transportation to thousands of riders, the companies say.

The future of self-driving cars, therefore, is unfolding on college campuses, in theme parks, at resorts and in retirement communities. These are the places where valuable human interactions will be tested and assessed for the refinement of systems which will ultimately be accessible on surface streets, in urban centers and on highways. This is how we autonomous. This is why we autonomous.


Magillem User Group Meeting
by Bernard Murphy on 10-13-2017 at 7:00 am

Magillem is hosting a user group meeting on October 26th at The Pad in Sunnyvale. User Group meetings are always educational; this one should be especially so for a number of reasons, not least of which is the keynote topic: Expert Systems for Experts.


REGISTER HERE for the meeting in Sunnyvale on October 26th from 10:00 am to 6:00 pm

A great reason for going, as always in UG meetings, will be to hear other Magillem users talk about how they are using these tools in their daily design tasks. Since Magillem represents the leading edge of IP-XACT-based design assembly and support, that alone should be an incentive to attend.

Another reason, for you budding entrepreneurs, is to get a look at The Pad, a co-working/innovation center for young startups. I’m not sure you can get in if you’re not a member or don’t have business with a member (a prospect or VC, for example), so you might want to take advantage of this opportunity.

A particularly interesting reason is Magillem’s theme for the meeting: A Cognitive Assistant for SoC Designers. As a company working at the intersection of data representation standards and system design applications, Magillem have never operated as if they felt especially bound to limited functional areas. Today they work in system design, assembly and verification, traceability, data analytics and documentation. I’m sure there are rich opportunities for them to use cognitive methods in these domains.

REGISTER HERE for the meeting in Sunnyvale on October 26th from 10:00 am to 6:00 pm

Among other topics in the agenda, Eric Mazeran will talk on how cognitive assistants could help designers of SoCs and complex subsystems. There will also be demos of Magillem architecture intent and content platforms as well as their Crystal Bulb capability. From users, I expect to hear both from teams who have had Magillem technology embedded in their flows for many years and from others who have only recently adopted these flows. Should provide an interesting range of perspectives.

About the keynote speaker
Dr. Eric Mazeran heads Prescriptive Analytics & Decision Optimization R&D within IBM. He is responsible for delivering innovative, market-leading products based on Optimization & Cognitive technologies in the area of Prescriptive Analytics.

In addition to several executive roles in software development within IBM and ILOG, Eric has a deep technical background in Artificial Intelligence (author of expert systems, expert system shells, knowledge acquisition systems and rules engines) and software architectures (author of object-oriented languages, discrete-event simulation software, network management system software, etc.), and has over 25 years’ experience with business applications in the fields of AI and decision automation across many different companies and industries.

An alumnus of the Stanford University Graduate School of Business, Eric is passionate about both epistemology and management science. He holds a PhD in Artificial Intelligence from the French National Institute of Applied Sciences (INSA), Lyon, a Master’s in Robotics and Computer Science from INSA, Lyon, and a Master’s/Engineer’s degree in Civil Engineering from the Ecole Nationale des Travaux Publics de l’Etat, Lyon, France.

About Magillem
Magillem delivers software that provides seamless integration across Specification, Design, and Documentation processes, connecting all product-related information in a traceable hub of links.

To large corporations in various industries (semiconductors, embedded systems and more), Magillem sells tools and services that drastically reduce the overall cost of complex projects and tasks.

Magillem’s industry expertise can deliver real quality improvements for our customers, helping them deal successfully with the implementation of a new methodology. We can leverage our knowledge, experience and solutions to address our customers’ needs on the most challenging projects: the motto is to give you the tools to mine your own expertise and add value to your business.


Why Cars aren’t as Safe as Planes
by Roger C. Lanctot on 10-12-2017 at 12:00 pm

People are often confused and amazed that airline travel is so much safer than travel by bus, rail or car. The safety of air travel is truly miraculous, particularly in comparison to the disastrous history of terrestrial travel.

This reality was highlighted by the latest fatality data released by the U.S. Department of Transportation, revealing that annual highway fatalities in the U.S. rose 5.6% in 2016, the second consecutive annual increase. In contrast, 2016 was the second-safest year ever for air travel.

It seems counterintuitive that what appears to be the most terrifying mode of travel – flying through the air in a metal tube with hundreds of other humans (or alone) – is actually safer than driving to the local shopping center. But the safety of air travel is a direct positive return on a substantial investment into regulatory oversight.

What really sets airline safety apart from the safety of terrestrial travel is the amount of scrutiny brought to bear when crashes and fatalities do happen. Federal investigators jump into action when a crash occurs, whereas the routine nature of car crashes merits little attention – even in the event of fatalities.

In fact, the first time such scrutiny was brought to bear on a car crash was the investigation of Tesla Motors’ first fatal crash by both the National Highway Traffic Safety Administration and the National Transportation Safety Board. The findings differed, but the outcome was identical – Tesla used data collected by the vehicle and its working knowledge of the system to more or less exonerate itself.

It is a sad comment on that crash that the driver who died was using Tesla’s inaptly named Autopilot. Perhaps we can forgive that driver for taking the name of the system too seriously. Any actual airplane pilot will tell you that “autopilot” is a function best used in an airplane operated in the sky by a trained pilot, not in a vehicle on the ground operated by an insufficiently trained and inattentive human driver.

The delta between the safety of air and car travel is directly related to the investment in regulatory oversight. In the U.S., nearly $16B is spent annually in support of the Federal Aviation Administration, more than 16x the $908M invested in the National Highway Traffic Safety Administration. Both organizations reside within the U.S. Department of Transportation.

It might be argued that we are saving $15B by not expanding NHTSA on the same scale as the FAA. The hundreds of billions of dollars in economic losses from 100+ daily highway fatalities and hundreds of thousands of injuries are argument enough to dispel any such apathy.

Correlated to that spending gap are the 50,000 employees working at the FAA vs. the 610 working at NHTSA, according to Wikipedia data. In essence, the Federal government has prioritized the safety of air travel, and that commitment is reflected in appropriate investments in personnel and resources. Automotive safety, meanwhile, is underfunded and de-emphasized, with a correspondingly predictable outcome.

Both the aviation and automotive industries are viewed as essential pillars of the U.S. economy, sources of national pride, and projections of U.S. power and influence. Sadly, the lack of focus on automotive safety regulation has contributed to a decline of that prestige and power in the U.S. automotive industry, which has seen a steadily declining share of global vehicle sales as well as diminished technological leadership.

Getting serious about automotive safety and reducing highway fatalities is going to require a far greater Federal financial commitment than is currently contemplated. NHTSA itself has been defunded and demoralized over recent years and now lacks internal leadership – with no Administrator having been appointed to lead the agency.

The 37,461 people who died on U.S. highways in 2016 can blame legislative distraction fueled by automotive industry lobbyists. NHTSA has shared the details of the toll this inaction took in 2016:

  • Distraction-related deaths (3,450 fatalities) decreased 2.2%;
  • Drowsy-driving deaths (803 fatalities) decreased 3.5%;
  • Drunk-driving deaths (10,497 fatalities) increased 1.7%;
  • Speeding-related deaths (10,111 fatalities) increased 4%;
  • Unbelted deaths (10,428 fatalities) increased 4.6%;
  • Motorcyclist deaths (5,286 fatalities – the largest number of motorcyclist fatalities since 2008) increased 5.1%;
  • Pedestrian deaths (5,987 fatalities – the highest number since 1990) increased 9.0%;
  • Bicyclist deaths (840 fatalities – the highest number since 1991) increased 1.3%.

Annual highway fatalities of nearly 40,000 souls can no longer be regarded as simply the cost of getting around. The ongoing increase in highway fatalities is a sobering reminder of the severity of the current crisis and one that won’t be fixed with self-driving car legislation or a vehicle-to-vehicle communications mandate without investments in greater regulatory resources.

Automotive technology is becoming increasingly complex with a vast expansion of safety systems, semiconductors, electronics and software in vehicles. The responsible regulatory agency, NHTSA, has not expanded in kind to take on the task at hand which is nothing less than the ineffable challenge of proving a negative – that a crash did not occur and lives were saved because of the implementation of a safety system.

Car companies and their suppliers need a coordinated data-driven effort backed by appropriately trained engineers to create the tools to determine how to mitigate fatalities. Such a program should be tasked with determining which system or combination of systems have proven effective in avoiding collisions and crashes.

Maybe a ranking of highway fatalities by car maker will get the industry’s attention – even if it might be misleading.

The default of the industry and regulators has been to blame 94% of crashes on drivers. If that is indeed true, then let’s attack driver training and licensing. And let’s do our best to design vehicle systems that are more forgiving of subpar drivers.

Let’s face it, it will be years before we are able to turn the driving task over entirely to the robots. It may not be possible in the long run without changes in infrastructure. In the meantime, it is criminal to tolerate 100+ daily fatalities. It’s not okay.

In the U.S. and in other parts of the world there is an assumption that everyone wants to drive, everyone should drive and maybe everyone should own a car. Maybe some people shouldn’t be driving and/or shouldn’t own cars. Maybe driving privileges should be more easily revoked or removed.

It’s not as if there aren’t options for getting around without doing the driving oneself. Why not leave it to the professionals? There has to be a better way. It’s simply not practical to fly commercial to every daily destination – even if it is safer.


TechCon: See ANSYS and TSMC co-present
by Bernard Murphy on 10-12-2017 at 7:00 am

ANSYS and TSMC will be co-presenting at ARM TechCon on Multiphysics Reliability Signoff for Next Generation Automotive Electronics Systems. The event is on Thursday October 26th, 10:30am-11:20am in Grand Ballroom B.


You can get a free Expo pass which will give you access to this event HERE and see the session page for the event HERE.

This topic is becoming a pressing concern, especially for FinFET-based designs. There are multiple issues impacting aging, stress and other factors. Just one root cause should by now be well known: the self-heating problem in FinFET devices. In planar devices, heat generated inside a transistor can escape largely through the substrate. But in a FinFET, dielectric is wrapped around the fin structure and, since dielectrics are generally poor thermal conductors, heat cannot escape as easily, leading to a local temperature increase. Much of that heat ultimately escapes through the local interconnect, causing additional heating in that interconnect. Add to that increased Joule heating, thanks to higher drive and thinner interconnect, and you can see why reliability becomes important.

Arvind Vel, Director of Product Management at ANSYS, and Tom Quan, Deputy Director of the Design Infrastructure Marketing Division at TSMC North America, will be presenting. This should be an interesting session. ANSYS has developed an amazingly comprehensive range of solutions for design for reliability, spanning thermal, EM, ESD, EMC, stress and aging concerns. In building solutions like this, they work very closely with TSMC, so much so that they received a partner of the year award at this year’s TSMC OIP conference.

Summary
Design for reliability is a key consideration for the successful use of next-generation systems-on-chip (SoCs) in ADAS, infotainment and other key automotive electronics systems. These SoCs, manufactured on TSMC’s 16FFC process, are advanced multi-core designs with significantly higher levels of integration, functionality and operating speed. They have to meet strict requirements for automotive electronics functional safety and reliability.

ANSYS and TSMC have collaborated to define workflows that enable electromigration, thermal and ESD verification and signoff across the design chain (IP to SoC to package to system). Comprehensive multiphysics simulation that captures the various failure mechanisms and provides signoff confidence is needed to not only guarantee first time product success but also ensure compliance within regulatory limits.

This session will provide an overview of ANSYS’ chip package system reliability signoff solutions to create robust and reliable electronics systems for next generation automotive applications along with case studies based on TSMC’s N16FFC technology.

Founded in 1970, ANSYS employs nearly 3,000 professionals, many of whom are expert M.S. and Ph.D.-level engineers in finite element analysis, computational fluid dynamics, electronics, semiconductors, embedded software and design optimization. Our exceptional staff is passionate about pushing the limits of world-class simulation technology so our customers can turn their design concepts into successful, innovative products faster and at lower cost. As a measure of our success in attaining these goals, ANSYS has been recognized as one of the world’s most innovative companies by prestigious publications such as Bloomberg Businessweek and FORTUNE magazines.

For more information, view the ANSYS corporate brochure.


A better way to combine PVT and Monte Carlo to improve yield
by Tom Simon on 10-11-2017 at 12:00 pm

TSMC held its Open Innovation Platform Forum the other week, on September 13th. Each year the companies that exhibit at this event choose to highlight their latest technology. One of the most interesting presentations I received during the event was from Solido. In recent years they have produced a number of groundbreaking machine learning-based tools for Monte Carlo, statistical PVT, cell optimization, library characterization and other critical verification tasks.

Simply put, their aim has been to reduce the amount of brute-force simulation needed to validate designs before silicon production. Their latest offerings have come from their Machine Learning Labs, and the product they introduced at OIP this year is another in this line. It is called PVTMC Verifier, which is a mouthful, but should be correctly construed to mean that they are combining analysis over PVTs with Monte Carlo, to fully account for all meaningful operating conditions and states.

This is significant because the largest growth segments in semiconductor design each need this kind of thorough analysis to ensure silicon success. IoT and mobile are pushing the limits of low operating voltages, which can push margins to the edge. Automotive has extremely high reliability requirements combined with harsh environmental conditions. High-performance computing demands the tightest timing, again often at high operating temperatures.

With unlimited time and computing resources, the easiest approach would be to perform Monte Carlo simulations on every single PVT corner. This would ensure that working silicon could be achieved within the desired sigma. To adapt to the realities of chip-level verification, the typical flow starts by running a large number of PVT corners to find the worst cases, and these worst-case corners are then run through Monte Carlo simulations. While this saves significant time, it still suffers from the possibility that a worst-case PVT at nominal may not be the worst case at the target sigma. In that event a true worst case may be overlooked, possibly leading to an unexpected failure.
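
A quick count shows why the brute-force approach is untenable; the corner count below is an assumption in a plausible range, and the sample count follows from the Gaussian tail probability at the target sigma.

```python
import math

def tail_probability(sigma):
    """One-sided Gaussian tail probability at a given sigma level."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

n_corners = 45          # assumed PVT corner count (process x voltage x temperature)
target_sigma = 5.0
p_fail = tail_probability(target_sigma)   # ~2.9e-7
samples_per_corner = round(1 / p_fail)    # ~3.5 million samples to expect even one failure

print(f"Failure probability at {target_sigma} sigma: {p_fail:.2e}")
print(f"Monte Carlo samples per corner: ~{samples_per_corner:,}")
print(f"Brute force across all corners: ~{samples_per_corner * n_corners:,} simulations")
```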

Until now, this problem has been partially addressed through the use of statistical corners: run Monte Carlo first to obtain sigma distributions, then apply these to PVT corners to see where failures are likely. The catch is that a given sample in a Monte Carlo run may move to a different probability at other PVT conditions, so the probability profile can change dramatically for some sets of samples as they are moved across PVTs.

Solido is applying machine learning to solve this difficult issue. They have learned that Monte Carlo samples behave in several distinct ways relative to each other as they are run at different PVTs. Machine learning can be used to characterize the relative ordering of the samples based on a small number of simulation runs, which garners enough information to predict how the probability distributions will shift when moving between PVT corners.
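
A heavily simplified sketch of that idea follows. It is not Solido's algorithm, just an illustration of training a model on a small subset of (corner, sample) simulations, predicting the rest, and ranking the likely worst cases; the data, model choice and response function are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical data: 20 PVT corners x 500 Monte Carlo samples. In reality each value costs
# one SPICE run; here a synthetic response stands in for the simulator.
corners = rng.normal(size=(20, 3))    # encoded (process, voltage, temperature) per corner
samples = rng.normal(size=(500, 4))   # statistical (mismatch) variables per MC sample

def simulated_delay(c, s):
    # The cross term makes the ordering of samples shift between corners, as described above.
    return (1.0 + 0.1 * (c @ [0.5, -0.8, 0.3])
                + 0.05 * (s @ [1.0, 0.4, -0.6, 0.2])
                + 0.02 * c[1] * s[0])

# "Simulate" only a small subset of (corner, sample) pairs ...
train_pairs = [(i, j) for i in range(len(corners))
               for j in rng.choice(len(samples), 25, replace=False)]
X_train = np.array([np.concatenate([corners[i], samples[j]]) for i, j in train_pairs])
y_train = np.array([simulated_delay(corners[i], samples[j]) for i, j in train_pairs])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# ... then predict every remaining combination and rank the likely worst cases, which are
# the only ones that need full-accuracy re-simulation.
X_all = np.array([np.concatenate([c, s]) for c in corners for s in samples])
predicted = model.predict(X_all)
worst = np.argsort(predicted)[-10:]   # candidate worst-case (corner, sample) pairs
print(f"Simulated {len(train_pairs)} of {len(X_all)} combinations; "
      f"worst predicted delay {predicted[worst[-1]]:.3f} ns")
```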

In a sequence of three cycles of simulations using a small subset of the possible corners and samples, it is possible to predict the worst cases for a design. This takes the number of simulations required down from tens of thousands to somewhere between 500 and 2,000. It works up to 5 sigma by default but can go higher if needed. The reliability of the results can be verified at runtime, so there is no doubt about their integrity.

I have been saying that Machine Learning will be a revolutionary force in EDA. Solido has made great headway in applying it to numerical problems. We can also expect to see more in the area of visual pattern recognition. For instance, Mentor already has some technology it is using for DRC. For more information on how Solido is applying Machine Learning technology developed in its Machine Learning Labs, please look on their website.


Do investors understand the new memory paradigm?
by Robert Maire on 10-11-2017 at 7:00 am

Micron put up a great quarter, beating both quarterly expectations and guidance. Even though the stock was up 8%, we still think it has a long way to go, as investors have not fully embraced the upside ahead in the memory market.
Continue reading “Do investors understand the new memory paradigm?”