In-System Automotive Test
by Daniel Payne on 02-01-2018 at 12:00 pm

I’ve been driving cars since 1975, and in the early days we had only simplistic gauges for feedback: speed, fuel level, oil level, RPM. Back then, when you popped the hood of a car you could see through the engine compartment to the ground below; today’s engine compartments are crammed with tubes, wires and assemblies packed so tightly that you cannot see the ground, and electronic content abounds. Dashboards of modern cars even report when a turn signal or other light bulb is burnt out, or when the tire pressure is out of spec and needs to be checked. A report from IDC predicts a 19% growth rate for infotainment systems in our cars. McKinsey sees automotive semiconductor trends in several areas:

  • Vehicle electrification
  • Increased connectivity
  • Autonomous driving
  • Shared mobility services

One established vendor to the automotive industry over the years has been Renesas, which offers products in eleven distinct areas.

A big challenge in automotive electronics is safety: we want to know that our cars are operating safely, and we want to know when something has gone wrong so that we can take action, like scheduling maintenance or repairs. To accomplish these goals, semiconductor suppliers do extensive testing of electronic parts before they are shipped to automotive vendors, but what about after the parts are installed in the vehicle? Is there a way to test some or all of the electronic components in our cars throughout their lifetime of use?

The short answer is yes, there are techniques that chip designers can employ to allow testing of electronics after they are installed, known as in-system testing. We love our acronyms in high tech, so I’ll give you another one that fits our topic: BIST, which stands for Built-In Self-Test (a minimal sketch of the idea follows the list below). With BIST the chip designer adds some extra logic inside the IC that will:

  • Check itself at power-up for faults
  • Check for conditions that cause failures
  • Report the issue(s)
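
To make the idea concrete, here is a minimal sketch of the classic logic BIST structure, written in Python for readability rather than RTL, and illustrative only, not Synopsys’ implementation: a linear-feedback shift register (LFSR) generates pseudo-random test patterns, and a multiple-input signature register (MISR) compresses the circuit’s responses into a signature that the power-on self-test compares against a known-good value computed at design time. The tap positions and the toy stand-in circuit are assumptions.

    # Hedged logic BIST sketch: LFSR pattern generator + MISR response
    # compactor. Tap positions and the toy "circuit" are assumptions.
    def lfsr_patterns(seed, count=1000, width=16, taps=(15, 13, 12, 10)):
        """Yield pseudo-random test patterns from a Fibonacci LFSR."""
        state = seed
        for _ in range(count):
            yield state
            fb = 0
            for t in taps:
                fb ^= (state >> t) & 1
            state = ((state << 1) | fb) & ((1 << width) - 1)

    def misr_signature(responses, width=16, taps=(15, 13, 12, 10)):
        """Compress a stream of responses into one signature word."""
        sig = 0
        for r in responses:
            fb = 0
            for t in taps:
                fb ^= (sig >> t) & 1
            sig = (((sig << 1) | fb) ^ r) & ((1 << width) - 1)
        return sig

    def circuit(x):                     # stand-in for the logic under test
        return (x ^ (x >> 3)) & 0xFFFF

    golden = misr_signature(circuit(p) for p in lfsr_patterns(0xACE1))
    # At power-up the BIST controller reruns the same patterns and compares:
    post = misr_signature(circuit(p) for p in lfsr_patterns(0xACE1))
    print("POST pass" if post == golden else "fault detected")

A stuck-at fault anywhere in the exercised logic would perturb the response stream and, with very high probability, corrupt the final signature, which is exactly what the self-test reports.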

An ideal BIST approach provides high fault coverage of all gates inside the IC, and does so in a short amount of time, like milliseconds. EDA tools now automate adding the BIST logic to a chip so that a designer doesn’t have to manually figure out the best implementation, and engineers at Renesas have just publicly talked about how they used the DFTMAX LogicBIST tool from Synopsys in their automotive group. From a design engineer’s perspective, you add LogicBIST when your RTL code is stable and before physical implementation, in a well-defined BIST insertion flow.

During the process of adding LogicBIST, the Renesas engineers also used SpyGlass DFT ADV, a tool that increases fault coverage levels. It accepts RTL code, ensures the design is scan-compliant, diagnoses DFT issues early, performs lint checking and DFT analysis, and estimates stuck-at and transition fault coverage.

Renesas and Synopsys were able to certify that DFTMAX LogicBIST meets Part 8 of the ISO 26262 functional safety standard that the automotive industry follows.

Benefits that Akira Omichi from Renesas sees from this LogicBIST approach include:

  • Useful on mixed-signal automotive designs
  • Power-on Self Test (POST) improves safety
  • Consumes minimal chip area
  • Gives high fault coverage on the digital logic
  • Easy to use

Talking with Robert Ruiz of Synopsys by phone I learned that the ISO 26262 standard for functional safety presumes that there is a human driver in the car, so it doesn’t directly pertain to fully autonomous vehicles. With POST and LogicBIST the goal is to find and report any latent faults, typically showing up as a message on the dashboard.

Over time you can expect to see more and more automotive semiconductor suppliers adding LogicBIST to their IC designs as a means to improve functional safety and differentiate their product offerings. With EDA vendors like Synopsys, adding LogicBIST is an easier process for chip designers because of the automation in the DFTMAX LogicBIST tool.

Crystal Bulb: Sharing Design Intelligence
by Bernard Murphy on 02-01-2018 at 7:00 am

There is a trend among design companies to want to extract more intelligence from designs in-process and designs past, in support of optimizing total enterprise efficiency. Design automation companies see opportunity in leveraging this interest since they, in various ways, have a handle on at least part of the underlying data. The question of course is what constitutes a sufficient base of design information to support extracting information that will be widely useful to the enterprise. That depends on what sort of information you want to extract.

Design data management companies are making a play in this space because they have a good handle on the chip design bill-of-materials (BOM) view (and perhaps aspects of the software), if not packaging, reference board and documentation. Some even go deeper into some IP, reading OpenAccess (OA) physical databases and netlists. This unquestionably extends their ability to mine more information for those types of design objects, though it is unclear how deep an understanding they have of higher-level design semantics.

But how extensively does that level of understanding help the enterprise? If I’m an applications engineer, or a packaging engineer, or I’m developing documentation or software drivers, or planning a reference board, or even program-managing the technical interface with key accounts, understanding the current state of the BOM is certainly important but it’s less clear how useful low-level design details will be beyond the needs of hands-on chip design and implementation teams.

But if you buy into IP-XACT-centric design flows, the picture changes quite significantly. Now you have a much more structured view of a design from the IP level on up with quite a lot of design semantics already built in. Configured IP have their configuration parameters stored in a well-defined way, so you know not only BOM info but also in what way each IP was configured. You can extract clock and reset trees to document / validate how these are structured. You have complete information on memory mapping, essential to documenting the hardware/software interface, auto-generating the low-level hardware abstraction layer and building sequence tests for verification. IO pin-muxing mapping provides test, documentation and reference board teams early insight into the package interface.
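
To make this concrete, here is a minimal sketch of how a tool (or a curious engineer) might pull configuration parameters and memory-map blocks out of an IP-XACT (IEEE 1685-2014) component description with off-the-shelf Python. The file name and the exact set of elements present are hypothetical, and real components carry far more detail, but the point is that this data is machine-readable in a well-defined way.

    # Hedged sketch: extracting parameters and the memory map from an
    # IP-XACT component file. The file name is a hypothetical example.
    import xml.etree.ElementTree as ET

    NS = {"ipxact": "http://www.accellera.org/XMLSchema/IPXACT/1685-2014"}

    root = ET.parse("uart_component.xml").getroot()   # hypothetical file

    # Configuration parameters: how this IP instance was parameterized
    for p in root.findall(".//ipxact:parameter", NS):
        name = p.findtext("ipxact:name", namespaces=NS)
        value = p.findtext("ipxact:value", namespaces=NS)
        print(f"parameter {name} = {value}")

    # Memory map: base addresses and ranges, the raw material for the
    # hardware/software interface documentation mentioned above
    for blk in root.findall(".//ipxact:addressBlock", NS):
        name = blk.findtext("ipxact:name", namespaces=NS)
        base = blk.findtext("ipxact:baseAddress", namespaces=NS)
        size = blk.findtext("ipxact:range", namespaces=NS)
        print(f"addressBlock {name}: base={base} range={size}")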

Access to this information is what Magillem leverages through their CrystalBulb product. Pascal Chauvet (Strategic Account Manager) told me they have extended the scope of what can be understood beyond the IP-XACT schema, through a Magillem schema covering the chipset and board level (he expects these features may eventually be suggested as extensions to the standard).

CrystalBulb runs on a server, accessing data from distributed teams and providing that data either through a web-based dashboard or through standard file formats (Excel, text, PDF). At this stage Magillem doesn’t emphasize the need for an API and scripting to gather data (though they do provide a RESTful API), instead stressing a simpler (non-programming) approach through filters, which seems appropriate for the wide user base they are targeting. Pascal also mentioned that they currently work with customers on any extensions they might require.

As an example of usage, Pascal pointed to the needs of an application engineer doing software bring-up on first product samples. This engineer will need to understand the correlation between IP versions, IP configuration, chip-level functional modes, IO muxing, die pads/IC package pin-bonding and the board. Old way: dig through all the documentation, talk to the design team, learn “oh yeah, we changed that configuration at the last minute and it didn’t yet get to documentation”. New way: all the latest data is available in real time.

The product is designed for viewing data rather than editing (as it should be as an analytics system) and provides all the standard infrastructure for access control, following areas of interest and supporting review with annotations by stakeholders across the enterprise. Again, a more dynamic and real-time way to support enterprise-wide alignment and signoff.

Overall CrystalBulb looks like an interesting step towards widely-accessible analytics based on structured design. You can learn more about CrystalBulb HERE.


Open Silicon Year in Review 2017
by Daniel Nenni on 01-31-2018 at 7:00 am

If you are interested in what types of chips we will see in the coming years, ask an ASIC provider, because they know. Companies of all sizes (small, medium, large) use ASIC companies to get their chips out in the least amount of time and at minimum cost, because that is what ASIC companies do.

IP is an important ingredient in the ASIC business model, of course, and it is easy to see which types of IP are attracting the big ASIC customers; all you have to do is look at the ASIC company press releases and webinars:

Open-Silicon Expands Networking IP Portfolio to Address High-Bandwidth Ethernet Endpoint and Ethernet Transport Applications

Comprehensive IP subsystem includes Interlaken, Ethernet PCS, Flex Ethernet and Forward Error Correction IPs

“More and more enterprises and small/medium businesses are shifting their IT investments from in-house IT infrastructure to cloud-based IT services—spending on cloud services is on track to rise by over 30% in 2017,” said Matthias Machowinski, Senior Research Director at IHS Markit. “In response, cloud service providers (CSPs) are heavily investing in their infrastructure, and are currently upgrading their networks to 25/100G. In anticipation of continuing growth, CSPs are already looking towards the future, and once 400G becomes available in 2018/2019, we expect them to rapidly adopt this new technology.”

Open-Silicon Unveils Industry’s Highest Performance Interlaken Chip-to-Chip Interface IP

Supports up to 1.2 Tbps and up to 56Gbps SerDes rates

“The 3rd-party IP ecosystem has always played a key role in the industry. And now, with the unstoppable growth of high-bandwidth networking applications together with the desire to further technological advancements on a much quicker cadence, the demand for industry consortium standards that ensure interoperability grows sharply,” stated Michael Howard, senior research director and advisor, carrier networks at IHS Markit. “It is for these reasons that solutions such as this chip-to-chip Interlaken IP core, will likely have high adoption into next-generation routers and switches, packet processors, and high-end networking and data processing applications.”

Open-Silicon Completes Successful Silicon Validation of High Bandwidth Memory (HBM2) IP Subsystem Solution

Silicon validation in TSMC’s 16nm FinFET technology and interoperability with HBM2 memory; Silicon-proven SoC solution enables next generation high bandwidth applications

“Open-Silicon’s successful silicon validation of an HBM2 IP subsystem in 16nm means that volume production of HBM2 ASIC SiPs are now a reality,” said Herb Reiter, President, eda2asic Consulting, Inc. and author of the most recent Multi-Die IC User Guide, co-sponsored by the Electronic System Design Alliance (ESD Alliance). “The benefit to the industry is significant, in that system developers of high bandwidth applications can minimize risk and time-to-market by having access to a complete silicon-proven HBM IP subsystem, and design/manufacturing of HBM2 ASIC SiPs from a single vendor.”

Open-Silicon Receives TSMC OIP Ecosystem Forum Customers’ Choice Award for Best Paper

High Bandwidth Memory (HBM2) IP Subsystem Solution Validation and Interoperability with HBM2 Memory Die Stack

“TSMC has a very selective process for accepting papers, and those that are chosen represent the highest quality and highest value to our customers,” said Suk Lee, TSMC Senior Director, Design Infrastructure Marketing Division. “Receiving the Customers’ Choice Award for the best paper clearly demonstrates the high level of interest in Open-Silicon’s HBM2 IP subsystem solution and the benefits it offers to our mutual customers using TSMC FinFET and CoWoS® technologies.”

One of the trends I have been tracking on SemiWiki is systems companies using ASIC services rather than investing millions of dollars in hiring or acquiring a team and tools. From the SemiWiki analytics you can see an increasing number of non-traditional chip companies researching IP, EDA tools, foundry and ASIC services. It really is encouraging to see the systems companies investing in the fabless semiconductor ecosystem, making the ASIC business great again, absolutely.


Adapting an embedded FPGA for Aerospace Applications
by Tom Dillinger on 01-30-2018 at 4:00 pm

The IC industry is commonly divided into different market segments – consumer, mobile, industrial, commercial, medical, automotive, and aerospace. A key differentiation among these segments is the characterization and reliability qualification strategy for the fabrication process and design circuitry. For each segment, specific voltage and temperature environment ranges are input to electrical characterization and analysis flows to confirm functional operation. Reliability analysis expands upon these circuit characterization parameters to evaluate various failure mechanisms, which can lead to either “hard error” lifetime fails or “soft error” transient fails. The primary lifetime failure mechanisms are related to device and interconnect parameter drift due to “aging”, associated with the number of power-on hours, on/off (thermal) cycles, interconnect current density, and circuit switching activity. The primary source of soft errors is exposure to external radiation.

I was curious to learn more about the aerospace segment – but first, I needed to study up on radiation-hardened (“rad hard”) soft error concerns and circuit design.

Rad Hard Design

Over the evolution of VLSI process technologies, failure diagnosis and experimental research have identified two principal sources of radiation-induced soft errors:

  • alpha particles incident on sensitive circuit nodes
  • cosmic ray-generated high-energy neutrons

IC industry veterans will remember the crucial time in the 1970s when DRAM “soft error upsets” (SEU) were a dominant failure mechanism. Groundbreaking research identified the root cause as the introduction of free electron-hole pairs generated by an incident radiated alpha particle, which creates an “ionization track” as it traverses the silicon, as illustrated in the figure below.


Figure 1. An illustration of the “ionization track” due to an incident alpha particle at an nFET device node. The principal charge collection mechanism is the drift due to the depletion region electric field. (From Autran, et al, “Soft-Error Rate of Advanced SRAM Memories: Modeling and Monte Carlo Simulation”.)

If these free carriers are generated in the critical “collection volume” associated with a depletion region electric field, the charges would drift to a circuit node, significantly disrupting the node voltage. (The electric field drift mechanism would be stronger than the diffusion of the free charge distribution.)

Fast forwarding several decades, many technical advances have been made to reduce the SEU rate:

  • packaging and die-package attach materials improvements reduce the flux of alpha particles from radioactive decay
  • with process/voltage scaling, the collection volume has decreased (although so has the critical charge, Qcrit, on dynamic storage nodes)
  • DRAM architectures have added ECC functionality to correct single-bit errors (see the sketch after this list)
  • systems have implemented memory scrubbing operations
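
As an aside, the single-bit correction that DRAM ECC performs is easy to demonstrate at toy scale. The sketch below uses a Hamming(7,4) code, a deliberately simplified stand-in for the much wider codes real memories use, to show how a single upset bit is located by the syndrome and flipped back.

    # Minimal Hamming(7,4) sketch: encode 4 data bits, flip one bit to
    # mimic an SEU, and correct it from the syndrome -- the same principle
    # DRAM ECC applies (at wider widths) to repair single-bit soft errors.
    def encode(d):                      # d: list of 4 data bits
        c = [0] * 8                     # positions 1..7 used; c[0] unused
        c[3], c[5], c[6], c[7] = d
        c[1] = c[3] ^ c[5] ^ c[7]       # parity over positions 1,3,5,7
        c[2] = c[3] ^ c[6] ^ c[7]       # parity over positions 2,3,6,7
        c[4] = c[5] ^ c[6] ^ c[7]       # parity over positions 4,5,6,7
        return c[1:]

    def correct(word):                  # word: 7 code bits, positions 1..7
        c = [0] + list(word)
        s = ((c[1] ^ c[3] ^ c[5] ^ c[7])
             | (c[2] ^ c[3] ^ c[6] ^ c[7]) << 1
             | (c[4] ^ c[5] ^ c[6] ^ c[7]) << 2)   # syndrome = error position
        if s:
            c[s] ^= 1                   # flip the upset bit back
        return [c[3], c[5], c[6], c[7]]

    data = [1, 0, 1, 1]
    code = encode(data)
    code[4] ^= 1                        # simulate an SEU on one stored bit
    assert correct(code) == data        # single-bit upset corrected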

Today, the principal cause of SEU is the incident flux of cosmic radiation – the figure below illustrates the constituent high-energy particles.


Figure 2. Cosmic rays generate high-energy neutrons, which have an extremely long range. (From NTT Systems Laboratories, “The action against soft-errors to prevent service outages”.)

The key contribution to SEU is from the neutron flux incident on a die. A neutron collision with the silicon lattice may result in a permanent displacement, which is a hard-fail lifetime consideration, especially for bipolar devices (due to increased recombination rates in the base junction). Of principal concern for CMOS devices, a neutron collision may generate (high-energy) secondary particles, which can then create free electron-hole pairs resulting in an SEU, in a manner similar to alpha particles.


Figure 3. Illustration of an inelastic neutron collision with the silicon lattice, and (charged) secondary particles creating free electron-hole pairs. (From Yuanfu, et al., “Single event soft error in advanced integrated circuit”.)

A specific consideration is that the high-energy neutron flux is much greater at high altitude, where aerospace equipment will be operating.


Figure 4. Neutron flux (1-10MeV momentum) versus altitude. (From KVA Engineering)


Figure 5. Neutron momentum versus flux rate, measured at sea level. (From Autran, et al.)

Radiation-hardened circuit development is focused on reducing the susceptibility to the impact of an ionizing event. SRAM bit cell and flip-flop circuits are designed to increase the Qcrit over their equivalent commercial library offerings. The circuit layouts are developed to minimize the collection volume, specifically the sensitive node area, and the magnitude of the depletion region electric field to the node.
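
To see why raising Qcrit and shrinking the collection volume matter, here is a back-of-envelope sketch of the upset criterion: depositing energy in silicon generates roughly 44.5 fC of charge per MeV (about 3.6 eV per electron-hole pair), and an upset occurs when the fraction collected at a sensitive node exceeds that node’s Qcrit. The specific numbers below (deposited energy, collection efficiency, Qcrit values) are illustrative assumptions, not characterized data.

    # Back-of-envelope SEU criterion; every number here is an
    # illustrative assumption, not characterized data.
    EV_PER_PAIR = 3.6            # ~3.6 eV per electron-hole pair in silicon
    Q_E = 1.602e-19              # electron charge in coulombs

    def deposited_charge_fC(energy_MeV):
        """Charge generated by depositing energy_MeV in silicon (~44.5 fC/MeV)."""
        pairs = energy_MeV * 1e6 / EV_PER_PAIR
        return pairs * Q_E * 1e15

    def upsets(deposit_MeV, collection_efficiency, qcrit_fC):
        """True if the charge collected at the node exceeds its Qcrit."""
        collected_fC = deposited_charge_fC(deposit_MeV) * collection_efficiency
        return collected_fC > qcrit_fC

    # A secondary particle deposits 1 MeV near a node; 30% of the charge
    # (~13 fC) is collected through the depletion-region drift:
    print(upsets(1.0, 0.3, qcrit_fC=2.0))    # True  -- low-Qcrit cell upsets
    print(upsets(1.0, 0.3, qcrit_fC=20.0))   # False -- hardened cell rides it out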

The Aerospace Market and eFPGAs

I knew the aerospace market was a big user of FPGA technology, so I reached out to the team at Flex Logix, developers of embedded FPGA IP, for their insights. I recently had the opportunity to chat with Geoff Tate, CEO, and Andy Jaros, VP of Sales, and learned a great deal.

Geoff had some very interesting financial data, indicating, “10% of the FPGA market revenue is from the aerospace segment. And, FPGAs represent 35% of the electronics cost in aerospace products. FPGAs are especially appealing due to their reconfigurability, as these products have a long deployment lifetime. Currently, there are very few commercial FPGA products qualified for the aerospace market.”


Andy added, “Aerospace companies have unique requirements for electronics, with regards to performance, power, and product volume.” (We are all aware that mobile product applications are extremely sensitive to cubic volume – I hadn’t thought much about aerospace-related designs, but they certainly are, as well.)


Andy continued, “For these reasons, there is a need to pursue technology integration, but the individual unit volume is relatively low. That’s why we are seeing strong interest from aerospace developers in leveraging embedded FPGA technology – providing for both integration and reconfigurability.”


“What about the rad hard requirements?” I asked.

Geoff said, “Our eFPGA architecture is extremely modular, allowing us to readily embed rad hard library cells into the LUT design, and rad hard bit cells into the SRAM. We recently completed a collaborative project with an aerospace Licensee, where we took their library and re-implemented the EFLX core into a rad hard implementation, with a preferred metallization stack. All the eFPGA synthesis and compiler support remains the same. We re-extracted and re-characterized the power/performance to the aerospace environment. All within a matter of a few months.”

Given a rad hard circuit library, the recent Flex Logix Technologies collaboration demonstrates that developing an eFPGA for aerospace products is achievable quickly and with low cost.

Considering the traditional appeal of FPGAs for this segment, and the benefits of technology integration, I anticipate there will be more announcements in the near future. For additional information on aerospace applications for eFPGAs, please follow this link.

PS. The DAC and ICCAD conferences are the premier places to learn about the latest in EDA tools research. Advanced process technology presentations are the highlight of the IEDM conference. For aerospace product developers, I learned from Geoff and Andy that the GOMACTech conference is the place to be: https://www.gomactech.net/2018/index.html . If you happen to be attending GOMACTech 2018, be sure to stop by the Flex Logix booth and say “Hi!”.

-chipguy


Automotive Mega-trends, Safety and Requirements Management
by Daniel Payne on 01-30-2018 at 12:00 pm

I come from a car-centric family where my father actually bought and sold over 300 vehicles in his lifetime, so automotive mega-trends pique my interest. A new conference called Semiconductors ISO 26262 held its first annual event last month, meeting in Munich with guest speakers from some impressive companies: Intel, NVIDIA, STMicroelectronics, Renesas, Melexis, Texas Instruments, Toshiba, Robert Bosch, Jama Software and NetSpeed Systems. There was a panel discussion all about ISO 26262, the functional safety standard for automotive, and Adrian Rolufs of Jama Software presented on Staying Competitive in Safety-Critical Applications with Requirements Management. I’ve read his 18-slide presentation and learned some important details. Adrian’s background includes 10 years in mixed-signal IC development and 5 years with requirements management across multiple industries: semiconductor, automotive, medical device, and aerospace & defense.

The big three automotive mega-trends are electrification, autonomous driving, and Travel as a Service (TaaS).

We’ve been blogging on SemiWiki for several years now on the Electrification trend of increased semiconductor content for automotive, and most analysts see something like a 10% growth rate here. Tesla, GM, Ford, Toyota, Waymo, Mercedes Benz, Google, Intel, NVIDIA, Audi and many other technology companies have joined the quest to make autonomous vehicles a reality for us to enjoy. Travel as a Service (TaaS) has been wildly popular around our globe with big brands like Uber and Lyft being in dominant positions today. So there’s a whole lot of change going on in our auto-centric society.

Electric Vehicles (EV) are now at a point where some 30 models are available worldwide, and we can expect that number to grow rapidly; worldwide EV sales went up 41% in 2016, and in the USA for 2017 EV sales were up 86% – now that’s growth worth noting. Even established car brands like Volvo have made a public commitment to 100% electric vehicles instead of gradually tapering away from combustion engines.

One big push for autonomous vehicles is safety, because at present some 1,300,000 people die annually in car crashes, an average of 3,287 deaths each day. The early adopters for autonomous driving are Tesla, Google and Waymo, but we will probably have to wait until 2025 to see widespread use of autonomous vehicles because of the technical, financial, government and legal challenges.

I’ve used both Uber and Lyft services in California and Oregon, and they’ve provided excellent apps for my smartphone wherever GPS and a cell tower are available. The other big trend is car-sharing apps like Car2Go and ReachNow, which get higher utilization out of our beloved cars that mostly sit idle. I expect that traditional taxi, limo and rental car services will decline and suffer as TaaS gains momentum.

If all of these mega-trends come to full realization, then visionaries at Google expect that we could experience some astounding benefits:

  • 90% reduction in auto accidents

    • 4.95 million fewer accidents in the US
    • 30,000 fewer deaths
    • 2,000,000 fewer injuries
    • $400B cost savings to our economy
  • 90% reduction in wasted commuting

    • 4.8 billion fewer commuting hours
    • 1.9 billion gallons in fuel savings
    • $101B saved in lost productivity and fuel cost
  • 90% reduction in cars

    • 80% reduction in cost per trip-mile
    • Car utilization climbing from 5-10% up to 75%
    • Better land use

The combination of these three automotive mega-trends and their efficiency benefits bodes quite well for the semiconductor industry and will continue to drive the healthy 10% growth rate for several years. Semiconductor vendors need to navigate the new regulatory and technical complexities of the automotive industry in order to stay competitive and relevant. With the ISO 26262 functional safety standard the vendors must now trace their requirements throughout the entire design process, and at first many companies will use standard office applications like Word and Excel to cobble together something quickly.

The company Jama Software was founded to automate the traceability of requirements throughout many complex product design flows, so you don’t have to kludge together something less efficiently with office tools. This automated approach provides new benefits like live traceability during product development to quickly provide the program manager with a view of how every user requirement impacts the technical requirements, design and even test cases.

This approach from Jama isn’t one-size-fits-all; in fact, you can adjust the tools to fit the way that your company does automotive design, with each step of your process captured along with its dependencies.

During the automotive design process you have many stakeholders, and when something comes up that will impact other stakeholders, the change needs to be reviewed. The Jama tools support this kind of stakeholder review by automating and capturing all review items in a convenient GUI format.

As the electronics in your automotive system are being designed, there’s a need for verification and validation of all requirements. For example, suppose a requirement states that the supply current on a pin must be 2mA maximum, and the verification limit is set at 1.7mA. If a validation measurement then comes in outside of the specification at 2.05mA, the Jama system quickly pinpoints the validation error, using a red color to catch your attention.
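
The underlying check is simple enough to sketch; the field names and traffic-light logic below are my own illustration, not Jama’s actual data model or API.

    # Illustrative sketch of the spec / verification-limit / validation
    # check described above; field names and thresholds are assumptions,
    # not Jama's data model or API.
    def check(requirement, measured):
        if measured > requirement["spec_max"]:
            return "RED: validation result violates the specification"
        if measured > requirement["verify_max"]:
            return "YELLOW: within spec but beyond the verification limit"
        return "GREEN: within the verification limit"

    supply_current = {"name": "pin supply current (mA)",
                      "spec_max": 2.0,     # requirement: 2 mA maximum
                      "verify_max": 1.7}   # tighter internal limit

    print(check(supply_current, 1.6))    # GREEN
    print(check(supply_current, 2.05))   # RED -- flagged, as in the example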

What makes the Jama tools so intuitive and easy to use are the web-based GUI, a liberal licensing model so that all the engineers can use them, and a low learning curve. Automotive companies may start out using Jama tools to meet ISO 26262 traceability requirements, enjoy the benefits, and then start to use the same tools for every product design. If a tool can reduce the number of silicon tape-outs, then its value becomes quite important. The consultants at Jama know the semiconductor and automotive industries and will help you get set up quickly to fit your specific design process using best practices.

Conflating ISO 26262 and DO-254
by Bernard Murphy on 01-30-2018 at 7:00 am

If you’re in the ASIC business, by now you should have a rough understanding of ISO 26262, the safety standard for automotive electronics. You may be less familiar with DO-254 which has somewhat similar intent for airborne electronics. Unless, that is, you design with FPGAs in which case your familiarity may be the other way around since there aren’t enough new aircraft produced each year to justify custom ASICs. So, ISO 26262 – ground-based vehicles, DO-254 – air-based vehicles, right?

Those lines are starting to blur. FPGAs are increasingly popular in ADAS and self/assisted-driving applications, particularly for their flexibility in supporting logic updates. Similar functionality is also useful in airborne applications. Planes are already well ahead of cars in self-piloting and continue to advance. Meanwhile, ASIC products for those advanced cars (think deep-learning platforms and more sensor fusion) could also be used in planes.

The technology and business potential are appealing. But if you’re aiming at both markets, you now have to comply with both standards. How much fun is that going to be? Two sets of documentation, two organizations, two engineering flows, even two products? Actually, it isn’t that bad. Aldec recently hosted a webinar on what it takes to get to dual compliance and showed there is a significant degree of overlap and that while there is some overhead, it can be managed with careful planning.

Aldec invited Tom Ferrell to present on this topic. Tom is principal in his own organization and has a 25-year background in dealing with standards of this type, which naturally have proliferated as indicated in the useful chart above. Tom broadly contrasted ISO 26262 and DO-254 in this way: the automotive standard is industry-driven, compliance depends on 3rd-party accreditation through the supply chain, it allows for different levels of compliance depending on context, and compliance is voluntary, at least in principle. The aeronautic standard, on the other hand, is highly regulated, compliance depends on government or surrogate approval, it must be uniform for a class of equipment, and it is effectively mandatory. He also noted that the standards are in some ways moving closer (albeit slowly), particularly in DO-254 through “guidance” extensions.

There are differences in terminology which can be confusing, particularly with respect to safety levels. There are also differences in scope (ISO 26262 is primarily about safety whereas DO-254 covers a broader range of requirements), how reliability is treated (the ISO standard is more explicit here), handling validation out of context (again ISO is better here) and personnel requirements (ISO requires identified staff with training/certification).

He added that the ISO standard better defines the supplier interaction and that, thanks to its relatively recent development and learning from prior standards, it takes a more holistic view of the system and software. It also allows for data-driven decision-making. In contrast, the DO standard is arguably less prescriptive and integrates requirements for process assurance (unlike ISO).

Tom’s net from this is, first, that there are no blocking points or direct contradictions between the standards, and that the tailoring (alternate means of compliance) option in the ISO standard provides a path to having it both ways. That said, you can’t fudge the documentation. You still have to generate two sets, but you can simplify the task (internally) by generating a compliance matrix where you match “items” (a loaded word, with different meanings in the ISO and DO standards) to check and document your compliance.
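
A compliance matrix can start out as something very simple. The sketch below shows the shape of the exercise: matching each work product to the clauses it satisfies in each standard and flagging gaps (such as the process-assurance difference noted above). Every item name and clause reference is a hypothetical placeholder, not a reading of either standard.

    # Toy compliance matrix: every item and clause reference below is a
    # hypothetical placeholder, not a reading of either standard.
    matrix = {
        "safety plan":         {"ISO 26262": "Part 2", "DO-254": "Sec 4"},
        "HW safety analysis":  {"ISO 26262": "Part 5", "DO-254": "Appx B"},
        "verification report": {"ISO 26262": "Part 8", "DO-254": "Sec 6"},
        "process assurance":   {"ISO 26262": None,     "DO-254": "Sec 8"},  # gap
    }

    for item, clauses in matrix.items():
        gaps = [std for std, clause in clauses.items() if clause is None]
        status = "covered in both" if not gaps else "gap in " + ", ".join(gaps)
        print(f"{item:22s} {status}")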

He also stressed that dual compliance is not something you can bolt on at the end of the product cycle. Planning for both has to start up-front, looking at architectural mitigation plans and safety analyses in both contexts. On the plus side, the required safety manager for ISO can also serve as the focal point for certification for the DO standard.

Tom made the point that this is hard work, requiring a deep understanding of both standards. Of course he’s plugging his services but I’m guessing people with his level of expertise are not thick on the ground. You can watch his presentation HERE. I should add a plug for Aldec also. This is an important topic and the webinar barely mentions Aldec tools (which are widely used in FPGA design, particularly in support of needs like this). Kudos to Aldec for promoting this general interest topic. This is true high-value content marketing!


IoT Designs Beginning to Shift to 7nm: Promises Upside for Cadence Physically-Aware Design Flow
by Mitch Heins on 01-29-2018 at 12:00 pm

Until recently, ICs at bleeding-edge nodes like 7nm technology from foundries like TSMC were mostly targeted for high-performance-computing (HPC) and mobile applications, or possibly high-radix switches that needed the increased performance of advanced nodes. The momentum of Moore’s Law and More-than-Moore saw foundries continuing to invest in more aggressive nodes, but it left many wondering what markets (and associated IC volumes) would justify the investments needed for 7nm and below.

In the meantime, the internet-of-things (IoT) emerged, and with it came the need for highly heterogeneous IC architectures with multiple different processor cores, embedded memories, networks-on-chip (NoCs) and a variety of different sensors, actuators, and interfaces including wireless transceivers. Conventional wisdom had been that these designs would separate the analog/mixed-signal portions (e.g. analog sensors, radios, transceivers, etc.) onto separate die from the digital processing elements and then combine everything into Systems-in-a-Package (SiPs).

However, we’re starting to see a shift. IoT designs, for example, can potentially drive huge volumes, and as such SiPs may be too costly, forcing designers to move to the bleeding-edge nodes to once again do it all on one chip. Similarly, IP companies that may have been making their own discrete analog/mixed-signal chips are being asked to integrate their IP into customers’ systems-on-chip (SoCs) to lower overall costs, especially in markets like mobile and IoT. Moving to nodes like 7nm enables designers to consolidate everything onto one die while achieving superior performance, lower overall system power and reduced piece-part pricing. Voilà, build it and they will come. It looks like the IoT market may end up making 7nm a node that hangs around for a long time. It’s a big gamble, though, as designing at 7nm is not for the faint of heart.

I recently spoke with David Stratman, DSG Product Management & Business Development manager at Cadence Design Systems. David confirmed that they have seen a dramatic upturn in the number of medium- and large-sized companies moving their mobile and IoT designs from more mature nodes straight to 7nm for the reasons mentioned above, in turn requiring any digital or mixed-signal subsystem IP providers to do the same. These companies are looking to Cadence’s expertise at 7nm to help them make the transition into a very different design environment. Fortuitously, Cadence has been planning for this since the introduction of the first of their next-generation digital platform products (e.g., Innovus, Tempus, Genus).

Innovus initially took on 16nm challenges such as FinFET design, including the use of colorization for multi-patterning. At 10nm, colorization became a full requirement and the tools also had to deal with increased wire resistance, making interconnect layer selection more challenging. At 7nm, it becomes even more complex with the use of via-pillars to get access to device pins. In fact, the entire flow had to be re-engineered to take physical aspects of 7nm design into consideration from the very beginning of the flow through to tapeout.

In the past, there was a clear line of demarcation between logic design and physical design. For decades designers used flows made up of tools from multiple electronic design automation (EDA) vendors, passing data between tools through both standard and non-standard interfaces. Cadence is betting that this is going to start changing with the latest technology nodes, and their tool flow reflects it. Their digital platform starts with a foundation of common core engines and full-flow optimizations. The flows make use of shared code that uses stage-specific abstractions of the data. Shared engines include placement, routing, delay calculation, extraction, timing and power analysis. The idea is to make the flow more predictive from the very start of the design by taking physicality into account as early in the design flow as possible.


This physical-first design flow changes the way design is done. Physical context is leveraged throughout the flow to the point that there is no explicit hand-off between logic and physical design. The idea is to continuously converge on design closure with smaller iteration loops enabled by shared compute engines. This full-flow optimization is difficult to do when using tools from multiple EDA vendors and Cadence is betting that the difference will be significant enough to convince customers to use their tools for the entire flow. To give added incentive, Cadence also upgraded their tool architecture to take advantage of massively parallel compute resources. Not only is the tool flow more integrated and predictive, but it’s also able to handle vastly larger design blocks when dealing with compute-intensive analysis algorithms such as extraction, timing and power analysis and physical verification.


David shared some recent results from a stealth startup company. The company began using Cadence Innovus on their first 16nm project alongside other industry tools and moved to the full Cadence digital flow at 7nm because of the flow’s shared physically-aware scalable engines. Not only did the company see big improvements in individual tool runtimes (for example, distributed STA on 400M gates flat ran in 6.5 hours vs. 50+ hours), they also saw a 2X improvement in physical design turnaround time with the Cadence Genus-Spatial + Innovus flow vs. their traditional flow.

Back to the IoT angle on this, Cadence also made sure that their digital flow now interfaces seamlessly with their Virtuoso-based mixed-signal and custom flow. That means that IoT designers have a direct path to integrating everything (digital and mixed-signal) they need on their SoC at the advanced design nodes. If IoT customers continue the trend of jumping to 7nm, both Cadence and TSMC stand to benefit greatly from Cadence’s physical-first design flow.

See Also:

TSMC @ N7 with Cadence (Cadence Breakfast Bytes Blog)
Cadence Digital Design and Signoff Platform


Global Semiconductor Market Trends ISS 2018
by Daniel Nenni on 01-29-2018 at 7:00 am

One of the other blog-worthy analyst presentations at ISS 2018 was by Len Jelinek of IHS. Len is my kind of analyst: he spent 28 years in the semiconductor industry before going to the dark side, so he knows what he is talking about. Len’s presentation on Global Semiconductor Market Trends is action-packed, so I will be doing a lot of cutting and pasting here:

IHS forecasts that total semiconductor industry revenue will grow by 7.4% in 2018

  • Overall semiconductor revenue growth in 2017 is estimated to be 21.7%
  • In 2018 IHS is forecasting that Memory will grow by 14% and all other semiconductors will grow by 4.5%
  • Wireless communication is forecast to benefit from next-generation handsets incorporating new features like biometrics, AI capability and increased battery life
  • Automotive electronics continues to grow by focusing on advanced safety features
  • Consumer electronics benefits from the move toward internet connected devices
  • Increased server deployment supporting cloud computing is a major driver in the Data Processing segment

I really think the semiconductor industry is being a bit conservative here. I would be quite shocked if we did not see double-digit growth. One example: China has more than 900 fabless semiconductor companies designing chips for new applications, and I fully expect India and other populous countries to follow suit. This is like the second coming of the fabless transformation we talked about in our first book, Fabless: The Transformation of the Semiconductor Industry.

Consumers Identify Increasing Value in Smartphones

  • Smartphones returned to growth in the 2nd half of 2017, driven by introductions of revolutionary features
  • Total smartphones in 2017 are forecast to grow by 7.4% with mid-tier handsets growing by 11.9%
  • Smartphone shipments are forecast to increase in 2018 to 1.6 billion units with YoY growth rate of 6.8%
  • Mid-tier smartphones are providing consumers with the most features / value for lowest price. In 2018 IHS anticipates these phones will grow by 12.4%

In my opinion the smartphone business still has plenty of room to grow. TSMC’s smartphone forecast for 2018 was even more conservative, which may be true, but my friends at Apple have other plans. Apple is all about the edge device and now that they have higher performance SoCs with embedded AI cores coming we will start to see some amazing things, absolutely. Think Health and Wellness applications.

Revitalization of Consumer Electronics is Underway

  • Opportunities exist for consumers to benefit from connected products (IoT) which enhance daily lives
  • China continues to represent the largest single market for consumer electronics with a CAGR of 6%
  • Wearable devices, home appliances and consumer drones are forecast to outperform the overall market segment
  • Home appliances are the largest sub-segment within consumer electronics and represent one of the largest IoT opportunities

Again, I think Len is being conservative here. Another application to watch, especially in China, is cryptocurrency mining (GPUs and ASICs). TSMC mined the mining market quite well in 2017 and I expect that to continue well into 2018 and maybe even 2019.

Semiconductor Device Growth 2016/2017


Semiconductor Device Forecast 2017/2018

Summary
A strengthening global economy should provide a positive impact on semiconductor industry in 2018:

  • Consumers and businesses anticipate positive economic growth in 2018, which should provide stable demand for semiconductor components throughout 2018
  • IHS anticipates that semiconductor revenue in 2018 will grow by approximately 7.4%
  • The impact on semiconductor industry revenue from memory-pricing volatility is forecast to be minimal
  • Non memory component revenue growth will slow in 2018 to 4.5%, down from 10.3% in 2017

Next generation demand drivers supporting IoT, 5G, and autonomous driving are still several years away from impacting semiconductor component manufacturing:

  • Next generation 5G wireless communication will start limited trials in 2018, commercialization targeted for 2020
  • Strong inventory management will impact manufacturing run rates throughout the first half of 2018
  • Q1 revenue is forecast to decline by 5.3%, with revenue recovery forecast to start in Q2
  • Demand for handsets, PCs, and consumer electronics is forecast to experience seasonal post-holiday slowing

Capital investments will continue to grow in 2018, by 5.4%, supporting new technology development and capacity expansions for memory and advanced CMOS.

Absolutely!


What GM Can Learn from Tesla
by Roger C. Lanctot on 01-28-2018 at 7:00 am

General Motors has had wireless connections to its cars for more than 21 years, thanks to Project Beacon, better known as OnStar, now operated as Global Connected Consumer Experience. OnStar has likely saved hundreds of lives, if not thousands, by summoning emergency responders to the scenes of crashes where airbags deployed.

To this day, the rest of the automotive industry remains of two minds regarding OnStar’s “automatic crash notification” functionality. Some luxury car makers have replicated and implemented the feature. Ford Motor Company offers a smartphone-based equivalent dubbed 911 Assist. But the feature never became the checkoff item it was intended to be and U.S. government regulators never saw fit to require it – unlike Europe where a local equivalent called eCall will become standard equipment on new type-approved vehicles later this year.

Even Tesla Motors chose not to implement an OnStar-like function in its cars. Every Tesla Motors vehicle is built with an embedded telecommunications connection, but if you crash in a Tesla and you are unconscious you will have to depend on the kindness of strangers to alert emergency responders.

Arguably Tesla vehicles were not designed with the prospect of a crash in mind. Yet Teslas continue to experience crashes including one famous fatal one in Florida in 2016. The thinking at Tesla may be that it is best to avoid crashes altogether rather than focusing on what to do after one occurs.

Now, both Tesla and GM, caught up in the struggle to deliver an automated driving proposition, have experienced nearly simultaneous collisions potentially tied to automated driving systems. In GM’s case, a motorcyclist has filed a lawsuit after a low-speed mishap involving a Cruise Automation test vehicle. For Tesla, a Model S equipped with Autopilot (activated) collided with a firetruck parked on the side of the road.

– GM Faces Lawsuit after Crash between Motorcyclist and Self-Driving Chevy Bolt – theverge.com

– Tesla Crash with Autopilot Triggers Safety Board Interest – Bloomberg.com

Tesla and GM have faced similar safety challenges in the past. In Tesla’s case, the company used vehicle data for its own forensic purposes to determine 1) that the Florida crash was entirely the fault of the driver of the vehicle misusing the Autopilot feature and 2) that Tesla vehicles equipped with Autopilot create fewer insurance claims.

In GM’s case, the company is still wrestling with the aftermath of the ignition switch crisis which has led to multiple individual and class action lawsuits, a $900M fine from the U.S. Department of Transportation and multiple Capitol Hill hearings to respond to Congressional questions as to how the ignition switch vulnerability had been overlooked for as long as it had. The possibility of GM exonerating itself Tesla-style with telltale vehicle data was never on the docket.

But knowledgeable industry observers, and even some incredulous Congressional representatives, were asking themselves (if GM was not) how the company could have failed to detect the ignition switch failures from the data collected from OnStar-equipped vehicles in the wild, to say nothing of inferences that might have been drawn from diagnostic scans at dealerships. For GM, it was the internal e-mail trail that proved determinative.

Maybe in this latest incident in California GM will be able to finally steal that page from the Tesla playbook and show, from vehicle data, that the Chevy Bolt was blameless in the low-speed collision with the motorcyclist. It’s an essential turning point for GM and the global automotive industry.

GM’s Cruise Automation unit, which is running tests in California, is in a contest with Waymo for self-driving car leadership. While Waymo has put up gaudy (i.e. impressively low) disengagement event data, Waymo has been focused on operating in the suburbs. Cruise has been subjecting itself to the acid test of operation in an urban environment – San Francisco, to be specific – with all of the conflicting modes of transportation, pedestrian and otherwise, and some unique topography and street restrictions.

It’s no surprise that Cruise has been experiencing way-more (wink) collisions than Waymo ever did. But it’s no excuse and GM/Cruise need to clear up the source of responsibility for the crash and make any necessary corrections before the otherwise minor incident is whipped up by safety advocates into a full-blown emergency brake moment for automated driving.

Why is it so important that we stay the course on automated driving? Because! Because current estimates are that approximately 37,000 people were killed on U.S. highways in 2017, the second consecutive annual increase. This is more than 100 daily fatalities and ignores the even more terrifying figures for injuries, to say nothing of the economic impact.

Automated driving and all of the advances leading up to it hold the promise of actually putting a dent in those figures and mitigating the economic impact and personal tragedy simultaneously. The minor incidents characteristic of current real-world testing of automated driving technology pale in comparison to the massive carnage being inflicted daily by non-autonomous driving activities. Where is the outrage?

We do need to learn, though, from these experiences so let the investigations of both incidents begin. With some luck GM will learn from Tesla and use this event as an opportunity to educate regulators and the general public as to the extent of its responsibility and the nature of the crash. As for Tesla, if history is any guide, Tesla will forage through its data trove and deliver up an exculpatory assessment of events preceding and leading to the Model S crash with the firetruck. (Maybe Musk will work his Jedi magic and have us believing that the crash never happened in the first place!)

GM is attempting to school Tesla and Alphabet as to how to deliver a self-driving car to market by 2019. Before GM schools any competitor, though, it needs to learn a few lessons of its own as to how to put vehicle data to work. Tesla and Alphabet are leaders in this respect and GM has some catching up to do.

As for avoiding collisions with motorcycles, GM and other car makers (and smartphone makers) ought to take a closer look at Ridar Systems. Ridar offers an app capable of alerting drivers to the presence of a nearby motorcyclist. A Lyft driver tooling me around Las Vegas two weeks ago might have benefited from such an application in the near-miss event we shared. There are solutions around for common everyday driving challenges – if one just looks closely enough. As for GM and Tesla, the answers to autonomous driving are in the data.


Webinar: The Emergence of FPGA Prototyping for ASIC and SoC Design
by Daniel Nenni on 01-26-2018 at 12:00 pm

One of the more interesting markets that I cover is FPGA prototyping. Interesting because it is fast-growing ($150-250M) and interesting because it is all about design starts, and design starts are the lifeblood of the semiconductor industry.

If you are interested in FPGA prototyping you might want to start with the 30+ S2C Inc blogs we have done over the last three years. You can also read our eBook Prototypical. Even better, you can sign up for the upcoming S2C webinar:

The Emergence of FPGA Prototyping for ASIC/SoC Design
FPGA prototyping has become a dominant part of the ASIC and SoC design and validation flow by significantly speeding up the progress of your project. This webinar will provide a brief overview of FPGA prototyping and address the key points on system selection. We will analyze how the flow changes on single or multiple FPGA systems and how prototyping software participates in the verification flow.

  • Thu, Feb 8, 2018 8:00 AM – 9:00 AM PST

Agenda:

  • S2C Prototyping Solutions Overview
  • How to Quickly Build and Target FPGA Prototyping
  • The Importance of Daughter Cards
  • Demonstration: Image processor on VUS/VUD
  • Questions and Answers

Presenters:
Daniel Nenni, Founder of SemiWiki.com
Richard Chang, Vice President of Engineering, S2C Inc.

I’ve recently started doing business development work with S2C and have been impressed with not just the products and employees, but also the diverse markets they serve. China, for example, has more than 900 fabless chip and systems companies doing designs, most of which will be using FPGA prototyping for chip verification and software development. FPGA prototyping is a front row seat to the future of semiconductor devices with S2C and China leading the way, absolutely.

About S2C
Founded and headquartered in San Jose, California, S2C has been successfully delivering rapid SoC prototyping solutions since 2003.

With over 200 customers and more than 800 systems installed, S2C’s focus is on SoC/ASIC development to reduce the SoC design cycle. Our highly qualified engineering team and customer-centric sales force understand our users’ SoC development needs. S2C systems have been deployed by leaders in consumer electronics, communications, computing, image processing, data storage, research, defense, education, automotive, medical, design services, and silicon IP. S2C is headquartered in San Jose, CA with offices and distributors around the globe including the UK, Israel, China, Taiwan, Korea, and Japan. For more information, visit www.s2cinc.com.