
Semicon West – The FDSOI Ecosystem

by Scotten Jones on 07-21-2017 at 12:00 pm

At Semicon West last week I attended presentations by Soitec and CEA Leti and had breakfast with CEA Leti CEO Marie Semeria, all key players in the Fully Depleted Silicon On Insulator (FDSOI) ecosystem. I have also seen some comments in the SemiWiki forum lately that make me believe there is some confusion about the roles of different companies in the FDSOI ecosystem. In this article, I will review the key players and their roles and then discuss the latest updates.

FDSOI Ecosystem
Figure 1 illustrates the roles of the major players in the FDSOI ecosystem.


Figure 1. The FDSOI ecosystem.

Regardless of whether a process is bulk, FDSOI or FinFET, all of the major companies running wafer fabs buy their starting substrates. For FDSOI, an SOI wafer is needed with a thin silicon device layer over a thin buried oxide layer. The leading provider of FDSOI wafers is Soitec, with SEH as a licensed second source.

The fab operators for FDSOI are ST Micro as an Integrated Device Manufacturer (IDM) and Samsung and GLOBALFOUNDRIES as foundries. CEA Leti is the leading development organization working on FDSOI technology.

FDSOI products are starting to reach the market. Sony has produced an FDSOI GPS chip that reduces power by 5x to 10x versus standard GPS chips, and NXP is producing 28nm FDSOI parts at Samsung for Amazon’s Alexa.

Automotive is an emerging area due to FDSOI’s inherent radiation tolerance. IoT is also expected to be a big market for FDSOI due to good RF and analog performance coupled with low power, high performance and relative ease of design.

Soitec
Soitec has been manufacturing 300mm SOI wafers for many years. Originally, 300mm was Partially Depleted SOI (PDSOI) used primarily by IBM. At one time IBM produced the processors for all three major gaming consoles, but that business is largely gone now. When I blogged about Soitec back in October of 2016, their 300mm manufacturing capacity was underutilized and the company was struggling financially.

My October 2016 Soitec blog is here.

During Semicon West, Soitec held a lunch briefing and disclosed that the company is now profitable. 200mm SOI is utilized to make RFSOI that goes into the front-ends of cell phones, and that has been a big success. 60% of Soitec’s revenue is from RFSOI, with 20% from automotive and 20% from emerging applications. RFSOI is beginning to migrate to 300mm, and FDSOI on 300mm is ramping. Silicon Photonics is another emerging application for 300mm SOI.

Soitec has 650 thousand wafers per year of 300mm capacity in France. 100 thousand wafers per year of the 300mm capacity is currently FDSOI with 400 thousand wafers per year planned. Soitec is also restarting their Singapore facility with plans to produce 800 thousand 300mm wafers per year.

ST Micro
ST Micro was an early proponent of FDSOI and developed 28nm and 14nm processes working with CEA Leti. ST Micro has put 28nm into production and licensed it to Samsung. ST Micro has never put 14nm into manufacturing but did license it to GLOBALFOUNDRIES to serve as the front end of line (FEOL) technology for 22FDX.

Samsung
Samsung licensed 28nm several years ago but then delayed the introduction while they worked out the manufacturing process. 28FDS was introduced in 2016, RF is being added in 2017 and embedded MRAM (eMRAM) in 2018. NXP has been very vocal in support of 28FDS.

Samsung has now announced 18FDS for 2019 with RF and eMRAM in 2020.

I have recently blogged about Samsung’s foundry roadmap including FDSOI here.

GLOBALFOUNDRIES (GF)
GF is currently ramping up 22FDX. 22FDX utilizes a 14nm FEOL licensed from ST Micro with a middle of line (MOL) that has two double patterned layers. 22FDX supports RF and will add eMRAM in 2018. 22FDX is the densest FDSOI process currently available and GF is reportedly engaged with over 60 customers.

My blog about 22FDX is available here.

GF is developing 12FDX with CEA Leti for introduction in 2019.

CEA Leti
CEA Leti has been a driver of FDSOI development. They did early work with ST Micro that led to the ST Micro 28nm and 14nm processes; that technology is being further commercialized by GF and Samsung. CEA Leti is now working with GF on GF’s 12FDX development and, according to CEO Marie Semeria, has 15 researchers stationed at GF’s fab in Dresden.

CEA Leti has modeled a 10nm FDSOI process and run test devices that match the modeled results. CEA Leti has also modeled 7nm, and because 10nm did not need all of the available performance boosters, Marie Semeria said she is confident 7nm is possible.

My previous interview with Marie Semeria is available here.

Conclusion
FDSOI has now built up a strong ecosystem. Starting wafers are available from Soitec and SEH, ST Micro is in production with 28nm as an IDM, Samsung offers 28FDS as a foundry with 18FDS in development, and GF offers 22FDX as a foundry with 12FDX in development. CEA Leti provides a world-class research institute continuing to develop denser versions of the technology, with 7nm as a future option.


Custom SoCs for IoT Revolution!

by Daniel Nenni on 07-21-2017 at 7:00 am

There are two interesting transformations currently taking place inside the semiconductor industry: First, systems companies (not chip companies) are now driving the semiconductor industry. Second, IoT-focused chips are accelerating design starts. The result is what I would call the Custom SoCs for IoT Revolution!

IoT first came to SemiWiki in 2014 and was met with a lot of doubters. Since then we have published 383 IoT-related blogs that, as of today, have been viewed 1,210,095 times by 19,759 different domains. Design IP is the most popular IoT topic and, as expected, ARM is the predominant vendor in IoT blogs. According to ARM, their mbed IoT Device Platform has already been adopted by more than 200,000 developers and is a fast path to silicon success. While I agree, there is an even faster path to custom IoT SoC silicon success, and that is working with an approved ARM Design Partner like Open-Silicon.

What is an ARM Design Partner? A company that is vetted and audited for its ability to deliver successful SoC design services based around the Cortex-M0 and Cortex-M3 processors in the ARM DesignStart program. ARM Design Partners must also be well versed in other ARM IP, have their own libraries of IP, and have a track record of silicon success, which brings us to Open-Silicon.

“With the broadening product portfolio of ARM DesignStart, now including both Cortex-M0 and M3, it is clear that ARM shares our vision for simplifying the path for system developers to deploy IoT platforms,” said Mark Wright, Sr. Vice President of Sales and Marketing, Open-Silicon. “Open-Silicon’s Spec2Chip IoT platform, based on Cortex-M, is enabling the development of highly-differentiated custom SoCs for various IoT applications with reduced risk, schedule, and cost.”

Open-Silicon has a nice ARM IoT SoC Platform landing page HERE with white paper downloads to get you started:

· Product Differentiation Using ARM Cortex-M Based IoT Edge SoCs
· IoT SoC Platform Demonstration Cortex-M Series
· Trust Based IoT Security Mechanism For ARM Based SoCs

In addition to design, Open-Silicon also does manufacturing and can deliver tested chips ready for assembly. In fact, Open-Silicon is the only ARM Design Partner I know of that can do end-to-end custom IoT SoCs.

Remember, Open-Silicon has shipped more than 125 million chips, so if you are considering a custom IoT SoC, that is where you should start. If you need a proof of concept to raise money, or if you need to get your software development started ASAP, Open-Silicon can quickly deliver your design via FPGA and then move it to custom silicon for mass production.

Bottom line: The IoT systems business is highly competitive, so you will need complete control over your silicon. If you are not doing a custom SoC today, you may not have the opportunity to do one tomorrow, absolutely.

About Open-Silicon
Open-Silicon transforms ideas into system-optimized ASIC solutions within the time-to-market parameters desired by customers. The company enhances the value of customers’ products by innovating at every stage of design — architecture, logic, physical, system, software and IP — and then continues to partner to deliver fully tested silicon and platforms. Open-Silicon applies an open business model that enables the company to uniquely choose best-in-industry IP, design methodologies, tools, software, packaging, manufacturing and test capabilities. The company has partnered with over 150 companies ranging from large semiconductor and systems manufacturers to high-profile start-ups, and has successfully completed 300+ designs and shipped over 125 million ASICs to date. Privately held, Open-Silicon employs over 250 people in Silicon Valley and around the world. To learn more, visit www.open-silicon.com


IP Diligence

by Bernard Murphy on 07-20-2017 at 12:00 pm

I hinted earlier that Consensia would introduce at DAC their comprehensive approach to IP management across the enterprise, which they call DelphIP (oracle of Delphi, applied to IP). I talked with Dave Noble, VP BizDev at Consensia to understand where this fits in the design lifecycle.


IP management means a lot of different things. To most of us it revolves around design data management (DDM) which is certainly an important component. But there’s another consideration, at least as important, concerning the fitness or appropriateness of the IP you have selected for use in your design. Here we may think of this primarily in terms of functionality and PPA but there are other equally important concerns:

· What choices do I have for a specific IP?
· Do we have a paid-up license to use this IP on this design?
· Do some team members (perhaps in overseas locations) lack permission to see or use aspects of this IP?
· Will use of this IP in this design for this target market comply with ITAR restrictions?
· Are there marketing/business restrictions on how the IP may be used for this design?
· Does our company have a track record with this IP in the target process?
· Who has to sign off on changes you may want to make concerning this IP?
· Does this IP depend on other IPs, and what are the restrictions on those?
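Checklists like this lend themselves to machine-checkable policy rules. As a rough sketch of the idea (with an invented data model and invented field names, not any vendor's actual API):

```python
# Illustrative only: a toy policy check over a hypothetical IP catalog entry.
# Field names (license_paid, export_controlled, allowed_markets, ...) are
# invented for this sketch and do not reflect any real IP-management product.

def check_ip_usage(ip, design_region, target_market):
    """Return a list of concerns raised by using this IP in a design."""
    issues = []
    if not ip.get("license_paid"):
        issues.append("no paid-up license for this IP")
    if ip.get("export_controlled") and target_market not in ip.get("allowed_markets", []):
        issues.append(f"ITAR/export restriction for target market {target_market}")
    if design_region not in ip.get("allowed_regions", []):
        issues.append(f"team region {design_region} lacks permission to use this IP")
    for dep in ip.get("depends_on", []):  # dependencies inherit the same checks
        issues += check_ip_usage(dep, design_region, target_market)
    return issues

serdes = {"license_paid": True, "export_controlled": True,
          "allowed_markets": ["US"], "allowed_regions": ["US", "EU"],
          "depends_on": []}
print(check_ip_usage(serdes, "EU", "CN"))
# ['ITAR/export restriction for target market CN']
```

Note that dependencies are checked recursively, since restrictions on a dependent IP propagate to any design using its parent.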

These are concerns which aren’t directly a function of the design yet can have a huge impact on its viability – can it be built profitably, can it be shipped to markets targeted in the business plan, and will it meet a broad enough range of target customer needs? And there’s another consideration – how effectively is your enterprise managing IP? Are you paying license fees for IP in designs which never made it to production (or profitability)? Are there opportunities to negotiate better deals with IP providers or to change the mix to better optimize for long-term goals?

Across a large enterprise, the complexity of managing these concerns through many designs and hundreds of IPs, each potentially used in multiple versions, becomes as challenging a problem as DDM, yet this class of requirements doesn’t naturally find a home in traditional DDM systems. Managing these needs effectively takes on extra urgency during consolidation, where redundancy in IP assets is almost certain and assets which may be valuable across multiple designs remain unknown outside the original development group.

DelphIP aims to answer this need by integrating more comprehensive capabilities with conventional DDM for IP. This fits neatly with Consensia’s approach to enterprise-level design data management (using DesignSync), which I discussed in a previous blog. It starts with the capability to classify and catalog each IP so that IPs are quickly searchable and their dependencies quickly discoverable. A related need is addressed by tracking IP maturity and where the IP has been used in other designs.

Compliance with requirements like ITAR and IP-vendor restrictions can be managed through a configurable policy for design- and geographically-constrained IP use. Similarly, access controls are configurable, allowing you to define multiple roles for who can read or modify (or even create) parts and who is allowed to create tickets, change requests or action items.

DelphIP also provides support for configuration management and version control of the IP BOM (bill of materials), obviously of value in design reviews, design documentation and IP vendor audits, but also important in building compliance documentation for standards like ISO 26262. In addition, you can set up subscription-based notifications and alerts for updates/changes, and you can build your own analytics to guide make-versus-buy decisions.
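The value of a version-controlled BOM is easy to see in a small sketch: diffing two BOM releases immediately yields the change summary a design review or a vendor audit needs. (The {ip_name: version} representation here is invented for illustration, not the tool's actual data format.)

```python
# Illustrative sketch: diffing two versions of an IP BOM (bill of materials).
# The BOM is modeled as a simple {ip_name: version} mapping.

def diff_bom(old, new):
    added   = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(ip for ip in set(old) & set(new) if old[ip] != new[ip])
    return {"added": added, "removed": removed, "changed": changed}

bom_v1 = {"cpu_core": "2.0", "ddr_phy": "1.3", "usb_ctrl": "1.0"}
bom_v2 = {"cpu_core": "2.1", "ddr_phy": "1.3", "pcie_ctrl": "4.0"}
print(diff_bom(bom_v1, bom_v2))
# {'added': ['pcie_ctrl'], 'removed': ['usb_ctrl'], 'changed': ['cpu_core']}
```

A report like this, generated per release, is essentially the raw material for the audit and compliance documentation described above.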

Most important in what is now a heavily consolidated industry, DelphIP supports differing DDM systems across the enterprise. There’s no need to force teams to uproot their preferred DDM best practices – they can continue to work with the flows they best understand while still allowing you to oversee and manage the total IP view across the enterprise.

You can learn more about DelphIP HERE.


Embedded FPGA Blocks as Functional Accelerators (AMBA Architecture, with FREE Verilog Examples!)

by Tom Dillinger on 07-20-2017 at 7:00 am

A key application for embedded FPGA (eFPGA) technology is to provide functionality for specific algorithms — as the throughput of this implementation exceeds that of equivalent code executing on a processor core, these SoC blocks are often referred to as accelerators. The programmability of eFPGA technology offers additional flexibility to the SoC designer, allowing algorithm optimizations and/or full substitutions to be implemented in the field.

I recently had the opportunity to chat with Tony Kozaczuk, Director of Solutions Architecture at Flex Logix Technologies, about a new application note that Flex Logix has authored, to illustrate how eFPGA technology is ideally suited to accelerator designs. I had an opportunity to see a pre-release version of the app note — it was enlightening to see the diversity of accelerators, as well as various implementation tradeoffs available to realize latency and throughput targets.

The accelerator examples in the app note pertain to the interface protocols of the AMBA architecture. This specification has evolved to encompass a breadth of (burst and single) data transfer bandwidth requirements for system and peripheral bus attach, as summarized in the figure below.

The app note illustrates how the eFPGA block can be readily integrated into these AMBA bus definitions, including both AXI/AHB master and slave bus protocols, and through an AXI2APB bridge for communication using the lower bandwidth APB bus, as illustrated below.

Tony reviewed some of the performance tradeoffs associated with embedding the AMBA bus protocol functionality within or external to the eFPGA block.

Flex Logix is providing all the Verilog models for attaching an accelerator to these AMBA bus options for free on their web site — see the link below at the bottom of this article.

Several unique features of the Flex Logix eFPGA technology are critical for accelerator design. The I/O signals on the EFLX array tile are readily connected to adjacent tiles and, very significantly, readily connected to SRAM sub-blocks integrated within the eFPGA physical implementation, without disrupting the inter-tile connectivity. The SRAM sub-blocks can be floorplanned within the overall EFLX accelerator for optimal performance — the figure below illustrates a complex example. The graphic on the left is a floorplan of a full accelerator block, composed of array tiles and embedded SRAMs. Flex Logix offers both a logic and a specialized DSP tile, as illustrated in the graphic on the right. (The specific accelerator examples described shortly have a simpler SRAM floorplan.)

The EFLX compiler integrates the Verilog model connectivity to the SRAMs with placement configuration information to assemble the full design. The app note includes EFLX code examples for integrating SRAM blocks — a crucial requirement for high-performance accelerators. The app note also describes how to manage the synchronization of data inputs to the accelerator.

The accelerator examples that Tony briefly reviewed were very informative — there are more in the app note. The implementation of the AES encryption algorithm utilizes the AXI4-Stream protocol definition, with the master/slave protocol logic included within the eFPGA array Verilog model.

The figure above shows architecture options when considering an accelerator implementation — note that information such as the encryption key could be provided directly as part of the eFPGA programmability, or (optionally) sent separately from a processor core (over the APB interface). The throughput of the AES implementation compiled by the EFLX compiler from source Verilog to the TSMC 16FFC technology is illustrated below, compared to the same algorithms executing in program code running on a Cortex-M4 core.

Two EFLX array performance results are quoted: one at the same published frequency as the Cortex-M4, and one at the 16FFC frequency realizable in the physical implementation.

Another accelerator example is an FFT calculation engine, as illustrated below. The figure depicts the integrated SRAM sub-blocks included with this implementation, and how the EFLX tile I/O connectivity to the SRAM is implemented. (6 of the EFLX 2.5K LUT tiles and 18 SRAM sub-blocks are used.)


Embedded FPGA technology will provide SoC architects with compelling options to add application-specific accelerators to the design, with the added flexibility of programmability over a hard, logic cell-based implementation. A critical feature is the ability to integrate SRAM with the accelerator as part of the compilation and physical design flows.

Flex Logix has prepared an app note describing how their eFPGA technology is a great match for accelerator designs — it is definitely worth a read. And the Verilog examples are great as well — they clearly illustrate how to attach to the various AMBA protocols. The app note and Verilog code are available here.

-chipguy


Something New in IP Lifecycle Management

by Daniel Payne on 07-19-2017 at 12:00 pm

Last month at DAC I met up with Michael Munsey of Methodics to get a quick update on what has been happening over the past 12 months within his company, and he quickly invited me to watch an archived webinar on their latest tool for IP Lifecycle Management called Percipient. I love to play the board game Scrabble, so I had to Google the word Percipient to learn its meaning: “having a good understanding of things; perceptive“. OK, that’s my new word for the day then.

We often blog about new and updated point EDA tools on SemiWiki, so it’s refreshing to learn more about the category of EDA tools that works across all of the tools and IP used on an SoC project. System-level complexity has become so large that using Excel to track semiconductor IP usage or EDA tool usage is woefully inadequate.

The webinar was introduced by Daniel Nenni from SemiWiki, then quickly handed over to Michael Munsey.

Related blog – New Concepts in Semiconductor IP Lifecycle Management

Methodics started out back in 2007 helping analog IC designers using Cadence tools to manage their design data with a tool called VersIC. Their next tool was an IP Management Platform called ProjectIC, followed by a content-aware NAS optimizer and accelerator called WarpStor.

So why introduce a new tool? Well, because when an electronic system is being designed today we have silos of information that really don’t integrate or talk to each other:

  • Design Models
  • Infrastructure Models
  • Program/Project Models

So the goal for creating Percipient was to tie all of these three domains together using a platform that models the entire ecosystem, independent of EDA or IP vendor.


So Percipient continues to take the proven Design Model features and objects from projectic, like:

  • Workspace tracking
  • IP usage tracking
  • Release management
  • Labels and custom fields
  • Bug tracking
  • IP versions
  • Design files
  • Permissions
  • Hierarchy
  • Hooks into design process

Expansions in Percipient include:

  • Hierarchical releases
  • Moving aliases
  • Snapshots
  • Improved IP and IPV usage tracking
  • New infrastructure model using an events platform to track all CPU usage per tool
  • Optimized workspaces
  • Tracking all tool operations in context
  • warpstor to be added in Percipient, stay tuned
  • Integrates with many other tools for dynamic, real-time tracking (NetApp, DellEMC, Perforce, JIRA, Bugzilla, Jama, Jenkins, etc.)
  • Model extensions for customization of data mining and dashboarding

Simon Butler of Methodics did a live demo of how users work each day with Percipient in an object-oriented fashion on libraries, IP blocks, IP versions and workspaces. The user interface is a familiar web app with widgets:

Workspaces were examined and modified using a command line approach:

Related blog – Achieving Requirements Traceability from Concept through Design and Test

Vishal and Michael did the final Q&A session:

Q: How do I find where a particular IP is being used in my organization?

A: A few ways, we can see IP used inside of a hierarchy for a project, or which workspaces contain that IP.

Q: Can a user in one workspace use a different IP than the top level?

A: Yes, you can use any IP version as needed in your workspace.

Q: Can one SoC use two different versions of the same IP in the hierarchy?

A: Yes, you could use different IP versions in the same hierarchy and the tool will highlight this version difference.
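The check described in that answer can be sketched as a walk over the IP hierarchy that flags any IP appearing in more than one version. (The tree format below is invented for the example, not Percipient's actual data model.)

```python
# Illustrative sketch: detect IPs used in multiple versions within one hierarchy.

def collect_versions(node, found=None):
    """Accumulate every (name -> set of versions) seen in the tree."""
    if found is None:
        found = {}
    found.setdefault(node["name"], set()).add(node["version"])
    for child in node.get("children", []):
        collect_versions(child, found)
    return found

def version_conflicts(root):
    """Return only the IPs that appear in more than one version."""
    return {name: sorted(vs) for name, vs in collect_versions(root).items()
            if len(vs) > 1}

soc = {"name": "soc_top", "version": "1.0", "children": [
    {"name": "subsys_a", "version": "2.0",
     "children": [{"name": "uart", "version": "1.1"}]},
    {"name": "subsys_b", "version": "3.0",
     "children": [{"name": "uart", "version": "1.2"}]},
]}
print(version_conflicts(soc))
# {'uart': ['1.1', '1.2']}
```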

Q: Can Percipient report which workspace belongs to a user, and where it is located?

A: Using the “pi” command there’s a column with the username owner for each IP. It’s easy to filter by username too.

Q: Is it possible to use Percipient on top of another DM?

A: What Methodics brings is the most sophisticated way to manage IP, and we can migrate some or all of your IP into Percipient (manually, import with templates, scripting, etc.), or you can keep your data in Perforce with no migration required.

Q: Can you connect with IC Manage, ClioSoft or DesignSync?

A: Our competitors don’t publish their internals, but we can manage IP by a filesystem approach. It’s best to move your new work into something like Perforce though.

Q: Can Percipient tell me the number of IPs being used, and the number of standard cells per IP?

A: Yes, you can see how many standard cells are being used, or any IP block.

Q: How do you handle IP permissions?

A: You need to specify which users can view which IP blocks based on user capability. For example, contractors would be limited, while employees would have more access. Permissions can be managed at the command line or in the web GUI, whichever method you prefer.

Q: Does the permission system relate to Linux permissions or the DM system?

A: Users and groups in Linux can be imported into Percipient, and then you can give individuals access to IP as needed per project in a centralized fashion.

Related blog – CTO Interview: Peter Theunis of Methodics

Summary

Methodics has carved out a very important piece of the flow for SoC design by focusing on IP Lifecycle Management. Their approach has been field-tested by big names in the semiconductor design industry, and with the new release of Percipient they have added features to keep pace with industry requirements.

Watch the complete 48-minute archived webinar online after requesting a password.


High Density Advanced Packaging Trends

by Mitch Heins on 07-19-2017 at 7:00 am

Thursdays at the Design Automation Conference (DAC) are always a good time to catch up on areas of technology adjacent to those you normally work in. The exhibit floor is closed and you have more time to spend in seminars. At this year’s DAC, I took advantage of a half-day seminar put on by Mentor, a Siemens business, entitled High Density Advanced Packaging Trends. While this seminar was focused on electronic systems packaging, my work with integrated photonics has been clearly pointing to the need for system-in-package (SiP) solutions, and I was interested to get a snapshot of the current state-of-the-art in IC packaging.

The seminar started off with a talk by Dick James, senior analyst for TechSearch International. Dick gave an excellent talk on FO-WLP (Fan-Out Wafer-Level Packaging) trends. Probably the biggest driver of this packaging technology is the explosion of mobile devices, where wafer-level packages (WLPs) provide a significant advantage for thinner form-factors. Other areas driving WLP use include high-performance computing, and automotive millimeter-wave and RF applications that benefit from the shorter, less lossy connections. Add to this list the ever-growing Internet-of-Things (IoT) market, which is starting to demand integration of multiple heterogeneous devices into a single low-cost package (e.g. MEMS-based sensors, analog, RF, digital and memory). Any one of these end-application markets drives enough volume to make WLPs economically interesting.


Dick showed a fascinating comparison of the original Apple iPhone, which had two WLPs, vs. the iPhone 7, which had 44. One of the 44 is a TSMC InFO-WLP (Integrated Fan-Out) that includes a Package-on-Package (PoP) configuration with the application processor in the bottom package and multiple side-by-side memories in the top package. Apple isn’t the only phone manufacturer using WLP technologies, as evidenced by Samsung (Galaxy 7; 14 WLPs), Sony (Xperia; 13 WLPs), Sharp (Aquos Zeta; 13 WLPs), Huawei (P9 Plus; 16 WLPs), ZTE (Goophone; 3 WLPs), etc. On average, there are 15 WLPs per smartphone, and the numbers continue to increase.

The shift to WLPs is disruptive on multiple fronts. WLPs remove the need for a laminate substrate which means substrate suppliers are removed from the supply chain. Since WLPs use “wafers” as carriers and traditional thin-film processes for metallization of interconnect, packaging can take place at the foundry as opposed to an outsourced semiconductor assembly and test (OSAT) shop.

Don’t count the OSATs out, however. Suresh Jayaraman of Amkor also presented at the seminar. He gave an overview of Amkor’s Silicon Wafer Integrated Fan-out Technology (SWIFT) offering, which is a WLP based on a 300mm wafer carrier. SWIFT uses a “die-last” process that enables processing of the fan-out Redistribution Layers (RDL) in parallel with the die being run in the foundry. This helps Amkor reduce cycle time once the dice arrive at their shop to be packaged. Suresh also reminded us that OSATs provide a neutral place for die sourced from different foundries to make their way into the same package; remember the need for heterogeneous integration of processors, memory, MEMS, analog, RF, etc.

WLP-based SiPs are also disruptive for the CAD tool vendors. Until recently, board and package design were handled by completely different tools than those used for integrated circuits (ICs). SiPs, enabled by WLPs, demand much greater collaboration between board, package and IC designers, and a shift to more SiPs could mean new battle lines being drawn between entrenched tool competitors. WLPs also bring many new technical challenges that stress the typical board- and package-level tools, including the advent of very high pin-count packages (> 10,000) and fine-pitch (5um line/space) non-orthogonal routing. With stacking also come 2D/3D interactions requiring tools to have coordinated views from all three domains (die, package and board).


The biggest challenge seems to be the desire to bring data from the disparate domains together to ensure correct connectivity throughout the stack (remember we are now dealing with tens of thousands of signals) and a myriad of new manufacturing design rules from all three domains. To make matters worse, there are almost no standards established between the different domains, let alone the various foundries, OSATs and CAD tool vendors.

Mentor Graphics rounded out the seminar by giving a presentation and live demo of the solutions that they are bringing to the WLP space. The presentation focused on three tools, those being Mentor’s Xpedition Substrate Integrator, Xpedition Package Designer and Calibre 3DSTACK. Xpedition Package Designer focuses on package design and it already includes real-time 2D/3D design viewing and editing. Mentor seems to have this process well in hand.


Similarly, the Mentor Calibre team has done a good job of working to bring IC methodologies into the packaging domain by partnering with foundries and OSATs to create what they call an assembly design kit (ADK). The ADK serves much the same purpose as a process design kit (PDK) in the IC world. Using characterized ADKs, Calibre 3DSTACK is able to verify design rule checks (DRCs) and connectivity (LVS) both within the package and across the die / package / board boundaries. Calibre 3DSTACK is also able to do DRC checks on non-orthogonal shapes found on the various RDL layers. The design rules and specifications for these checks are all stored in the ADK.
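Conceptually, the cross-domain connectivity check works by tracing each signal through the die-to-package and package-to-board pin maps and comparing the result against the intended system netlist. A toy illustration with invented net names (real flows drive this from ADK data, not hand-written dictionaries):

```python
# Toy sketch of a cross-domain connectivity check: trace each die pin through
# the package and board pin maps and compare against the intended system net.
# All names and the mapping format are invented for illustration.

die_to_pkg   = {"die_clk": "pkg_ball_A1", "die_d0": "pkg_ball_B2"}
pkg_to_board = {"pkg_ball_A1": "brd_net_clk", "pkg_ball_B2": "brd_net_data0"}
intended     = {"die_clk": "brd_net_clk", "die_d0": "brd_net_d0"}  # d0 misnamed

def connectivity_errors(die_to_pkg, pkg_to_board, intended):
    """Return (die_pin, expected_net, actual_net) for every mismatch."""
    errors = []
    for die_pin, want in intended.items():
        ball = die_to_pkg.get(die_pin)
        got = pkg_to_board.get(ball) if ball else None
        if got != want:
            errors.append((die_pin, want, got))
    return errors

print(connectivity_errors(die_to_pkg, pkg_to_board, intended))
# [('die_d0', 'brd_net_d0', 'brd_net_data0')]
```

With tens of thousands of signals and three sets of design rules in play, automating exactly this kind of trace-and-compare is what makes tools in this space valuable.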

Mentor’s Xpedition Substrate Integrator tool combines information from all three domains to enable visualization and optimization of complex SiPs by letting designers see the impact of design changes in one domain on the other two domains. This is extremely helpful in an environment where co-design is so essential. The demo given by Mentor was pretty amazing as you could literally see interdependencies of all three domains and trace signal connectivity through the full stack from die pins through various redistribution layers (interposers, WLPs and package), micro-bumps and through-silicon vias (TSVs) all the way to the balls on the ball grid array and finally to the board. It was impressive.

In summary, as systems-in-package become more prevalent, we will continue to see innovation and changes in both the design and manufacturing ecosystems. Keep an eye on this area, as these changes can and will be disruptive and could cause some shifts in the ecosystems as we know them. Mentor seems well positioned for the new opportunities given their strengths in the board and package verification areas.

See also:
Mentor Xpedition IC Packaging Products
Mentor Calibre 3DSTACK


Applying ISO 26262 in a Fabless Ecosystem – DAC Panel Discussion

by Tom Simon on 07-18-2017 at 12:00 pm

The fabless movement was instrumental in disaggregating the semiconductor industry. Vertical product development at the chip and system level has given way to a horizontal structure over the years. This organization of product development has been doing an admirable job of delivering extremely reliable products. However, reliable for a phone is not reliable enough for an autonomous vehicle with a service life of a decade or more. This issue was recognized years ago and led to the development of the ISO 26262 standard in 2011.

ISO 26262 deals with the electronic systems in a car, with the goal of avoiding systematic errors and faults, as well as helping to deal with random errors. It applies to non-critical systems such as infotainment and also to critical systems like brakes, steering and autonomous operation.

Electronic systems in cars are often produced by fabless semiconductor companies and frequently incorporate 3rd-party IP. Applying ISO 26262 to products developed in a dispersed manner is leading to changes that affect every member of the supply chain. To explore these impacts, Mentor hosted a panel discussion at DAC in Austin. The panel had representation from each element of the supply chain affected by ISO 26262.

At first glance it makes sense that Mentor would be on the panel as an embedded OS supplier, but in the context of ISO 26262, the design tool providers are also an essential link in the chain. Rob Bates, Chief Safety Officer for the Embedded Division, spoke on behalf of Mentor. Also on the panel was Volker Politz, VP Segment Marketing at Imagination Technologies, who talked about the changes necessitated for IP developers. Jim Eifert, Automotive Architect at NXP, provided insight from the automotive system-integration perspective. Lastly, Lluis Paris, Director of the Worldwide IP Alliance at TSMC, shed light on how the foundries for fabless semiconductor companies have shifted the way they work in the automotive sector.

There was a round of introductory comments by each panelist. Jim from NXP said that a big benefit of the standard is a common terminology that buyers can use when speaking with suppliers. Lluis from TSMC talked about how the role of a fabless foundry has shifted from just supplying silicon to developing an automotive platform that enables and encourages ISO 26262 compliance. This stems from the need for more extensive sustaining engineering and additional product documentation, among other things. TSMC has added the role of safety manager to their organization as part of this endeavor.

Volker from Imagination pointed out that the IP provider is now in the middle. In some ways they are partnered with their customer’s engineering department, because products fuse together external and internal IP and design work. The biggest change for him is that there is now a more formal way for them to work together. Rob from Mentor added that prior to ISO 26262 companies simply continued their previous practices; the standard has really changed the way the companies involved operate. He cited the example of TSMC, which is rebuilding many aspects of how it deals with automotive designs from the ground up.

The first question put to the panel was, does IP have to be certified?

TSMC was quick to point out that the IP itself does not have to be certified, but the process for making the chip does. This arises because in many cases the applications for the IP are so varied that the IP vendor cannot possibly know which use-case applies. Imagination added that there is not really a certification for IP, but vendors can contribute by delivering the necessary documentation with their IP. NXP said that it is the car that is certified, and the key is to hand over the correct documentation at each step of development to facilitate this and create traceability.

The next question asked if embedded software should be considered IP.

Mentor responded that the embedded software resides on the chip, which is closer to the customer. Development tools are closely linked to the embedded code, so they too are tied to the standard in some way. NXP agreed that the embedded code needs to comply with the standard and wished there were more explicit requirements for development tools.

The next question asked if the car is ‘certified’ then what documentation is needed to create traceability up the chain from the components?

TSMC stated that the foundry has to do a larger number of things under ISO 26262. These include special SPICE and reliability models, along with aging models. The car needs to be traceable after 18 years in case there is an issue down the road, so the foundry needs to keep the process viable for a long time and may even need to go back to wafer data after many years. NXP added that the standard requires a quantitative approach to quality; as part of this it can go to the level of looking at parts-per-billion failure rates.

Mentor sees that ISO 26262 puts a burden on the tool users to qualify the tools they are using for design. However, realistically this is not something the tool users can take on by themselves. The tool vendor must play a role. This is why they created the Mentor Safe Program.

The panelists were asked, despite the increased level of work required, whether or not they saw benefits in following the process in terms of improved reliability and safety.

TSMC said that they were already doing many of the things that are needed; instead of a quality increase, what they are seeing is better lifecycle planning. Imagination answered that they are seeing some improvements, along with improved reusability. NXP followed by saying that their safety process was already working, but they now have a better documentation process. Mentor feels that the standard makes people examine what they are doing, and that it can only help.

Then came the question of how the standard should evolve. TSMC feels that the hardware side of the specification is comprehensive and stable. However, there is more work that will need to be done on the software side. Imagination would like to see more focus on real integrity. They feel it is important that people are committed to the process and this is the only way the data is reliable. They also expect additions in the area of security, which is at its core a safety issue. NXP amplified that concern by saying that security is absolutely a safety issue, and they are very concerned about hacking. Mentor also concurred that security is something that needs to be addressed more fully in future versions of ISO 26262.

The panel closed with a question on what new things would benefit the ecosystem. TSMC feels there is a good ecosystem in place for silicon and sees further ecosystem work occurring in the customer infrastructure. Imagination reiterated that security should be a priority, and added that the EDA companies can do a lot to help: the more the EDA players do (fault injection, for instance), the easier it will be to meet the spec. NXP really wants to see chip-level design flows made easier to qualify; a packaged solution would reduce their need for spreadsheets.

Mentor agreed with this perspective and feels that EDA vendors can help make it easier to adhere to the standard. For instance, defect tracing analysis could be built in rather than being an after-the-fact activity. Mentor sees value in adding capabilities that make qualification easier, and at the end of the day feels these same practices and features have wider applicability beyond the automotive space; they could improve customer satisfaction in a wide range of products. Mentor did a great job of pulling together the panel participants and facilitating the discussion. With its Mentor Safe program, it is clear the company is serious about automotive safety. For more information on Mentor’s work in this area, please look at their website.


Why Ansys bought CLKDA

Why Ansys bought CLKDA
by Bernard Murphy on 07-18-2017 at 7:00 am

Skipping over debates about what exactly changed hands in this transaction, what interests me is the technical motivation since I’m familiar with solutions at both companies. Of course, I can construct my own high-level rationalization, but I wanted to hear from the insiders, so I pestered Vic Kulkarni (VP and Chief Strategist) and João Geada (Chief Technologist and previously CTO at CLKDA) into explaining their view of the background and benefits.


Ansys already has a strong position in integrity/reliability for electronic systems, from chip design up through the package and board levels. This is important, for example, in design for total system reliability and regulatory compliance in ADAS systems under challenging constraints such as widely varying temperature environments (in front of a windscreen in Death Valley, or in an unheated enclosure in Barrow, Alaska). Particularly at the chip level, through their big-data/elastic-compute approach to fine-grained multi-physics analytics rather than margin-based analysis, they assert that they can offer higher confidence in meeting integrity and reliability goals in reduced area with faster design turn-times. What they presented at DAC seems to bear this out.


This is an important advance, but as always in semiconductor design, the goal-posts continue to move. Vic and João said that customers are finding it increasingly difficult to get to timing signoff at 16nm and below, often starting with 10k+ STA violations, which can take 3-4 weeks to close. They are managing process variation factors (OCV) through use of the latest standards for timing in Liberty models, but there are also dynamic factors to consider. At these feature sizes, operating voltage can drop to 0.6-0.7V, but threshold voltages don’t drop as much, so sensitivity to power noise increases and that can lead to intermittent timing failures in what would otherwise appear to be safe paths. Equally, clock jitter as a result of power noise can cause intermittent failures.

Of course, you could follow the standard path and over-design the power distribution network (PDN) to margin for the worst possible cases across the die. But that over-design becomes increasingly expensive and uneconomic, especially at these aggressive nodes, so you settle for a compromise between area and risk in which you hope you have covered the most likely corner cases. Based on presentations at DAC this year, Ansys has already demonstrated that they can replace this uncertain tradeoff in related problems with multi-physics analysis delivering low-risk integrity/reliability across the die with significantly less over-design than the traditional approach.

Which brings us to the reason Ansys wanted the CLKDA FX tool-suite. To address these dynamic timing problems, they needed to fold a high-accuracy timer into their multi-physics analysis to enable analysis of timing side-by-side with dynamic voltage drops (DvD). The point here is to analyze locally across the design to guide local adjustment of the PDN where appropriate. In an area where there’s enough slack, maybe you’re not too worried if timing sometimes stretches out a bit, so there’s no need to upsize the local PDN. Where timing is tight, you upsize enough to ensure that DvD will not cause the path to fail. Similarly, multi-physics analysis will highlight where clock jitter sensitivity could cause failures and may require mitigation in the local clock distribution.
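As a toy illustration of that slack-aware tradeoff (the structure, sensitivity model and numbers below are hypothetical sketches, not Ansys’s actual analysis):

```cpp
#include <cassert>

// Toy model of slack-aware PDN tuning: a local dynamic voltage drop
// stretches a path's delay; only regions where the stretched delay
// exceeds the clock period need a beefed-up local power grid.
// All fields and coefficients are illustrative placeholders.
struct PathRegion {
    double nominal_delay_ps;   // STA delay at nominal supply
    double delay_per_mv_ps;    // delay sensitivity to supply droop
    double local_droop_mv;     // worst-case local dynamic voltage drop
};

// True if this region's PDN should be upsized to keep the path safe
// under dynamic voltage drop; false means the slack absorbs the droop.
bool needs_pdn_upsize(const PathRegion& r, double clock_period_ps) {
    double stretched = r.nominal_delay_ps
                     + r.delay_per_mv_ps * r.local_droop_mv;
    return stretched > clock_period_ps;
}
```

The point of the sketch is only the decision structure: upsizing is driven by local slack rather than by a single worst-case margin applied everywhere.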

Getting this right requires an accurate timer, better than a graph-based STA, closer to Spice-level accuracy but much faster than Monte-Carlo Spice (MC Spice) so it can be effective on large designs. That of course has been the value-proposition of the FX family for many years – a technology which can propagate statistical arrival times and true waveform shapes while also maintaining correct distribution, yet run 100X faster than MC Spice. Since FX already has a bunch of customers, I have to believe that’s not just a marketing pitch 😎.
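The speedup over Monte-Carlo comes from propagating delay distributions analytically instead of sampling them. A minimal sketch of that idea, assuming independent Gaussian stage delays (real statistical timing, including FX, handles correlation and non-Gaussian shapes, which this toy does not):

```cpp
#include <cmath>
#include <random>
#include <vector>

// A path is a chain of stages, each with a Gaussian delay.
struct Stage { double mean, sigma; };

// Statistical propagation: for independent Gaussian stage delays the
// path delay is Gaussian with summed means and root-sum-square sigma.
// One pass over the stages, no sampling at all.
void propagate(const std::vector<Stage>& path, double& mean, double& sigma) {
    mean = 0.0;
    double var = 0.0;
    for (const auto& s : path) { mean += s.mean; var += s.sigma * s.sigma; }
    sigma = std::sqrt(var);
}

// Monte-Carlo reference: sample every stage n times and average.
// Cost grows with the sample count; accuracy only as sqrt(n).
double mc_mean(const std::vector<Stage>& path, int n) {
    std::mt19937 rng(42);
    double total = 0.0;
    for (int i = 0; i < n; ++i) {
        double d = 0.0;
        for (const auto& s : path) {
            std::normal_distribution<double> dist(s.mean, s.sigma);
            d += dist(rng);
        }
        total += d;
    }
    return total / n;
}
```

For this toy, the one-pass `propagate` gives the exact mean and sigma that `mc_mean` only approaches after thousands of samples, which is the essence of the claimed speed advantage.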

So where does this go next? I’m sure there will be continuing integration and optimization with the SeaHawk and Chip-Package-System flow. João also pointed out a number of additional opportunities. Ansys’ strength is multi-physics analysis, so there are opportunities to cross other factors in analysis: variability and timing, or aging and timing, for example. CLKDA has tilted at the variability windmill before, lobbying for a transistor-level static timing analysis approach, for example to better model the influence of accumulating non-Gaussian distributions along a path. Perhaps their concept will start to gain traction in this new platform. Their work in signoff for timing aging is also very intriguing and I think is likely to attract significant interest in high-reliability, long-lifetime applications (automotive, maybe?).

So now you know why Ansys acquired CLKDA. For me this seems like an even better home for the FX technology. You can learn more about ANSYS solutions, including the FX products, HERE.


HLS update from Mentor about Catapult

HLS update from Mentor about Catapult
by Daniel Payne on 07-17-2017 at 12:00 pm

I recall back in the late 1980’s when logic synthesis tools were first commercialized: at first they could read in a gate-level netlist from one foundry, then output an optimized netlist back to the same foundry. Next, they could migrate your gate-level netlist from Vendor A to Vendor B, giving design companies some flexibility in negotiating the best foundry terms. Finally, they could accept RTL code as input, then create optimized foundry-specific gate-level netlists.

Since Synopsys came to dominate the RTL logic synthesis market in the 1990’s, many competitors have aimed to sit on top of logic synthesis with their own High Level Synthesis (HLS) tool. EDA vendors have tried over the years with varying degrees of commercial success to grow the HLS market. All of the big three in EDA have made inroads with HLS tools and sub-flows, so I met up with Badru Agarwala, GM at Mentor during DAC in Austin last month to get an update on what’s been happening with their Catapult product line.

Q: What are the industry trends with HLS these days?

A: HLS is only one piece of an ecosystem moving up to the C++ language now, so one goal is to make C++ design as robust as RTL verification is.

Q: What customers are using HLS, has it gone mainstream yet?

A: Some big company names that you should recognize that are using HLS now include: NVIDIA, Qualcomm, Google and STMicroelectronics.

Q: Is there a sweet spot for using an HLS tool flow?

A: Yes, segments like machine vision, computer vision, 3D TV all enjoy an HLS methodology.

Q: What about design capacity with HLS, is that an issue?

A: We’ve seen a 25 Million gate design completed with Catapult, so the capacity is there.

Q: What benefits do HLS designers enjoy the most?

A: Three things that are common benefits include: Verification is more thorough, the design process is much faster than RTL, and designers can make more changes at the last minute of their projects.

Q: How is HLS impacting the verification process?

A: Now with HLS you can start verification much earlier than before, in parallel with the design process, then when the code is stable start to do RTL design.


Q: What are some of the challenges of coding with C++ for an HLS language and how is Catapult changing?

A: Most existing C++ tools are for SW developers, not really well suited for HW developers, so there’s not much linting or property checking going on. HLS users really need coverage tools, along with bit accuracy and loop unrolling. We just announced Catapult DesignChecks to help an HLS user find bugs while coding, so they don’t have to debug as much with simulation and synthesis. There’s both a static mode of DesignChecks for fast linting plus a formal engine for checking. These approaches don’t even require a testbench to be coded.
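To see why bit accuracy and loop structure matter to an HLS tool, here is a minimal HLS-friendly C++ sketch. The filter and its coefficients are hypothetical, and the stdint types stand in for the bit-accurate types (such as the Algorithmic C `ac_int`/`ac_fixed` datatypes) a real Catapult design would typically use; this version stays self-contained:

```cpp
#include <cstdint>

// A tiny FIR filter written the way an HLS tool likes it: fixed-width
// operands, a compile-time trip count, and no pointer arithmetic.
// Because TAPS is a constant, the loop can be fully unrolled into
// parallel multiply-accumulate hardware during synthesis.
const int TAPS = 4;

int32_t fir(const int16_t sample[TAPS]) {
    static const int16_t coeff[TAPS] = {1, 2, 2, 1};  // hypothetical taps
    int32_t acc = 0;                 // accumulator sized to avoid overflow
    for (int i = 0; i < TAPS; ++i) { // candidate for full unrolling
        acc += int32_t(sample[i]) * coeff[i];
    }
    return acc;
}
```

The same properties that make this code easy for a lint or coverage tool to reason about (bounded loops, explicit bit widths) are what let the synthesis tool map it predictably to hardware.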

A second new tool we’re talking about is Catapult Coverage, which gives you code coverage for C++ and enables faster closure of synthesized RTL. Designers can reach 100% C coverage, then start HLS synthesis. We’ve had coverage tools for gate level and RTL, so it only makes sense to raise that up to the C level too.

We also have SLEC HLS (Sequential Logic Equivalence Checking), a new C-to-RTL equivalence checking tool, so you know that the RTL coming out of Catapult really is the same as the C++ that went into it, without having to run simulation and verification cases. Setup is more automated now.

Summary
Mentor, with its Catapult family of tools, has been at this HLS methodology for a long while now and has continued to invest in making the whole tool flow more integrated and easier to use for digital designs. I was impressed by the three StreamTV engineers in the Mentor booth who showed a glasses-free 3D TV design built with Catapult in a 15-month project; it was striking how few people were required to complete such a complex design so quickly.


NetSpeed’s Pegasus Last-Level Cache IP Improves SoC Performance and Reduces Latency

NetSpeed’s Pegasus Last-Level Cache IP Improves SoC Performance and Reduces Latency
by Mitch Heins on 07-17-2017 at 7:00 am

Memory is always a critical resource for a System-on-Chip (SoC) design. It seems like designers are always wanting more memory, and the memory they have is never fast enough to keep up with the processors, especially when using multi-core processors and GPUs. To complicate matters, today’s SoC architectures tend to share memory across heterogeneous environments including temporal (isochronous, steady, random) and functional (fully coherent, IO-coherent and non-coherent) features. System designers are forever trading off performance and latency against the sharing of memory resources, power reduction and ensuring system timing integrity.

Many designers are now turning to what is known as last-level cache (LLC) to avoid SoC memory access bottlenecks and congestion among heterogeneous masters. Some designers think of an LLC as simply an L3 cache; however, the people at NetSpeed have taken the LLC concept to a much higher level. Pegasus is the product name for NetSpeed’s LLC IP, and when used in combination with NetSpeed’s NocStudio design software, Pegasus can be used to optimize circuit throughput, reduce latency and improve memory efficiency.

Pegasus is a parameterized and configurable LLC architecture that is layered over NetSpeed’s Orion and Gemini Network-on-Chip (NoC) technologies. Pegasus leverages NetSpeed’s core NoC capabilities to ensure quality of service (QoS) and deadlock avoidance while also allowing the designer to support coherent, non-coherent and mixed architectures. The IP provides support for design-time programmable coherent-cache and coherent-memory modes along with memory error-correcting code (ECC) capabilities.

Pegasus also features runtime modes that enable soft partitioning of cache associativity per master. This allows designers to better utilize their cache resources through integration of some system IPs and by enabling the use of some or all their LLCs as directly addressable RAM. This latter capability is particularly useful when having to support multiple applications processors on the SoC that differ in their RAM requirements.
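A rough sketch of what per-master way partitioning might look like (the 8-way geometry, masks and names are hypothetical illustrations, not NetSpeed’s actual interface):

```cpp
#include <cstdint>

// Toy model of runtime way-partitioning in a shared last-level cache:
// each master is granted a bitmask of cache ways, a fill for that
// master may only allocate into its granted ways, and ways granted to
// no master can be repurposed as directly addressable RAM.
const int WAYS = 8;
const int MASTERS = 4;

struct LlcPartition {
    uint8_t way_mask[MASTERS];  // per-master way-allocation masks

    // A master may allocate a line into way w only if its mask grants it,
    // soft-partitioning the associativity among the masters.
    bool may_allocate(int master, int way) const {
        return (way_mask[master] >> way) & 1;
    }

    // Ways granted to no master are free for scratchpad (RAM) use,
    // e.g. for an applications processor with special RAM needs.
    uint8_t scratchpad_ways() const {
        uint8_t used = 0;
        for (int m = 0; m < MASTERS; ++m) used |= way_mask[m];
        return uint8_t(~used);  // complement of all granted ways
    }
};
```

Because the masks are plain registers in this sketch, they can be rewritten at runtime, which mirrors the idea of reconfiguring the cache split as workloads change.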


A key feature of Pegasus is that the IP is both design-time and runtime programmable, and it supports multiple modes including coherent cache, reduced coherent latency, power reduction, memory cache, increased bandwidth through locality, and improved software coherency by avoiding the need for flushes. Not all slave agents are created the same, and having multiple LLCs based on address ranges gives designers the ability to customize each LLC to the slave’s characteristics.

Pegasus also works with NocStudio’s understanding of the SoC’s floorplan to enable distributed LLCs, which can aid in connectivity-congestion mitigation and improve overall system bandwidth and latency through locality. In addition, Pegasus can do all of this while working in coordination with ARM’s TrustZone capabilities to ensure complete protection of a device from outside actors trying to breach the SoC’s key memory and code blocks. This is an essential feature for designs targeted at the Internet-of-Things (IoT) market.

Since Pegasus is used with NetSpeed’s NocStudio software, many of the steps to configure and program the LLCs are automated, making it easier for designers to incorporate Pegasus into their designs. Using NocStudio, designers can simulate their SoCs with different configurations of LLCs over different workloads and expected traffic patterns to ensure latency and performance are optimized. This is important because coherent SoCs often have processors with conflicting needs, some with traffic flows that are highly sensitive to CAS latency while others are more sensitive to bandwidth. Striking this balance in a heterogeneous SoC is non-trivial, and Pegasus along with NocStudio makes the job a lot easier.


Not only can the Pegasus LLC IP help with system throughput, bandwidth and latency, it can also be used by designers to reduce their overall SoC power consumption. Pegasus enables better memory efficiency resulting in lower power consumption by removing unnecessary look-ups and making intelligent decisions on managing, optimizing, and accessing memory. A configurable LLC can be tailored to the specific characteristics of the DDR controller and can augment the optimizations of the controller by making smart decisions within the LLC.

Additionally, LLC RAM banks are relatively easy to selectively power down. Pegasus allows runtime configuration of the LLCs to enable designers to selectively shut down all or part of the LLCs when the SoC goes into low-power mode. This feature can be exploited to squeeze out the last bit of spare power in the system.

So, to summarize, NetSpeed’s LLC is not a simple L3 cache. If you are building complex SoCs with a heterogeneous mixture of coherent and non-coherent IPs, the use of NetSpeed’s Pegasus LLC IP can help you to buffer and smooth out conflicting NoC traffic, eliminate memory bottlenecks, boost overall system performance, reduce traffic-dependent latencies, improve overall memory efficiency and reduce overall system power. That’s a whole lot of functionality for one IP.

See also:
NetSpeed Pegasus product web page
NetSpeed Gemini product web page
NetSpeed Orion product web page