Solido Debuts New ML Tool at TSMC OIP!
by Daniel Nenni on 09-08-2017 at 7:00 am

The TSMC OIP Ecosystem Forum is upon us, and what better place to debut a new tool to prevent silicon failures? Solido Design Automation just launched its latest tool – PVTMC Verifier – and will be demonstrating it in its booth at OIP. This is the third product developed within Solido’s Machine Learning Labs, and it is available in the Variation Designer suite of products.

Request a Variation Designer demo here:

http://www.solidodesign.com/products/variation-designer/

I will be there as well during the breaks, giving away books (Fabless: The Transformation of the Semiconductor Industry AND Mobile Unleashed) and SemiWiki pens, and networking with the semiconductor elite, absolutely.

PVTMC Verifier solves a problem that anyone who’s had an unforeseen silicon failure knows well – PVT and statistical effects interact – but until now every known solution was either extremely expensive or took far too long to complete.

The brute-force approach to PVT+statistical variation requires hundreds of thousands or millions of simulations. For example, a typical netlist verified to 3 sigma across 45 PVT corners needs 26.9K Monte Carlo samples * 45 corners = 1.2 million simulations – not something that can be completed in a typical production timeframe. The alternative, running PVT corners first and then Monte Carlo at the worst-case corner, is error prone because in many cases the worst-case PVT corner at nominal isn’t the worst-case PVT corner at your target sigma. Circuits would go to silicon, where the failure would finally be found, resulting in costly re-spins and increased design cycle time.
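To see where those numbers come from, here is a quick back-of-the-envelope sketch. The 26.9K sample count and 45 corners are taken from the article; the tail-probability formula is standard Gaussian arithmetic:

```python
import math

def tail_prob(sigma: float) -> float:
    """One-sided Gaussian tail probability at a given sigma."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

target_sigma = 3.0
pvt_corners = 45
mc_samples = 26_900                 # per-corner sample count quoted above

p_fail = tail_prob(target_sigma)    # ~1.35e-3 at 3 sigma
tail_events = p_fail * mc_samples   # ~36 expected failure events per corner
total_sims = mc_samples * pvt_corners

print(f"tail probability at {target_sigma} sigma: {p_fail:.2e}")
print(f"expected tail events per corner: {tail_events:.0f}")
print(f"brute-force total: {total_sims:,} simulations")  # ~1.21 million
```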

Using proprietary machine learning technologies, Solido PVTMC Verifier provides brute-force-level PVT+statistical variation coverage in only hundreds to thousands of simulations. You load a netlist into the tool, specify the target sigma and the PVT corners you want to test at, and PVTMC Verifier fully verifies your design across operating conditions and process variation.

Solido already has several customers using PVTMC Verifier in production. One large customer in the automotive space ran PVTMC Verifier on a chip that had already failed in silicon, and the tool correctly identified the failure in just 310 simulations, where brute-force Monte Carlo had previously required 10,000 simulations. It replicated the silicon failure even though the customer thought it couldn’t be done.

A second IDM customer of Solido’s used PVTMC Verifier on a known problematic circuit with 9 environmental conditions, where a failure had already been found in silicon test but missed in simulation by their traditional variation-aware tools. They ran PVTMC Verifier and it also found the problem, in only 45 minutes (1,050 simulations). They then fixed the design, confirmed the fix in silicon, and re-ran PVTMC Verifier: the problem corner was no longer present. This means PVTMC Verifier was fast enough to use for verification, it revealed variation problems before going to silicon, and it eliminated failure risk at the verification stage.

Solido PVTMC Verifier is also being utilized for automotive verification to higher sigma. A common flow involves quickly covering all PVT conditions at 5 sigma with PVTMC Verifier, then verifying the worst-case condition with Solido’s High Sigma Monte Carlo (HSMC) to tighten confidence intervals at the worst-case PVT.

Solido’s new PVTMC Verifier delivers unprecedented coverage of the PVT and Monte Carlo space. In the case above, it solved for 4.1-sigma statistical variation across all 45 PVT conditions. Brute-force Monte Carlo across all PVTs would deliver perfect accuracy but would have taken 45 million simulations; PVTMC Verifier covers this full space using just 1,515 simulations.


Webinar: Aiding ASIC Design Partitioning for multi-FPGA Prototyping
by Bernard Murphy on 09-07-2017 at 4:00 pm

The advantages of prototyping a hardware design on an FPGA platform are widely recognized, particularly for software development, debug and regression while the ultimate ASIC hardware is still in development. If your design will fit into a single FPGA, this is not an especially challenging task (as long as you know your way around FPGA design). More commonly your ASIC won’t fit into one FPGA and you’ll have to distribute it across multiple devices. Then the task gets a lot more interesting.


REGISTER HERE for this webinar on September 14th at 11am PDT or 3pm CEST

First, you’re going to need a board of FPGAs, or even a set of boards, into which you can map your design. And second, you have to figure out how to optimally split the design across those FPGAs. If you’re not getting worried at this point, you should be. Wherever you split, signals have to cross through board traces and sometimes even between boards. Those signals have to travel through device pins, board traces and possibly backplane connections, so they’re going to switch more slowly than signals within a device. Where you thought you had manageable timing, it just got messed up.

Clock signals pose another timing issue; you’ll discover all kinds of unexpected post-partitioning clock skews as clocks cross between devices. FPGA design tools will help you close timing within an FPGA but closing timing across the whole design is going to be your problem.

You also have to deal with IO resource limits on FPGAs. Aside from timing, however you divide up your design, you will probably need more IO signals on a device than it has signal pins. Handling this requires some clever logic to bundle/unbundle signals to meet pin limitations – a lot more work if you’re trying to hand-craft your own mapping (the sketch below gives a feel for the arithmetic).
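To get a feel for why pin multiplexing also costs performance, here is an illustrative sizing calculation; all of the numbers below are hypothetical, not from the webinar:

```python
import math

cut_signals = 3200     # logical signals crossing between two FPGAs (hypothetical)
available_pins = 400   # user IO pins usable for inter-FPGA traces (hypothetical)
trace_clock_mhz = 200  # rate the board traces can reliably toggle at (hypothetical)

# Time-division multiplexing: how many signals must share each physical pin.
mux_ratio = math.ceil(cut_signals / available_pins)

# Each logical signal only gets one slot per multiplexing frame.
effective_mhz = trace_clock_mhz / mux_ratio

print(f"TDM ratio {mux_ratio}:1 -> each cut signal updates at ~{effective_mhz:.0f} MHz")
```

An 8:1 multiplexing ratio turns 200MHz board traces into signals that update at roughly 25MHz, which is why boundary signals tend to dominate the achievable prototype clock.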

Or you could join Aldec on September 14th to learn how you can avoid most of this pain, using their boards and backplanes along with their guided partitioning software to quickly explore partitioning alternatives and confidently map to an implementation which you can close with predictable and usable timing.


Can the iPhone rollout lift the industry?
by Robert Maire on 09-07-2017 at 12:00 pm

We are in the midst of a number of cross currents buffeting the industry. The Korea risk seems to have escalated again, and our government has thrown fuel on the fire by threatening trade agreements at the most inopportune time possible. However, we are also a week away from the rollout of one of the most anticipated iPhones ever, which we all know is a huge driver of semiconductor sales and technology. All of this comes after a relatively lame August of Semicon postpartum and less-than-stellar earnings outlooks.

Add to this a hurricane or two and the news flow is certainly mixed. Seasonally we usually see an uptick as back to school and anticipation of holiday sales tends to lift us out of summer technology spending doldrums.

Korean Krisis Kalamity Kontinues
We were among the first analysts to put out concerns about Korea months ago. We suggested that a couple of well-placed artillery rounds into semiconductor fabs in South Korea could do a lot of damage to the South without a wider conflict or loss of life, but perhaps we underestimated the situation, as the North has made a lot more progress on bigger toys since we published our note.

Our first note on Korea in April

Our second note on Korea in July

The risk has clearly increased and has moved well beyond a localized conflict that would be felt primarily in South Korea. Aside from the potential loss of millions of lives in South Korea and beyond, the impact on the economy, especially technology would be huge. The stock market obviously figured this one out today. Semiconductor companies would be front and center in this calamity.

There is no way to protect a portfolio other than going to cash as almost any company is linked in one way or another, but reducing specific exposure to Korea might be a prudent first step.

Korean trade pact Kancellation – a Coup de Grace?

As if there weren’t enough concerns about Korea already, canceling a trade pact seems very ill timed. That said, we doubt there would be much damage, as we don’t see a cancellation having that much effect.

At worst we could see a temporary blip as shipments slow while rules get re-written or undone, but it’s not like the US will stop buying Korean phones or chips, or Korea will stop buying US semi equipment.

There could be some mildly positive benefit around the periphery for non-Korean phone/chip makers or non-US tool makers, but we doubt it would be impactful.

We see little potential for damage other than the continued degradation of the current administration’s image.

iPhone to the Rescue?

Amid all the bad geopolitical news there remains the shining light of the new iPhone launch scheduled for September 12th.

The stocks started to get warmer at the end of August, likely in anticipation, as the news and rumors continued to heat up. Much is anticipated: we have heard rumors of a 512GB model as well as other things very positive for the semiconductor and tech industries.

I still have an iPhone 6 and I will be one of the first buyers no matter what, even at a stupid price point. There may be those out there who will be disappointed that the new phone won’t cure cancer or walk on water, but the odds are high that sales will be very good given current expectations.

Certainly TSMC has already bought a lot of equipment to make the new 10nm processor, and Apple must be buying a lot of NAND, further driving that market.

This new iPhone is perhaps the core of the current semiconductor “SuperCycle” (as mentioned by TEL and others). We don’t see any potential disappointment other than Apple not being able to ship enough. The Korean issue could cause a ripple in the supply chain, but probably not in the first shipments, as that phone stock is already built.

The euphoria of the new iPhone should lift most boats in the technology space, especially semiconductors. The only thing that could drown it out would be an all-out conflict in Korea, which is less likely in the next week at least…but all bets are off after that.

A seasonal bounce back
Normally, technology sales slow down over the summer as people are on vacation or spending money on bathing suits and surfboards not phones and laptops. August is especially bad as all of Europe is on vacation. Back to school is the first positive wave followed by holiday spending later in the fall.

The semi stocks have shown some signs of life over the past week or so perhaps sensing a seasonal change which typically brings buyers out.

August was not great, as we went through a bit of a Semicon postpartum dip that was exacerbated by less-than-stellar earnings reports spotlighting slowing growth. We did end on a better note, as Applied gave a more positive view, helped out by display sales.

A geopolitical conflict could drown out the positive seasonal effect and could be very chilling on holiday sales if we were embroiled in a conflict with people buying “prepper/survivalist” supplies rather than the latest electronic device.

In a similar sense, we could see spending on electronics slow this holiday season in Texas, as many are more focused on rebuilding their homes than buying the latest iPhone.

Fundamentals remain rock solid
Even with all these cross currents, the underlying fundamentals of the semiconductor industry remain very solid. Memory pricing and demand are great. Equipment sales remain very strong. Technology continues to be pushed along through one means or another. IoT, big data, AI, VR & AR are all pushing technology and semiconductors at an ever-increasing pace.

This creates a bit of a “reverse duck metaphor” scenario where things appear to be unhinged & stormy above the surface of the water but technology is calmly paddling along under the waves.

All this suggests that when things calm down again that we will still be in a very positive position for technology and semiconductors over the long run.

Collateral impact and investing in the stocks
We continue to like Micron as a winner no matter what and we might like it even better in a pair trade teamed up with a Samsung short.

Equipment stocks could get rocked in the near term but could then be a buying opportunity as things recover. Although the stocks have behaved better of late (except for today) we would still wait for a better buying opportunity as the negative news is far from over.

About Semiconductor Advisors LLC


A Delicate Choice – Emulation versus Prototyping
by Bernard Murphy on 09-07-2017 at 7:00 am

Hardware-assisted verification has been with us (commercially) for around 20 years and at this point is clearly mainstream. But during this evolution it split into at least two forms (emulation and prototyping), robbing us of a simple choice – to hardware-assist or not to hardware-assist (that is the question). Which in turn sometimes encourages us to force a choice – which is the better option? The answer to that question unfortunately (at least for your capital budget) is that it depends. There are multiple factors to consider.


The most obvious way to look at this is cost. Emulators are expensive and prototypers are much cheaper. But cost, while a good secondary metric, is rarely a good primary metric. An off-road UTV is cheaper than a car but I wouldn’t want to drive it to New York. Equally, I wouldn’t want to drive a car up a goat trail. These solutions each have strengths in different applications. Also, emulation providers are starting to offer cloud-based access, presumably with much more modest usage-based costs.

Another consideration is where/when you can use the solution. Conventional wisdom holds that you use emulation earlier in design and prototyping when the design is pretty much frozen. But this isn’t quite right. Prototyping can actually be very useful before you even commit to hardware implementation for at least a couple of reasons. The first is the classic reason you build any prototype – to check it out in situ so you can refine design concepts. This approach is also useful when you want to bid on a project request for proposal or when you’re trying to raise funding. A demonstrator is a very credible way to convince a prospect that your proposal is viable.

There’s another very important reason to consider an early prototype – to start software development early. Virtual prototypes (the purely software kind) are also valuable here but are limited where the prototype needs to interact with real-world connections, through MIPI or ADCs/DACs for example. An FPGA prototype can realistically interact (through extension boards) with these interfaces.

Another consideration is real-time performance. Typical emulator clock speeds are in the sub-5MHz range, whereas prototypers will run in the 5-10MHz range with hands-free setup and can run even faster (I have heard of 50MHz or higher clock speeds) with hand-tuning. Both options are usually well below real silicon speeds, but the higher performance of prototypers makes them more effective for certain tasks. Supporting software development is one obvious example: few software developers will accept test/debug cycles at emulation speeds, but they are willing to work with prototyping speeds.

Performance is also important when testing functionality with video and audio streams. It’s difficult to test a meaningful segment of a 4K video stream at emulator speeds. On a prototyper, testing will still not be real-time but will be much more practical.
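To put rough numbers on this, here is an illustrative slowdown calculation. The emulator and prototyper rates follow the ranges quoted above, while the target silicon clock is a made-up example:

```python
design_clock_hz = 600e6  # hypothetical target silicon clock for the video pipeline
emulator_hz = 2e6        # typical emulator speed (sub-5MHz range, per above)
prototype_hz = 10e6      # hands-free prototyping speed (5-10MHz range, per above)

design_cycles = design_clock_hz * 1.0  # cycles in one second of real-time traffic

print(f"emulator : {design_cycles / emulator_hz / 60:.0f} minutes per second of traffic")
print(f"prototype: {design_cycles / prototype_hz / 60:.0f} minutes per second of traffic")
# -> 5 minutes vs. 1 minute; a hand-tuned 50MHz prototype narrows the gap further
```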

Performance is also important in the late stages of design if running big regression/ compliance test suites is a significant consideration. Here the time to cycle through these suites can be a major gate in the schedule, so reducing that time becomes very important. And obviously if you can deploy multiple low-cost prototyping platforms to parallelize regression testing, you can further reduce that time.

So far it might seem that I’ve made a very strong case for FPGA prototyping being the solution of choice for all needs. But emulation has some significant advantages which shine during design development. The first is setup time. A prototype can take days (best case) to weeks to set up. That’s OK when you’re in the concept phase or in final functional verification/signoff. But it’s not OK when you’re cycling through RTL drops. Here you need a day or less of setup time, something emulators can support even on very big designs.

Then there’s hardware debug. FPGA prototypers map your design into FPGAs with an emphasis on performance and utilization rather than design debug, because adding significant debug support logic would significantly expand the size of your design (so you’d have to buy bigger prototyping systems) and would also reduce performance. There are still ways in prototypers to get fast access to some of the logic; you can even dump all state, but very, very slowly. So prototype users look at their boxes primarily as support for software debug, not hardware debug. Where a potential hardware bug shows up in software testing, the prototyper team will kick it back to the emulator team for detailed diagnosis.

Emulators on the other hand have mechanisms to access and debug all state in the design across expansive time windows because that capability is intrinsic to the hardware design of the emulator. So emulators are significantly more effective in supporting hardware debug, which is obviously of primary importance during the hardware development phase.

So there’s your answer. Emulators and prototypers both have significant and complementary value during product development. You can learn more about this topic by reading the S2C/SemiWiki book on prototyping, which you can request HERE.


CTO Interview: Ty Garibay of ArterisIP
by Daniel Nenni on 09-06-2017 at 12:00 pm

ArterisIP has been a SemiWiki subscriber since the first year we went live. Thus far we have published 61 Arteris-related blogs that have garnered close to 300,000 visits, making Arteris and NoC one of our top attractions, absolutely.

One of the more newsworthy announcements this week is the addition of Ty Garibay to the Arteris executive team. Ty is a decorated semiconductor executive with 30+ years of experience, and his name is on 34 patents from his architect and design leadership roles at Motorola, Cyrix, SGI, and Alchemy Semiconductor. He also managed ARM’s Austin Design Center, as well as ARM cores development and IC engineering for Texas Instruments’ OMAP application processors group. Most recently, Ty was Vice President of IC Engineering for Altera, and led FPGA IC design at Intel after the acquisition in 2015.

What brings you to ArterisIP? What is your history with the company?
I worked with Arteris for more than 10 years as a customer, at both Texas Instruments and Altera, now acquired by Intel. I’ve known CEO and Founder Charlie Janac for many years. My engineering team worked closely with Arteris’ team in the original development of the NoC generation tools.

When I felt it was time for me to do something different, after the Intel acquisition of Altera, I looked around and decided to try something completely new. I am making a transition from being an engineering manager for a large organization to a technical role as an individual contributor at a small, private company. I was looking for something different, challenging, and innovative. This was a group I thought I could fit into and enjoy working with.

What technology trends do you see? How will you help address these at ArterisIP?
I’ve been in the semiconductor business for more than 30 years. In the early part of my career I focused on processor development, and the last 10 years I focused on SoC development. The original mobile phone megatrend that created the Network-on-Chip (NoC) opportunity for Arteris is continuing into other markets.

SoCs are becoming more complex, and more IP blocks are coming from different providers, both internal and external. There are different interfaces, and more power management is required. Moving from 28-nanometer process technology to 16 nm, 10 nm, and 7 nm technologies increases the challenges exponentially. It is harder to get the interconnect wires where they need to be and to meet ever-increasing performance goals. All of these challenges are converging, creating the need for a new generation of NoC IP tools.

Another trend that we’re starting to see for the first time since the early 2000s is a considerable number of new entrants into the semiconductor market. ASIC players and system companies are now designing their own chips: Microsoft, Google, Facebook, and Amazon to name a few. The variety of chips and new design teams coming into the industry is greater than I have seen in over a decade. Configurable NoC interconnect IP and the associated IP generation tools can play a critical role in enabling new teams entering new markets. The teams of people involved in these new endeavors often don’t have a long track record in chip design, but with our tools and features we can help new teams be successful. There is a significant, growing market for ArterisIP.

What have you seen at ArterisIP over the years as a customer, and now a partner?
One of the things that made working with Arteris pleasant and positive over the years was the commitment the team had to their customers’ success. Arteris was always extremely flexible in regard to supporting different functionality, meeting different requirements, using IP tools in a variety of ways, and adapting Arteris’ offerings to our methodology. As a customer I appreciated this, and now being part of the team, it is something that I can help carry on. One of the things I’m trying to do in my role as CTO is represent the customer in discussions of what we are developing for the future, how we create our user experience, and the expectations we have for how the customers use our IP and network generation tools. Those are things we can improve on by bringing my customer experience inside the company.

Why do you think interconnect is important for today’s SoC landscape?
It goes back to the two trends I discussed before: increasing complexity, the number of IPs, and a relative lack of cohesive design-team experience. In other words, individual engineers are experienced in their own right, but there are a lot of new teams forming at new companies, and this is a challenge. I think we can add more value with ArterisIP tools by helping new teams be productive and more successful with their first iterations.

What do you see as the most innovative technology directions in the industry within the interconnect area?
We see growth in both the number of participants and dramatically growing volumes in automotive ADAS (Advanced Driver Assistance Systems) and autonomous driving chips, where ISO 26262 functional safety and resilience capabilities are critical. These chips cannot be sold unless manufacturers can guarantee their suitability for safety-critical applications. In that realm, we are looking to innovate by adding unique features to our resilience offerings. The goal is to enable our customers to take their products to functional safety certification much more efficiently, and get to market more quickly.

Another area where interconnect can add value is in the high-end, high-performance networks on-chip. Currently, we don’t offer what the networking industry thinks of as key networking functions in the interconnect. We do provide packetized communication, but we don’t have a lot of routing capabilities that a protocol like PCI-Express has.

I think there will be a need for some of those higher-level protocols to come into the silicon to enable incredibly complex systems with 10, 20 or even 40 different high-performance masters. This will be essential so that the software for safety-critical and mission-critical applications has a guaranteed quality of service on the network, enabling deterministic forward progress. This poses a big challenge regarding area, power, and timing. It is an opportunity for us to enable a whole new class of systems that leverage these capabilities, extending into the chip capabilities that are currently only available at the board and chassis level – and to do it in a manner that is more efficient from a PPA perspective than is done today.

What do you see as the most important technology being implemented at ArterisIP?
As part of the chips becoming larger and more complex the biggest challenge for our customers is to implement what we generate for them into silicon. They need to place it, route it, and get it all working at their desired frequency of operation. These tasks have become a massive challenge for them, which is why we are trying to help with our new toolset called PIANO, which eases the implementation of our network-on-chip interconnect into physical design. The customer can consider the evolving floor plan, the timing of the chip, and all the different interfaces together. Early analysis and planning can provide a much more deterministic path to final closure, reducing the customer’s total design time and time to market.

With the PIANO technology, we believe we can help our customers accelerate their design processes and improve their productivity. We are planning to integrate this functionality into everything we do. As designs become more complex at 10 nm and 7 nm, our tool flow has to become completely physically-aware. Our goal is to help our customers make the right decisions about interconnect all the way from the beginning of the design cycle to the final product. To me, the most important thing is to enable our customers to tape out a real chip. We can add features, and functionality, but none of it matters unless it works at speed, the right power envelope, and within the customer’s development schedule.

How do you see the role of ArterisIP’ technology evolving into the future? Are there opportunities to build on the foundation of interconnect technology?
The technology we have today, and what we are developing for the next generation, is foundational to chip design. It provides a fundamental ability for engineers to create more complex chips, faster. It is an infrastructure we can build on to add value in different ways for our customers. For example, one could imagine that if we can see all the traffic on the chip, there are potential opportunities in terms of security. A customer may need to provide adversarial security within the silicon and interconnect. This could allow you to watch for certain events, detect patterns, and implement within the chip the diverse types of monitoring that chip-to-chip interconnect allows today.

There is also a role for interconnect in debug and performance monitoring. Since we are transporting all the traffic, we can create and derive metrics from that traffic. The more data that can be made available to our customers, the better their systems operate. Facilitating that, and allowing people to implement it more easily, is an area where we can have an impact, because the interconnect touches every part of the chip, creating an opportunity to use the interconnect as the basis for organizing the design process. If we can work with all the IP providers, perhaps we can support standards for how IP can be re-used, what protocols can be used for the communication of bits through the NoC, and how information about the IP is used by hardware and software design teams. This is a way to continue making the design process easier for our customers by building around our network.

I personally think there are a number of different opportunities for us to add value to our customers, with the network-on-chip at the center of the system.

Also Read:

CEO Interview: Michel Villemain of Presto Engineering, Inc.

CEO Interview: Jim Gobes of Intrinsix

CEO Interview: Chris Henderson of Semitracks


Breakfast with Aart de Geus and the Foundries!
by Daniel Nenni on 09-06-2017 at 7:00 am

Being the number one EDA and the number one IP company does have its advantages, and the resulting foundry relationships are a clear example. One of the DAC traditions that I truly enjoy is the Synopsys foundry breakfasts. Not only does Synopsys welcome scribes, they reserve a table up front for us, and Synopsys CEO Aart de Geus has been known to join us for fresh fruit and candid conversations. Breakfast conversation with Aart is quite easy due to the wide range of topics he can speak to. Remember, Aart has his finger on the semiconductor pulse like no other. We had a very interesting chat about autonomous cars and, of course, an update on his band Legally Blue (my beautiful wife and I are fans).


The videos for the foundry breakfasts are up on the Synopsys website. The interesting thing about the foundry people is that their collective knowledge of the fabless semiconductor ecosystem is staggering. Take Willy Chen from TSMC for example: Willy has a master’s degree in electrical engineering and more than 20 years of experience, most of which are with TSMC. If I remember correctly, Willy started at TSMC in the PDK group and is now Deputy Director, Design Infrastructure Marketing. Bottom line: Willy sees more in a month than most of us do in a year, absolutely. Willy is also a very nice guy, a snappy dresser, and a great speaker, so definitely watch this first video:

Arm, Synopsys and TSMC kicked off DAC 2017 with an event to share the results of their collaboration to enable design on TSMC 16-nm and 7-nm process technology with the new Arm® Cortex®-A75 and Cortex-A55 processors and the Synopsys Design Platform. In this event video, they introduce the new Synopsys QuickStart Implementation Kits (QIKs) for the Arm cores that take advantage of Arm POP™ technology and Synopsys tools, and the collaborative design enablement for TSMC 16-nm and 7-nm process technology. HiSilicon concludes the video by describing their impressive mobile product success designed by taking advantage of the Arm/TSMC/Synopsys collaboration.

Collaborating to enable design with Arm’s latest processors (Cortex-A75, Cortex-A55), TSMC 16-nm and 7-nm processes and Synopsys’ Design Platform. Watch the video replay


This next one features one of my favorite foundry people, Kelvin Low. Unfortunately, Kelvin left the foundry business for IP and now works for ARM as Vice President of Marketing, PDG (Physical Design Group). So sadly this is the last you will hear from Kelvin on behalf of Samsung Foundry:

On June 20th of this year, Samsung Foundry and Synopsys hosted a breakfast event and talked about their multi-year collaboration to develop next-generation process nodes and enable advanced SoCs for the next wave of design innovation. Mamta Bansal, Sr. Director of Engineering at Qualcomm, delivered a spirited presentation on their use of the Samsung Foundry 10nm node and Synopsys Design Platform tools for their recent design success.

“Relentless” multi-year collaboration between Samsung Foundry and Synopsys enabling the next wave of design innovation
Watch the video replay

The GLOBALFOUNDRIES breakfast was actually a dinner, so my beautiful wife joined me. Greg Northrop was the featured guest speaker for this one. I had not met Greg before, but his candid responses to questions were very enlightening. Greg spent 30+ years at IBM before joining GF as a Fellow in the Design Enablement Group, so he knows where all of the dead technologies are buried, absolutely.

On June 20, 2017, Synopsys and GLOBALFOUNDRIES hosted a dinner event at DAC. Attendees heard how the two companies are collaborating on enablement of Synopsys’ design solutions and IP on GLOBALFOUNDRIES’ leading-edge dual roadmap process technologies.

Advanced Design Enablement and Ecosystem Readiness of GLOBALFOUNDRIES Dual Roadmap Technologies, Using the Synopsys Design Platform. Watch the video replay

All three videos are definitely worth your time… If you want more commentary, hit me up in the comments section.


Project Management Tools for Analog IP Verification
by Tom Dillinger on 09-05-2017 at 12:00 pm

Large SoC design teams typically have a cadre of project managers to oversee all facets of functional verification — e.g., specification, reviews, directed testbench development, automated (pseudorandom) testcase generation, HDL coverage measurement and reporting, and bug identification/tracking database management. The relatively small engineering teams developing analog/mixed-signal IP commonly have little of that infrastructure at their disposal. The designer of an analog sub-block macro is usually also its “verification lead” and its “project manager”. The platform is a Spice-like circuit simulator. Each macro within the IP block is characterized by a set of measurement targets (or acceptable target ranges) — those specifications define the validity of the design. There is currently a dearth of tools available to assist the AMS IP designer with the PM aspects of verification, and ultimately sign-off.

I recently had the opportunity to chat with Nicolas Williams, Product Marketing Manager at Mentor, a Siemens company, about this AMS IP verification task. Nicolas described a component of the Tanner EDA tool suite — Tanner Designer — developed specifically to aid with analog verification project management.

.measures

The general flow for using Tanner Designer is illustrated in the figure below.

The setup is straightforward. The analog designer points Tanner Designer to the (hierarchy of) IP block and macro component simulation results directories. Tanner Designer pulls measurement results from the analog simulations into a database, from which the project “dashboard” is presented. These measures can originate from schematic properties or be added as part of the simulation flow. Although the flow diagram above focuses on Mentor tools, a recent feature addition in Tanner Designer (Release 2017.2) supports the import of measurement data from other sources, such as other simulators or test equipment, to provide an easy-to-use (Windows) environment for analog IP project management.
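As a minimal sketch of the aggregation idea – collecting per-macro name/value measurement results into one lookup structure – consider the following; the file naming and name=value format are hypothetical stand-ins, not Tanner’s actual formats:

```python
from pathlib import Path

def load_measures(results_dir: str) -> dict[str, dict[str, float]]:
    """Collect measurement results from each macro's results file into one database."""
    db: dict[str, dict[str, float]] = {}
    for f in Path(results_dir).glob("*.meas"):  # hypothetical per-macro result files
        measures: dict[str, float] = {}
        for line in f.read_text().splitlines():
            if "=" not in line:
                continue
            name, value = line.split("=", 1)
            try:
                measures[name.strip()] = float(value.strip())
            except ValueError:
                pass  # skip headers and non-numeric entries
        db[f.stem] = measures
    return db

# db = load_measures("sim_results/")
# db["bandgap_tt"]["vref"]  ->  e.g. 1.2003
```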

Dashboard and Excel

The graphic below is a screenshot of the Tanner Designer dashboard, which tracks each simulation testcase status, aggregates the data, and serves as the interface to Excel for calculation and final reporting of the comparison between specification and simulation results.


Tanner Designer dashboard, and corresponding Excel spreadsheet

Individual analog macro simulation measurement data is presented on separate Excel spreadsheet tabs. The “Results” tab links to the summary of the project verification status.

Nicolas explained, “Key measurement data from the individual macros are represented in different Excel tabs. This simulation measurement data can range from simple scalar values to a vector of measurements associated with parameter values from sweeps or Monte Carlo sampling. The Results tab in the Excel spreadsheet contains the references to individual cells among the tabs. Any additional Excel calculations or complex expressions can be added to quickly summarize the overall Pass/Fail status of each IP specification.”
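As a rough illustration of the kind of Pass/Fail calculation the Results tab performs, here is a sketch; the specification names and limits are invented for the example:

```python
# Hypothetical spec table: measurement name -> (min, max); None means unbounded.
spec = {
    "vref":    (1.19, 1.21),  # volts
    "psrr_db": (60.0, None),  # dB, lower bound only
}

def check(measures: dict[str, float]) -> dict[str, bool]:
    """Return the Pass/Fail status of each specification for one macro."""
    status = {}
    for name, (lo, hi) in spec.items():
        v = measures[name]
        status[name] = (lo is None or v >= lo) and (hi is None or v <= hi)
    return status

print(check({"vref": 1.2003, "psrr_db": 63.5}))  # {'vref': True, 'psrr_db': True}
```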

“Why Excel?”, I asked.

“We chose to interface the Tanner Designer dashboard to Excel for simplicity, familiarity, and efficiency. The interface is seamless. When simulations are rerun, an ‘Update’ command automatically re-loads the Tanner Designer project database and the related Excel spreadsheet with the new simulation measures, and re-calculates formulas/expressions for the current status of specification checks,” Nicolas replied.

He added, “And we developed a simple query language for analog designers to describe the specific database information to place into the dashboard and Excel tab spreadsheet.” Examples of database queries are shown below.

There are data links from Excel back to Tanner Designer to complete the project management loop. The Excel Results tab information is reflected back to the dashboard, to serve as the single (high-level) project management status view.

There are “hot links” from Excel to Tanner Designer, as well — if a measurement specification created in the Results tab is not correctly populated after an update, there is a highlighted notification in the dashboard.

Tanner Designer can also embed more complex data, such as detailed simulation waveforms, and (especially) graphs/charts generated by Excel — see the figure below for an example.

Building upon the extensive capabilities in Excel to leverage ‘templates’, Tanner Designer can also serve as a very efficient means to generate IP block datasheets, for internal design reviews and/or external documentation.

The analog IP designer typically wears many hats, with verification lead and project manager responsibilities among them. There has typically been little focus on tools and flows to assist with the PM tasks, or to establish a formal and consistent methodology for defining and reporting project status and specification pass/fail results. Mentor’s Tanner Designer is a significant step forward in addressing this requirement.

For more info on Tanner Designer, please follow this link. There is an excellent Webinar with an introduction to Tanner Designer — here’s a direct link.

-chipguy


Embedding FPGA IP
by Bernard Murphy on 09-05-2017 at 7:00 am

The appeal of embedding an FPGA IP in an ASIC design is undeniable. For much of your design, you want all the advantages of ASIC: up to GHz performance, down to mW power (with active power management), all with very high levels of integration with a broad range of internal and 3rd-party IP (analog/RF, sensor fusion, image/voice recognition and many more). But for an increasingly important range of applications requiring significant configurability, such as hardware accelerators, software configurability alone can’t meet performance needs and pure FPGA solutions fall short on power and system cost. In these cases, keeping all the ASIC advantages while also having the advantages of an embedded FPGA (eFPGA) block for configurability can look pretty attractive.


Sounds interesting, but how do you work with an eFPGA IP, especially during the chip design process? This is a specialized piece of functionality – a hard macro customized to your objectives, along with a customized version of the software to design/program the FPGA, and the logic you must add to your design to support programming. Achronix recently released a white paper detailing a typical flow you would follow in evaluating, then building, their Speedcore eFPGAs into your design.

They note up front a couple of important considerations to ensure a streamlined flow. Physical constraints should be finalized early, including the resource mix and size for the instance, as well as metal stack constraints. They also note that while your ASIC may run at GHz speeds, the eFPGA is going to run at around 300-500MHz, so you need to think about separate clock domains and speed matching/clock domain crossing management at the interfaces (a rough feel for the math is sketched below).
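For a rough feel of what that speed matching implies, here is an illustrative rate-matching FIFO sizing estimate for the ASIC-to-eFPGA boundary. The clock ratio follows the white paper’s numbers, while the burst length and the simple sizing rule are assumptions for the example:

```python
asic_mhz = 1000    # hypothetical ASIC-side clock
efpga_mhz = 400    # eFPGA fabric clock (300-500MHz range, per the white paper)
burst_words = 64   # hypothetical longest back-to-back burst written by the ASIC side

# While the burst lasts, the ASIC side fills faster than the eFPGA side drains.
net_fill_per_write = 1 - (efpga_mhz / asic_mhz)  # entries gained per ASIC clock
min_depth = int(burst_words * net_fill_per_write) + 1

print(f"async FIFO needs >= {min_depth} entries to absorb a {burst_words}-word burst")
# -> 39 entries at a 1000:400 clock ratio
```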

Your first step starts before you commit to a contract. Here you want to benchmark representative logic designs you expect to see in the eFPGA. You can do this through their ACE design tools, where you can perform all the usual FPGA design tasks starting from an RTL design and you can also size requirements for LUTs, memories and DSP blocks. ACE will provide feedback on size and performance as you perform these experiments. And naturally you would compare these stats with experiments you would perform on standalone FPGA alternatives. Once you complete your resource selections, Achronix will provide you with a finalized size, power and performance specification.


If you proceed to a contract, Achronix will then provide you with preliminary instance data, including a preliminary LEF file for floorplanning and a .v file including configuration and debug pins as well as signal pins (I think a pin-accurate black-box model at this stage, as far as ASIC simulation is concerned), plus a list of signal pin locations, which you can change and feed back to Achronix for the next phase delivery. You also get a preliminary version of the ACE toolkit for your instance and a preliminary power estimator, which they recommend you use in your ASIC power planning even at this early stage.

In phase 2 you get a final LEF file (remember to get your pin locations final in phase 1) and an updated version of the ACE toolkit, offering timing models accurate to within 5% of silicon. At this stage they also support RTL simulation of the eFPGA instance with the full ASIC. And you get preliminary models of the configuration circuitry, which you will need to fold into your ASIC design (how you want to drive this is up to you).


In the final phase of deliverables, Achronix provides a .v file for simulating the programming sequence, files for full timing closure (you can also develop these yourself using the ACE toolkit), a DFT simulation netlist and ATE vector files and a final version of your custom ACE toolkit, now supporting bitstream generation.

For timing closure the white paper notes two possibilities – cases where signals are registered in the eFPGA boundary ring and cases where register boundaries may be inside the core. In the first case Achronix will provide a .lib file with timing for the instance and you can use standard tools to close timing at the ASIC level (they note a couple of additional constraints). In cases where paths are registered inside the core, timing closure depends on a collaborative effort between you and Achronix involving an iterative analysis between your standard timing tools and ACE-analyzed timing.

There’s a lot more detail in the white paper which you can access HERE. This looks like a compelling option where you must have some hardware configurability in your ASIC solution. Certainly design with an eFPGA looks like a very manageable task through this flow (I’d personally try to avoid the second timing closure option, but that’s me).


Webinar: Mobile Device Companies Get New Sensor Interconnect Standard
by Daniel Payne on 09-04-2017 at 12:00 pm

I’ve been a mobile device user since the 1980s, when the Motorola brick phone was introduced, so I’ve seen an increasing number of sensors added to each new generation of mobile phones over the years. One big challenge for both sensor companies and fabless semiconductor companies designing SoCs for mobile devices is how to efficiently connect all of that sensor technology without resorting to proprietary interface schemes. Back in 2003 the MIPI Alliance was founded by four companies (ARM, Nokia, ST, TI) specifically to address this challenge; they now have some 280 member companies, so this is an important alliance to keep up to date with.

There’s a webinar on September 12th, Getting to Know the New MIPI Alliance I3C Standard, from 10AM-11AM PDT, hosted by NXP and Silvaco that I will be attending, and invite you to join me online.

Webinar Description

Mobile devices today contain a rich assortment of sensors that need to communicate their information as quickly as possible at the lowest possible power. To achieve this the MIPI Alliance has gathered industry leaders to create a new interface standard around connecting sensors.

I3C is a new standard that has advantages in reducing pin count, increasing performance, and decreasing power while achieving some level of backwards compatibility with the long established I2C interface.

Webinar Content

 

  • Background on the development of I3C
  • Basic MIPI I3C signaling and protocol
  • Comparison of I2C vs I3C
  • Key features of I3C
    • Lower power
    • Hot pluggable
    • High Data Rate (HDR) modes
    • Dynamic Addressing
    • In-band Interrupts
    • Common Command Codes (CCCs)
  • I3C roadmap features
  • Integrating I3C cores


NXP Presenter

Michael Joehren currently works as a product definer in NXP’s Business Line Secure Interface & Power, focusing on MIPI I3C-equipped devices. Michael is an active participant in the MIPI I3C SWG and chairs the MIPI-JEDEC I3C liaison sub-group. During his 23+ years in the semiconductor industry, Michael has held positions in mixed-mode IC design, technical marketing, and system architecture at NXP Semiconductors (formerly Philips Semiconductors) in Germany and the US.

Michael holds an electrical engineering degree from the University of Dortmund, Germany (1993), and is the author/co-author of seven patents.

Silvaco Presenter

Warren Savage serves as the General Manager of the IP Division at Silvaco. He has spent his entire career in Silicon Valley, with engineering and management roles at leading companies including Fairchild Semiconductor, Tandem Computers, Synopsys, and most recently IPextreme, which he founded in 2004 and which was acquired by Silvaco in June 2016.

Warren holds a BS in Computer Engineering from Santa Clara University, an MBA from Pepperdine University and is the author/co-author of three patents.

Webinar Details

When: September 12, 2017

Where: Register Online

Time: 10AM – 11AM PDT


CEO Interview: Michel Villemain of Presto Engineering, Inc.
by Daniel Nenni on 09-04-2017 at 7:00 am

One of the many advantages of being part of SemiWiki is the interesting people we get to meet. As I have mentioned before, the semiconductor industry is home to many brilliant and successful people, and Dr. Michel Villemain is certainly one of them. Michel is the founder and CEO of Presto Engineering, and it is interesting to note that he has a doctorate in Computer Science specializing in Artificial Intelligence, one of the trending terms on SemiWiki today.

You have been in the semiconductor business for more than 25 years, what are the recent changes in the industry that seem the most game-changing?
We are seeing the emergence of a fourth wave of semiconductor growth: first came defense and mainframe computers; then PCs; then smartphones; and now the Internet of Things (IoT). The first three waves were primarily a race to performance, epitomized by Moore’s law. Now, connected objects are driven not so much by performance as by application fit. Whereas a wearable may need advanced fab processes in order to fit into a small form factor and run as long as possible on a light battery, industrial or infrastructure applications do not. Interestingly, a lot of legacy 8-inch fabs are currently on allocation.

What we will see is a split in our industry between, on the one hand, smartphones/consumer, driven by performance and volume and going vertical (in an interesting pendulum swing, we’ll see the resurgence of Integrated Device Manufacturers/IDMs) and, on the other hand, a multitude of IoT projects with limited volumes and harder access to production resources, but with significantly higher ASPs. For instance, we served 200 customers last year: 30 of those are or will be in production this year, with average volumes around 1Mu/year and Average Selling Prices (ASPs) north of $1.

IoT has become the latest catch-all buzzword; what does it mean for you?
Last year we conducted a comprehensive analysis of the IoT market and established a segmentation that aligns with our services. Like most in the industry, we’re seeing growth in sheer unit volume, although mainly driven by RF tags, payment deployment, and consumer devices. We have decided to focus on segments where our capabilities bring more value: Industry 4.0, automotive, conditional access, and smart cities and buildings. Those segments see more system OEM players integrating chips into existing applications, as opposed to merchant semiconductor companies. While historically we have catered to IDMs and fabless companies (which we still do, of course), now more than half of our sales funnel is non-semi. Unquestionably, we are seeing semiconductor demand being increasingly driven by system companies.

From your vantage point, what are the most significant challenges facing those new entrants?
System companies are used to designing their product internally and then outsourcing production (the electronics industry created the Electronics Manufacturing Services/EMS market). Most have typically seen ICs in their products as an off-the-shelf device from an IDM or fabless semiconductor provider, and have thus, considered this IC as a mere BOM line item and hence cost element of their product.

However, they are now starting to shift their perspective and see that using custom devices within their products is a way to reduce costs and protect the IP they have in their application and application data. Most are not targeting advanced System-on-Chip (SoC) devices, where design costs (especially verification and mask sets) are in the $10Ms; rather, they are working in the field of ASIC devices, where design, mask set, and verification can be a very affordable investment that adds value to their product and enables the capture of significant value from their IoT application.

By moving from standard ICs to custom ones (and, for some, by internalizing chip design), they start to be exposed to the complexity of industrializing an IC and then running production, from starting wafers to managing yield and complex backend processes. To do these activities at Presto, we have more than a dozen different skillsets in-house, including device, product, test, reliability, and failure analysis, along with IT, quality, supplier management, planning, logistics, contracts… all supported by a comprehensive, semiconductor-specific ERP and, of course, a complete lab with a substantial accumulated investment in test and analysis equipment.

Additionally, IoT applications require RF (for connectivity to the Internet), embedded non-volatile memory, and analog – all somewhat simpler to design and harder to yield predictably. We execute about 50 industrialization projects every year, so we see our fair share of custom IC development, and when things are not done properly, a 6-month project can easily turn into a multi-year debugging exercise.

How can Presto help?
Most of those issues are preventable if addressed early in the development cycle. The challenge for new entrants is therefore early access to the supply chain: in order to design properly, you need to target the right process, IP, and route; you need to choose the right package and (most critically for RF and analog) target the right test solution (close to 50% of the overall cost for some IoT ICs). With our history, platforms, and relationships, we provide our customers with a qualified window into all those areas, regardless of volume or expectations. This can make the difference between a 6-month project and a much longer challenge.

We are confident in our ability to perform since, quite uniquely in our market space, we developed our business by integrating existing operations from larger semiconductor companies (e.g., Cypress, NXP, Inside Secure). Interestingly, having been on both sides, we know that large organizations often spend massively more than smaller ones on post-design activities (sometimes, astonishingly, more than 10 times more). It is part of our added value to leverage this accumulated expertise to bring efficiency (and predictability) to what we do. Our customers value our ability to de-risk an ASIC project.

What would you like to share with SemiWiki’s audience?
I think more and more of our customer-base will come from non-semiconductor companies that want to get onto the IoT bandwagon; and my main message to them is: “semiconductor is cheaper and easier to access than you think.”

These days it is straightforward to create a smart IoT device with an ASIC that uses mature, relatively inexpensive semiconductor technologies to turn what was a $50 product into a $5 device. This fuels a much wider adoption of the IoT application and makes for a much simpler and more profitable ROI. Working with an organization like ours, just like you have used EMS for a long time, you can focus on integrating an ASIC into your products while leaving its post-design complexity to us. This will foster the fourth growth wave that we are all looking forward to enabling!

Also Read: Is an ASIC Right for Your Next IoT Product?


About Presto Engineering, Inc.

Presto Engineering, Inc. provides outsourced operations for semiconductor and IoT device companies, helping its customers minimize overhead, reduce risk and accelerate time-to-market. The company is a recognized expert in the development of industrial solutions for RF, analog, mixed-signal and secured applications – from tape-out to delivery of finished goods. Presto’s proprietary, highly-secure manufacturing and provisioning solution, coupled with extensive back-end expertise, gives its customers a competitive advantage. The company offers a global, flexible, dedicated framework, with headquarters in the Silicon Valley, and operations across Europe and Asia. For more information, visit: www.presto-eng.com.

Also Read:

CEO Interview: Jim Gobes of Intrinsix

CEO Interview: Chris Henderson of Semitracks

CEO Interview: Stanley Hyduke, founder and CEO of Aldec