

Tela Innovations, DAC Update
by Daniel Payne on 06-13-2013 at 12:16 pm

Lawsuits in EDA are common, and Tela Innovations filed a huge complaint back in February with the U.S. International Trade Commission (USITC) against HTC Corporation; HTC America, Inc.; LG Electronics, Inc.; LG Electronics U.S.A., Inc.; LG Electronics MobileComm U.S.A., Inc.; Motorola Mobility LLC; Nokia Corporation; Nokia, Inc.; Pantech Co., Ltd.; and Pantech Wireless, Inc.

At DAC last week I met with Neal Carney to get an update on what Tela Innovations is up to this year.


Dhrumil Gandhi (left), Neal Carney (right)

Discussion Notes

Power reduction benefits – gate-length biasing to reduce leakage; Blaze MO analyzes slack in non-critical paths, then swaps in lower-drive cells to cut dynamic power. Non-disruptive to the design because the layout changes are minimal. Can reduce dynamic power by up to 15%. Works on a timing-closed design. (A rough sketch of the slack-based cell-swap idea follows these notes.)

Sold to COT users or provided as a service. Published users include LSI Logic, Mellanox (network processing), and Mindspeed (now defunct).

TSMC (Power Trim) – using Tela technology.

Litigation ongoing now.

FinFET – this approach could still be used, with multi-Vt.

All of the data in the chart is for bulk CMOS, not FinFET or FD-SOI (yet).

Leakage Reduction


Concurrent Leakage and Dynamic Power Reduction

Litigation – no news until settlement or agreement reached.
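To make the cell-swap idea above more concrete, here is a minimal sketch of slack-driven downsizing on non-critical paths. It is not Tela's actual algorithm or data model; the cell names, delay, and power numbers are invented purely for illustration.

```python
# Minimal sketch of slack-driven cell downsizing for dynamic power
# (illustrative only -- not Tela's actual algorithm or data model).

# Hypothetical library: drive-strength variants of one logical cell,
# with rough delay and dynamic-power numbers (arbitrary units).
CELL_VARIANTS = {
    "NAND2": [
        {"name": "NAND2_X4", "delay": 0.8, "dyn_power": 4.0},
        {"name": "NAND2_X2", "delay": 1.2, "dyn_power": 2.2},
        {"name": "NAND2_X1", "delay": 1.9, "dyn_power": 1.3},
    ],
}

def downsize(instances):
    """For each instance on a non-critical path, pick the weakest
    variant whose extra delay still fits in the available slack."""
    for inst in instances:
        variants = CELL_VARIANTS[inst["cell"]]
        current = next(v for v in variants if v["name"] == inst["variant"])
        # Try candidates from lowest dynamic power upward.
        for cand in sorted(variants, key=lambda v: v["dyn_power"]):
            added_delay = cand["delay"] - current["delay"]
            if added_delay <= inst["slack"]:          # timing still met
                inst["variant"] = cand["name"]
                inst["slack"] -= max(added_delay, 0.0)
                break
    return instances

# Example: one instance with plenty of slack, one near-critical.
netlist = [
    {"cell": "NAND2", "variant": "NAND2_X4", "slack": 2.0},
    {"cell": "NAND2", "variant": "NAND2_X4", "slack": 0.1},
]
print(downsize(netlist))
```

The key property is that each swap only consumes slack the timing-closed design already has, which is why this kind of optimization can be applied late without disturbing critical paths.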



GPU-Based SPICE Simulator for Library Characterization
by Daniel Payne on 06-13-2013 at 11:55 am

Jeff Tuan is the CEO and President of an EDA startup called G-Analog, founded in May 2012. His background includes working at Cadence, Epic, Synopsys, Nassda, Chartered Semi and GLOBALFOUNDRIES. Jason Lu is the R&D manager. We met at DAC last week to talk about his company’s new product called Gchar for IC library characterization using a GPU-based SPICE circuit simulator.


Jeff Tuan, Jason Lu
Continue reading “GPU-Based SPICE Simulator for Library Characterization”



Intel Plays to the 4 Horsemen of the Mobile Software World
by Ed McKernan on 06-13-2013 at 1:00 am

Just at the moment we look for the mobile market to consolidate, it fractures along new fault lines as old allies become enemies and new business models appear in order to spur the ecosystem giants forward. It was not long ago that Android was let loose in an attempt to prove that the Mobile World is Flat. Ah, but Samsung decided that it had the hardware pieces in place to leverage Android into a semi-monopoly threatening all, including Apple and to some extent Google. As always, adjustments are made and a new round of counterattacks is launched to outflank the leaders. This time Microsoft and Intel join Google and Apple in attempting to sway the market their way. Interestingly, Intel’s new Bay Trail processor…
Continue reading “Intel Plays to the 4 Horsemen of the Mobile Software World”



Missed #50DAC? See Aldec Verification Sessions Online
by Daniel Nenni on 06-13-2013 at 12:00 am

Aldec, Inc. is an industry-leading Electronic Design Automation (EDA) company delivering innovative design creation, simulation and verification solutions to assist in the development of complex FPGA, ASIC, SoC and embedded system designs. With an active user community of over 35,000, 50+ global partners, offices worldwide and a global sales distribution network in over 43 countries, the company has established itself as a proven leader within the verification design community.

Aldec is offering the most popular Technical Sessions from this year’s Design Automation Conference online:

DO-254 Requirements Traceability
Date: Thursday, June 20, 2013
EU: 3:00 PM – 4:00 PM CEST Learn more
US: 11:00 AM – 12:00 PM PDT Learn more

Accelerate DSP Design Development: Tailored Flows
Date: Thursday, July 25, 2013
US: 11:00 AM – 12:00 PM PDT Learn more

CyberWorkBench: C-based High Level Synthesis and Verification
Date: Thursday, September 12, 2013
EU: 3:00 PM – 4:00 PM CEST Learn more
US: 11:00 AM – 12:00 PM PDT Learn more

Hybrid SoC Verification and Validation Platform for Hardware and Software Teams
Date: Thursday, September 26, 2013
EU: 3:00 PM – 4:00 PM CEST Learn more
US: 11:00 AM – 12:00 PM PDT Learn more

Here is a section of a DAC report from Dmitry Melnik, Product Manager, Software Division at Aldec:
I just returned to the office from the 50th Design Automation Conference (DAC), which took place in Austin, TX, on June 2-6. As I began compiling my trip report, I thought I might share some of my observations, especially for those who couldn’t attend this industry event but still wanted to gain some insight.

Functional Verification trends
Since Aldec’s core competency is functional verification, I was keeping an eye on this particular domain… and just by looking at the exhibiting companies, I can tell that both interest and presence in the functional verification space keep growing from year to year. This is no surprise to any major EDA vendor, as our customers have been designing complex multimillion-gate SoCs for quite a while now.

  • We all know that verification is becoming more and more challenging as design complexity keeps growing. No one is surprised to see customer designs that target sub-28nm process technology, have billions of transistors, multiple ARM Cortex processors, a number of switchable power domains, and hundreds of IPs talking to each other via high-speed AXI interconnects.

  • Verifying a large-scale SoC is a process that requires careful planning and execution. SoC verification teams put a lot of pressure on EDA vendors and expect us to have their backs in achieving verification closure. While simulation remains one of the key verification methods, tool efficiency has a huge impact on project schedules. For this reason, there is ongoing collaboration between Aldec customers and R&D to fine-tune Riviera-PRO™ based on current cutting-edge projects. We are constantly improving compilation times for RTL code and gate-level netlists (with and without SDF), simulation runtimes, and memory footprints.

  • Customers manage their requirements and test plans in dedicated tools such as Aldec’s new Spec-TRACER™, and test new features with several constrained-random sequences instead of thousands of directed tests, improving productivity by orders of magnitude. Functional coverage has always gone hand in hand with constrained-random stimuli, but today we are dealing with hundreds of functional coverage points, cross-coverage, and merging results from many test runs (a toy sketch of such a merge appears after this list). Fortunately, standards such as Accellera’s UCIS help all vendors address this challenge in a consistent and effective way.

  • SystemVerilog-based UVM and reusable Verification IP (VIP) are becoming an industry standard. Some teams reported that they were able to reduce design time by 2-3X by deploying UVM-compliant environments. UVM itself presents a challenge for EDA vendors: industry-wide adoption of this OOP-flavored framework requires us to shift some of the traditional paradigms and expand the hardware engineer’s toolbox with new debugging tools that used to be specific to the software programming domain.

  • When it comes to an SoC design there are usually both software and hardware design teams, each using several techniques to work in parallel and shorten time-to-market. These techniques include virtual prototyping, FPGA-based prototyping, and in-circuit emulation. Based on how customers use our HES-DVM™, we see that virtual prototyping is typically used for early software development and driver verification, whereas the primary application for emulation is verification of a sub-system or the entire SoC in its system environment (connections to peripherals and software running on the processors).
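As a toy illustration of the coverage-merging point above, the sketch below unions bin hit counts from two hypothetical test runs. Real flows exchange UCIS databases and handle cross-coverage and exclusions; the bin names here are made up.

```python
# Toy illustration of merging functional-coverage results from several
# test runs (real flows exchange UCIS databases; this just shows the idea).

def merge_coverage(runs):
    """Accumulate hit counts of coverage bins across runs."""
    merged = {}
    for run in runs:
        for bin_name, hits in run.items():
            merged[bin_name] = merged.get(bin_name, 0) + hits
    return merged

def coverage_percent(merged, all_bins):
    """A bin counts as covered once it has at least one hit."""
    covered = sum(1 for b in all_bins if merged.get(b, 0) > 0)
    return 100.0 * covered / len(all_bins)

# Hypothetical bins for a burst-type coverpoint and one cross.
all_bins = ["FIXED", "INCR", "WRAP", "INCR x narrow", "WRAP x narrow"]
run1 = {"FIXED": 12, "INCR": 40}
run2 = {"INCR": 7, "WRAP": 3, "INCR x narrow": 1}

merged = merge_coverage([run1, run2])
print(merged, f"-> {coverage_percent(merged, all_bins):.0f}% covered")
```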

Well, I could go on and on… as I haven’t even mentioned the challenges associated with low-power design, clock domain crossing (CDC), high-level synthesis (HLS), and formal verification. Aldec has been around for 30 years, and we have seen designs evolve from a few thousand gates to millions of gates… and today’s multimillion-gate SoCs will eventually become building blocks (or IPs) for future SoC designs. These are certainly exciting times for the EDA industry!




Increase Your Chip Reliability with iROC Tech
by Pawan Fangaria on 06-12-2013 at 9:00 pm

As we have moved to extremely small process nodes with very high chip density, the cost of mask preparation has become exorbitantly high. It has become essential to know failure rates and mitigate them at design time, before chip fabrication, and also to ensure chip reliability over time, since the chip is constantly exposed to cosmic rays and the natural radiation environment. Chip failure due to soft errors can prove costly in systems where human safety is of utmost importance, such as medical, automotive, and aerospace.

Although I had read about iROC’s tools (TFIT, SOCFIT) for detecting soft errors and methods for mitigating them, last week I was fortunate to come across a webinar, moderated by Paul McLellan from SemiWiki and presented by Adrian Evans, Principal Engineer at iROC.

This was a nice opportunity to learn all about soft errors: their causes, their impact on chips and systems, prediction, and cure. So, what is the source of soft errors? As evident from the following picture, they are caused by neutron particles present in nature and by alpha particles emitted from impurities in the packaging material.

These alpha particles can upset the charge at particular points in the circuit and cause a temporary logic failure, as shown in the figure below.

How is the Soft Error Rate (SER) expressed? In terms of Failures in Time (FIT), i.e. 1 failure in 10⁹ hours. Considering tens of chips in a system and hundreds of thousands of systems in the field, the number of soft errors in a week or month can be significant. Trends show that at the SoC level, at smaller nodes, the SER increases many-fold as the transistor count multiplies, with multi-cell upsets.
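A quick back-of-the-envelope calculation shows why those fleet-level numbers matter. The per-chip FIT value below is assumed purely for illustration, not a figure quoted in the webinar.

```python
# Back-of-the-envelope soft-error arithmetic. 1 FIT = 1 failure per 10^9
# device-hours. All numbers below are assumptions for illustration.

fit_per_chip    = 500          # assumed SER of a single chip, in FIT
chips_per_sys   = 20           # "tens of chips in a system"
systems_fielded = 100_000      # "hundreds of thousands of systems"
hours_per_month = 24 * 30

failures_per_hour = fit_per_chip * chips_per_sys * systems_fielded / 1e9
print(f"~{failures_per_hour * hours_per_month:.0f} soft errors per month across the fleet")
```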

So, how do we avoid soft errors? What’s the cure? Many times there are masking effects that block soft errors. Below are examples of low-level masking.

There is also high-level functional masking (an inverted pixel or corrupted frame in video applications, a corrupted packet in networking applications, etc.).

As flip-flops and memories are more prone to soft errors, there are specific techniques to address them. For memories, standard approaches such as Error Correcting Codes (ECC) and other techniques are used. For flip-flops, hardened flip-flop designs are used.
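For a feel of how memory ECC works, here is a minimal Hamming(7,4) single-error-correcting code in Python. Production memories use wider SECDED or more elaborate codes; this toy version just shows how a single upset bit can be located and flipped back.

```python
# Minimal Hamming(7,4) single-error-correcting code, to illustrate the
# kind of ECC commonly used to protect memories (simplified sketch).

def encode(d):
    """d is a list of 4 data bits [d1, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]    # positions 1..7

def correct(c):
    """Locate and flip a single upset bit, then return the data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s3 * 4 + s2 * 2 + s1             # 0 means no error detected
    if pos:
        c[pos - 1] ^= 1                    # flip the corrupted bit
    return [c[2], c[4], c[5], c[6]]        # d1, d2, d3, d4

word = [1, 0, 1, 1]
stored = encode(word)
stored[5] ^= 1                             # simulate a particle strike
assert correct(stored) == word
print("single-bit upset corrected:", word)
```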

How are soft errors detected? That’s an expensive proposition: the chip has to be tested for its sensitivity to radiation. iROC provides radiation testing services and can test sensitivity to alpha particles in-house using a radioactive substance.

Soft errors can occur at cell, circuit, or system levels under any user environment. iROC has special tools to detect them at design time before manufacture.

TFIT is a transistor-level soft error simulation tool suited to library designers, and SOCFIT is a circuit-level tool for chip architects. TFIT applies foundry-supplied SER response models to the circuit design (SPICE netlist, GDSII), runs circuit simulation, and generates a FIT rate for each cell so the designer can find the weakest transistors. Similarly, a weak chip in the system can be identified and improved or replaced. Since SER data can vary with voltage, logic state, and standard cell, it is desirable to get SER data from the foundry with error bars.
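As a rough sketch of what can be done with per-cell FIT output from a tool like TFIT, the snippet below rolls hypothetical per-cell rates up to a chip-level estimate and ranks the biggest contributors. The cell names and numbers are invented, not real TFIT results.

```python
# Sketch of rolling up per-cell FIT results to a chip-level estimate and
# ranking the weakest cells -- hypothetical numbers, not real TFIT output.

# Assumed per-cell FIT rate and instance count for a small design.
cells = {
    "DFF_X1":       {"fit_per_cell": 1.2e-4, "instances": 250_000},
    "DFF_HARDENED": {"fit_per_cell": 1.0e-5, "instances": 10_000},
    "SRAM_BIT":     {"fit_per_cell": 5.0e-5, "instances": 8_000_000},
}

# Chip-level contribution of each cell type, worst first.
contrib = sorted(
    ((name, c["fit_per_cell"] * c["instances"]) for name, c in cells.items()),
    key=lambda x: x[1], reverse=True)

for name, fit in contrib:
    print(f"{name:13s} {fit:8.1f} FIT")
print(f"{'chip total':13s} {sum(f for _, f in contrib):8.1f} FIT")
```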

In the overall ecosystem, iROC is well connected with foundries (GF, TSMC), fabless design companies (for SER analysis and simulation during design), radiation test facilities, system houses, and IP suppliers (for radiation testing services).

It was an interesting session that kept me engaged throughout, wondering what would come next. The detailed presentation can be heard at the recorded link here.



So, where are all the EMBEDDED guys?
by Don Dingee on 06-12-2013 at 8:15 pm

Roaming the aisles at #50DAC the past week left me with one unmistakable impression: there were two shows going on at the same time. Oh, we were all packed into one space together at the Austin Convention Center and neighboring hotels. But we weren’t quite all speaking the same language – yet.

Continue reading “So, where are all the EMBEDDED guys?”



A Call to ARMs!
by Daniel Nenni on 06-12-2013 at 7:00 pm

It sure has been an interesting experience watching Intel enter the semiconductor foundry business! While I credit Intel for increasing the exposure of the fabless semiconductor ecosystem to the financial markets, the attention from the Intel-biased press is a bit overwhelming. The TSMC and ARM bashing is reaching new levels, so let’s take a look back at how this all started.

The first quote I remember on the subject was from Nvidia CEO Jen-Hsun Huang, suggesting Intel go into the foundry business: “Why not be a foundry for all the mobile companies? There’s no shame in that.” This was back when 40nm silicon was in short supply and Nvidia was having yield ramping problems, which he first blamed on TSMC but later shared responsibility for. Unfortunately, Nvidia, Qualcomm, and the rest of the mobile companies will be competing with Intel Atom-based SoCs, so I do not see them buying wafers from Intel anytime soon. Apple learned the hard way what happens when you buy wafers from a competitor (Samsung), right?

The next memorable quote was from Intel Fellow Mark Bohr: “Being an integrated device manufacturer really helps us solve the problems dealing with devices this small and complex,” Bohr said. “The foundries and fabless companies won’t be able to follow where Intel is going.”

Mark went on to predict that “TSMC’s recent announcement it will serve just one flavor of 20 nm process technology is an admission of failure. The Taiwan fab giant apparently cannot make at its next major node the kind of 3-D transistors needed to mitigate leakage current,” Bohr said.

Of course we now know that statement is false but this type of rhetoric continues today. Not so much by Intel themselves, as I think they have learned their lesson, but by the Intel Fanboy Press. On the positive side it has further bonded the fabless semiconductor ecosystem and has boosted our drive to innovate, collaborate, and in the words of the Chairman, Dr. Morris Chang, create the “Grand Alliance” that we see today.

One example of the Intel-biased press is Seeking Alpha’s Ashraf Eassa, who not only bashes ARM, but also bashes TSMC and the fabless semiconductor ecosystem as a whole. Ashraf is a fame-and-fortune-seeking college student who gets paid $.01 per click for his articles, which certainly explains his tabloid-worthy titles:

  • Intel Breaks ARM, Sends Shares Down 20%
  • ARM Holdings Starts To Sound Desperate
  • Sell ARM: Intel Mobile Progress Is Major Threat To Lofty Valuation
  • Intel Puts The Final Nail In The ARM Cost Myth Coffin
  • ARM Holdings: The Real Reason That It Ends Badly

  • Intel Beats ARM At Its Own Game With Avoton
  • Sell ARM As CEO’s Departure Signals The Top
  • ARM Proven Wrong, Intel Vindicated
  • 61% Downside Ahead For ARM Holdings As Lofty Expectations Meet Harsh Reality

Just to name a few… I didn’t include links because they really are not worth reading if you are a semiconductor professional. This is the start of a series of blogs I will do on authors who think Googling semiconductor topics takes the place of first-hand experience.

Disclaimer: I will share my experiences, opinions, and observations based on my day job as an internationally recognized industry expert and my night job as SemiWiki blogger, administrator, and observer of analytics. I’m completely out of the stock market and have no desire to reenter it, so don’t expect opinions that selfishly support my stock positions. Please consider this series of blogs a starting point for further discussion and clarification, and not the final word on anything.




Hardware Assisted Verification
by Paul McLellan on 06-10-2013 at 9:00 pm

On the Tuesday of DAC I moderated a panel session on Hardware Assisted Verification in 10 Years: More Need, More Speed. Although this topic obviously could include FPGA-based prototyping, in fact we spent pretty much the whole time talking about emulation. Gary Smith, on Sunday night, actually set things up by pointing out that, in his view, emulation is the heart of how EDA is going to take over more and more of embedded software development. And Wally, at DVcon (I think), pointed out that emulation is now the cheapest verification on a cycles-per-dollar basis compared to plain old simulation. Of course, with Synopsys’s acquisition of Eve, all 3 major EDA vendors now have an emulation solution. The panel was organized by Frank Schirrmeister of Cadence.

The panelists were:

  • Dave Bural of Texas Instruments in Dallas
  • Alex Starr of AMD in Boston
  • Mehran Ramezani of Broadcom, standing in at the last minute for Vahid Ordoubadian, who had an urgent meeting and couldn’t make it

Since I was moderating the panel and keeping things moving, I couldn’t take detailed notes. So this is more of a stream of what I remember than any attempt to be comprehensive.

The general opinion was that emulation capacity was keeping up with what they needed. While everyone would always love more, they were managing with what they had. Since the underlying technology of emulation is FPGAs, and since FPGAs improve on the same Moore’s Law trajectory as the designs being emulated, this will probably continue. However, all the panelists felt that performance was not improving as fast as they needed. Unfortunately, nobody was foreseeing any miracle breakthrough, just gradual improvement.

Emulation has clearly gone mainstream. Not that long ago, emulation was very expensive and very hard to use, taking weeks if not months to bring up a design satisfactorily. As a result it was used almost entirely by the most advanced designs with the biggest design budgets, on a dedicated project basis. Now, all the companies represented on the panel had emulation farms which were shared among many designs. Although emulation can be used to simply “run vectors”, the primary use was to enable early software and firmware development.

One challenge is that chip designers don’t really understand embedded software development, and software developers don’t understand chip design. Clearly they use different tools in their day-to-day environments. But emulation based software development clearly straddles this divide and there is a need for engineers with a better understanding of both sides of the coin.

I asked how they and their teams went about justifying the investment in emulation. Although it is a lot cheaper than it used to be, it is still not cheap. Generally, fear of a bug leaking out into the field was a big driver, especially for the initial investment in emulation when it was being added to the standard design flow. Later on, once established, incremental investment was easier to justify since it scaled with the size and number of designs just like any other EDA tool.



Custom Physical IC Design update from Cadence
by Daniel Payne on 06-10-2013 at 8:05 pm

Custom IC design and layout is becoming more difficult at 20nm and smaller nodes, so EDA tools have to get smarter and work harder for us in order to maintain productivity, with the fewest iterations to reach our specs. Dave Styles and John Stabenow of Cadence met with me last Monday in Austin in the DAC exhibit area.


John Stabenow
Continue reading “Custom Physical IC Design update from Cadence”