
A Methodology for Assertion Reuse in SoC Designs
by Daniel Payne on 02-21-2014 at 4:24 pm

As your SoC design can contain hundreds of IP blocks, how do you verify that all of the IP blocks will still work together correctly once assembled? Well, you could run lots of functional verification at the full-chip level and hope for the best in terms of code coverage and expected behavior. You could buy an expensive emulator to accelerate your verification process. You could try an Assertion-Based Verification (ABV) methodology and learn to manually write assertions. Or, you could consider using a methodology for assertion reuse in SoC designs.

Two years ago I started hearing about Assertion Synthesis, the process by which assertions can be created for an IP block automatically instead of by hand-coding. It sounded interesting, and it turned out to be valuable enough that Atrenta bought a smaller company called NextOp to acquire a tool called BugScope. Paul McLellan blogged about that acquisition here.

Ravindra Aneja talked with me this morning by WebEx and brought up this notion dubbed MARS – Methodology for Assertion Reuse in SoC. Design reuse is well-known and a widely accepted method to save time and improve quality, so why not assertion reuse? Here’s the promise of MARS then:

  • It flags any IP that is incorrectly configured at the SoC level
  • Any IP feature failure is flagged at the SoC level
  • Any coverage targets missed by IP verification are pinpointed

The whole idea is to provide a significant reduction in SoC Integration and debug time, while having a minimal impact on IP and SoC simulation run times. Plus, the MARS approach works with both emulation and FPGA-based verification environments. Here’s the proposed methodology flow:

A design or verification engineer would run BugScope on each IP block using functional tests and then look at the progressive test coverage report to understand:

  • When to start generating assertions
  • Are my IP tests mature enough yet?

These automatically generated assertions are then ready to be re-used when the SoC has been assembled. Any violations of assertions would then be reported during SoC verification so that your team can fix SoC configuration bugs, fix specific IP design bugs, or fill in any IP coverage holes. Here’s an example of a progressive test coverage report:

This MARS approach is different from plain ABV because BugScope uses the input tests to formulate the assertions automatically. You can use assertion synthesis if you have ready access to the RTL code and tests for your internal IP, new IP, modified IP or even some 3rd party IP. Plan to target any control-intensive logic for generating properties: arbitration, data flow control, interrupt controllers, schedulers, etc.
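To make the idea concrete, here is a minimal sketch of trace-based property mining, the general technique behind assertion synthesis. This is an illustrative toy, not Atrenta's actual BugScope algorithm; the signal names and candidate templates are invented for the example. It proposes simple candidate properties and keeps only those that no passing test ever violates:

```python
# Toy trace-based property mining -- the general idea behind assertion
# synthesis, NOT Atrenta's BugScope algorithm. Candidate properties are
# proposed from templates, then kept only if no passing test violates them.

def mine_invariants(traces):
    """traces: list of dicts, one per simulated cycle, signal -> value."""
    signals = sorted(traces[0])
    candidates = {}
    # Template 1: a signal never changes value.
    for s in signals:
        candidates[f"{s} is constant"] = (
            lambda cyc, s=s, v=traces[0][s]: cyc[s] == v)
    # Template 2: ordering relation between two signals.
    for a in signals:
        for b in signals:
            if a != b:
                candidates[f"{a} <= {b}"] = (
                    lambda cyc, a=a, b=b: cyc[a] <= cyc[b])
    # Keep only the candidates that hold on every observed cycle.
    return [prop for prop, holds in candidates.items()
            if all(holds(cyc) for cyc in traces)]

# Three cycles of an imaginary request/grant handshake.
trace = [{"req": 1, "gnt": 1}, {"req": 0, "gnt": 0}, {"req": 1, "gnt": 1}]
print(mine_invariants(trace))   # ['gnt <= req', 'req <= gnt']
```

The mined properties are only as good as the tests behind them, which is exactly why the progressive coverage report above matters: immature tests yield weak or misleading properties.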

Case Studies

The good news is that customers are using the MARS approach already. In one case a customer had a critical IP block with about 1,400 tests defined. Using BugScope, about 145 assertions were automatically generated, and when these assertions were re-used at the sub-system level, two assertions fired. The first uncovered an IP configuration error, and the second pointed to an important coverage hole that should have been tested at the IP level.

In the second case a customer had some immature IP with a low verification confidence level, and BugScope generated 1,203 properties for re-use at the SoC level. With just 34 tests, 105 properties fired, or about 10%. By looking through these properties they found 48 unique, high-priority IP coverage holes. The verification team now knew that their bus functional model needed to consider more realistic scenarios, that they needed to inject errors on multiple lanes simultaneously, and that clock skew was not tested well enough at the IP level.

Verification Management

To assist in the area of verification management there are three helpful feedback metrics:

  • Verification and Testbench Grading – which tests are most effective
  • Verification and Testbench Distribution – how exhaustive is each test, which coverage point needs more tests
  • Verification and Testbench Balance – are there enough hits on my most complex modules

Summary

It is possible to accelerate SoC verification by finding IP configuration issues, IP design bugs and IP coverage holes. The Methodology for Assertion Reuse in SoC designs (MARS) is in use today. The BugScope tool from Atrenta can also cut time with emulation and FPGA-based debug approaches.

The evaluation process typically takes from one week to a month, depending on your team size and schedule.

Marvell presented a paper at DAC 2013 about their experience with BugScope, and another customer has submitted a paper to DAC 2014.



2014 Semiconductor Growth Could be 2X 2013 Rate
by Bill Jewell on 02-21-2014 at 10:00 am

The fourth quarter 2013 semiconductor market declined 0.8% from the third quarter, according to World Semiconductor Trade Statistics (WSTS). Full year 2013 growth was 4.8%. Our most recent 2013 forecast at Semiconductor Intelligence was 6% in November 2013, based on expectations of positive growth in 4Q 2013. Who had the most accurate forecast for 2013 semiconductor growth? We compared publicly available forecasts for 2013 released in the few months prior to the January 2013 WSTS data release. The most accurate was IDC at 4.9%. Other close forecasts were WSTS and Gartner at 4.5% and Mike Cowan at 5.5%.

Key semiconductor companies reported 4Q 2013 revenue change versus 3Q 2013 ranging from +42% at Micron Technology (driven by revenues from the Elpida acquisition) to -18% at SK Hynix (due to a fab fire). Seven of the fourteen companies in the table below showed revenue growth in 4Q 2013 and seven had declines. Revenue guidance for 1Q 2014 indicates an overall decline in revenue from 4Q 2013. Of the twelve companies which provided guidance, nine expect declines in revenue ranging from -2% from Micron Technology (estimated based on bit growth and price guidance) to -16% from AMD. Toshiba’s semiconductor group, Infineon and Freescale all expect growth in 1Q 2014. The weighted average guidance for 1Q 2014 is a decline of about 5%.
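For readers who want the mechanics, a weighted average of guidance simply weights each company's guidance by its revenue. A toy sketch, with made-up revenue and guidance figures rather than the actual table data:

```python
# Weighted-average guidance: each company's 1Q14 guidance weighted by its
# 4Q13 revenue. The revenue and guidance values below are invented for
# the illustration, not the actual table data.
companies = {
    "A": (4.0, -2.0),    # ($B revenue, % guidance)
    "B": (3.0, -16.0),
    "C": (2.0, +3.0),
}
total_rev = sum(rev for rev, _ in companies.values())
weighted = sum(rev * g for rev, g in companies.values()) / total_rev
print(f"weighted average guidance: {weighted:+.1f}%")   # -5.6% here
```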

What is the outlook for year 2014 semiconductor market growth? Forecasts in the last few months range from about 4% (WSTS and Mike Cowan) to 20% (Objective Analysis). Half of the forecasts are in the 7% to 9% range. Our latest forecast at Semiconductor Intelligence is 10%. This is down from our November forecast of 15% due to a negative 4Q 2013 and an expected 1Q 2014 decline of about 5%.

Our 10% growth forecast is largely driven by an expectation of accelerating World GDP growth. The International Monetary Fund (IMF) January outlook calls for World GDP growth of 3.7% in 2014, up from 3.0% in 2013. Acceleration is driven by developed economies, with the U.S. expected to accelerate to 2.8% in 2014 versus 1.9% in 2013. The Euro Area should recover from a 0.4% decline in 2013 to 1.0% growth in 2014. The strongest growth continues to be in emerging and developing economies, growing 5.1% in 2014, up from 4.7% in 2013. Although China is forecast to decelerate slightly from 7.7% to 7.5%, most other developing economies are projected to show accelerating GDP growth in 2014.

Semiconductor Intelligence’s model of the semiconductor market based on GDP for 2014 predicts 11% growth. We are reducing this to 10% based on a weak 1Q 2014. The upside growth in 2014 could be as high as 14%. 2015 GDP growth is forecast by the IMF at 3.9%, a slight acceleration from 2014. Based on this outlook, we expect double-digit growth for the semiconductor market to continue into 2015.

More Articles by Bill Jewell …..



6 reasons Synopsys covets C/C++ static analysis
by Don Dingee on 02-20-2014 at 5:00 pm

By now, you’ve probably seen the news on Synopsys acquiring Coverity, and a few thoughts from our own Paul McLellan and Daniel Payne in commentary, who I respect deeply – and I’m guessing there are many like them out there in the EDA community scratching their heads a little or a lot at this. I’m not from corporate, but I am here to help.

Coverity and other purveyors of C/C++ static code analysis tools come from my happy place – embedded – and I’m a relative noob in EDA circles who can’t carry laundry for my colleagues here. However, I’ve been carefully observing the disciplines of EDA and embedded coming together for a few years now, way before coming to SemiWiki; it is why I was excited to be invited here and participate in the dialog, and maybe help shape the future.

From my perspective: this is more than just a sidestep by Synopsys into a new space to diversify beyond EDA roots, and I don’t see this solely as a competitive response to Mentor Embedded efforts, which are broader right now with a range of embedded software tools and operating systems. No, this is certainly a strategic maneuver, not just a tangential probe.

First, a bit of explanation: what is static code analysis? EDA types might recognize its foundation by another name: design rules checking, in which a tool looks at source files and relationships against a set of rules to find defects in code. Many folks are familiar with lint, the most basic of tools for checking C/C++ code. Coverity and other embedded tool sets go much farther than that, with heuristics and algorithms to not only shake out defects in C/C++ code, but reduce the annoying “false positives”. These tools provide control over the rule sets in play, and what gets checked and reported on, so actual errors are highlighted and warnings and other benign differences of opinion based on requirements and experience can be categorized or filtered out entirely.
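To give a flavor of what a rules checker does, here is a deliberately tiny sketch in Python using the standard ast module – nowhere near the heuristics of a commercial C/C++ engine like Coverity's, but the same basic idea of applying rules to parsed source. It implements two classic checks, one for error-swallowing exception handlers and one for mutable default arguments:

```python
# Toy static analysis using Python's standard ast module -- the same idea
# as lint (rules applied to parsed source), minus the heuristics.
import ast

def check(source, filename="<demo>"):
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Rule 1: a bare "except:" silently swallows every error.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append((filename, node.lineno, "bare except swallows errors"))
        # Rule 2: mutable default argument, a classic latent defect.
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append((filename, node.lineno, "mutable default argument"))
    return findings

buggy = """
def cache(key, store={}):
    try:
        return store[key]
    except:
        pass
"""
for finding in check(buggy):
    print(finding)   # reports line numbers for both defect patterns
```

A real tool layers hundreds of such rules, interprocedural value tracking, and false-positive suppression on top of this skeleton.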

Now, I’ll go back to the comments on “not firmly in EDA space” and “zero synergy” posed by my esteemed counterparts. I agree with them, this is not traditional EDA territory; allow me to make the case for the JJ Abrams-esque alternate strategy timeline. Aart de Geus and Anthony Bettencourt both focused on quality and security as their message, but it goes deeper than that – way deeper. Here are 6 reasons to think about the vision and where Synopsys is likely headed.

Time is money. Of course, you write perfect C/C++ code. Sure you do. I know I do. (Ha ha. HAHAHA. HAHAHAHAHA .…) Okay, maybe every once in a while, it’s not perfect. More than likely, you have policies involving coding standards, peer reviews, and objective testing to expose errors. Those all likely involve one thing: a human READING code, line by thousands or millions of lines. Eyeballs. Caffeine. LASIK. Late nights. Wasted days. Time that could be spent better. Static analysis tools not only read through code, but they catalog and check interprocedural relationships and other constructs for less obvious errors, pointing reviewers to the problem areas quickly.

All code is critical. We often relate code quality with safety critical applications. Certainly, the first advocates of embedded static code analysis have come from defense, industrial, medical, automotive, and other areas with stricter compliance and liability issues, and in some cases defined industry-wide coding standards. However, any code defect can make or break any application, and as the LOC count rises, the risk goes way up. This gets magnified in a typical SoC today, with different types of cores all running together. A high profile bug can torpedo any product quickly, something no developer can afford.

Code is the product. Microcontrollers, SoCs, and microprocessors do nothing but sit there and burn watts without software running on them – silicon is just the enabler, not the product. Synopsys may not be huge in the operating systems and tools business yet, but they are big in the embedded business; the popularity of the ARC processor core and DesignWare IP means there has to be verified software, somewhere. Synopsys has to create and deliver quality C/C++ code making this stuff work, and provide confidence it has been checked.

Co-verification and the golden DUT. Most folks think of EDA testing as RTL simulation, pattern generation and scan chains, but in today’s world, that is just the beginning. Real SoCs are co-verified, with the actual software running on a simulator or emulator. Think Apple A7 running iOS 7. In a complex part, without actual code running, errors can sneak through. Here’s a question: if you have new IP with both new hardware and new software, which is the problem? That golden software may not be as golden as you think, and many users report running static code analysis tools spotted actual problems they missed in software review and test.

IP is coming from everywhere. This is not a make-versus-buy world anymore; it’s build-borrow-buy. Here’s my favorite chart I stole from Semico Research via Synopsys, with the message that a complex SoC is approaching 100 or more IP blocks with both hardware and software, and reuse is key to productivity. If you write all your own IP, congratulations, but more than likely you get some IP from either open source communities or commercial suppliers. Guess what? The software IP from outside sources very likely doesn’t conform to your coding standards. Is it broke? Will it break, or will it break your IP, at integration? Would you like to read all that code line-by-line, or would you rather have it scanned – using your internal rule set, filters, and customized reports – to pinpoint where potential problems may lie?


In the end, there can only be one. “Yeah, but we’re talking about C/C++ here; we don’t design chips with C/C++.” (There are RTL static code analysis tools out there; same idea, but a story for another time.) True, but you likely design chips for C/C++, and again, your chips don’t do much without software. While the end game may be a generation away, at some point silicon will be optimized for the code it runs. If we believe in the ultimate vision for high level synthesis, design realizations will come directly from C/C++ application code – in order to do that with confidence, the code has to be not only defect-free, but well-formed and very well-understood. In the interim, it would certainly be interesting for Synopsys to adapt the Coverity tools for SystemC, not a gigantic stretch.

Like I said, this may be the JJ Abrams version of the story – but I think it will play out as the right one in an evolving EDA industry, and I strongly suspect Synopsys has already seen the benefits of Coverity tools internally as well as externally. A big congrats to the Coverity team, and to Synopsys for being brave enough to step out of the box further into embedded space.

More Articles by Don Dingee…..




Mounir Hahad Rejoins Silvaco
by admin on 02-20-2014 at 4:16 pm

Mounir Hahad just joined Silvaco as VP engineering. And when I say joined I really mean rejoined. I had a call with him to find out how that happened.

Mounir studied in France for a PhD in computer science on numerical computing. In 1995 the then-director of TCAD at Silvaco called him up having read some of his published papers. Silvaco had a problem in that simulation times were getting very long, especially in TCAD but also in SmartSPICE. Maybe Mounir could come and help them…

So a few months later Mounir joined Silvaco as a development engineer working on the parallelization of Atlas and SmartSPICE. It took a few years and he built up a team of experts in the space.

Ivan Pesic, the founder and then-CEO (who passed away in 2012), had an idea for a different licensing model for EDA/TCAD that made it easier to charge for peak use rather than just giving good customers a lot of extra licenses for free, which was what typically would happen. The idea was not just to do this for Silvaco but for other EDA and TCAD companies. Mounir went to be VP engineering at the new company EECad, which did all the licensing and split revenue with its EDA partners. But Ivan didn't seem to want to pursue it as aggressively as Mounir did, so Mounir decided to move on.

EECad technology was not specific to EDA really but to any industry licensing software. Mounir thought that it would be attractive to do a shrink-wrapped appliance-based product for the whole market including email and security. So he joined IronPort and did, indeed, learn lots about security. IronPort was acquired by Cisco and Mounir had various management roles there but he also realized that in a large company, he’d have a limited opportunity to influence strategy in a big way. So when Silvaco heard he was loose they brought him back on board as VP Engineering.

Mounir believes that Silvaco really has a good opportunity to make it big. He felt that Ivan had wanted to keep the company reasonably small so that he could control it, and clearly Ivan didn't believe in doing any serious marketing.

Mounir’s focus going forward is to take Silvaco’s engineering to the next level for operational excellence: enhance product quality, close up gaps between the products and improve release predictability and roadmap adherence.

The 2014 baseline release comes out in a couple of weeks and is more focused on end-to-end solutions. Lots of enhancements that customers have been wanting. Going forward Mounir thinks that Silvaco will need to become more open to industry standards and, as a result, partner more than it has in the past.


More articles by Paul McLellan…

 


Before SPICE Circuit Simulation Comes TCAD Tools
by Daniel Payne on 02-20-2014 at 3:19 pm

I’ve run SPICE circuit simulators since the 1970’s and they use transistor models where the device parameters are provided by the foundry. These transistor and interconnect parameters come from an engineer at the foundry who has characterized silicon with actual measurements or by running a TCAD (Technology CAD) tool that is physics-based.

In the 1970’s at Intel the process engineers would come up with an idea to improve the speed or power of our DRAM technology, run a lot through the fab, and about 7 days later they would start to measure the results of their idea to validate it. Today, there’s not enough time to physically run through new process ideas as silicon and then measure the results, instead these process engineers are using TCAD tools to simulate the behavior of the transistors before fabrication and then predict what the SPICE parameters will be.

A long-time TCAD tool provider is Silvaco, and their engineers recently wrote about how to simulate single-crystal gallium oxide (Ga₂O₃), a new material aimed at power device applications because it has a wide bandgap. Their device simulator is called Atlas, and here are the tool inputs and outputs:

Researchers from NICT in Japan had published their work on this new oxide compound semiconductor in June 2013, so the folks at Silvaco used that experimental data to then build the device structure and doping profile with the Atlas tool:

Once you have the device structure and doping profile, some assumptions are needed about the channel layer and dopant concentration levels.

TCAD Simulation Results

The Japanese researchers had published their Drain Current versus Voltage curves, so at Silvaco they compared Atlas results (green) versus measurements (red) and saw excellent correlation:


Simulated ID-VD curves compared with experimental data


Simulated ID-VG curves compared with experimental data
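As an aside, the general shape of ID-VD curves can be reproduced with even the textbook long-channel square-law model. Here is a minimal sketch of "generate curves, then compare against measurement"; the model and parameter values are illustrative only and have nothing to do with Atlas's physics-based engine or the published Ga₂O₃ data:

```python
# Textbook long-channel square-law MOSFET model -- an illustrative
# stand-in for "generate ID-VD curves and compare against measurement".
# Parameter values are invented, not the published Ga2O3 device data.
import numpy as np

def drain_current(vgs, vds, vth=1.0, k=0.5e-3):
    """Drain current in amps; triode below Vov, saturated above."""
    vov = max(vgs - vth, 0.0)
    if vds < vov:
        return k * (vov * vds - 0.5 * vds**2)   # triode region
    return 0.5 * k * vov**2                     # saturation

for vgs in (2.0, 4.0, 6.0):
    ids = [drain_current(vgs, vds) for vds in np.linspace(0.0, 10.0, 101)]
    print(f"Vgs = {vgs:.0f} V -> Id(sat) = {max(ids) * 1e3:.2f} mA")
```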

Another Silvaco tool, Athena, was used for process simulation, reproducing the multiple Si implantation profile using a BCA (binary collision approximation) amorphous-material implant model. Once again, the simulated results were compared with the reported experimental results to confirm the accuracy:


Simulated Si depth profiles compared with experiment

Summary
For new types of devices like Ga₂O₃ MOSFETs, you can run a TCAD tool like Atlas to simulate, experiment and even optimize the DC and transfer characteristics prior to actual fabrication, or after initial fabrication experiments. This emerging oxide compound semiconductor may soon be in production for power device applications because its properties are superior to those of GaN and 4H-SiC.

Further Reading
Silvaco publishes a quarterly newsletter all about TCAD advancements which you can find here.



ISSCC: Analog-Digital Converter in FD-SOI
by Paul McLellan on 02-20-2014 at 11:50 am

The International Solid-State Circuits Conference (ISSCC) was last week in San Francisco. Stéphane Le Tual, Pratap Narayan Singh, Christophe Curis and Pierre Dautriche, all from STMicroelectronics, presented a paper on A 20GHz-BW 6b 10GS/s 32mW Time-Interleaved SAR ADC with Master T&H in 28nm UTBB FDSOI Technology.

Modern wireline communication devices, whether over copper or fiber, require a high-speed analog-digital converter (ADC) in their receive path to do the digital equalization, or to recover the complex-modulated information. A 6b 10GS/s ADC able to acquire input signal frequencies up to 20GHz and showing 5.3 ENOB under Nyquist conditions was presented at ISSCC. It is based on a Master Track & Hold (T&H) followed by a time-interleaved synchronous SAR ADC, thus avoiding the need for any kind of skew or bandwidth calibration. Ultra Thin Body and BOX Fully Depleted SOI (UTBB FDSOI) 28nm CMOS technology is used for its fast switching and regenerating capability. The core ADC consumes 32mW from a 1V power supply and occupies 0.009 mm² of area. The Figure of Merit (FoM) is 81fJ/conversion-step.
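As a sanity check, that FoM follows directly from the other numbers in the abstract using the standard Walden formula:

FoM = P / (2^ENOB × f_s) = 32 mW / (2^5.3 × 10 GS/s) ≈ 81 fJ/conversion-step

since 2^5.3 ≈ 39.4, giving 0.032 W / 3.94×10^11 conversions/s ≈ 8.1×10^-14 J per conversion step.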


Let’s focus on the implementation which is in ST’s 28nm FD-SOI process. Just as a reminder, FD-SOI is an alternative to FinFET which has some big advantages in being architecturally very similar to a “normal” planar process. FinFET has quantized transistor sizes which makes analog design challenging. ST have picked this transistor architecture and a couple of other manufacturers are in the FD-SOI consortium, most notably GlobalFoundries and UMC (but not TSMC which is committed completely to FinFET). This is a very high performance ADC and thus an example of complex high-precision analog design in FD-SOI.


Previously at ISSCC and other conferences, earlier designs have been presented in processes ranging from 65nm CMOS to 32nm SOI. Looking at the table above, you can see that this design has a similar sampling rate and resolution, yet it is the smallest implementation (even if not an apples-to-apples comparison due to technology scaling) and has the best power consumption. The big advantage over the earlier results is that it needs no gain/skew calibration to reach such state-of-the-art results, whereas for all the others calibration is mandatory.

To summarize, the block uses the efficiency of the purely passive "sample and redistribute" concept for signals up to 20GHz. Together with the low-power capability of the 28nm CMOS UTBB FDSOI technology, ST could reach 10GS/s operation while keeping the power consumption at 32mW under a 1V supply with a block that is just 0.009 mm².

The ISSCC website is here. If you have access to the proceedings then it is paper 22.3.


More articles by Paul McLellan…


$1 Billion IP & VIP sales by 2017?
by Eric Esteve on 02-20-2014 at 9:58 am

We are not talking about ARM Ltd., as that IP vendor already passed $1B in sales in 2013. In fact, we are not talking about a single IP vendor at all: this $1B mark will be passed by two IP market segments, Interface IP and Verification IP. These two segments are very close together. When an IP is developed to support a specific interface protocol standard, a related Verification IP (VIP) is needed at the same time. This VIP is first used by the design team in charge of the related IP development, to verify the IP's compliance with the specific protocol. And, by the way, we understand why the VIP has to be designed by a different team: not necessarily from a different company, but the architects should be two different people. This strategy is the only way to avoid the equivalent of the "common mode error" in aeronautics.

If you list every protocol standard, and each new release of that standard, you can identify the related VIP in a vendor's portfolio:

  • USB (USB 2.0, USB 3.0 and 3.1, HSIC, SSIC)
  • PCI express (PCIe gen-1, gen-2, gen-3 and yet to be released gen-4, M-PCIe)
  • MIPI (D-PHY, CSI-2, DSI, M-PHY, CSI-3, DSI-2, LLI, SlimBus, UniPro, etc.)

Then, adding the Ethernet, SATA, SAS, HDMI, DisplayPort, I2C, JTAG and NVM Express protocols, you have covered most of the potential VIP products. It's interesting to notice that a Design IP and the related VIP act like hand and glove: they are complementary. Thus, if you have to verify a PCI Express Root Port IP, the VIP will act as an Endpoint agent, and conversely.

IPNEST is the well-known analyst covering the Interface IP market (see: Interface IP Survey), so analyzing the Verification IP market was a natural move (in the opinion of IPNEST's customers). In fact, the market dynamics for Design IP and VIP are quite different. When starting an SoC design, the project manager easily identifies the functions (IP) which may be outsourced, in order for the design team to focus on the company's differentiators and speed up the SoC release for better Time To Market (TTM). Then the make-versus-buy analysis is run, and the IP is outsourced when it makes sense.

Verification IP outsourcing follows different rules. The project manager may decide to buy VIP externally when his team is developing the related Design IP. In this case, the primary goal is to validate the IP itself, and the task is known to be CPU intensive, consuming many VIP licenses (or tokens, or seats) and leading to a high VIP cost. The project manager may again decide to outsource this VIP or develop it internally. Interestingly for the VIP vendor, even if the Design IP is outsourced and reputedly 100% functional, the project manager may still have to buy VIP, but with a different goal: in this case, he will need the VIP to run the complete chip verification during functional simulation. Notice that when a Design IP's function implies that most of the communication with the SoC passes through it, the related VIP becomes crucial, which translates into EDA expenses, as the team will need more VIP tokens (or seats) than for another function within the SoC (even one that is more complex from a functional viewpoint). This example helps introduce the concept of "VIP Expenses" rather than "VIP License" cost. IPNEST has decided to segment the VIP market according to the following parameters:

  • IP Internally designed, or outsourced
  • On the edge IP (first use), or re-used IP
  • Chip maker: Tier 1, or Tier 2

Thus the VIP expenses (not license or token counts) are evaluated for every protocol standard, and every segment (the A, B, C and D in the above table).
By using the "Interface IP Survey", we can extract the number of design starts by protocol standard, and since we have evaluated the IP license Average Selling Price (ASP), this covers the first segment: IP externally sourced. Then we have to estimate the design starts that include internally designed IP, to cover the second segment. In the VIP survey, most of the intelligence is in the various VIP expense figures by protocol. To confirm this evaluation, we have run interviews with chip makers representative of the Tier 1 and Tier 2 segments.
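To illustrate the bottom-up mechanics, the estimate is essentially a sum of design starts times per-design VIP expenses across protocols and segments. All numbers in the sketch below are entirely hypothetical; the real design-start counts, per-design spends and segment weights are IPNEST's data:

```python
# Hypothetical illustration of a bottom-up VIP market estimate.
# Every number below is invented for the example, not IPNEST data.

design_starts = {"USB 3.0": 120, "PCIe gen-3": 80, "MIPI M-PHY": 60}      # per year
base_vip_spend = {"USB 3.0": 90e3, "PCIe gen-3": 140e3, "MIPI M-PHY": 110e3}  # $/design

# Share of design starts in each segment, and how much that segment
# typically spends relative to the base (e.g. first-use IP needs more VIP).
segment_share = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}
segment_spend_factor = {"A": 1.5, "B": 0.6, "C": 1.2, "D": 0.8}

total = sum(
    starts * segment_share[seg] * base_vip_spend[proto] * segment_spend_factor[seg]
    for proto, starts in design_starts.items()
    for seg in segment_share
)
print(f"estimated VIP expenses: ${total / 1e6:.1f}M per year")
```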

At this point, we realize that we have only covered 50% of the VIP market! In fact, a very important segment of this market, in terms of business, is the sale of "Memory Models". This was the initial Denali business, and it is an ever-increasing segment. If you acquire the "Verification IP Survey", you will see how IPNEST has dealt with this part of the VIP market. But this was not enough: the internal buses, like AMBA or OCP, also generate a VIP need, to speed up SoC validation. These various VIP segments are also evaluated.

Maybe you don’t care about the methodology, and just want to know the bottom line result? This is a human behavior, and IPNEST had to deal with such request! If you consider that such an answer (VIP market size in 2013) is the result of a real work, running segmentation, design start evaluation, spending time to verify, as far as possible the various steps, you understand why only the happy few buying the VIP Survey will benefit from this information…

What I can tell you, free of charge, is that the combined Interface IP and VIP market segments will weigh more than $1 billion in 2017.

Eric Esteve from IPNEST

Table of Contents for "Interface IP Survey 2008-2012 – Forecast 2013-2017" available here

Table of Contents for "Verification IP Survey" available here

More Articles by Eric Esteve…..


Now you know better why IPNEST is the leader in dedicated Interface IP and VIP surveys, enjoying this long customer list:

Synopsys, (US)
Cadence, (US)
Rambus, (US)
Arasan, (US)
Denali, (US) now Cadence
Snowbush, (Canada) now Semtech
MoSys, (US)
Cast, (US)
eSilicon, (US)
True Circuits, (US)
NW Logic, (US)
Analog Bits, (US)
Open Silicon,(US)
Texas Instruments, (US)

PLDA, (France)
Evatronix,(Poland)
HDL DH, (Serbia)
STMicroelectronics (France)

Inventure, (Japan) now Synopsys
“Foundry” (Taiwan)
GUC, (Taiwan)
GDA, (India)
KSIA, (Korea)
Sony, (Japan)
SilabTech, (India)
Fabless, (Taiwan)


Synopsys Acquires Coverity
by Paul McLellan on 02-19-2014 at 5:27 pm

Synopsys announced this afternoon that they are acquiring Coverity for $375M subject to all the usual reviews.

There are a couple of other big EDA connections. Aki Fujimura, who was CTO of Cadence, is on the board. And Andreas Kuehlmann is the VP of R&D; he used to run Cadence Berkeley Laboratories before moving to the other end of the Bay Bridge. Before I moved to the Mission District in San Francisco, the building I backed onto was on Berry Street, and Coverity are based in offices just across the street. I interviewed him for DAC.com. He was the president of CEDA despite no longer really being in EDA, but as a software guy I'm interested in software development methodology. I think it must be the shortest distance I have ever had to go for an interview.


Although I’m sure Coverity sells software to groups developing software to run on large SoCs, their market is not so restricted and they serve the general software development market. The heart of their technology is a static analysis engine for software called SAVE (static analysis verification engine). Their main products do a full static analysis of large code-bases and finds quality and security defects, including full interprocedural analysis, not just one source file at a time. Another product finds holes in test and prioritizes how to fix them.

This is an interesting acquisition since it isn’t really firmly in the EDA space. Of course, Mentor has had product in the software space for a long time, but focused on embedded and software for SoCs, so not so far from their mainline business.

Or, as the press release puts it: "Software complexity and the resulting quality and security issues are dramatically increasing. Today, more than six million professional software developers across the world write more than 60 million lines of code every day, deployed to fulfill mission-critical, safety-critical and security-critical tasks. Many of those deployments are fragile or even failing, resulting in delayed or lost revenue, recalled products, loss of customer trust, and even safety issues. Since spinning out of a Stanford research project 10 years ago, Coverity has been developing revolutionary technology to find and fix defects in software code before it is released, improving software security. Bringing together the Synopsys and Coverity teams opens up opportunities to increase penetration into the semiconductor and systems space where Synopsys excels. The acquisition also enables Synopsys to enter a new, growing market geared toward enterprise IT and independent software providers that Synopsys doesn't currently address."

Synopsys press release is here.


More articles by Paul McLellan…


One SPIE session not to miss
by Beth Martin on 02-19-2014 at 4:19 pm

The time is nigh for another meeting of the practitioners of the lithographic arts, dark and otherwise, at the SPIE Advanced Lithography symposium.

I love this conference for the engagement you see, both in the sessions and in the hallways. People actually meet and talk and argue. There's always interesting gossip, exciting technologies, and spirited debate about the future of lithography. Each year, you also see more DFM topics. SPIE is like the bridge of a great ship from which you can witness the merging of two once-separate seas. Just take a peek at the program and you'll notice the significant presence of DFM, or more broadly (as one of the conferences is eloquently titled), "design-process-technology co-optimization for manufacturability."

In fact, one-third of the plenary presentations on Monday, February 24, covers the topic of the design-manufacturing-test flow, specifically, dealing with patterns throughout the design and manufacturing flow. The presenter is Joseph Sawicki, VP of the Design-Silicon division at Mentor Graphics, and the premise is that design style-based or systematic defects have become major challenges to yield ramp. The defects are driven by the difficulty in lithography at advanced nodes. Part of the solution is to be found in EDA software. He will discuss some of these EDA-based yield solutions that span design, manufacturing, and test. He refers to this set of EDA tools as a “pattern-aware” EDA flow and says it will minimize risk and enhance manufacturing.
For example, there are powerful new methods of identifying pattern failures hiding in yield loss. While diagnosis-driven yield analysis has been around for a while, the new generations of this software-based diagnosis of test failures includes integration with DFM tools and new algorithms that remove the noise, or ambiguity, from the statistical analysis. In practice, this means finding the offending defect quickly and with high confidence.

Another EDA technology Sawicki will mention is new OPC methods. Mentor engineers have a number of papers at SPIE about model-based OPC, SEM-contour-based OPC model calibration, resist top-loss modeling, and neighbor-aware fragment feedback with "matrix" OPC, among others. Sawicki will also talk about technologies that will be ready to help find and fix failure mechanisms in emerging process nodes, and tools that give designers visibility into the risks of production.
Sawicki is a dynamic speaker, and the topic is timely. Process ramp and yield ramp are under pressure at the emerging nodes, and I can verify that this trend has kicked EDA innovation into high gear. SPIE is February 23-27 at the San Jose convention center. Pre-registration ends Feb 19, so sign up now online. You can still register in person the day of the event.

More articles by Beth Martin…


Xilinx: Delivering a Generation Ahead
by Paul McLellan on 02-19-2014 at 4:15 pm

Last week was Xilinx's investor day. Xilinx believe they are now a process generation ahead. They did over $100M in 28nm designs in FY2013 (Xilinx's fiscal year ended March 2013), and over $100M in calendar Q4 2013 alone (almost all true production volume, with only about 5% prototypes), with a plan for more than $350M in fiscal year 2014 (which ends in March 2014) and twice that in fiscal year 2015. That's revenue momentum.