
Using "Apps" to Take Formal Analysis Mainstream
by Daniel Payne on 02-02-2012 at 12:47 pm


On my last graphics chip design at Intel the project manager asked me, “So, will this new chip work when silicon comes back?”

My response was, “Yes, however only the parts that we have been able to simulate.”

Today designers of semiconductor IP and SoC have more approaches than just simulation to ensure that their next design will work in silicon. Formal analysis is an increasingly popular technology included in functional verification.

DVCon 2012


I received notice of DVCon 2012 coming up in March and saw a tutorial session called Using "Apps" to Take Formal Analysis Mainstream. I wanted to learn more about the tutorial, so I contacted the organizer, Joe Hupcey III of Cadence, and talked with him by phone.


Joe Hupcey III, Cadence

Q: What is an App?
A: An app is a well-documented capability or feature that solves a difficult, discrete problem. An app has to be more efficient to use than the alternative (for example, formal can be more efficient than a simulation test bench alone), and it has to be easy enough to use that you don't need a PhD in formal analysis.

Q: Who should attend this tutorial?
A: Design and verification engineers who want to quickly and easily take advantage of the exhaustive verification power that formal and assertion-based verification have to offer; little coaching and documentation is needed to get up to speed. Formal experts can also benefit: those who want to branch out and make all of their colleagues more productive. Plus, for the apps that tie into Metric-Driven Verification flows, the contribution made by formal can be mapped into simulation terms.

Q: Does it matter if my HDL is Verilog, VHDL, SystemVerilog or SystemC?
A: All of these languages benefit from formal; both PSL and SystemVerilog Assertions are discussed and used.

Q: What are the benefits of attending this tutorial?
A: Everyone on the design and verification team gets some value out of formal tools and methodology. We’ll be showing 5 or 6 apps that are available for use today. As I noted above, the “apps” approach starts with hard problems where formal, or formal and simulation together, are more efficient than simulation alone – then structures a solution that’s laser focused on the problem. There are quite a few apps available today – so if you are a Cadence customer this tutorial will help you get the most out of the licenses you already have.

… plus we are hoping to include a bonus: a guest speaker from a worldwide semiconductor maker who will speak about the app he created for a current project. (The engineer is working with his management to get approval now.)

Our lead example app is one for SoC connectivity – we show how to validate connectivity throughout the entire SoC, adding BIST and using low-power mode controls. You could create a test bench and simulate to verify that the connectivity is correct, but you couldn't exhaustively test all combinations. The SoC Connectivity app accepts the connectivity as input in an Excel spreadsheet, turns it into assertions, and then the formal tool verifies that the assertions are true for all cases (or finds counter-examples where the design fails). This takes only hours to run, not weeks as simulation would. This is just part of the Cadence flow – assertion-driven simulation is somewhat unique to Cadence (formal results are fed into the coverage profile to help improve test metrics).
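To make the spreadsheet-to-assertion step concrete, here is a minimal sketch of the kind of translation such an app automates. The CSV column names, the label-mangling scheme and the assertion template are all illustrative assumptions, not Cadence's actual format:

```python
import csv
import io
import re

def connections_to_assertions(csv_text):
    """Turn rows of (source, dest) signal pairs into SystemVerilog assertions.

    A formal tool can then prove each assertion for all input combinations,
    instead of sampling a few of them with a simulation test bench.
    """
    assertions = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Build a legal SV label from the hierarchical signal names
        label = "chk_%s_to_%s" % (
            re.sub(r"\W+", "_", row["source"]),
            re.sub(r"\W+", "_", row["dest"]),
        )
        assertions.append(
            f"{label}: assert property (@(posedge clk) "
            f"{row['source']} == {row['dest']});"
        )
    return assertions

spec = """source,dest
u_cpu.irq_out,u_intc.irq_in[0]
u_dma.req,u_arb.req[3]
"""

for a in connections_to_assertions(spec):
    print(a)
```

Running it prints one assertion per spreadsheet row; in the real flow the formal engine would then prove each one exhaustively or return a counter-example.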

Q: Why should my boss spend the $75?
A: Because these apps can help you save design and verification time compared to running pure simulation alone. Case studies are used to show measured improvements. You can leave the tutorial, go back to work, and start using these formal approaches. The main presenters are experts in each area.

Christopher Komar – Formal Solutions Architect at Cadence Design Systems, Inc.

Dr. Yunshan Zhu – President and CEO, NextOp Software. NextOp has an assertion synthesis tool that reads the test bench and the RTL for the DUT, then creates useful assertions (rather than a ton of redundant ones). BugScope will be shown along with case studies.


Vigyan Singhal – CEO at Oski Technology. Oski makes formal apps for both design and verification engineers, and he will talk about assertion-based IP.


Source: Oski Technology

Summary
To learn more about formal analysis applied to IP and SoC design, consider attending the half-day tutorial at DVCon on March 1 in San Jose. You’ll hear from speakers at three different companies: Cadence, NextOp Software and Oski Technology.

For just $75 you receive the slides on a USB drive and they provide coffee and feed you lunch.


Design & Verification of Platform-Based, Multi-Core SoCs
by Daniel Payne on 02-02-2012 at 11:16 am

Consumer electronics is a new driver of our global semiconductor economy as we enjoy using smartphones, tablets and Ultrabooks. The challenge of designing and then verifying these electronic systems in time to meet market windows is a daunting one. Instead of starting with a blank sheet for a new product, most electronic design companies are choosing to start with a platform and then integrate ready-built IP.



Amazon Kindle Fire – Tear Down

An example of a platform-based consumer product is the Kindle Fire from Amazon. The ICs included in the design of the Kindle Fire are:

  • Samsung KLM8G2FEJA 8 GB Flash Memory
  • Hynix H9TKNNN4K 512 MB Mobile DDR2 RAM
  • Texas Instruments 603B107 Fully Integrated Power Management IC with Switch Mode Charger
  • Texas Instruments LVDS83B FlatLink 10-135 MHz Transmitter
  • Jorjin WG7310 WLAN/BT/FM Combo Module
  • Texas Instruments AIC3110 Low-Power Audio Codec With 1.3W Stereo Class-D Speaker Amplifier
  • Texas Instruments WS245 4-Bit Dual-Supply Bus Transceiver
  • Texas Instruments OMAP 4430 1 GHz processor
  • Texas Instruments WL1270B 802.11 b/g/n Wi-Fi

So, how do you create an SoC like this and what are the costs and power challenges?

DVCon


I spoke with Stephen Bailey of Mentor Graphics this week to learn about a half-day tutorial that he is part of at DVCon called Design & Verification of Platform-Based, Multi-Core SoCs. Platform-based design is when you create a new SoC from pre-defined processor subsystems (think ARM) and semiconductor IP, then add some of your own new blocks (perhaps as little as 10% of the design).


Stephen Bailey, Director of Product Marketing, Mentor Graphics DVT

Clearly SW integration is now the bottleneck, and the exploding state space makes verification difficult to automate.

We all love our mobile devices to have a battery life of at least one full business day, so we need to design with that constraint in mind.

Tools and Methodology
Here’s a methodology flow that can help address the design and verification challenges listed so far:

Specific EDA tools for each block shown above:

  • Vista for SoC architectural design and SW-development virtual prototyping
  • Certe for register/memory-map specification
  • Catapult for HLS of the new subsystem, and Calypto for sequential LEC
  • ARM’s AMBA Designer for fabric implementation
  • Questa for simulation (with Vista for SC/TLM, new-subsystem verification pre/post HLS, and sign-off verification of the SoC)
  • Veloce for sign-off verification (SoCs require far more cycles than is practical with SW simulation alone) and SW development
  • Questa/Veloce Verification IP, plus inFact with VIP to create traffic generators that verify (re-validate) performance at RTL, and Codelink for synchronized SW/HW debug in both Questa and Veloce
  • Codebench embedded software tools, which can be used with the SW virtual prototype, along with CDC and power-aware verification. Due to time constraints, these can only be mentioned as part of the complete flow.

Summary
To learn more about design and verification of platform-based, multi-core SoCs, consider attending the half-day tutorial at DVCon on March 1 in San Jose. You’ll hear from experts at three different companies.

The tutorial will cost you $75 and in return you receive the slides on a USB drive and they feed you lunch and provide coffee.


3D Standards
by Paul McLellan on 02-01-2012 at 5:06 pm

At DesignCon this week there was a panel on 3D standards organized by Si2. I also talked to Aveek Sarkar of Apache (a subsidiary of Ansys) who is one of the founding member companies of the Si2 Open3D Technical Advisory Board (TAB), along with Atrenta, Cadence, Fraunhofer Institute, Global Foundries, Intel, Invarian, Mentor, Qualcomm, R3Logic, ST and TI.

The 3D activities at Si2 are focused on creating open standards so that design flows and models can all inter-operate. In the panel session Riko Radojcic of Qualcomm made the good point that standards have to be timed just right. If they are too early, they attempt to solve a problem that either there is no consensus needs solving, or whose solutions are not yet known and thus cannot be standardized. If standardization comes too late, then everyone has already been forced to come up with their own ways of doing things, and nobody wants a standard unless it simply picks their solution. Riko reckons that 3D IC is about a year behind where he would like to see it, so there is a risk that the standards will come too late and everyone will have to do their own thing. The Si2 Open3D page is here.

One standard that does now exist, as of earlier in January, is the JEDEC Wide I/O single data rate standard for memories. It defines the ball positioning and signal assignments that allow up to 4 DRAM chips to be stacked on an SoC, and it permits bandwidth up to 17GB/s across four 128-bit wide channels, at significantly lower power than traditional interconnect technologies. The standard is here (free PDF, registration with JEDEC required). This should allow memory dice from different DRAM manufacturers to be used interchangeably, in the same way we have become accustomed to with packaged DRAM.
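As a back-of-the-envelope check of that headline bandwidth figure: four 128-bit channels at a ~266 MHz single-data-rate clock (the clock value is an assumed upper operating point, not stated above) work out to roughly 17 GB/s:

```python
# Wide I/O bandwidth sanity check: channel count and width from the text;
# the 266 MHz SDR clock is an assumed operating point.
channels = 4
bits_per_channel = 128
clock_hz = 266e6

bits_per_second = channels * bits_per_channel * clock_hz
gigabytes_per_second = bits_per_second / 8 / 1e9
print(round(gigabytes_per_second, 1))  # → 17.0
```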

Apache is most interested in power delivery and thermal issues, of course. Multiple tiers of silicon mean that the power nets on the upper tiers are further from the interposer and the package pins. In a conventional SoC, the IO power may make up 30-50% of all power, and the clock another 30% or so. There is a lot of scope in 3D for power reduction, due to the much shorter distances and the capability to have very wide buses. Nonetheless, microbumps and TSVs all have resistance and capacitance that affect the power delivery network and general signal integrity.

Thermal analysis is another big problem. Since reliability, especially metal migration, is severely affected by temperature (going from 100 degrees to 125 degrees reduces the margin by a third), overall reliability can be badly hurt if the temperature in the center of the die stack is higher than expected and modeled.

The big attraction of 3D is the capability to get high bandwidth at low power. It has the potential to deliver 1-2 orders of magnitude of power reduction in signaling versus alternative packaging approaches, with as much as 1/2 Terabit/s between adjacent die.

Everyone’s focus in 3D standardization at the moment is to standardize the model interfaces so that details of TSVs, power profile of die, positioning of microbumps and everything can work cleanly in different tool and manufacturing flows. Note that there is no intention to standardize what the models describe (so, for example, no effort to standardize on a specific TSV implementation).


21st Century Moore’s Law Providing Unforeseen Boost to Silicon Valley
by Ed McKernan on 01-30-2012 at 10:00 pm

It has been a great conundrum to many of the 20th-century-trained economists and Harvard’s Kennedy School of Government folks as to why a government-led massive spending spree and Ben Bernanke’s non-stop printing presses can’t engender at least a mediocre economic recovery.

I blame 21st century Moore’s Law!

Today’s process technology is not just 4 times better than when the downturn began in 2008; it is at a level that has given companies the freedom to move beyond the tax and regulatory grasp of many sovereign nations that are now having difficulty paying their bills. Moore’s Law is the overwhelming force bypassing the immovable object known as too-big-to-fail government. As we gear up for another election season, a realization has emerged that the place where things are going swimmingly and money is piling up is none other than Silicon Valley. For politicians and governments to get access to this pile of money, they will have to play nice and offer significant tax cuts that allow the trillions of dollars that sit overseas to come home. Apple, Cisco, Google, Intel and the rest of the high-flying Silicon Valley firms can unleash this tidal wave of cash in increased investments at home while paying off politicians from both parties. This, as Obama has recently communicated, will be the major storyline of the 2012 Presidential Campaign.

Winston Churchill once remarked, “You can always count on the Americans to do the right thing – after they have tried everything else.” Now that just about everything else has been tried, the politicians will try something completely different: letting the strong, thriving high-tech companies be America’s primary economic engine for the coming decade, as they were in the 1990s.

The political dance that started 12 months ago between the politicians and Silicon Valley didn’t become serious until just recently. All expectations of economic revival were cut short when Europe’s sovereign debt crisis hit and Wall St tanked again. Elections are coming soon, and politicians need money from new sources as the old ones dry up. The trillions of Silicon Valley dollars sitting overseas are no accident. The money sits there because bringing it home would incur a 35% tax. If the rate were dropped to 5-10%, the floodgates would open. Expect the miracle, wrapped in a nice fig-leaf story about exchanging lower rates for a promise that companies invest in new buildings, equipment and jobs. I say expect a miracle because Apple and the rest of the mentioned companies are gushing cash at an astronomical rate, and politicians would just hate to see it not end up in the pockets of the people who need it most. The well won’t run dry for years.

To give one a sense of how times have changed since the 65nm process node was in fashion, recall that at the beginning of Obama’s term the focus was on saving the unions and investing in the future slam-dunk industry called solar. Meanwhile, California continued to bleed companies, jobs and money. Without a vibrant Silicon Valley with lots of IPOs, California can’t afford to stay in business. The Democrats, without a thriving California, are out of office and out of money. Obama now realizes that he needs to show extreme favoritism to Apple, Google, Facebook, Cisco, Intel and the rest of the who’s-who crowd.

The upside to the President’s need to win an election, and to put in place a campaign funding source for many cycles, is that the current Silicon Valley, not the one that wanted to be left alone in the 1990s (think TJ Rodgers), will likely get considerations beyond the tax cut. Taken at face value, Obama’s proposal calls for taxes to be reduced on companies that invest in the US and raised on those that invest overseas (think fabless semiconductor companies). However, some companies with high R&D spending, like Intel and Google, will likely push for relief there as well. And why not? Increasing the engineering head count in Silicon Valley is a good thing for the President’s party. Apple, though, might counter and request a break for opening retail stores or a data center. Google and Facebook would concur on the data-center subsidy. Intel, on the other hand, would love to get a break on its new 14nm fabs or a future 450mm fab, especially since Paul Otellini says they cost $1B more to build in the US than in Taiwan. This is where congressional sausage-making gets interesting.

For Intel to remain at a 27% tax rate while fabless vendors are as low as 11% makes no sense. Nor does Apple’s 24% tax rate look fair against Google’s 7%. When the bubble burst in 2000, Silicon Valley lost 200,000 jobs, many in the semiconductor industry. The IP of the valley has kept it in the technology lead, but those jobs are sorely needed. We may finally get the attention needed to turn Silicon Valley into a bigger driver of the economy, much bigger than in 2000. Nothing makes the waiting opportunity more glaring than the startling fact that Apple has $100B in the bank and is adding to it at the rate of $15B a quarter. We should remove any and all roadblocks.

With Intel, the storyline gets much more interesting as Obama and Otellini have struck up a special bond in the past year. Two years ago Otellini was excoriating the President and now he is on Obama’s Council on Jobs and Competitiveness. The only other tech related person on the council is John Doerr. This council was formed after Obama visited Intel’s Oregon site last year. Last week, the day after the State of the Union, Obama paid a visit to the construction site of Intel’s new 14nm fab located in Arizona. The purpose of his visit was to emphasize his new tax proposal and to start broadcasting his election year economic theme of bringing jobs home.

Imagine during these visits that Otellini whispers in Obama’s ear that, with the right incentives, the whole future of the semiconductor industry can reside in the US, and with it thousands of jobs and the associated tax revenue. Combined with the Bernanke printing presses depreciating the currency in a daily drip-by-drip manner, the US government is going to make it more difficult for fabless vendors to stay invested in Taiwan instead of the US. This is why Qualcomm, with its $21B in cash, has to consider building a fab in the US. AMD, Broadcom, nVidia, Altera, Xilinx, Marvell and others will be pleading for Morris Chang to build in the US, or alternatively to make peace with Intel and enter a foundry agreement. Unless, of course, the Obama tax agreement that develops applies only to US multinationals; then it is a completely new ballgame for fabless vendors. The Silicon Valley playing field could end up being tilted towards Intel.

FULL DISCLOSURE: I am Long AAPL, INTC, ALTR, QCOM


The Future of Lithography Process Models
by Beth Martin on 01-30-2012 at 4:02 pm

Always in motion is the future. ~Yoda

For nearly ten years now, full-chip simulation engines have successfully used process models to perform OPC in production. New full-chip models were regularly introduced as patterning processes evolved to span immersion exposure, bilayer resists, phase shift masking, pixelated illumination sources, and much more. The models, in other words, have kept up with and enabled the relentless march into the lithographic nanosphere.¹

“Hello? 1983 calling.” Perhaps this is what Yoda was talking about—technology such as this Motorola DynaTAC 8000x ushered in the age of microelectronics.


We learned from Yoda that the future is not set, it is always in motion. Still, I feel confident that the industry can predict several areas where full-chip models will need to evolve and improve.² As process margins continue to narrow at lower k₁, models will need to more faithfully predict all the failure modes that loiter at the process window corners. In addition to pinching and bridging, models will need to accurately predict behaviors you may be less familiar with: sub-resolution assist feature (SRAF) scumming/dimpling, side-lobe dimpling, and aspect-ratio-induced mechanical pattern collapse (Figure 1). These can all lead to defects in the etched layer.

Figure 1. Emerging patterning failure modes.

While full-chip OPC models based on a 2D contour simulation have so far been sufficient to meet the task of correction and verification, we may need some 3D awareness in these models. For example, we might need to account for underlying pattern topography/reflectivity for implant layer patterning, or want an etch model to predict bias as a function of lithographic focus (which imparts resist profile changes). One thing to be certain of – 3D mask topography effects will continue as target and SRAF dimensions shrink, and improvements in the accuracy of 3D mask models must keep pace.

Another emerging technology, Source Mask Optimization (SMO), may place greater demands upon the portability of process models. With SMO, the illumination source is dynamically changed on a design-by-design basis in manufacturing, yet a single calibrated resist model is preferred for optimum cycle time. Full-chip mask process models may be needed to facilitate portability, and to enable maximum flexibility for process evolution.

New processes are emerging for double patterning including litho-etch-litho-etch and litho-freeze-litho-etch, sidewall image transfer, and negative tone develop. Novel chemical and thermal pattern shrink processes will continue to find their way into manufacturing. These processes represent a wide range of complex physiochemical processes, but the phenomenological compact model approach, based upon relatively few optical parameter inputs, and empirical CD/contour outputs, will no doubt be able to accurately represent these processes for full-chip simulation.

Finally, another emerging process is EUV lithography. To accurately perform full-chip simulation, optical models that account for flare and field-dependent mask shadowing will be required; these models are already in mature development. It is important to highlight that “OPC” will indeed be required for EUV, despite the fact that the shorter wavelength delivers a substantially higher k₁ factor than 193 nm lithography.
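That k₁ claim is easy to sanity-check from the resolution equation k₁ = half-pitch × NA / λ. The NA and half-pitch numbers below are assumed, era-typical values, not figures from the article:

```python
# k1 comparison: 193 nm immersion vs. EUV, using k1 = half_pitch * NA / wavelength.
def k1(half_pitch_nm, na, wavelength_nm):
    return half_pitch_nm * na / wavelength_nm

# 193 nm immersion: NA 1.35, 40 nm half-pitch (assumed values)
k1_193i = k1(40, 1.35, 193)
# EUV: 13.5 nm wavelength, NA 0.33, a tighter 20 nm half-pitch (assumed values)
k1_euv = k1(20, 0.33, 13.5)

print(round(k1_193i, 2), round(k1_euv, 2))  # → 0.28 0.49
```

Even at a much tighter target pitch, the EUV k₁ comes out substantially higher, which is why EUV still needs OPC but with more imaging margin than 193 nm lithography.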

Model accuracy and predictive capability requirements will surely continue to shrink below today’s 1.0 nm, and requirements beyond simple single-plane CD will emerge. Perhaps it’s time to increase our accuracy budget 10X by converting to units of Angstroms—it will make us feel like there is more room at the bottom of the scaling curve!

As a final note, the SPIE Advanced Lithography meeting in San Jose (12-16 February) has an ever-expanding conference focused on design for manufacturability through design-process integration. As the co-chair of this conference, I can say with certainty that the technical presentations are of the highest quality. If you want to engage more deeply in the interface between IC design and manufacturing, attend the keynotes, paper presentations, and poster session on Wednesday, 15 February, and the joint optical microlithography/DFM sessions on Thursday, 16 February.

— John Sturtevant, Mentor Graphics

1—Lots of interesting information about process models in my previous posts: Part I, Part II, Part III, and Part IV.

2—This series was inspired by this paper I presented at SPIE Advanced Lithography in 2011.


Semiconductor Packaging (3D IC) Emerging As Innovation Enabler!
by Daniel Nenni on 01-29-2012 at 4:00 pm

The ASIC business is getting more and more complicated. The ability to produce innovative die at a competitive price to solve increasingly complex problems just isn’t enough. The technology required to package that die is now front and center.

Here, at the junction of advanced design, process technology and state-of-the-art packaging, is where real innovation takes place. Perhaps nowhere does the importance of advanced packaging technology, such as System in Package (SiP) and emerging 2.5D and 3D capabilities, become clearer than when you talk with the team at Global Unichip Corporation (GUC), which is emerging as a leader in the newly defined “Flexible ASIC” space.

Before jumping into a packaging discussion, it is important to define “Flexible ASIC.” At GUC, Flexible ASIC defines what they do: provide access to foundry design environments to reduce design cycle time, provide custom IP and design methodologies to lower the barrier to entry, and integrate it all (design, foundry, assembly and test) for faster time-to-market. And that is where the emphasis on advanced packaging technology comes in.

The trend toward SiP (System in a Package) has been brought about because of the need to cram more and more flexibility into a smaller and smaller footprint to satisfy the demand of today’s 24/7 always connected electronic consumer. While this is clearly a great strategy, the challenges can be overwhelming. Here’s what the packaging experts have to say:

  • First there is the issue of “known good die.” As more chips are integrated into a single package, the challenge of maintaining cost-effective yield grows almost exponentially.
  • Then too, the design has to take three dynamics into account: chip, package, and ultimately the board.
  • Perhaps most critical are the thermal considerations created by stacking more die into a single package.


Two of the ways that GUC overcomes these challenges are its Integrated Passive Device (IPD) technology, which is available now, and a state-of-the-art implementation of Through Silicon Via (TSV) interposer technology, which will be available shortly.

Using IPD technology, GUC successfully integrated some of the stand-alone passive components. In the example below, GUC reduced the passive count by 50% and integrated both the DPX and BPF. Total package size was reduced 31%, from 10mm x 10mm to 8.8mm x 7.8mm, and package thickness was reduced 17%.
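The 31% figure follows directly from the quoted dimensions, as simple area arithmetic shows:

```python
# Package-size reduction check: dimensions are the ones quoted in the text.
before = 10.0 * 10.0   # mm^2, original 10mm x 10mm package
after = 8.8 * 7.8      # mm^2, IPD-integrated 8.8mm x 7.8mm package

reduction_pct = (1 - after / before) * 100
print(round(reduction_pct))  # → 31
```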

One of the barriers to the more advanced 2.5D IC technology is cost; while the process is not yet mature, the TSV interposer itself can be a significant cost consideration.

While solving that thorny conundrum (GUC has been working on the technology for five years), the company is also making significant progress toward pure 3D ICs. What makes 3D IC technology distinct from its 2.5D cousin is that pure 3D ICs have their TSV structures running directly to the chip area rather than to an interposer. GUC estimates that true 3D IC production is still a year or two off.

Despite the daunting challenges, GUC says the SiP approach now accounts for around 25% of all new projects and 30% of its revenues. The company has shipped over 17 million SiP units targeting consumer, wireless, network and computer applications. Furthermore, the company is finding a niche for what it calls “Ultra Large SiP”, with package dimensions over 50mm x 50mm.

Some of the more pessimistic pundits long ago declared the ASIC era dead, myself included. But clearly the complexity of providing more functionality in smaller and smaller footprints has given rise to the need for a new kind of ASIC company, a Flexible ASIC company, that can bring to market services BEYOND design and manufacturing excellence.

That said, the real question is: What role will advanced packaging technology play in tomorrow’s innovation?




Power Issues for Chip and Board
by Paul McLellan on 01-29-2012 at 3:39 pm

Next week there are two events from Apache, a subsidiary of Ansys. At DesignCon there are a couple of workshops on chip-package-system (CPS). In addition to Apache themselves, each of the two workshops features a number of representatives from leading-edge companies doing semiconductor design. I already blogged about this in more detail here. As a general note, to find blogs about seminars, workshops, webinars and so on, click on the “seminars” button at the top of the page.

The other event, on Tuesday, is a webinar on Power Issues for Chip and Board. Brian Bailey moderates Arvind Shanmugavel, director of applications engineering for Apache, and Randy Whitel, technical marketing manager for measurement solutions from Tektronix. The first part of the webinar is pre-recorded and then there is an opportunity for live questioning of Arvind and Randy.

The summary of the webinar is: Power used to be a secondary concern in chip and system design, but with the rapid rise in importance of mobile devices, increasing chip densities, and rising levels of concurrency, power consumption, power dissipation, heat dissipation, and power integrity are becoming primary design considerations at all stages of the design flow. Many chip design techniques, such as multiple power domains and clock gating, are making this problem more difficult, while high-speed interfaces are creating problems with board layouts and 3D packaging techniques are raising many kinds of new challenges. Power management is an important topic for every design company that wants to remain competitive, increase yields, and deal with the longevity requirements of emerging industries such as automotive.

The webinar is at 10am Pacific Time on Tuesday January 31st. Registration is here. After the event, a recording of the entire webinar, including the Q&A will be available.


Arteris vs Sonics battle: remember Clausewitz!
by Eric Esteve on 01-29-2012 at 1:56 pm

I blogged before Christmas about the Arteris-Sonics war, initiated by Sonics, which claimed that Arteris’ NoC IP product was infringing a Sonics patent. We showed in that post that the architecture of Sonics’ interconnect IP products was not only older but also different from Arteris’ NoC architecture: the products Sonics launched initially, in the 1995-2000 timeframe, were closer to a crossbar switch than to a Network-on-Chip. Having done this analysis, our feeling was that, because the Arteris solution is newer (2005), Sonics’ claim against Arteris was unlikely to be justified.


The answer from Arteris, which came on January 27th, reminds me of Clausewitz’s well-known maxim: “Attack is the best defense!” From Arteris’ PR:
January 27, 2012 – Arteris, the inventor and leading supplier of network-on-chip (NoC) interconnect IP solutions, today announced that it has filed a complaint alleging that Sonics’ newest product, SonicsGN (SGN), infringes Arteris patents. In addition, Arteris responded to the lawsuit that was filed by Sonics Inc. on November 1, 2011, asserting that it has not infringed the Sonics patents, and further that the Sonics patents are invalid.

Pretty tough answer, isn’t it? This is clearly a two-step manoeuvre: where Sonics based its attack on the past (they were first on the market, so a newcomer had “certainly” infringed their old patents to compete in the same market), Arteris bases its attack on the present. The company claims that Sonics’ latest product (SGN), based on a new architecture that is now similar to Arteris’ NoC (recall that Sonics’ initial products were crossbar-switch-like), has “necessarily” infringed Arteris’ patents (in both cases, the quotes reflect my interpretation, not a certified fact). The second step is a more classical defense (Arteris has not infringed the Sonics patents), but it goes further in asserting that the Sonics patents are invalid!

I am not a lawyer, a patent expert, or a specialist in NoC architecture… That said, it would be strange if the products developed by Arteris ten years after the introduction of Sonics’ first products, based on a different concept (network vs. crossbar switch) and consequently a different architecture, could have used features specific to Sonics’ products and covered by its patents. To penetrate a market already occupied (by Sonics), Arteris had to innovate, not duplicate. Similarly, it’s tempting to think that when Sonics, the interconnect IP market leader, realized that Arteris was winning design-in after design-in (at customers previously working with Sonics) and was going to push it out of this market, the company decided to develop a product similar to Arteris’ NoC. If the market is asking for a certain kind of product, better optimized in terms of power consumption, performance (latency) and wire length (layout), and you serve that market, you just want to satisfy your customers.

By doing so, Sonics appears to be the follower, the company that duplicates when its direct competitor innovates… In this market configuration, when the follower needs to close the technical gap while respecting a tight time-to-market, the risk of patent infringement is simply higher, because you need to move fast and can’t necessarily check that every single piece of a complex design is free of infringement. All of the above is pure speculation (call it a feeling), and may not be true. But it could possibly be true…

I am happy to see that one of the Arteris arguments in the January 27th PR was already highlighted in my previous blog on the topic, dated November 4th:

“The Sonics patents asserted in its November 1, 2011 complaint are related to the old Sonics Silicon Backplane product, and do not apply to Arteris’ true network on chip technology. Crossbar technologies were used in the semiconductor industry long before the existence of any Sonics patents, when on-chip crossbar switches were developed for communications applications in the 1980’s. Conversely, Arteris network on chip interconnect IP is a distributed packet switching network which is significantly different than older crossbar-based hybrid bus technologies used in products like Sonics’ SonicsSX (SSX) and SonicsLX (SLX).”

These few sentences summarize the crux of the Sonics vs. Arteris case: the historical player (Sonics) created and entered the interconnect IP market using a well-known, proven technology, the on-chip crossbar switch. When Arteris, the challenger, entered this market ten years after Sonics, they absolutely had to innovate to have any chance of being considered and gaining market share. This they did by developing a new architecture: their network-on-chip interconnect IP is a distributed packet switching network. Innovation allowed Arteris to better solve customers’ SoC design issues, and Sonics understood this very well, so they recently decided to launch new products duplicating the successful NoC architecture…

By Eric Esteve– IPNEST


SemiWiki and Mentor Graphics Seminar Series!

SemiWiki and Mentor Graphics Seminar Series!
by Daniel Nenni on 01-28-2012 at 10:49 am

For the greater good of the semiconductor ecosystem, SemiWiki and Mentor Graphics present SemiWiki Seminars, a free seminar and software demonstration series addressing the latest innovations in IC design. SemiWiki Seminars discuss interesting new challenges and potential solutions aimed at increased circuit density and functionality, higher performance, better yield, more cost effective test, faster design cycles, and other success factors. SemiWiki Seminars demonstrate specific methods and tools for the individual designer, as well as ways to help engineers work together more effectively across a broad and diverse ecosystem.

Join us for our first event:

Effective, Secure Debugging in a Fabless Ecosystem
January 31, 11:30am – 1pm @ one of my favorite eating spots in Silicon Valley!

For more information check out these blogs by Paul McLellan, Daniel Payne, and myself:

Semiconductor IP Security Seminar (Free Lunch!)
Now that design revolves around intellectual property, IP security is a top concern of fabless semiconductor companies around the world. Modern SoC design and manufacturing require geographically distributed teams and companies, such as EDA vendors, design houses, foundries, packaging houses, and other partners within the ecosystem.

EDA Vendors Providing Secure Remote Support for an IC Design Flow
In my last corporate EDA job I had customers in Korea that were evaluating a new circuit simulator and getting strange results. When I asked, “Could you send me your test case?” the reply was always, “No, we cannot let any of our IC design data leave the building because of security concerns.”

Imera Virtual Fabric
Anyone who has worked as either a designer or an EDA engineer has had the problem of a customer who has a bug but can’t send you the design, since it is (a) too big, (b) the company’s crown jewels, and (c) there’s no time to carve out a small test case. I once even had a bug reported from the NSA where they were not even allowed to tell us what the precise error message was (since it mentioned signal names).

Agenda

  • Introduction by Daniel Nenni
  • Imera Presentation by Bruce Feeney (15 minutes)
    • Who is Imera?
      • Secure connections for collaboration with suppliers and partners
      • Security, Export Control and Legal Compliance
      • Statement of the problem we’re trying to solve
    • Real-world Applications of Imera’s Products
      • Remote source code debug
      • Remote critical issue support
      • Secure engineering collaboration
  • MGC Customer Support Presentation (30 minutes)
    • How Mentor uses Imera to troubleshoot Calibre with remote debug, and why
    • Demo of secure debug solutions
      • Overview with benefits
      • Use Case #1 – Troubleshooting an SR between a Support Engineer and a customer
      • Use Case #2 – Debugging a DEI with a Support Engineer, Customer, and R&D Engineer
  • Q & A (10-15 minutes)

I look forward to seeing you there!


Premier International Gathering for … Application Developers!

Premier International Gathering for … Application Developers!
by Daniel Nenni on 01-27-2012 at 8:53 pm


For the greater good of the semiconductor ecosystem, I have agreed to Co-Chair the 2012 International Conference on Engineering of Reconfigurable Systems and Algorithms (ERSA), the “Premier International Gathering for Commercial and Academic Reconfigurable Computing Application Developers”, July 16-19, 2012, Las Vegas, Nevada, USA.

ERSA is part of the WORLDCOMP Congress, which brings together more than 2,000 attendees from over 85 countries around the world. Companies participating at ERSA/WORLDCOMP get their products and organizations in front of this large international audience.

The ERSA Industrial Session (ERSA-IS) will assemble a coordinated research and commercial meeting held at the same location and dates. It provides a forum for academic researchers and commercial entrepreneurs from developed countries and emerging markets.

ERSA-IS facilitates communication among researchers and entrepreneurs from different parts of the world, bringing them together at the same location and dates. This is something that is very expensive to do any other way, so it provides a meaningful return on investment that companies can evaluate: trading one or two expensive trips for a single trip plus sponsorship of ERSA-IS.

ERSA-IS is looking for attendees (researchers, developers, entrepreneurs, etc.) from emerging market countries such as Brazil (South America) and India and China (Asia), as well as from developed countries in North America and Europe.

Las Vegas is a brilliant place to host an event like this: it has a reputation for bringing technology-oriented businesses together (CES) and is accessible from around the world. The longer-term goal is to establish the Industrial Developers’ Forum with thousands of attendees in the coming years.

ERSA-IS Hot Topic:

“Reconfigurable Computing Application Development for Heterogeneous Run-time Environments”

The focus is on the challenges, tools, available technologies, and opportunities involved in developing and supporting applications, both academic and commercial, that use reconfigurable computing systems, including mobile, heterogeneous, and hybrid technology platforms for complex, intelligent embedded systems.

ERSA-IS Proposed Featured Sessions:

• Developing heterogeneous systems (CPU plus FPGA) using the OpenCL standard
• Developing IP cores and scalable libraries for heterogeneous systems
• Hardware security and trust in reconfigurable heterogeneous systems

I strongly encourage companies, developers, and entrepreneurs to arrange demos, exhibitions, talks, presentations, etc., and to sponsor ERSA-IS. I strongly encourage employers, developers, students, and researchers to attend.

Companies may host half- or full-day seminars to introduce and demonstrate their new technologies and products.

Sponsor ERSA to raise your visibility and show your support for advancing reconfigurable systems and algorithms in both academic and commercial applications!

For sponsorship details, visit: Sponsorship Levels

Conference Chair:
Dr Toomas P Plaks

London
Contact the Chair

Las Vegas is an incredible location and this will be an excellent experience. I hope to see you there!