SOC Realization
by Paul McLellan on 06-27-2011 at 5:28 pm

There are some very interesting comments on the last entry about SoC Realization and how more and more chips are actually assembled out of IP. There was clearly a lot of discussion in this area at DAC, although most people (Atrenta being an exception) don’t use the term SoC Realization, presumably because it originated with Cadence. But just because there is a dysfunctional gap between what Cadence talks about in EDA360 and what they currently deliver doesn’t mean that the basic ideas are wrong. I’ve argued before that both Synopsys and Mentor are currently doing a better job of implementing the EDA360 vision than Cadence itself.

One of the points that several commenters made was that IP reuse is expensive, in the sense that if you design a block to be reused, rather than just used in the chip you are working on, it takes more effort. Reuse seems to work best at the very low level (standard cells, memory blocks etc) and at the high functional level, and the higher the better: microprocessors obviously, but also large blocks like GPS receivers or cable modems.

One of the biggest challenges with reuse is that it really is expensive. Long before any of us had even thought about IP in semiconductors, Fred Brooks reckoned in his book The Mythical Man-Month that a reusable piece of operating system code, which had to run on a range of machines and be reliable, was 9 times more expensive than the quick-and-dirty code a programmer could come up with to solve their specific issue: a factor of 3 for flexibility and a factor of 3 for making it industrial strength. At a seminar in Cambridge (the real one!), where he was on sabbatical, I guess this must have been 1975, he said that he thought the two factors of 3 should have been two factors of 5, making reusable code actually 25 times more expensive.

When I was at VLSI Technology, we had blocks of reusable silicon that we called FSBs, for Functional System Blocks. We jokingly called them Fictional System Blocks since, in most cases, the work to make them reusable (models, vectors etc) didn’t yet exist. The blocks did genuinely exist, and had seen silicon, but it would have been too expensive to make them all reusable without a customer. Instead, we did a sort of just-in-time reusability, making a block reusable once someone showed up and wanted to use it. Making all the blocks truly available at all the process nodes would have made the whole idea unprofitable.

3rd party IP, of course, typically comes with everything required to use the block. The idea is that the cost of making a block reusable is amortized across a large number of customers and designs. The problem is that often this is not the case: the block is used in just a small number of designs in different processes, with the result that many IP companies really have more of a service business model (porting IP) than one of stamping out lots of pre-standardized product.
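
To put the reuse economics in concrete terms, here is a back-of-the-envelope sketch in Python. The cost multipliers come from the Brooks discussion above; the per-design integration cost is an assumption I have added for illustration.

```python
import math

def break_even_designs(reuse_multiplier, one_off_cost=1.0, integration_cost=0.1):
    """Smallest N for which building one reusable block (and integrating it
    N times) beats writing a quick-and-dirty block for each design.
    Reuse path:   reuse_multiplier * one_off_cost + N * integration_cost
    One-off path: N * one_off_cost
    The 0.1 integration cost is an invented illustrative number."""
    per_design_saving = one_off_cost - integration_cost
    if per_design_saving <= 0:
        return math.inf  # reuse never pays off
    return math.ceil(reuse_multiplier * one_off_cost / per_design_saving)

for m in (9, 25):  # Brooks' published 3x3 factor and his revised 5x5 factor
    print(f"{m}x reusable block breaks even at {break_even_designs(m)} designs")
```

With these assumptions a 9x block needs 10 design wins to pay off, and a 25x block needs 28, which is exactly why "a very small number of designs" pushes vendors toward a service model.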

Nonetheless, most of the content of most chips these days is IP, either from a previous chip or from a 3rd party. Very little is RTL written just for that one chip. SoC Realization, by whatever name, is the way many chips are designed.


ARM and Mentor Team Up on Test
by Daniel Payne on 06-27-2011 at 2:31 pm

Introduction
Before DAC I met with Stephen Pateras, Ph.D., Product Marketing Director in the Silicon Test Solutions group at Mentor Graphics. Stephen has been at Mentor for two years and was part of the LogicVision acquisition. He joined LogicVision early and went through their IPO; before that he was at IBM in the mainframe processor group.



TSMC Versus Intel: The Race to Semiconductors in 3D!
by Daniel Nenni on 06-26-2011 at 4:00 pm

While Intel is doing victory laps in the race to a 3D transistor (FinFET) @ 22nm, TSMC is in production with 3D IC technology. A 3D IC is a chip in which two or more layers of active electronic components are integrated both vertically and horizontally into a single circuit. The question is: which 3D race is more important to the semiconductor industry today?

Steve Leibson did a very nice job in his blogs: Are FinFETs inevitable at 20nm? “Yes, no, maybe” says Professor Chenming Hu (Part I) and (Part II). Dr. Chenming Hu is considered an expert on the subject and is currently the TSMC Distinguished Professor of Microelectronics at the University of California, Berkeley. Prior to that he was the Chief Technology Officer of TSMC. Hu coined the term FinFET 10+ years ago when he and his team built the first FinFETs and described them in a 1999 IEDM paper. The name FinFET came about because the transistors (technically known as Field Effect Transistors) look like fins. Hu didn’t patent the design or the manufacturing process, to make the technology as widely available as possible, and was confident the industry would adopt it. Well, it looks like he was right!

In May of this year Intel announced Tri-Gate (FinFET) 3D transistor technology at 22nm for the Ivy Bridge processor, citing significant speed gains over traditional planar transistor technology. Intel also claims the Tri-Gate transistors are so impressively efficient at low voltages that they will make the Atom processor much more competitive against ARM in the low-power mobile internet market. Intel has a nice “History of the Transistor” backgrounder HERE in case you are interested.

Time will tell, but I think this could be another one of Intel’s billion dollar mistakes. A “significant” speed-up for Ivy Bridge I will give them, but a low-power competitive Atom? I don’t think so. TSMC’s 3D IC technology, on the other hand, is said to achieve performance gains of about 30% while consuming 50% less power. Intel already owns the traditional PC market, so trading the speed-up of 3D transistor technology for lower-power planar transistors is a mistake, one that will allow ARM to continue to dominate the lucrative smartphone and tablet market.

Intel also does not mention 22nm Tri-Gate manufacturing costs, which are key if they are serious about the foundry business. I still say they are not serious, and this is another supporting data point. Foundry capacity will soon outpace demand, so low manufacturing cost will be a critical competitive advantage.

TSMC has chosen to wait until 14nm to bring 3D transistor technology to the foundry business. Given that TSMC is the undisputed foundry champion and the father of the FinFET is a “TSMC Distinguished Professor of Microelectronics at University of California”, my money is again on TSMC. I won my previous bet of gate-last HKMG (TSMC) versus gate-first (IBM/Samsung), so I’m letting it ride on 14nm being the correct node for FinFETs.



HDMI vs DisplayPort?… DiiVA is the answer from China!
by Eric Esteve on 06-26-2011 at 9:40 am

During the early 2000s, when OEMs started to question the use of LVDS to interface with display devices, two standards emerged: High-Definition Multimedia Interface (HDMI) and DisplayPort. HDMI was developed by Silicon Image, surfing on the success of the Digital Visual Interface (DVI), and was strongly supported by a consortium of “founders” comprising Hitachi, Ltd., Panasonic Corporation, Philips Consumer Electronics International B.V., Silicon Image, Inc., Sony Corporation, Technicolor S.A. (formerly known as Thomson) and Toshiba Corporation. A pretty impressive list, at least in the Consumer Electronics market! The Video Electronics Standards Association (VESA) decided to launch a competing standard, DisplayPort, backed by companies linked to the PC market, like HP, AMD, Nvidia, Dell and more. The competition looked promising, as both standards exhibit the same high-level features: high-speed differential serial signaling, a packet-based protocol, and a layered architecture, with bandwidth scalable from 1 to 3 lanes (HDMI) or 1 to 4 (DisplayPort), to name the most significant.

But the expected big fight was a letdown. Is it because HDMI was promoted with strong energy by a single company, which had to succeed or die, while VESA was acting as a non-profit organization? Or because HDMI’s initial market, Consumer Electronics, allowed the standard to pervade the Wireless Mobile and Mobile PC segments, putting huge numbers of HDMI-powered devices in consumers’ hands (and generating a strong cash flow for Silicon Image, based on a $0.04 per-port royalty)? Whatever the reason, when DisplayPort started to reach the market in 2008 with a mere 10 million ports, HDMI already boasted a strong 250+ million!

The forecast from iSuppli issued in 2007 leaves no doubt about the winner! Even more interesting is the latest data from iSuppli, dated March 2011: “High-Definition Multimedia Interface (HDMI), the de facto connection standard in liquid crystal display TVs and other advanced digital equipment, will be present in nearly 630 million consumer electronic items shipped during 2011…”. In the meantime, HDMI has become the “de facto connection standard” (OK, LCD TVs alone looks restrictive, but the “other digital equipment” covers wireless handsets, set-top boxes, notebooks and more, so de facto looks appropriate!). Amazingly, these two forecasts from iSuppli, made four years apart and spanning a major worldwide financial crisis, are pretty consistent: 615 million forecast in 2007, compared with 630 million in 2011, for the items shipped in 2011. It’s not so often that analysts are right, so we should recognize it when it happens!

As of today, the status is that DisplayPort is finally emerging, almost exclusively in the PC segment (and with a threat coming from MIPI DSI in mobile devices like media tablets and netbooks). OK, Apple has integrated DisplayPort in its mobile PCs, Intel has launched Thunderbolt, which will use both DisplayPort and PCI Express, and the major chipset manufacturers are supporting DisplayPort (but also HDMI), but HDMI-powered items will pass the billion mark in 2013, whereas DisplayPort, according to a ResearchAndMarkets survey from February 2011, will see “External DisplayPort Device Shipments to Increase 100% From 2009 to 2014”… to reach 10% of the HDMI number!

So, apparently, HDMI has won the battle. But adopters of HDMI technology are concerned, first by the impact on their bottom line when selecting HDMI. The rights to the HDMI standard belong to HDMI Licensing, LLC, which is 100% owned by Silicon Image. Moreover, when a chip maker or an OEM wants HDMI certification, they have to submit their product for compliance testing to an HDMI Authorized Testing Center (ATC). The ATC list includes 11 companies, 7 of which are Silicon Image subsidiaries. Should we mention that Silicon Image also markets HDMI-based ASSPs? Thus, when you license HDMI, you have to pay a per-port royalty (to Silicon Image), and if you decide to develop your own solution, you have to obtain compliance for your IP or your ASSP from… Silicon Image again.

This monopolistic situation has made several chip makers and OEMs pretty nervous. Amazingly, the riposte came from China: along with Sony Corp.—an original founder and supporter of the HDMI interface—and Samsung Electronics Co., several Chinese TV manufacturers have banded together in a consortium to develop the Digital Interactive Interface for Video and Audio (DiiVA). DiiVA charter members (called Promoters) include leading CE and home appliance manufacturers Changhong Electric Co., Haier Co., Hisense Electric Co., Konka Group, Panda Electronics Co., Samsung Electronics, Skyworth Group, Sony Corporation, SVA Information Industry Co., TCL Corporation, and chip developer Synerchip Co., Ltd. As a side remark, DiiVA is defined as a bidirectional protocol, unlike DisplayPort and HDMI. Another good point for the end user, the consumer!

A robust alternative that provides equivalent or even expanded high-definition capability while avoiding HDMI license costs, DiiVA remains on the horizon as a potential competitor to HDMI from 2011 onward, within China and potentially other Asian domestic markets. While still relatively early in the product launch phase, DiiVA remains a technology to watch closely. The Chinese market has shown a demonstrable propensity to gravitate to its own technical standards, and the DiiVA interface is likely to receive a boost from foreign brands looking to grow their share within the China market. And though the support enjoyed by HDMI among devices in general is impossible to ignore, the possibility that a Chinese government agency mandates the use of DiiVA—even if only to promote regional interests in the technology sector—could spell trouble for HDMI in the years to come within the rapidly expanding China consumer market.

From an IP vendor perspective, HDMI is still a very promising market (evaluated by IPnest to grow to up to $100M by 2014, including per-port royalties), DisplayPort is taking off and will generate a decent market (not yet evaluated in detail, but the DP IP market could represent $25M by 2014), and DiiVA is just another opportunity to develop sales of high-speed serial interface IP; that IP market size is still to be evaluated.

Eric Esteve

http://www.linkedin.com/publishers


Smartphones in the BRICs
by Paul McLellan on 06-24-2011 at 2:45 pm

The latest edition of GSA Forum has an article by Aveek Sarkar of Apache on system design for emerging market needs. The BRIC countries (Brazil, Russia, India, China) are characterized by a small rich segment, a large and growing middle class, and a large poor segment. One big trend is that smartphone use is expanding very fast. These countries don’t have a large installed base of PCs, and almost certainly never will, so smartphones are the primary way people there access the internet. Smartphones are growing over four times as fast as the overall mobile industry.

But these smartphones are not just the same phones we use in the US, since different markets need different features. For example, mobile payments are huge in Kenya (search for M-Pesa for details); in general there is no bank-card infrastructure, and phones are going to be the way financial transactions get done. Much of the Chinese market requires two SIM-card slots. And, although it is purely a software issue, different markets require different languages.

These markets move much faster, with new designs tumbling over each other, so time to market is extremely critical. But the price point is probably the most critical factor: $600 is simply way too high. Phones are largely designed through collaboration between a chipset vendor like Mediatek, Broadcom or ST-Ericsson and a system integrator, so communication between the two is very important.

Lack of communication leads to one of two problems: the phones don’t work or are unreliable, or else they are over-engineered and unnecessarily expensive. Because of time-to-market pressure, designs tend to get padded, with the chip designers having to assume the worst about the board design (corners will probably be cut) and the board designers over-engineering the board because they don’t know enough about the internals of the chips. If both groups over-engineer, the design is more expensive than necessary. If both groups under-design, the phone may not work at all due to, for example, power supply droop from a lack of decoupling capacitors. As clock rates continue to increase into the GHz range and voltage margins decrease, these issues are getting worse, not better.
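
The decoupling-capacitor failure mode is easy to quantify with the standard ΔV ≈ I·Δt/C relationship. Here is a back-of-the-envelope sketch; all the current, time, and capacitance numbers are invented for illustration.

```python
# Rough supply-droop estimate: a capacitor supplying transient current I
# for time dt sags by dV = I * dt / C. All values below are illustrative.

def droop_mv(current_a: float, duration_s: float, decap_f: float) -> float:
    """Voltage droop (in mV) seen across a decoupling capacitor."""
    return current_a * duration_s / decap_f * 1e3

# A 2 A current step lasting 10 ns against 100 nF of board decap:
print(f"droop = {droop_mv(2.0, 10e-9, 100e-9):.0f} mV")  # ~200 mV

# The inverse question: decap needed to keep droop under 50 mV.
def decap_needed_nf(current_a: float, duration_s: float, max_droop_v: float) -> float:
    return current_a * duration_s / max_droop_v * 1e9

print(f"need >= {decap_needed_nf(2.0, 10e-9, 0.05):.0f} nF")  # 400 nF
```

With a 1 V supply, a 200 mV droop is a 20% dip, which is exactly the kind of failure that appears when both chip and board teams assume the other side provided the margin.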

One solution to this problem is for board and chip designers to provide models to each other so that reliability can be analyzed and the product can come to market on time and at the lowest possible price point. This allows for a full analysis of the system and avoids the $100M disaster of shipping a phone that is too unreliable to be successful.

Of course, these problems of modeling chips, boards and packages, and analyzing noise and power distribution are all areas that Apache has been working on for years.

Aveek’s article is here.


OpenAccess
by Paul McLellan on 06-21-2011 at 1:11 pm

Probably everyone knows that OpenAccess is a layout database. It was originally developed at Cadence (as Genesis) but has since been transferred to Si2. Strictly speaking, OpenAccess is actually an API, and the database is a reference implementation. The code is licensed under a sort of halfway house to open source: you can use it, but you can’t incorporate it into a different implementation of the API. The OpenAccess FAQ on the Si2 website is here.

Cadence never really planned to make OpenAccess that open. When I was there, it was an internal project with the long-term aim of replacing the databases underlying the digital design and custom design tools and so, eventually, unifying them. However, several large customers had internal database projects of their own and were pushing back on switching to OpenAccess if it remained proprietary to Cadence. Apart from the general wastefulness of every semiconductor company investing in its own database just to interface its internal scripts and tools, Cadence realized that it would be a maintenance and business headache if, gradually, their customers stored all their design data in proprietary databases with different semantics.

So OpenAccess was opened up by donating it to Si2, and eventually the source code was made available too. Note that at this point the real purpose of OpenAccess was to make Cadence’s customers’ internal developments interface cleanly with Cadence’s tools. I don’t think Cadence clearly thought through the implications for the EDA ecosystem.

The dirty secret of the early days of OpenAccess was that, despite it being developed at Cadence, no Cadence tools ran on OpenAccess natively. There were translators in and out of it to the other Cadence databases. Cadence, being a large company with a huge installed base, was also not the nimblest, and so the first tools that ran on OpenAccess were from smaller companies.

Today, customers, and especially other EDA companies, look at OpenAccess as a way to interface tools from different vendors. For example, SpringSoft’s Laker layout environment supports OpenAccess and uses it to run Mentor’s Calibre DRC incrementally and interactively behind the scenes as editing takes place.

But OpenAccess on its own solves only part of the problem of interfacing the Cadence layout environment to the rest of the world. A key part of Cadence’s layout system is pCells, especially those in the process design kits (PDKs) supplied by the foundries. pCells are not part of OpenAccess and, as a result, it is not really a complete interface. Of course Cadence regards this as a competitive advantage, and it is clear they are not going to make pCells open. I ran the custom IC product division of Cadence for a year or so and, without revealing any numbers (I can’t remember them anyway), you won’t be surprised to know it was the definition of a cash cow, making a large amount of money for a relatively small engineering investment. (Off topic: gateEnsemble was even better, with 8-figure revenue and only about one bug report per year. Management would regularly try to kill it in an irrational desire to reduce the breadth of the product line.)

The IC layout market really splits into two halves: Cadence, and everyone else. Cadence has a closed system (think Apple) whereas everyone else (SpringSoft, Synopsys and others) is open, competing on technology, service etc and not on lock-in. To an extent, any switch away from Cadence to one of the other players is a win for all of them in the short term. Once the tipping point away from Cadence is reached, if it happens, the game switches to a land grab for market share on a level playing field. Open systems usually win in the end, even against entrenched lock-in: Windows NT was surpassed by Linux for internet servers despite Microsoft’s advantages.


Can Your Router Handle 28 nm?
by Beth Martin on 06-20-2011 at 7:11 pm

With the adoption of the 32/28 nm process node come significant new challenges in digital routing, including complex design rule checking (DRC) and design for manufacturing (DFM) rules, increasing rule counts, and very large (1 billion transistor) designs. To meet quality, time-to-market, and cost targets, design teams must adopt new routing technologies that can solve for multiple design objectives within the scope of required tool capacity, memory footprint, and runtime.

A router must be flexible and robust to deal effectively with the growing DRC/DFM rule count and complexity at advanced nanometer nodes. To ensure optimization of all design parameters across all process and operational modes and corners, the router should be multi-mode, multi-corner (MCMM) aware. The router should also have signal integrity (SI) costing native to the routing kernel, to enable dynamic and incremental MCMM SI analysis. Incremental, on-the-fly extraction, polygon-based DRC analysis, and MCMM timing analysis are also essential to make quick decisions on issues such as wire spreading and rerouting critical nets.
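
As a rough illustration of what MCMM-aware costing inside the routing kernel means, here is a hypothetical sketch; the corner names, weights, and cost function are my own invention, not any vendor's actual algorithm. The point is that every candidate route is costed against all corners at once, with coupling folded into the same cost rather than checked afterwards.

```python
from dataclasses import dataclass

@dataclass
class Corner:
    name: str
    delay_per_um: float  # wire delay weight in this corner (illustrative)
    slack_ps: float      # remaining timing slack on this net in this corner

def route_cost(length_um, coupled_length_um, corners, si_weight=0.5):
    """Cost of one candidate route evaluated across all corners at once.
    The most slack-critical corner dominates the timing term, and coupling
    to aggressor nets adds an SI penalty inside the same cost function."""
    worst = 0.0
    for c in corners:
        timing_cost = length_um * c.delay_per_um / max(c.slack_ps, 1.0)
        worst = max(worst, timing_cost)
    return length_um + worst + si_weight * coupled_length_um

corners = [Corner("ss_125C", 0.9, 40.0), Corner("ff_m40C", 0.4, 300.0)]
# A shorter route with heavy coupling can cost more than a longer, clean one:
print(route_cost(100.0, 80.0, corners))  # short but noisy  -> ~142.3
print(route_cost(130.0, 10.0, corners))  # longer but quiet -> ~137.9
```

Costing all corners inside the kernel is what avoids the fix-one-corner, break-another iteration loop of post-route SI repair.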

Another key requirement for 28nm routing is the support for all of the DFM requirements including recommended rules, pattern matching, redundant vias, wire spreading and widening, timing-aware metal fill, and sophisticated non-default rules (NDRs). Finally, because of the growing design sizes, routers need to use multiple cores and CPUs and physical memory very efficiently. The requirements of a routing system for 28 nm are illustrated in Figure 1.

Figure 1. A convergent routing flow for advanced-node designs.

Nanometer Routing Challenges and Solutions

The primary challenges at 28 nm that require new routing technologies include:
• Growing number of DRC/DFM rules and rule complexity
• Poor estimates for global route resources
• Disconnected physical signoff and engineering change order (ECO) iterations
• Huge design sizes and long runtimes

Growing DRC/DFM Requirements

Design rules and DFM requirements correct for the manufacturing defects (parametric, systematic, and random) that occur when trying to print sub-wavelength features. The number of design rules has roughly doubled between the 90 nm and 28 nm nodes, and model-based DFM analysis is becoming mandatory. In addition to the mandatory DRC rules, at 28 nm foundries also provide “recommended rules” – soft rules that improve yield. Although the recommended rules are discretionary, a router that does not honor them can directly impact design yield.

The routing engines should support all the complex 32 nm and 28 nm design rules while controlling the impact on runtime. One method is to use algorithms that intelligently minimize the number of operations performed during routing. The router should use the full DRC/DFM models during all stages of routing for better accuracy and to minimize the violations that must be fixed during post-route optimization and signoff. The DRC engine should check polygon shapes rather than performing edge-to-edge checks, which enables complex 28 nm rules to be represented and adhered to effectively.
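
To illustrate the distinction, here is a toy polygon-based spacing check using the Shapely geometry library (my choice for illustration; production DRC engines use their own geometry kernels). A true polygon distance catches the corner-to-corner violations that simple edge-to-edge scanning can miss.

```python
from shapely.geometry import Polygon

MIN_SPACING = 0.05  # um; an illustrative 28nm-style metal spacing value

def spacing_violations(shapes, min_spacing=MIN_SPACING):
    """Naive all-pairs polygon spacing check. Polygon.distance() measures
    the true minimum separation, including diagonal corner-to-corner cases.
    (d == 0 would mean touching/overlap, a different violation class.)"""
    violations = []
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            d = shapes[i].distance(shapes[j])
            if 0 < d < min_spacing:
                violations.append((i, j, round(d, 4)))
    return violations

a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
b = Polygon([(1.03, 1.03), (2, 1.03), (2, 2), (1.03, 2)])  # diagonal neighbor
print(spacing_violations([a, b]))  # corner-to-corner gap ~0.0424 < 0.05
```

The two rectangles are 0.03 um apart in x and in y, which a per-axis edge check might pass, yet the true corner-to-corner gap of ~0.042 um violates the rule.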

In addition to the default hard rules, the router should also support recommended rules and their corresponding priorities. Automatic routing repair should be performed based on the priority defined by the foundry or the user to ensure the best DFM score.
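
As a sketch of what priority-driven repair might look like (the rule names and priority encoding here are hypothetical), violations can be queued with hard rules first and recommended rules ordered by foundry-assigned priority:

```python
import heapq

def repair_queue(violations):
    """Order violations for repair: hard DRC rules first, then recommended
    rules by foundry priority (lower number = more yield impact)."""
    heap = []
    for v in violations:
        mandatory = 0 if v["kind"] == "hard" else 1
        heapq.heappush(heap, (mandatory, v.get("priority", 99), v["rule"]))
    while heap:
        yield heapq.heappop(heap)

viols = [
    {"rule": "M2.REC.7", "kind": "recommended", "priority": 2},
    {"rule": "M1.S.1",   "kind": "hard"},
    {"rule": "V1.REC.1", "kind": "recommended", "priority": 1},
]
for item in repair_queue(viols):
    print(item)  # hard rule first, then recommended rules by priority
```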

Global Routing Estimation

Before creating detailed routes, routing tools perform a ‘global routing’ step to estimate the available routing resources. These global routing estimates must be accurate, which means more than simply counting the number of routing tracks across the chip that meet minimum spacing requirements. Some routing engines use only a subset of the foundry design rules in a simplified form for global routing, and invoke the full set of DRC rules only for detail (final) routing. The result is poor correlation between early estimates and final routing results and, ultimately, routing closure problems.

A timing- and congestion-aware 3D global router is best for estimating routing layer resources. The global router should use the complete set of DRC/DFM rules, including recommended rules, to avoid intractable DFM problems that typically surface as late-stage surprises. The router should use new modeling technologies to ensure that the resources consumed by vias, stacked via patterns, blockages, and staggered macros are accounted for when calculating resource availability.
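
A hypothetical sketch of the bookkeeping involved (the gcell model and all numbers are invented for illustration): the usable track count in each global-routing cell is what survives after blockages and via landings are subtracted, not just width divided by pitch.

```python
def gcell_capacity(gcell_width_um, pitch_um, blocked_um, via_landings,
                   via_cost_tracks=0.5):
    """Usable routing tracks in one gcell on one layer.
    Naive capacity is width/pitch; blockages remove whole tracks and each
    via landing consumes a fraction of a track (0.5 is an assumed cost)."""
    naive = round(gcell_width_um / pitch_um)    # 20 tracks at 0.1 um pitch
    blocked = round(blocked_um / pitch_um)      # tracks lost to a blockage
    via_tracks = via_landings * via_cost_tracks # fractional track per via
    return max(0, int(naive - blocked - via_tracks))

# Naive counting promises 20 tracks; accounting for a macro blockage and
# stacked-via landings leaves only 10 for the global router to commit.
print(gcell_capacity(gcell_width_um=2.0, pitch_um=0.1,
                     blocked_um=0.6, via_landings=8))
```

A global router that promises the naive 20 tracks here is exactly the kind that produces the late-stage congestion surprises described above.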

Efficient Physical Signoff and ECO Iterations

Another key challenge at 28 nm is a result of the traditional decoupling of the routing and the signoff verification engines. Typically, a router uses simplified DRC and DFM models to provide the optimal trade-off between runtime and accuracy during routing. Once the implementation is complete, the GDSII layout is verified using signoff-quality DRC/DFM models and Standard Verification Rule Format (SVRF) rule decks. For previous nodes, this worked adequately because the number of violations discovered at signoff was relatively low.

Designers are also finding that DFM techniques, including metal fill/CMP, litho, and critical area analysis, are starting to affect the traditional design metrics like timing, power, and signal integrity. These challenges are made worse by the fact that there is no automated way to repair the DRC/DFM violations, and the traditional flow requires the transfer of huge ASCII files between the implementation and signoff environments, which slows the design process. In summary, the design-then-verify flow that has worked in the past is increasingly unmanageable and unpredictable.

Advanced IC designs need the physical signoff engines to be directly integrated in the place and route environment to natively perform SVRF-based DRC and DFM analysis. Access to the actual signoff engines running golden SVRF rule decks is the key to the effectiveness of the platform. This ensures that all manufacturability issues are addressed without introducing new ones, and without degrading the performance of the design. It significantly speeds up the manufacturing signoff process, and delivers higher quality results with faster time to market.

Capacity and Turn-Around-Time

A routing solution must also have an extremely efficient and scalable data model to handle huge design sizes. The number of operations the router must perform at 28 nm is nearly four times what was required at the 65 nm node. One technique for containing routing runtime is clustering and filtering rules: rather than applying each rule separately, a more intelligent tool can detect rule commonalities and group them for more efficient processing.
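
A hypothetical sketch of rule clustering (the rule records and thresholds are invented for illustration): spacing rules sharing a layer pair are grouped so that one cheap pass against the loosest spacing in the group filters out most shape pairs, and only the survivors pay for the individual rule checks.

```python
from collections import defaultdict

# Illustrative rule records: (name, layer_pair, min_spacing_um)
RULES = [
    ("M1.S.1", ("M1", "M1"), 0.050),
    ("M1.S.2", ("M1", "M1"), 0.064),  # wide-metal spacing variant
    ("M1.S.3", ("M1", "M1"), 0.080),  # run-length spacing variant
    ("M2.S.1", ("M2", "M2"), 0.056),
]

def cluster_rules(rules):
    """Group rules by layer pair; a cluster's filter threshold is the
    loosest spacing in the group, so one coarse pass covers all members."""
    clusters = defaultdict(list)
    for rule in rules:
        clusters[rule[1]].append(rule)
    return {pair: (max(r[2] for r in rs), rs) for pair, rs in clusters.items()}

def check_pair(layer_pair, distance, clusters):
    threshold, members = clusters[layer_pair]
    if distance >= threshold:  # cheap filter: passes every member rule
        return []
    # Only now pay for the per-rule checks (variants may add predicates).
    return [name for name, _, spacing in members if distance < spacing]

clusters = cluster_rules(RULES)
print(check_pair(("M1", "M1"), 0.070, clusters))  # fails only M1.S.3
print(check_pair(("M1", "M1"), 0.090, clusters))  # filtered: no rule checks
```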

Another performance factor is the efficient use of multiple CPUs. Figure 2 illustrates the speedup that can be achieved for different CPU configurations when the router architecture has a very efficient data model and is built for maximum parallelism.

Figure 2. Routing speedup with multi-CPU runs using the Mentor Graphics Olympus-SoC place and route system.
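
A minimal sketch of the parallelism pattern behind Figure 2 (the partitioning scheme is invented for illustration): independent routing partitions are farmed out to worker processes, which is roughly how near-linear multi-CPU speedups are obtained, as long as partitions rarely contend for the same resources.

```python
from multiprocessing import Pool

def route_partition(partition):
    """Stand-in for detail-routing one chip region; returns (id, net count).
    A real router would route nets whose bounding boxes fall in the region
    and then stitch results at partition boundaries."""
    region_id, nets = partition
    routed = sorted(nets)  # placeholder for the expensive routing work
    return region_id, len(routed)

if __name__ == "__main__":
    # Four independent regions with their net lists (illustrative sizes).
    partitions = [(i, list(range(i * 1000, (i + 1) * 1000))) for i in range(4)]
    with Pool(processes=4) as pool:
        for region_id, count in pool.map(route_partition, partitions):
            print(f"region {region_id}: {count} nets routed")
```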

Conclusion

Advanced process node designs face a raft of significant routing challenges due to the increased number and complexity of DRC/DFM requirements, increased design sizes, and multiple design goals. Routers for 28 nm must offer a flexible and powerful architecture to address these concerns and achieve optimal QoR across all design metrics in the shortest time.

— By Alexander Volkov, Principal Technologist, Mentor Graphics Place and Route Division.

For more information about Mentor Graphics’ routing technology, see the whitepaper “Routing Technology for Advanced-Node IC Designs“.


Semiconductor IP State of the Union
by Daniel Nenni on 06-19-2011 at 10:15 am

After the mega IP acquisitions last year by Cadence (Denali) and Synopsys (Virage), a lot of people are wondering what is next for the commercial semiconductor IP market. Let me offer my opinion, as someone who works closely with the foundries and their top customers, along with the opinion of Dr. Eric Esteve, an expert on interface IP.

The commercial semiconductor IP industry will experience exponential growth due to both the mobile internet explosion and the shortened shelf life of the end products we serve. On the foundry side, the complexities of shrinking geometries and the increasing design requirements of high-speed, low-power devices will keep commercial IP vendors growing for years to come, in my opinion.

At the 48th Design Automation Conference, commercial semiconductor IP vendors dominated the foundry partner space for a reason: foundries cannot succeed without them. The next generation of fabless semiconductor companies cannot succeed without them. The semiconductor design ecosystem cannot survive without commercial IP.

Eric Esteve recently completed a four-part interview with Synopsys IP manager Hezi Saar, which can be found HERE. I worked with Hezi at Virage Logic and now work with Eric at SemiWiki.com. You will be hard-pressed to find more IP-savvy guys than Hezi and Eric. If interface IP touches your profession you will definitely want to read this interview and spend more time with Eric on SemiWiki.com.

Q: Eric, give us a quick introduction to your background as it relates to interface IP
A: I have spent 20 years working as a designer, then FAE, then in marketing for TI and Atmel, before working as a WW Marketing Director for PLDA, where I launched…
Q: What are your high-level thoughts about the semiconductor industry in general and the mobile segment in particular?
A: The semiconductor industry is still growing, with an 8% CAGR for the last 20 years or so, but it is a matter of fact that there is consolidation, and ASIC or ASSP design starts are slightly…
Q: What do you believe are the challenges facing the mobile electronics industry?
A: I think some of the challenges the mobile electronics industry is facing are almost the same as for the other…
Q: What can you tell us about the evolving time-to-market pressures?
A: Time to market for handset applications is probably the most stringent in the industry…

Q: Since you (and Synopsys) focus on interface IP, what do you see as the overarching trends for interface IP?
A: Being strongly focused on interface IP since 2005, I have seen the massive adoption of differential, high-speed serial communication techniques inside and outside…
Q: What are the most promising interfaces used by semiconductor SoCs targeting mobile market segments, and why?
A: Let’s take TI’s OMAP5 as an example; we have pretty much the list of the most promising interfaces for an application processor SoC targeting mobile market segments…
Q: You are projecting very strong growth for these interfaces, almost 100% from 2010 to 2015. How do you reconcile this growth with the prediction you made about a lower number of design starts going forward?
A: First, I should point out that different predictions, from different analysts, like the one Gartner proposed during IP-SoC in December 2010, show a
Q: That’s an interesting distinction between Gartner’s growth numbers and yours; what are the reasons for the interface IP market doubling while design starts decline?
A: The first reason is structural: yes, the number of design starts per year declines at a 2 to 4% yearly rate, but the nature of ASIC or ASSP SoC designs is strongly changing…
Q: Any last thoughts about the mobile computing space as compared to the traditional computing, storage, and enterprise segments? What do you think the future will look like?
A: During the last few years, the most drastic change we have seen in the mobile computing space has been the democratization…

The most frequent calls for help I get now are from fledgling IP companies, literally a dozen of them, with more to come as semiconductor company consolidation continues: downsized, silicon-proven IP groups on the street and ready to go. The question is how they will differentiate and thrive. The answer, of course, starts with Eric’s industry reports at IPnest.




Circuit Simulation and IC Layout update from Mentor at DAC
by Daniel Payne on 06-17-2011 at 7:06 pm

Intro
On Monday evening I talked with Linda Fosler, Director of Marketing for the DSM Division at Mentor, about what’s new at DAC this year in circuit simulation and IC layout tools.

Notes
IC Station – old name for IC layout tools

Eldo (Eldo Classic)
– Cell characterization
– ST is the early customer and teaching customer; their golden simulator
– Widely deployed worldwide

Eldo Premier (introduced January 2011; free transition for Eldo customers; a new option uses 2X the licenses)
– Multi-core, multi-CPU
– Accuracy driven
– More accurate than Berkeley (they focus on PLLs)
– FineSim from Magma
– XA from Synopsys
– Developed in Grenoble; all-new kernel, natively multithreaded
– Average of 2.5X faster than Eldo at the same accuracy, up to 20X faster
– Input netlists: Eldo, HSPICE
– Some analyses missing in Premier; to be released in the next 12 months
– DAC session at 9 AM on Tuesday

ADiT – fast SPICE simulator
– Analog blocks up to 50 million devices
– Adding new capabilities
– MediaTek standardized on ADiT
– Similar to other fast SPICE tools
– Macro tuning capability and new partitioning in development

Questa ADMS – single-kernel AMS simulator
– Number one or two in market share per EDAC and Gary Smith EDA
– Close to Cadence in market share

Grenoble – Eldo/Eldo Premier R&D
Taiwan – ADiT team/Design Kits
Armenia – CICD R&D
Cairo – models, PDK
Fremont – Division Headquarters
Wilsonville – Custom IC Design R&D
Austin – Custom Router R&D

Innovate in IC physical design, stay close to silicon design.

Technical Advisory Board – multiple initiatives
– Quarterly meetings
Simulators – all work within Cadence Virtuoso (Artist Link)

Analog within Intel microprocessors

Challenges
– Variability (physical, electrical)
– Design risk: AMS represents 75% of the risk of failure and of the design and verification cost
– Need mixed-signal verification (SPICE, HDL, analog HDL, RTL)
– Questa AMS (analog real number modeling)

Questa ADMS – C/C++, Matlab, VHDL-AMS, …

IC Station (Version 9) – New name is: Pyxis Custom IC Platform (Version 10)

Pyxis – OA database compliant (available now)
– OA native for some functions
– Schematics, layout, floorplanning
– Launches simulators
– Concurrent design: multiple designers can edit the same cell at the same time
– Interface with ClioSoft
– Can be used on a LAN; not well tested on a WAN yet
– Custom router (native OA), easily go back and forth
o Transistor, cell, block, chip; proven (used at Marvell) [not related to Olympus – big digital, different division]
o Interactive or batch routing
o Uses the Calibre RealTime deck, good integration

Design Kits – founding member of IPL
– Part of Open PDK
– Can help translate design kit formats
– pCell translator: robust, accurate, fast (1 foundry, and 1 customer using it too)
– Create new PDKs in a few weeks; able to QA libraries quickly

Summary
Mentor updates its IC layout tools under the new Pyxis name and enhances circuit simulation with the speedier Eldo Premier. AMS co-simulation between HDL and SPICE simulators is a strong point for Mentor.


DRC tool guns for Calibre at DAC
by Daniel Payne on 06-17-2011 at 6:48 pm

Intro
Across the aisle from the Mentor booth at DAC sat a competitor to Calibre in DRC tools. I received an update from Randy Smith of Polyteda on Wednesday afternoon, my last EDA vendor meeting of the week.

Ravi Ravikumar, Randy Smith

Notes
Randy Smith – CEO since February 2011 [the former founder is gone]
– 1979 at HP developing internal tools
– Trilogy
– Tangent, acquired by Cadence
– Bought 4 times
– Celestry -> Cadence
– Gambit -> Synopsys
– Japan consulting business
Before – big performance claims

Now – PowerDRC runs about 2 to 3X faster than Calibre in flat mode
– Look for a new hierarchical announcement by the end of the year, along with a new name
– Smaller memory footprint
– Easy to scale across multi-processors
– TSMC has a reference flow, while larger companies can use a new DRC tool during the design process
o 3-way NDA between Polyteda, TSMC, and the client; tune the rule deck (40nm). Takes 18 months to reach signoff, stay tuned.
– OEM relationship with AWR – single CPU limitation
– IHP – customer, AMS client at 180nm (Pricing of Calibre seems too high)
– Price/Performance – produce more results with less cost than Calibre
– Learning curve: batch oriented, easy to learn; debugging is more of the issue, and something similar to RVE, called RDE, is still internal
– Time based licensing, tied to the number of CPUs
– Mentor has two licenses: Flat or Hierarchical
– Polyteda will have one license: Flat and Hierarchical
– Over 40 people in the company
o R&D in Moscow
o HQ in Santa Clara
Summary
Polyteda has reset expectations about their DRC tool’s performance and will have to battle the entrenched Calibre in the marketplace. Competition always benefits EDA tool users, who need every advantage to get to market quickly and achieve first-silicon success.