
Why CEVA Is My Favorite Semiconductor IP Stock For 2014

by Ashraf Eassa on 12-10-2013 at 9:05 pm

As a full time financial writer/investor, I am always on the lookout for compelling risk/reward opportunities, particularly in small-cap tech. While the world of large-cap tech is generally well understood by the investment/analyst community, smaller cap names are usually under-followed and often misunderstood. One such example of this – and one of my highest conviction stock picks for 2014 – is CEVA Inc., a leading vendor of DSP IP blocks that primarily go into cellular baseband chips (although the company is expanding outside of this core business).


A Little History

Following CEVA’s most recent earnings report (in which the company disappointed investors by missing top- and bottom-line expectations), the shares crumbled, dropping from just north of $18 per share to a low of $13.71. That is when I took a position in the stock. But why did I do so?

Well, in order to understand why I believe CEVA’s future is so bright, it’s important to understand the difficulties that the company has had over the last few years. Really, the company’s (biggest) problems can be summed up with just one word: Qualcomm.

Remember how I told you that CEVA’s DSP blocks are used in the cellular modems from Samsung and Intel (via its Infineon acquisition)? Well, the first “strike” was when Infineon Wireless lost the cellular baseband socket in the GSM version of Apple’s iPhones. Tens of millions of units – each generating over 2.5 cents per unit in pure royalty profit – evaporated into thin air. Now, this is a pretty unfair characterization – while the leading-edge iPhones all used Qualcomm modems (with Qualcomm DSPs), the older-generation iPhones still sported Infineon modems, meaning that CEVA saw a continued bleed until the iPhone 4 and earlier were no more.
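To put that lost socket in perspective, here is a back-of-the-envelope sketch. The 50-million-unit volume is my illustrative assumption; only the roughly 2.5-cents-per-unit royalty figure comes from the discussion above:

```python
# Rough sketch of the royalty revenue attached to a single high-volume
# design win. The unit volume is an illustrative assumption; the
# per-unit royalty is the ~2.5 cents mentioned above.
def royalty_revenue(units, royalty_per_unit):
    """Annual royalty revenue from one socket, in dollars."""
    return units * royalty_per_unit

# e.g. 50 million units at 2.5 cents each
print(f"${royalty_revenue(50_000_000, 0.025):,.0f}")  # $1,250,000
```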

But it got worse. MediaTek, one of the leading vendors of applications processors, actually picked up DSP IP vendor (and competitor to CEVA) Coresonic, which meant that CEVA’s IP was given the boot in MediaTek’s product lineup (and MediaTek is a rather high volume player in the handset market). This was an even worse blow from a revenue standpoint than losing the iPhone.

Finally, the last leg of its steep fall from financial grace was that none of CEVA’s remaining baseband heavyweights – Samsung, Broadcom, or Intel – were able to deliver truly credible LTE solutions (yes, Samsung did its own, but the vast majority of the phones that matter for Samsung use Qualcomm apps processors and basebands – this isn’t by accident). This meant that Qualcomm basically took over the entire high end of the cellular baseband market, leaving nothing for CEVA’s big licensees over the last two years.
However, the pain is almost over and I believe that CEVA’s business is on the cusp of a major inflection point.

Samsung, Intel, and Spreadtrum: CEVA’s Saviors

Intel has formally announced that its XMM 7160 multimode LTE modem is now shipping to customers and that its second-generation LTE-Advanced modem, known as the XMM 7260, will be shipping in devices during the first half of 2014. Samsung, too, has been rather clear about its intention to use its own LTE-Advanced baseband across its product lineup next year. While Samsung’s RF transceiver partner, Silicon Motion, doesn’t yet know to what extent Samsung will deploy its own solution (the modem is still in testing), everyone involved seems optimistic that Samsung will deliver.

Intel, too, has made it quite clear that there is significant customer demand for its upcoming XMM 7260, with Intel’s wireless GM going so far as to tell the audience at the company’s annual investor meeting that customers have been nagging him to pull in the schedule for this device by a month here and a couple of months there. Intel’s modem looks quite compelling and possibly even good enough to give Qualcomm’s top notch LTE-A modem a run for its money. Success for Intel here means success for CEVA.

Finally, Spreadtrum, one of the leading vendors of integrated SoCs and cellular basebands for the Chinese handset market (and a rapidly growing one, too), leverages CEVA’s DSPs for its LTE products, which, with the rollout of LTE on China Mobile’s network, should be selling in rather nice quantities shortly.

Yes, there’s other stuff, too – but I don’t care

While I do like that CEVA is aiming to put its DSP cores into products outside of cellular baseband, the fact of the matter is that 87% of the company’s royalty-bearing unit volume comes from cellular baseband chips. Further, if my thesis is right, CEVA will start to see a meaningful upward inflection in the number of basebands shipping with its DSP cores as new products from Intel, Samsung, and Spreadtrum begin to ramp in earnest.

If CEVA’s other businesses take off as well, then that’s just more profit for CEVA and its investors, but the important thing to watch here will be just how much traction CEVA’s licensees gain in the baseband market. If Samsung moves most of its phones over to its own baseband, then there’s no question that CEVA will see a nice unit volume spike. However, I’m frankly more excited about Intel here because I do think Samsung will still use a meaningful number of Qualcomm (i.e. non-CEVA royalty-bearing) chips in many of its phones. Intel, on the other hand, is getting ready to attack the discrete LTE/LTE-Advanced market during the first half of 2014 and will then push hard with an integrated apps processor + baseband (presumably with CEVA IP, since the modem block is based on the same one found in the XMM 7260 discrete modem). At any rate, my thesis is simple: as Qualcomm’s monopoly breaks and as Samsung/Intel/Spreadtrum gain share, CEVA is poised to profit.

What if I’m wrong?

I could be dead wrong – no investor should believe in a 100% foolproof investment thesis. Intel’s baseband efforts could tank and so could Samsung’s – there’s a reason Qualcomm has held the top spot for so long, and it won’t be easy to knock this behemoth off its perch. That being said, even with baseband sales having plummeted (and, frankly, probably bottomed – it’s hard to see how much more damage can be done here), the company still has a massive pile of cash (~$5.27/share in cash on the books), valuable IP, and is still cash flow positive (although nowhere near where it was when things were going well in 2010/2011). If things get really bad, I see no reason why Intel or Samsung wouldn’t just peel off a couple hundred million from their massive cash flows and buy the company. The IP is valuable and the engineering team has proven that it can consistently do a good job. If MediaTek bought Coresonic, does it really seem farfetched for Intel/Samsung to buy CEVA?

More encouragingly, after a bad quarter and a not-so-great forward guide, CEVA’s shares found incredibly strong support in the high-$13/low-$14 range. The downside – from my perspective – simply doesn’t seem to be all that great (a few bucks per share) compared to what I believe could be a doubling in share price over the next couple of years if Intel/Samsung gain meaningful traction in baseband (and remember, the cellular market itself is growing rapidly, so Intel/Samsung don’t have to take an obscene amount of share for CEVA to see a nice uptick in royalty-bearing units being sold).
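The asymmetry I’m describing can be made concrete with a quick sketch. The $11 floor and $28 target below are my illustrative assumptions (standing in for “a few bucks of downside” and “a possible double”), not price targets:

```python
# Reward-to-risk sketch for the setup described above: entry near $14,
# an assumed floor a few dollars lower, and a possible double over a
# couple of years. All three inputs are illustrative assumptions.
def reward_to_risk(entry, floor, target):
    """Ratio of potential upside to assumed downside, per share."""
    return (target - entry) / (entry - floor)

ratio = reward_to_risk(entry=14.0, floor=11.0, target=28.0)
print(f"reward/risk ~ {ratio:.1f}x")  # ~ 4.7x
```

Under those assumptions the potential reward is several times the assumed risk, which is the whole attraction of the setup.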

Conclusion

CEVA is a small-cap semiconductor IP play that, in my view, represents one of the best catalyst-driven plays in the space. I also like Imagination Technologies, which could be a real winner if it can gain meaningful traction with MIPS and stave off ARM/Vivante/NVIDIA in the GPU IP business (and Samsung/Intel’s own internal efforts), but the CEVA story just seems easier. ARM is the biggest, best-in-class semiconductor IP vendor, but it is expensive and, in my view, the majority of the run has happened (although at today’s market capitalization, ARM could easily issue some stock and buy CEVA…).

That’s just my view of things. What do you think, SemiWiki?

More articles by Ashraf Eassa…

Also read: A Brief History of DSP



Cadence CEO Keynotes DVCon 2014!

by Daniel Nenni on 12-10-2013 at 8:00 pm


Next year’s DVCon attendees can expect to learn about practical solutions to their pressing problems that can be applied today, and to receive a preview of the technologies that will affect them in the near future. DVCon runs March 3-6, 2014 at the DoubleTree Hotel in San Jose.

KEYNOTE: An Executive View of Trends and Technologies in Electronics

The Semiconductor Revolution continues to drive tremendous economic growth more than 50 years after it began. Today’s major trends – like cloud computing, wearable computing, and the internet of things – are fueled by innovations in product design using advanced semiconductors.

But rapid market trends bring increasing time-to-market pressures that threaten to stifle innovation. The technical challenges of SoC design for advanced process nodes, as well as system issues like communications performance, data security, and ultra-low-power design, create increased complexity and risk. From a business standpoint, the cost of developing the software is becoming the biggest factor in SoC design, followed by the cost of verifying everything – including the software.

One of the ways the electronics industry addresses these challenges is through improvements in design technology. The EDA industry, including Cadence, is investing billions of dollars to develop new design technologies and methodologies to keep the semiconductor industry moving forward. In this keynote, Lip-Bu Tan will discuss the relationships between market trends, opportunities, and challenges, and show how new design technologies are an essential element of continuing innovation.

Lip-Bu Tan has served as President and CEO of Cadence Design Systems, Inc. since January 2009 and has been a member of the Cadence Board of Directors since February 2004. He also serves as chairman of Walden International, a venture capital firm he founded in 1987.

PANELS @ DVCon:

Is Software the Missing Piece In Verification?
Panelists from different segments of the semiconductor industry will discuss and debate the presumed need, maturity, scalability, and adoptability of software-driven system-level verification tools, as well as what’s needed to get them to a mass-usability level. In addition, panelists will discuss current approaches to the verification problem and what place this technology has with respect to current verification methodologies such as UVM.
See the exciting line-up of panelists.

Did We Create the Verification Gap?
Panelists will explore how verification teams interact with broader project teams and examine the characteristics of a typical verification effort, including the wall between design and verification, verification involvement (or lack thereof) in the design and architecture phase, and reliance on constrained random in the absence of robust planning and prioritization, in order to determine the reasons behind today’s Verification Gap.
More details.

TECHNICAL SESSIONS @ DVCon:

  • System-Level Design
  • Formal and Semi-Formal Techniques
  • Hw/Sw Co-Verification
  • Advanced Methodologies and Testbenches
  • Mixed-Signal Design and Verification
  • Low Power Design & Verification
  • Automated Stimulus Generation
  • SoC and IP Integration Methods and Tools
  • Interoperability of Models and/or Tools

REGISTER NOW

DVCon is the premier conference for the application of languages, tools and methodologies for the design and verification of electronic systems and integrated circuits. The primary focus of DVCon is on the practical use of specialized design and verification languages such as SystemC, SystemVerilog and e, assertions in SVA or PSL, as well as the use of AMS languages, design automation using IP-XACT and the use of general purpose languages C and C++.
Conference attendees are primarily designers of electronic systems, ASICs and FPGAs, as well as those involved in the research, development and application of EDA tools.



Could FD-SOI be Cheaper too?

by Eric Esteve on 12-08-2013 at 11:00 am

We agree now that FD-SOI technology is Faster, Cooler, Simpler. But can it also be a cheaper technology? Let’s start with an overview of current estimates of the development cost of a complex SoC on advanced technology nodes. The following data are extracted from International Business Strategies, Inc.’s 2013 report. The first picture shows the hardware design cost evolution with respect to the target design node. The exponential nature of the growth rate has been known for a couple of years at least, but I think it’s good to recall these data points:

  • 28nm H/W design cost: $50M
  • 20nm H/W design cost: $95M
  • 16/14nm H/W design cost jumps to $193M

Because a chip maker will have to fold these development costs into the Integrated Circuit (IC) selling price, it should target a market large enough to sell several tens of millions of ICs, if not hundreds of millions. And this chip maker should target the technology offering the cheapest cost-per-gate data point.
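To see why volume matters so much, we can amortize the IBS design-cost figures above over unit volume; the unit volumes below are illustrative assumptions:

```python
# Amortizing the IBS hardware design cost over units sold shows why an
# advanced-node SoC needs a very large market. The costs are the IBS
# figures quoted above; the unit volumes are illustrative assumptions.
DESIGN_COST = {"28nm": 50e6, "20nm": 95e6, "16/14nm": 193e6}

def design_cost_per_ic(node, units):
    """Hardware design cost carried by each IC sold, in dollars."""
    return DESIGN_COST[node] / units

for units in (10e6, 100e6):
    per_ic = {n: round(design_cost_per_ic(n, units), 2) for n in DESIGN_COST}
    print(f"{units / 1e6:.0f}M units -> {per_ic}")
```

At 10 million units, the 16/14nm design cost alone adds $19.30 to every chip sold; at 100 million units it falls to $1.93, which is why only very high-volume sockets can justify the leading edge.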

The next picture will not make life easier for this chip maker: the cost-per-gate trend reverses at the 20nm node! Because the market targeted by this chip maker is certainly ultra-competitive (remember, it’s a several-hundred-million-chip segment, attracting many players) and every chip generation is expected to integrate more features and more functions (CPU, GPU, SRAM, DSP, etc.), moving to the next technology node is not a choice, it’s a requirement.

The above picture is based on bulk silicon technology, so how does FD-SOI technology compare on cost? Thanks to an active SemiWiki reader, Scott W Jones, who posted a relevant comment on a previous blog and directed me to “FDSOI more cost effective than bulk for 22nm SOC”, I have extracted the following table, which compares the fabricated wafer cost of a 20nm bulk wafer with that of a 22nm FD-SOI wafer. This table provides our 20nm data point:

You may want to read the complete article to check the methodology used to arrive at this table (just click on the title above); the very interesting point is that both costs are within 1 or 2%. The starting wafer cost (material) of $500 for FD-SOI instead of $130 for bulk is compensated: FD-SOI is simpler, which means fewer masks and fewer process steps, less equipment (depreciation and maintenance), lower labor cost… And, by the way, the lead time to process an FD-SOI wafer is shorter! If you have ever waited for ASIC prototypes to come back from the wafer fab, I am sure you will greatly appreciate this benefit! The chip maker’s marketing team will also appreciate such a shorter lead time during production ramp-up, to accelerate TTM.
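The compensation mechanism can be sketched with a toy cost model. Only the $500 and $130 substrate prices come from the comparison above; the step counts and per-step cost are my illustrative assumptions:

```python
# Toy fabricated-wafer cost model: a pricier FD-SOI starting wafer is
# offset by a simpler flow (fewer masks and process steps). Substrate
# prices are from the comparison above; step counts and per-step cost
# are illustrative assumptions.
def wafer_cost(substrate, steps, cost_per_step):
    return substrate + steps * cost_per_step

bulk  = wafer_cost(substrate=130, steps=60, cost_per_step=50)  # 20nm bulk
fdsoi = wafer_cost(substrate=500, steps=52, cost_per_step=50)  # 22nm FD-SOI
delta = abs(bulk - fdsoi) / bulk * 100
print(f"bulk ${bulk}, FD-SOI ${fdsoi}, difference {delta:.1f}%")
```

With these assumed numbers the two wafers land within about 1% of each other, consistent with the 1-2% figure above; the shorter flow is also what drives the shorter lead time.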

So far I have not used any data from STMicroelectronics, the FD-SOI champion along with IBM and GlobalFoundries, but after searching the web I could not find data that would generate the other data points, at 28nm and 14 (or 16)nm. This table was presented during IP-SoC 2013 in Grenoble, so I downloaded the slides (here) from STM’s keynote presentation to have enough time to look at it carefully:

If we look at the first cell in the table, 28nm planar FD-SOI vs. 28nm HKMG bulk planar, we just see “FD-SOI showing better cost”. In the worst case, we can say that FD-SOI is cheaper than, or the same cost as, HKMG bulk planar technology, assuming the fabrication-related costs are very similar to those listed in the previous table. In fact, digging into the presentation, we can find this picture.

The outcome is:

  • 28nm FD-SOI is the same cost as 28LP
  • 28nm FD-SOI provides the same performance as 28G… but at a 15% lower cost.

Thus we have the next data point: FD-SOI is 10-15% cheaper than 28G bulk planar technology at equivalent performance. Another data point would be interesting: the comparison between 14/16nm planar FD-SOI and 14/16nm bulk FinFET. According to the above (Benefits) table, 14nm planar FD-SOI fabricated wafers are 35% cheaper than 14nm FinFET bulk. That makes sense, as the SOI wafer over-cost ($500 vs $130) is certainly more than compensated by the numerous extra process steps needed to create the 3D FinFET structure… It would be good to have a cost data point from an independent source; as far as I know, some research is ongoing, so I will update this article as soon as such data become available.

Let me quote Prof. Asen Asenov of the University of Glasgow (and CEO of Gold Standard Simulations (GSS) Ltd), who has run some very interesting research on the topic: “Making SOI finfets is cheaper than bulk finfets, but SOI wafers are more expensive, so if a foundry goes to SOI it has to give some of its profits to the SOI manufacturer. So bulk is better for foundries and that’s why they’re going for bulk at 16nm and 14nm. But SOI has the advantage. For foundries it’s not the most important thing to have the best technical solution but to have the solution which produces the most revenue.” Clearly, Prof. Asenov considers SOI technology (FinFET, and even more so planar) cheaper than bulk FinFET. He also makes a very good point in noting that a larger share of the fabricated-wafer margin goes to the raw SOI wafer vendor, which explains why foundries prefer bulk technology: they keep a larger share of the overall margin when selling bulk wafers…

Faster, Simpler, Cooler,…and Cheaper: FD-SOI technology should get very good traction in the near future!

From Eric Esteve from IPNEST

More Articles by Eric Esteve …..



Equipment Spending Down 2013; Expect 33% Growth in 2014

by Daniel Nenni on 12-08-2013 at 10:10 am


SEMI’s World Fab Forecast report, published in November, predicts that fab equipment spending will decline about -9 percent (US$32.5 billion) in 2013 (including new, used and in-house manufactured equipment). Setting aside the used 300mm equipment Globalfoundries acquired from Promos at the beginning of 2013 (NT$20-30 billion), fab equipment spending sinks further, to -11 percent in 2013. The previous World Fab Forecast in August predicted an annual decline of just -1 percent (-3 percent without the used Promos 300mm equipment).

Fab equipment spending slowed in the third quarter much more than anticipated. The fourth quarter is also expected to be slower than previously thought, but will remain the strongest quarter of 2013. See figure 1.

Figure 1: Fab equipment spending for Front End facilities by quarter

2011 Still Record Year but 2014 Closing in

After two years of decline, the 2014 wafer fab equipment market is expected to grow over 30%. Taiwan will be the strongest spending region with over US$9 billion, while Korea and the Americas will each spend at least US$6 billion, and China and Japan will each spend around US$4 billion.

A growth rate of over 30% brings 2014 close to 2011 spending for wafer fab equipment. Comparing the actual spending numbers for 2014 and 2011, spending in 2014 is expected to be slightly below 2011 levels, about US$39.7 billion for 2011 compared to US$39.5 billion projected for 2014.

Fab Equipment Spending Patterns: Predicting the Next Slow Down?
The industry has displayed a predictable pattern for most of the past 15 years with regard to fab equipment spending: following two years of negative growth, there have typically been two subsequent years of positive growth. See figure 2.

Figure 2: Growth rates of fab equipment spending (2010 deliberately cut off in order to emphasize)

According to the SEMI World Fab Forecast, the fab equipment market contracted in 2012 and 2013, while the next two years, 2014 and 2015, are expected to be positive. The same scenario occurred from 2008 to 2011. After 2005, a single year of small decline, 2006 and 2007 showed growth. The same scenario occurred from 2001 to 2004. This pattern is not new and has been observed by many analysts. However, over these 15 years the industry has never experienced three consecutive years of growth or three years of decline, according to SEMI database tracking. At this point, the pattern points to expected growth in 2015 of between 8 and 12 percent. If the pattern holds, another decline will occur in 2016.

2013 — a Maverick Year
Semiconductor revenue growth is usually followed by more equipment spending, with revenue and capex typically riding the same rollercoaster. This is not the case for 2013: while semiconductor revenues are expected to grow, although by single digits, equipment spending will not. See figure 3.

Figure 3: Change rates of semiconductor revenue and fab equipment spending (2010 deliberately cut off in order to emphasize)

As demonstrated by the above chart, positive semiconductor revenue growth led to positive growth for fab equipment spending (except in 2005), while negative revenue years led to contractions in fab equipment spending. Industry consensus points to about 6 percent semiconductor revenue growth in 2013, though fab equipment spending will contract. With the expected growth in semiconductor revenues for 2014, SEMI’s World Fab Forecast data support much stronger growth for fab equipment spending in 2014. The drop in 2013 may be explained by delays in ramping next generation products and a slower pace of new capacity addition.

Fab Construction Projects Strong in 2013 but Slowing

Across the industry, there are 40 major construction projects on-going in 2013, and 28 are predicted for 2014. Construction spending growth for 2013 is about 40 percent (US$7.5 billion). By 2014, this will drop by -15 percent (US$6.4 billion). The largest construction projects already underway or expected to start soon are Samsung S3 (Line 17), Flash Alliance Fab 5 phase 2, possibly Globalfoundries Fab 8.2, Intel D1X module 2, and TSMC with four facilities. The World Fab Forecast report shows details per fab by quarter.

SEMI’s data support strong equipment spending in both 2014 and 2015, while construction spending is expected to decline in both years, and new capacity additions remain below 4 percent in 2014 and most likely in 2015 as well.

Activity report

SEMI’s World Fab Forecast lists about 1,150 facilities. Sixty-seven of these (with various probabilities) have started or will start volume production in 2013 or later. The report lists major investments (construction projects and equipping) in 206 facilities and lines in 2013 and 180 facilities and lines in 2014.

Since the last fab database publication at the end of August 2013, SEMI’s worldwide dedicated analysis team has made 301 updates to 257 facilities (including Opto/LED fabs) in the database. The latest edition of the World Fab Forecast lists 1,149 facilities (including 250 Opto/LED facilities), with 67 facilities, with various probabilities, starting production this year or in the near future. We added 7 new facilities and closed 11 facilities.

SEMI World Fab Forecast Report

The SEMI World Fab Forecast uses a bottom-up methodology, providing high-level summaries and graphs as well as in-depth analyses of capital expenditures, capacities, technology, and products by fab. Additionally, the database provides forecasts for the next 18 months by quarter. These tools are invaluable for understanding how semiconductor manufacturing will look in 2013 and 2014, and for learning more about capex for construction projects, fab equipping, technology levels, and products.

The SEMI Worldwide Semiconductor Equipment Market Subscription (WWSEMS) data tracks only new equipment for fabs and test and assembly and packaging houses. The SEMI World Fab Forecast and its related Fab Database reports track any equipment needed to ramp fabs, upgrade technology nodes, and expand or change wafer size, including new equipment, used equipment, or in-house equipment. Also check out the Opto/LED Fab Forecast. Learn more about the SEMI fab databases at: www.semi.org/MarketInfo/FabDatabase and www.youtube.com/user/SEMImktstats

SEMI
www.semi.org
San Jose, California


By Christian Gregor Dieseldorff, SEMI Industry Research & Statistics Group



Capturing Analog Design Intent with Verification

by Daniel Payne on 12-08-2013 at 10:05 am

Analog IC designers are gradually adopting what digital IC designers have been doing for years: metric-driven verification. When you talk with analog designers about their methodology and approach, you hear terms like “artisan” being used, which implies a mostly manual methodology. Thanks to automation from EDA companies, we are seeing more analog designers capture their analog design intent with verification in mind. This week I spoke with Fouad Mkalech of Mentor Graphics to learn about an environment they offer called ICanalyst.


Fouad Mkalech, Mentor Graphics

Q: What is ICanalyst all about?
A: ICanalyst is something we’ve developed that goes beyond the traditional flow where you simulate a transistor netlist and then view waveforms. Metric-driven verification is a very powerful concept used in digital design, so we’ve brought that idea over to analog IC designers. This tool provides a methodology for verifying your analog design against its specifications.

Q: What changes would I make to my normal analog design flow?
A: You would start to use MEASURE and EXTRACT statements in your netlist, simulate, and then collect results into a database for analysis purposes, with both Digital and Analog domain results in one place.

Q: What kind of inputs are accepted in this process?
A: Design inputs include: Mentor Pyxis Schematic, Cadence Virtuoso schematic or just a SPICE netlist. Your SPICE circuit simulator also requires a PDK with the SPICE models.

Q: Which simulators are supported?
A: We support several simulators from Mentor: Eldo, ADiT, Questa ADMS.

Q: What are the benefits of analog design verification?
A: Designers now have the ability to input their requirements and then compare them against simulated results in an automated fashion. It’s really quite similar to the UVM approach for digital designs in terms of creating a spec, adding requirements, then verifying in a testbench. This approach allows you to link the specification, test plan, and analog environment for measurements. You can then track your verification progress with ReqTracer using a requirement-driven process. Analog designers are now able to track each requirement, see how it drives the test plan, and then answer the important question, “Is my design failing or passing this requirement?”
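The core idea of metric-driven analog verification is easy to sketch in a few lines. The requirement names and limits below are hypothetical examples, not ICanalyst’s actual format or API:

```python
# Minimal sketch of metric-driven analog verification: compare measured
# simulation results against spec limits, one pass/fail per requirement.
# Requirement names and limits are hypothetical examples.
def verify(spec, measured):
    """spec: {name: (lo, hi)}; measured: {name: value} -> {name: passed}"""
    return {name: lo <= measured[name] <= hi
            for name, (lo, hi) in spec.items()}

spec = {"gain_db": (58, 66), "ugbw_mhz": (100, float("inf")),
        "offset_mv": (-2, 2)}
measured = {"gain_db": 61.2, "ugbw_mhz": 95.0, "offset_mv": 0.4}
results = verify(spec, measured)
print(results)  # ugbw_mhz fails: 95 MHz is below the 100 MHz floor
```

A tool like ICanalyst automates this comparison across many simulations and ties each pass/fail back to the requirement that drove the test.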

Q: Where is ICanalyst developed?
A: ICanalyst started out development in San Jose, and has now moved to Grenoble, France.

Q: Who would be the typical user of this tool?
A: Users include: analog designers, verification engineers or even project managers.

Q: What is the learning curve for ICanalyst?
A: There’s a user interface, so it looks and feels intuitive. The general usage flow is: create a spec, create an Excel document with your requirements, import the Excel file, match measurements to the tracked requirements (SPICE measurements), and save the results to a database. We find that engineers learn the tool within a day.


UI of ICanalyst: manage simulation setup, parallel job execution and results management

Q: What kind of questions can using ICanalyst help me answer?
A: One important question is, “Have I done enough analog circuit simulation to meet my specs?”

Q: Who has been using ICanalyst?
A: One customer is DENSO, a global supplier of automotive technology, systems, and components.

Q: When is the next release of ICanalyst due?
A: In January we have our next release, and it adds support for requirements-driven verification of analog designs.

Q: Are there other EDA vendors with something similar to ICanalyst?
A: Yes, from Cadence there is a product called Virtuoso ADE XL.

More Articles by Daniel Payne …..



How to Assure Quality of Power and SI Verification?

by Pawan Fangaria on 12-08-2013 at 10:05 am

As power has become one of the most important criteria in semiconductor design today, I was wondering whether there is a standard for power verification of an overall chip. We do have formats such as CPF and UPF, and there are tools available to check power and signal integrity (SI); however, I don’t see a standard, objective assessment of power and SI verification. It’s more of a subjective matter based on how much testing has been done.

A while ago I was reviewing Apache’s industry-leading tools, RedHawk and PowerArtist, and that made me curious about what kind of procedures these tools use for power grid and SI analysis and verification. And, to my pleasant surprise, I found this whitepaper, which provides a good level of detail about the step-by-step, ordered flow that systematically increases the coverage of power grid verification at various stages. By gaining high coverage and fixing detected weak points, the design is assured to be free of SI defects such as high noise or jitter and of timing path violations. The overall Apache solution internally uses several powerful utilities and tools to perform these steps. So, let’s see in brief what these are.



Early Analysis: At this stage, early power estimation is done at the RTL level of the design by using the Excel2IR utility and the RPM (RTL Power Model) interface to RedHawk, thus reducing lengthy iterations later.

Design Input Check: RHE (RedHawk Explorer) verifies complete design data and reports missing data to be completed. At this stage, the simulation setting is also checked and corrected such as required frequency for a good capture of design power.

Design Weakness: Design extraction at this stage can detect missing vias, shorts, and unconnected wires and instances. LVS checks connectivity, and RedHawk checks the quality of connections and detects high resistances (which can lead to noise and electromigration issues) to particular devices. High-resistance connections are fixed at this stage to avoid wasting time in debugging later.

Static Simulation: This is the first level of simulation, and it confirms the design’s average power consumption and current. Parameters such as the adequacy of metal density, the number of pads and, in the case of low-power design, the number of power gates are determined.

Dynamic Simulation Guided Vectorless: Vectorless simulation is very important for verification coverage because it detects weak points with the help of RedHawk’s built-in engine. It considers each instance: function, toggle, timing overlap, power, frequency, and weak connection to the power grid. Package data is also included as it has high inductance and that can lead to high dynamic IR drop. Design related activity and power information is added to guide the simulation.

Dynamic Simulation Scan: This stage reflects the largest current peak consumed by the design in a dynamic simulation. All FFs receive the same clock frequency. The simulation can be done in vectorless mode when the ATPG information about the scan order is inserted into RedHawk.

Dynamic Simulation RTL VCD: Using an RTL VCD can provide realistic switching scenarios, but it is limited to the specific vectors used. State propagation can provide a more accurate power calculation; RedHawk’s internal state-propagation engine generates a small VCD file.

Dynamic Simulation GL VCD: At this stage, simulation is very accurate and the VCD has true timing including propagation delay, switching time and toggle for every instance. Accurate APL (Apache Power Library), package information and fine time step make the simulation results close to silicon.
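The basic electrical arithmetic these stages build on, average current from power and supply voltage, and IR drop from current and grid resistance, can be sketched as follows. All numbers here are illustrative assumptions, not RedHawk output:

```python
# Back-of-the-envelope static IR-drop check of the kind a static
# power-grid simulation formalizes. All numbers are illustrative.
def static_ir_drop(power_w, vdd, grid_resistance_ohm):
    """Return (average current in A, IR drop in V) for a power rail."""
    i_avg = power_w / vdd                      # I = P / V
    return i_avg, i_avg * grid_resistance_ohm  # V_drop = I * R

i, drop = static_ir_drop(power_w=2.0, vdd=1.0, grid_resistance_ohm=0.02)
print(f"I = {i:.1f} A, IR drop = {drop * 1000:.0f} mV")  # 2.0 A, 40 mV
# A drop above ~5% of VDD (50 mV here) would flag a weak grid.
```

A full power-grid tool performs this kind of calculation per instance and per grid segment, with dynamic waveforms rather than a single average.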

RedHawk, along with its other powerful utilities, performs all these tasks and builds verification coverage to gain significant confidence in the power and SI reliability of a chip, package and board. RHE is a versatile utility built into RedHawk that works at all stages; success criteria for a tape-out can be pre-set in RHE, which then works toward satisfying those criteria. A detailed description of all these procedures can be found in the whitepaper. Happy reading!

More Articles by Pawan Fangaria…..



A Brief History of ARM Holdings

by Daniel Nenni on 12-07-2013 at 12:00 pm

It was on 26th April 1985 (at 3pm, to be precise) that the very first ARM silicon sprang into life – a 25K-transistor design implemented in 3µm technology with just 2 layers of metal.

However, back then the “A” in ARM stood for Acorn – ARM the company had yet to be formed. Acorn sold computers to schools, so cost was a prime concern. This meant that when it came time to replace the aging 8-bit 6502 in the BBC Micro with a more powerful microprocessor, the replacement had to be cheap.

Unfortunately, the commercially available alternatives at the time were simply not cheap enough, so Hermann Hauser, the Managing Director of Acorn, decided that Acorn should build its own 32-bit microprocessor. However, he gave the ARM design team two distinct advantages over other microprocessor design teams – no money and no people! So the design had to be simple and straightforward; indeed, the first ARM reference model was written in just 808 lines of BASIC.

Interestingly, although the ARM silicon worked first time, it appeared to be consuming no power at all, at least that is what the ammeter said. It turned out that the test board had a fault that meant the ARM was effectively unpowered and was running solely on leakage from the I/Os. This low power consumption was a valuable side effect of making the ARM cheap and turned out to be the key to its success in the emerging mobile electronics market.

1990: ARM Ltd. Founded
In the early 1990s Apple was developing a “Personal Digital Assistant” called Newton and was looking for a low-power processor to power it. Apple was very interested in ARM but was reluctant to base a product on Acorn’s IP, and so ARM Ltd. was founded on 27th November 1990 as a joint venture between Apple, Acorn, and VLSI Technology. The first office was in a beautiful 17th-century converted barn just outside Cambridge, UK. Apple put in £1.5M of cash, Acorn put in the 12 engineers who had worked on ARM, and VLSI provided the design tools.

ARM then set about extending the architecture to meet Apple’s requirements for 32-bit addressing and endianness support. In January 1992 the ARM610 was complete, and the Newton was launched in 1993.
Partnership Model
Unfortunately, the Newton was not a great success, so Robin Saxby, ARM’s CEO, decided to grow the business by pursuing what we now call an IP business model, which was rather unusual at the time. This was still at the stage when a microprocessor was a whole chip, before it was small enough to form just part of a system-on-chip.

The ARM processor was licensed to many semiconductor companies for an upfront license fee and then royalties on production silicon. This effectively incentivised ARM to help its partner get to high volume shipments as quickly as possible.
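As a toy illustration of that incentive structure, consider how cumulative revenue from a single licensee accumulates. The fee and royalty figures here are entirely made up:

```python
# Toy model of the license-plus-royalty stream described above: an upfront
# fee at signing, then a per-unit royalty once production silicon ships.
# The fee and royalty rate are made-up illustrative numbers.
def cumulative_revenue(license_fee, royalty_per_unit, units_shipped):
    return license_fee + royalty_per_unit * units_shipped

print(cumulative_revenue(1_000_000, 0.05, 0))            # at signing: fee only
print(cumulative_revenue(1_000_000, 0.05, 100_000_000))  # after 100M units ship
```

The royalty term dominates only at high volume, which is exactly why ARM was incentivised to get its partners shipping in volume quickly.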

One interesting feature of this IP licensing business model is that the pipeline is very long—it can take years from the time a license is signed until the royalties really start to kick in.

When ARM started, it had an internal software group to produce the compilers, assemblers and debuggers that it required, but ARM could not do everything necessary internally. It was still a small company with a niche processor architecture, so companies like Wind River that produced real-time operating systems needed to be paid to port their product lines and support the architecture.

But as the ARM architecture became more and more widely licensed, ARM put a lot of effort into building a partner program so that anything an ARM licensee might need would be available from an ecosystem of 3rd-party suppliers. The economics for the partners changed so that gradually selling into ARM’s base of licensees became a huge opportunity, and nobody needed to be bribed to support the architecture.

1994: “Thumb” – the big break
In 1993 Nokia approached TI to produce a chipset for an upcoming GSM mobile phone, and TI proposed an ARM7-based system to meet Nokia’s performance and power requirements. Unfortunately, Nokia rejected the proposal because the memory footprint of an ARM7-based solution made the system cost too high – with a 32-bit processor, every instruction took 4 bytes. ARM came up with a radical idea: create a subset of the ARM instruction set that required just 16 bits per instruction. This improved code density by about 35% and brought the memory footprint down to a size comparable with 16-bit microcontrollers. Thumb, as it became known, was a major breakthrough that won Nokia over, and arguably is what got ARM into mobile phones. The first ARM-powered GSM phone was the hugely popular Nokia 6110, and the ARM7TDMI which powered it went on to become one of ARM’s most successful products, with more than 170 licensees who have shipped over 10 billion units since its introduction in 1994.
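The code-density arithmetic behind that win can be sketched as follows. The ~1.3× Thumb instruction-count overhead is an assumption chosen only to reproduce the roughly 35% saving quoted above, not a measured figure:

```python
# Rough code-size comparison behind the Thumb win: 32-bit ARM instructions
# take 4 bytes each, 16-bit Thumb instructions take 2 bytes each, but a
# Thumb build needs somewhat more instructions for the same work. The 1.3x
# instruction-count overhead is an illustrative assumption.
def code_size_bytes(instruction_count, bytes_per_instruction):
    return instruction_count * bytes_per_instruction

arm_size = code_size_bytes(100_000, 4)    # 400,000 bytes for 100K ARM instructions
thumb_size = code_size_bytes(130_000, 2)  # 260,000 bytes for ~1.3x as many Thumb ones
saving = 1 - thumb_size / arm_size        # fraction of memory saved
print(arm_size, thumb_size, round(saving, 2))
```

Even with the instruction-count penalty, halving the bytes per instruction wins decisively, which is what pulled system cost down to 16-bit microcontroller territory.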

By the end of 1997, ARM had grown to become a £27m business with a net income of £3m, and so it was time to float the company. On April 17th, 1998, ARM completed a joint listing on the London Stock Exchange and Nasdaq with an IPO at £5.75. The stock soared and ARM became a billion-dollar company almost overnight.

The move to Synthesizable cores
Chips were now large enough that a microprocessor only occupied a small part of the chip and so it was possible to build software-based systems on a single chip, the so-called system-on-chip, or SoC. Microprocessors were one of the first parts of SoCs to use the IP business model since most design teams didn’t have the knowledge or desire to build their own microprocessor and certainly lacked the skills to build the tool-chain of compilers and debuggers necessary to make it usable. As a result, ARM was designed into more and more SoCs, especially in the explosively growing cell-phone market where ARM had gradually become the de facto standard.

However, the ARM core was technology-specific “hard IP” and it soon became clear that porting it to so many different technologies was a real bottleneck and that something had to change. A synthesizable core was required that could be licensed to anyone without needing a technology-specific port of the core.

In 2001 the ARM926EJ-S was announced. It was fully synthesizable, with a 5-stage pipeline and a proper MMU, as well as hardware support for Java acceleration and some DSP extensions. It went on to be licensed by over 100 silicon vendors worldwide and has shipped over 5 billion units to date.

2005: Cortex
The subsequent development of the ARM9 and ARM11 families had extended the capability of the ARM architecture in the direction of higher performance, with the introduction of multi-processing, SIMD multimedia instructions, DSP capability, Java acceleration and so on. However, there were other, potentially much larger market segments which these processors did not address. So, in 2005, ARM introduced a change of direction and the ARM architecture was split into 3 “profiles”: the upward-and-to-the-right path continued with Cortex-A, a new range of high-performance real-time processors was introduced as Cortex-R, whilst the Cortex-M profile targeted microcontrollers.

2008: Multicore
By 2008 the smartphone market was booming, and the demand for increased performance whilst at the same time maintaining a long battery life presented quite a challenge. ARM responded with the Cortex-A9 MPCore, a multi-core processor which was better able to address the huge dynamic range in processing power, from idle or playing music to full-bore 3D gaming. This was further improved with the introduction of the heterogeneous “big.LITTLE” approach in 2011, which provides high performance from a powerful core when required and then switches back to a lower-performance but much lower-power core when high performance is not needed.
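A minimal sketch of the big.LITTLE switching decision described above. The core capacities and power numbers are invented for illustration, not real Cortex figures:

```python
# Minimal sketch of the big.LITTLE idea: run on the low-power core until
# the demanded performance exceeds what it can deliver, then migrate to
# the high-performance core. Capacities and power are made-up numbers.
LITTLE = {"name": "LITTLE", "max_dmips": 1000, "power_mw": 100}
BIG    = {"name": "big",    "max_dmips": 4000, "power_mw": 800}

def pick_core(demand_dmips):
    """Prefer the LITTLE core whenever it can meet the performance demand."""
    return LITTLE if demand_dmips <= LITTLE["max_dmips"] else BIG

print(pick_core(200)["name"])    # light load, e.g. playing music
print(pick_core(3500)["name"])   # heavy load, e.g. 3D gaming
```

The battery-life win comes from the asymmetry: light workloads dominate a phone's day, so most of the time the system sits on the cheap core.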

Of course, the product line continues to develop with a spread from the Cortex-M0 microcontroller all the way up to 64-bit multicore processors aimed at the corporate datacenter, and with cores in between targeted at attractive large markets such as low-end low-price smartphones.

ARM is now the standard microprocessor for use in mobile products, especially smartphones such as the iPhone or Samsung Galaxy, and tablet computers like the iPad. It powers Qualcomm’s Snapdragon, Apple’s series of Ax application processors and MediaTek’s chipsets, as well as high-volume low-cost feature phones.

The ARM Connected Community, as the ecosystem of partners is now known, has over 1000 companies participating. These partners add value to the ARM architecture and are a formidable barrier to entry into ARM’s processor licensing business.

When ARM was founded back in 1990 it had just one licensee, VLSI Technology, who had shipped a total of 130K cores. Today ARM has more than 280 licensees who have collectively shipped over 30B cores to date. There are 22 million ARM cores entering the market every day.

About ARM

ARM designs the technology that is at the heart of advanced digital products, from wireless, networking and consumer entertainment solutions to imaging, automotive, security and storage devices. ARM’s comprehensive product offering includes RISC microprocessors, graphics processors, video engines, enabling software, cell libraries, embedded memories, high-speed connectivity products, peripherals and development tools. Combined with comprehensive design services, training, support and maintenance, and the company’s broad Partner community, they provide a total system solution that offers a fast, reliable path to market for leading electronics companies.


The Leading Edge Depends on What You Are Doing

by Paul McLellan on 12-06-2013 at 11:10 pm

At Semicon Japan a few days ago, Subi Kengeri of GlobalFoundries delivered the keynote. While he covered a number of topics, using Tokyo’s recent win of the 2020 Olympics as a hook, one major theme was the increasing importance of processes other than the bleeding edge digital processes that get all the news.

What is leading edge depends a lot on what it is you are doing. For example:

  • automotive: 40nm
  • display drivers: 55nm
  • power management: 130nm
  • RF antenna: 130nm
  • amplifiers: 180nm


When Apple’s latest Ax chip gets all the buzz, it is easy to forget that a typical smartphone contains more analog devices than logic/memory devices. There are analog chips for power, audio, video, display, touch, transmitters, compass, gyroscope and accelerometer, whereas the application processor (with on-board DRAM), the NAND flash, NOR flash and modem make just four chips. Without the analog, the phone won’t turn on and the display won’t work. Not a very smart smartphone.

One result of all of this is that the market for analog is growing significantly faster than the overall semiconductor market.

In analog, what is important is not so much the process generation but the voltages that can be supported. Different applications have different voltage requirements. But to make this efficient, it is necessary that these requirements be delivered on the back of a much more modular process than has been the case historically where a lot of custom process development was the norm. And whereas analog fabs often were not on the leading wafer size, the volumes required mean that the efficiency of 300mm wafers is not optional for many designs.

A foundry needs to offer a modular process with a choice of wafer sizes. On top of the baseline process, various options can be added, with process modules for RF, higher voltages, MEMS, non-volatile memory and so on. By mixing modules it is possible to put together a fully integrated SoC that contains several of these technologies.
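Conceptually, the modular offering works like composing options onto a base process, as in this sketch. The module names follow the text; everything else is invented for illustration and bears no relation to a real foundry PDK:

```python
# Sketch of the modular-process idea: a baseline process plus optional
# add-on modules, combined per design. Module names follow the article;
# the data structures are a made-up illustration, not a real PDK.
BASELINE = {"base": "baseline CMOS"}
MODULES = {"rf", "high_voltage", "mems", "nvm"}

def build_process(*wanted):
    """Compose a process recipe from the baseline plus requested modules."""
    unknown = set(wanted) - MODULES
    if unknown:
        raise ValueError(f"no such module(s): {sorted(unknown)}")
    return {**BASELINE, "modules": sorted(wanted)}

print(build_process("rf", "nvm"))
```

The point of the modularity is exactly this composability: each design picks only the add-ons it needs, rather than commissioning a custom process development.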

Subi talked about MEMS. This is an area of rapid development. As Subi said: “MEMS will evolve just like CMOS…only faster.”

Of course all of this analog and MEMS development (as well as leading-edge digital) needs to be done in an increasingly cooperative fashion, with customers, foundries and other stakeholders working together – what GlobalFoundries calls Foundry 2.0. There are certainly a lot of challenges in taking 28nm designs down to 16nm, but there are also major challenges in getting higher voltages onto more recent process geometries so that complete integration can be done on a single baseline process. Obviously you cannot mix and match designs on a single die if they are in completely different process technologies (3D and interposer designs might become important here, although costs still seem to be too high).

Of course GlobalFoundries is doing all this, investing in their analog and mixed signal fabs (primarily the ex-Chartered fabs in Singapore), refreshing 200mm fabs and expanding 300mm capacity, and so enabling a complete solution with advanced analog, digital, power, MEMS, non-volatile memory and RF.


More articles by Paul McLellan…


Virtual Prototypes Made Easier for SoC Design

by Daniel Payne on 12-06-2013 at 6:24 pm

Using a virtual prototype for your SoC design is accepted, conventional wisdom today because it can save development time by eliminating design iterations and avoiding costly bugs that could cause an expensive product recall. In order to simulate your virtual prototype you need models, so a major question has always been, “Where do I get all these models for each block in my SoC, many of which are IP purchased from multiple vendors?”

Carbon Design Systems provides both a virtual platform and models to enable virtual prototyping, and they have just extended their agreement with Cadence Design Systems, which supplies a popular DDR3 memory controller. Read the press release here.


Simulating a Virtual Prototype with Carbon SoC Designer Plus

I wanted more details than the press release provided, so I followed up with Bill Neifert of Carbon Design:


Bill Neifert, CTO at Carbon Design Systems

Q&A

Q: Why is this announcement important and to whom?
A: Accurate virtual prototypes are a vital part of the development process for leading-edge SoCs. This partnership enables customers developing these SoCs to more easily design with and optimize designs using Cadence IP. Including Cadence IP in Carbon’s next generation of CPAKs enables designers to get up and running quickly and leverage the value of Cadence’s expanding IP library earlier in the design cycle.

Q: When did the partnership first begin with Cadence? What is the history?
A: This agreement originally started as a partnership with Denali for their DDR3 memory controllers in April of 2010. We successfully deployed multiple 100% accurate memory controller models to mutual customers and prospects through the years. After Denali’s purchase by Cadence we’ve amended the agreement a few times to add additional pieces of IP such as the Gigabit Ethernet MAC and PCI Express controller.

Q: Are the Carbon models of the Cadence DDR3 memory controller the first new models under this agreement?
A: These models and the accompanying Carbon Performance Analysis Kits (CPAKs) have been available since April 2010. We’ll be rolling out models of the rest of the Cadence IP portfolio in the upcoming month or two.

Q: How many design IP models does Cadence have that will be available on your portal? Is there a list that I can see?
A: We will have models of all of Cadence’s IP available on the portal. Cadence’s IP library is available here and contains:

  • Memory Design IP
  • Storage Design IP
  • Connectivity Design IP
  • Analog Design IP
  • Core SoC Building Blocks

Q: What are the new CPAKs coming out, and when?
A: The new CPAKs will be rolled out based upon customer interest. We typically do this by modifying an existing CPAK to incorporate a new piece of IP or replace an existing one, so this process is pretty straightforward.

Q: Does this partnership change anything with Tensilica processors?
A: Carbon has a longstanding partnership with Tensilica, which is unchanged by this agreement.

Q: Which web site should customers go to, Cadence or Carbon or either? Why?
A: All of the models will be exclusively available from Carbon’s IP Exchange web portal: www.carbonipexchange.com. This web site is already the industry’s only source for accurate models of IP from other leading IP vendors such as ARM and Imagination Technologies, so it’s a natural place.

Q: Which customers benefit from this partnership?
A: Customers performing IP selection, configuration, performance analysis and pre-silicon firmware development.

More Articles by Daniel Payne …..



Physically Aware Synthesis

by Paul McLellan on 12-06-2013 at 2:47 pm

Yesterday Cadence had their annual front-end summit, the theme of which was physically aware design. I was especially interested in the first couple of presentations about physically aware synthesis. I joined Cadence in 1999 when they acquired Ambit Design Systems. One of the products that we had in development was called PKS which stood for physically knowledgeable synthesis. It was common knowledge back then that wireload models were not going to work much longer and we needed to combine placement and global routing with synthesis. It was 15 years ago that PKS was started. So now to the present day.

The first presentation was by Jaga Shanmuga of Cisco, about using the beta of the new release of RTL Compiler to design a couple of chips. One interesting datapoint is that 2% of worldwide power goes to datacenters; Cisco is one of the contributors to that, and so is very focused on low power. Their other two priorities are silicon robustness and time to market: get the products out there quickly and make sure they don’t come back. A typical chip is 28nm with 50M instances, 30-80 subchips and die up to 24mm (almost an inch, close to the maximum possible) square.

The first surprise was that they continue to use wireload models for a lot of their synthesis – at 28nm. They are often better at the back end and a lot easier at the front end, but not always. The next level up is PLE, physical layout estimation. This is not actually physically aware (despite the name) but it generates more accurate tables than wireloads for that block.

The next weapon is physically aware structuring (PAS) which is especially important for designs that are a large mux or crossbar, where there is a lot of interaction between how it makes sense to break the huge mux into gates and the physical layout of the routing.

Physically aware mapping (PAM) can produce large gains if the structure is already right, especially for blocks with lots of long wires (such as the Cisco design with over 100 memories with complex interconnections).

Ankush Sood of Cadence went into more detail about how RTL Compiler addresses physical challenges early in RTL synthesis. For him, 28nm is mainstream, 16/14nm is in heavy use especially in mobile, and 10nm is imminent. MHz/W/$ is the key decision metric. Chips are now more limited by pins and access rather than the core logic area and synthesis needs to take this into account. 80-90% of the wires are local interconnect but global wires are the ones that cause lack of correlation with the back end.

Other new features (other than the ones Cisco already discussed):

  • multi-bit cell instantiation is improved. Single-bit cells are placed, and then once timing is known they can be grouped based on affinity etc. Power is reduced in the clock tree and also indirectly by the reduced area
  • metal stack awareness. High layers are low resistance and lower layers are high resistance. The layers can be grouped into about 3 bins, with long nets and clocks then assigned to bins. Wire topology needs to be optimized too, not just gates
  • DPT (double patterning) awareness. This imposes special spacing requirements which, during physical synthesis, can be converted to padding values so that library cells get spaced out
  • enhanced multi-Vt cell selection during global mapping, which reduces leakage power
  • improved slew degradation estimation during timing analysis
  • advanced on-chip variation support based on logic depth. More accurate than the plain common cell derating approach
  • hierarchical flow support
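As an illustration of the placement-affinity grouping in the first bullet, here is a toy version that greedily pairs physically close single-bit flops into 2-bit cells. A real tool also respects timing and clock domains; this is a hypothetical sketch, not RTL Compiler's algorithm:

```python
# Toy placement-affinity grouping: single-bit flops that ended up
# physically close are merged into (here) 2-bit multi-bit cells so they
# can share clock pins. Greedy nearest-neighbour pairing; a real flow
# would also check timing slack and clock-domain compatibility.
def group_flops(flops, group_size=2, max_dist=5.0):
    """flops: list of (name, x, y). Returns a list of name groups."""
    remaining = sorted(flops, key=lambda f: (f[1], f[2]))
    groups = []
    while remaining:
        seed = remaining.pop(0)
        group = [seed]
        # sort survivors by squared distance to the seed flop
        remaining.sort(key=lambda f: (f[1] - seed[1])**2 + (f[2] - seed[2])**2)
        while len(group) < group_size and remaining:
            cand = remaining[0]
            if (cand[1] - seed[1])**2 + (cand[2] - seed[2])**2 <= max_dist**2:
                group.append(remaining.pop(0))
            else:
                break   # nearest remaining flop is too far to merge
        groups.append([f[0] for f in group])
    return groups

flops = [("q0", 0, 0), ("q1", 1, 0), ("q2", 20, 20), ("q3", 21, 20)]
print(group_flops(flops))   # two 2-bit groups of adjacent flops
```

The clock-tree power saving mentioned in the bullet comes from each merged group presenting one clock pin instead of several.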

More details on RTL compiler are here.


More articles by Paul McLellan…