
Analog IP Design at Moortec

by Daniel Payne on 09-28-2011 at 12:34 pm

Stephen Crosher started Moortec in the UK back in 2005 with the help of his former Zarlink co-workers. They set to work offering AMS design services and eventually created their own analog IP, such as temperature sensors.

We spoke by phone last week about his start-up experience and how they approach AMS design.




Samsung versus Apple and TSMC!

by Daniel Nenni on 09-28-2011 at 6:56 am

Apple will purchase close to eight billion dollars in parts from Samsung for the iSeries of products this year alone, making Apple Samsung’s largest customer. Samsung is also Apple’s largest competitor and TSMC’s most viable competitive foundry threat, so it was no surprise to see Apple and TSMC team up on the next generations of iProducts. The legal battle between Samsung and Apple did come as a surprise, however, and will change how we do business for years to come.

“Our mission is to be the trusted technology and capacity provider of the global IC industry for years to come.” TSMC Website

During the past 25+ years I have been to South Korea a dozen or so times working with EDA and semiconductor IP companies in pursuit of Samsung business. South Korea is a great place to visit, but South Korea is not a great place to do business (my opinion) due to serious ethical dilemmas. Let’s not forget the Samsung corruption scandal that engulfed the government of South Korea. Let’s not forget the never-ending chip dumping probes. The book “Think Samsung” by an ex-Samsung legal counsel accuses Samsung of being the most corrupt company in Asia. So does it really surprise you that Apple is divorcing Samsung for cloning the iPad and iPhone?

I was never an Apple fanboy, always choosing “open” products for my personal and professional needs. If the IBM PC had been “closed” and obsessively controlled like Macs, where would personal computing be today? The iPod was the first Apple product to invade my home, and only after a handful of other MP3 players failed on me. Without iPod/iTunes where would the music industry be today?

iPad2s came to my house next. Would there even be a tablet market without the iPad? I looked at other tablets but since they were to be gifts to SemiWiki users I had a much more critical eye for quality. I even kept one of the SemiWiki iPad2s which I now use daily. We still have some iPad2s left so register for SemiWiki today and maybe you will win one!

A MacBook Air ALMOST came next, but I chickened out and bought a Dell XPS instead. The support burden of moving my family of six from Dell/HP/Sony laptops to Apple Town was just too much to fathom.

iPhone 5s for the entire family will be next; Santa is bringing them for Christmas. I’m tired of my BlackBerry and of being out-smartphoned by snot-nosed iPhone kids. I did look at the Samsung iPhone and iPad clones, and while they are less expensive, my professional experience with Samsung will not allow me to buy their products. I will wait for an Apple flat screen TV as well.

Paul McLellan did a nice write-up of “The Battle of the Patents” for the wireless business: Apple, Samsung, Microsoft, Oracle, Google, Nokia, and here comes a real threat to the mobile industry, Amazon (Kindle Fire tablet)!

The Apple / Samsung legal debacle will most definitely change the semiconductor foundry business. Can Samsung or even Intel become “the trusted technology and capacity provider of the global IC industry for years to come”? Not a chance.


Battle of the Patents

by Paul McLellan on 09-27-2011 at 5:01 pm

What’s going on in all these wireless patent battles? And why?

The first thing to understand is that implementing most (all?) wireless standards involves infringing on certain “essential patents.” The word “essential” means that if you meet the standard, you infringe the patent, there is no way around it. You can’t build a CDMA phone without infringing patents from Qualcomm; you can’t build a GSM phone without infringing patents from Motorola, Philips and others.

The second thing to understand is that typically, if you are a patent holder, you want to license the last person in the chain. There are two reasons for this. Firstly, the further down the value chain, the higher the price, and so the easier to extract any given level of license fee. It is easier to get a phone manufacturer to pay you a dollar than a chip manufacturer, for example. The second reason is that often the patent is only infringed in the final stage of the product chain. Any patent that claims to cover phones that do something special is not infringed by chips, software or IP that might go into the phone to make that something special happen. Plus you can’t really embargo anything other than the final product if it is all assembled offshore.

Apple, presumably in a calculated way, didn’t worry about licensing anyone else’s patents. They pretty much invented what we think of as the smartphone and it is hard to build one without infringing lots of Apple patents on touch-screens, gestures, mobile operating systems, app stores and so on. So they figured that they had a good arsenal for cross-licensing to address their lack of patents on basic wireless technology.

Google seems to have been blindsided by this. They created Android, which in and of itself doesn’t infringe much. They didn’t patent much on their own and probably didn’t have any intention of suing anyone. “Don’t even be as evil as suing someone.” But when Android is put into a smartphone or tablet then that end product infringes lots of patents, most notably Apple’s. Google tried to fix this, first by offering $3.14159B for Nortel’s patents (which they lost) and then by buying Motorola’s mobile phone division for around four times as much (well, they got a mobile phone division too, which might turn out to be important).

Microsoft also has a lot of patents. In fact it has been so unsuccessful so far in its mobile strategy that it reportedly makes more money licensing Android phone manufacturers (for patent licenses) than it does licensing Windows Phone 7 manufacturers (for software licenses, presumably including the patent licenses, since suing your customers tends to be bad for business).

Also, in here somewhere, is Oracle, which with its acquisition of Sun owns the patents on Java. And Android’s app development environment is Java (Apple’s is Objective-C, which they acquired with NeXT).

The most schizophrenic relationship is Apple and Samsung. Samsung builds the A4 and A5 chips that are in the current iPhone and iPad, and it supplies some of the DRAM and some of the flash. I wouldn’t be surprised if Apple is their largest customer. But they are suing each other, mainly over Samsung’s iPhone lookalikes Galaxy S and Galaxy SII and iPad lookalike Galaxy Tab. Samsung announced that they have already shipped over 10M Galaxy SIIs, which is an impressively large number. Samsung is probably the biggest threat (as a single manufacturer) to Apple, already #2 in profitability and, I think, #2 in unit volume behind Nokia.

Apple has also been suing some of the Android manufacturers but they are countering since Google is now licensing some of the Motorola patents to them (for free, I assume). Remember, Apple can’t sue Google directly since an OS doesn’t infringe a phone patent, only phones can do that, and so Google can’t counter Apple directly, it has to do it through its licensees.

Meanwhile, Nokia, which must have an enormous patent portfolio, is also suing Apple, although Apple has already settled (surrendered) some of this by paying a license fee. If Nokia is to be successful with its strategy du jour of betting its smartphone business on Microsoft, then it will need to be able to defend itself against Apple. It also needs to get moving, since the latest Mango release of Microsoft’s WP7 is already coming to market through HTC and Fujitsu. If all Nokia has is a late-to-market, me-too WP7 implementation they are doomed. Well, I think they are doomed anyway, although it may depend on how much the carriers want to keep Nokia and/or Microsoft WP7 alive to counter Android and Apple.

Oh, and Amazon’s Fire tablet comes to market tomorrow, supposedly. Don’t be surprised if Apple sues them. Amazon is probably the biggest threat to Apple leveraging content rather than basic tablet technology.

What will happen in the end? Probably not much. Nobody has a clue how much anyone infringes anyone else’s patents and nobody is going to put much effort into finding out. I expect that everyone will cross-license, with Apple and anyone else who lacks fundamental patents (the ones that are used in non-smart phones) having to make some balancing payments to cover the last couple of decades of investment that they are riding on, and anyone who hasn’t got their own smartphone patents having to make balancing payments to Apple who pretty much invented them as we now think of them.


Magma eSilicon One Keynote

by Paul McLellan on 09-27-2011 at 2:31 pm

I was at the first half of Magma’s Silicon One event yesterday. The first keynote was by Rajeev about the environment for SoC designs, especially fabless startups, and Magma’s role going forward. More about that later. The other keynote was Jack Harding, CEO of eSilicon. As usual Jack did his presentation without any PowerPoint slides, something I find very difficult to do without losing my thread.

Jack started off with some statistics about eSilicon. They have been in existence for just over 10 years now and have done over 200 parts. A 3rd party audited them for a customer and decided that they had a 98% first-time hit rate. For those who don’t know eSilicon, their business model is to be an ASIC company, although they don’t have a fab. But they are more than a design house. They deliver tested, packaged parts just like an ASIC company with a fab, except that they let you put your own logo on the parts. At VLSI, for example, we always put ours on (look at any pictures of motherboards of early Macs).

The big change that Jack wanted to talk about was the consumerization of SoCs, and the effect that this is having on the design chain. Design used to be “all” digital, with a smart group of slightly eccentric designers down the hall (or in a separate company) who used SPICE and a layout editor and their bare hands to wrestle analog to the ground. Analog was something nobody worried about.

In the 90s, the strategy was to make it a separate chip. That way it could be done in an older process. So a system might be 4 digital chips and an analog chip that was either a standard product or designed and manufactured in a separate process.

But now all that is condensed into a single SoC with all the digital, and lots of it, and all the analog, and lots of it, on a single chip. This is a problem and an opportunity. eSilicon has historically, for business reasons, been more focused on networking than consumer, and over half their chips had a serdes on them. Today virtually every chip they do has hundreds of lanes of serdes. There are now so many variables to make the analog work that it is incomprehensible to a human. So it is going to have to get more automated whether the designers like it or not, just like when place and route first arrived and designers figured they could do better. For a few gates, yes, but for thousands it is just impossible.

Even picking the IP is an intractable task. TSMC has 12 flavors of 28nm process and 40 different commercial libraries available (from them and 3rd parties), so that’s 480 combinations just there.
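
The combinatorics here are simple but sobering. A quick sketch of the arithmetic (the 12 and 40 are the figures quoted above; the flavor and library names are hypothetical placeholders, not real TSMC product names):

```python
from itertools import product

# Figures quoted above: 12 flavors of 28nm process, 40 commercial libraries.
# The names below are made-up placeholders for illustration only.
process_flavors = [f"28nm-flavor-{i}" for i in range(1, 13)]  # 12 flavors
libraries = [f"stdcell-lib-{i}" for i in range(1, 41)]        # 40 libraries

# Every (process, library) pairing is a distinct combination to evaluate.
combinations = list(product(process_flavors, libraries))
print(len(combinations))  # 480
```

And that is before counting IP vendors, metal stacks, or voltage corners, each of which multiplies the space again.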

Jack was asked in the questions why EDA companies don’t design the chips for at least some of their customers. He thought that this made a lot of sense, but there are big problems with the way Wall Street values EDA companies (many types of companies with mixed product lines, such as HP, have this problem too). To combine an eSilicon-type business with a Cadence-type business (and remember, Jack was CEO of Cadence too) would mean going from 95% software margins to 45-50% semiconductor margins, and nobody knows how to value a company that mixes those two businesses (one reason VLSI spun out Compass when I was there was that Wall Street got confused by companies with mixed product lines like that). So right now it is good business for eSilicon but clearly a potential slot for EDA to step up and provide themselves. But of course that would mean they only get paid if the tools work…

Which leads to the next question: why is EDA just a $4B business? Jack’s view is that it is a flaw in the EDA business model whereby EDA charges for a capability regardless of success. Everyone else is at risk. If a chip doesn’t go to production, no wafers are bought, no parts are packaged, nothing is tested and nobody makes any money. Except EDA. But the quid pro quo is that if EDA is not going to take that risk then it is capped at $4B (in fact, excluding IP, it is probably shrinking). In the early days, EDA had a hardware business model (Calma, Applicon, etc.) and this model made sense. The software was thought of almost as an add-on to sell the hardware. But 15-20 years ago that stopped making sense. Jack’s estimate is that if EDA had switched to take risks on a variable basis then it would be a $40B business. More chips could be made, probably with a higher percentage failure rate (as business lines, not necessarily technical failures), but much more volume in total.

But he wouldn’t want to be the first CEO to make the switch. Wall Street would punish you for 2-4 years until the first designs in a new process node went from EDA software development through design to volume production. The challenge is how to find a way to switch without blowing up the existing business model completely. Jack said that they have done a few small deals combining eSilicon, an EDA company and a customer on the basis of a long-term royalty. Possibly a good candidate to grow.

So the takeaway is that things need to be looked at differently than before. The only way to get these designs done is to be very silicon aware, work with EDA partners, silicon partners, test and assembly and so on. This is leading to re-aggregation of supply chains, since someone needs to take responsibility for everything. After all, if a package is broken then the foundry isn’t getting any wafer orders even though it isn’t their fault, and vice versa. For the designs they do, eSilicon takes that responsibility, and they invest a lot in communication and staffing for process people, manufacturing people, test people, package experts and so on.


Cadence VIP Enables Users to be First-to-Market with Mobile Devices Leveraging Latest MIPI, LPDDR3 and USB 3.0 OTG Standards

by Eric Esteve on 09-27-2011 at 1:56 am

The mobile devices market is simply exploding, with smartphone shipments going through the roof and tablets emerging so fast that some people think they will replace the PC (though this is still to be confirmed…). This leads mobile SoC designs to integrate ever more features, to support customer demand for more computing power and sophisticated video, audio and storage. To support these new features while improving both performance and power, new interface standards have emerged that SoC designers need to integrate, under ever-increasing time-to-market pressure. This opens the door for external sourcing of new functions (design IP) and creates the need for solutions that can accurately test the functionality of the design and ensure manufacturing success (verification IP). Cadence has defined the problem pretty well: verifying from scratch is too hard, as it requires too much time, effort and expertise.

That’s why the verification IP market is so dynamic these days, and why both Synopsys and Cadence not only communicate but are also very active: almost every week brings major news, an acquisition, or the release of a new VIP supporting an emerging standard. Cadence’s involvement in verification IP for SoCs developed for the mobile industry (wireless handsets, media tablets, and portable consumer electronic devices) clearly appears when you look at its list of VIP offerings for mobile applications, with support for the following standards:

  • LPDDR3: This low-power version of the pervasive DDR3 memory standard enables customers to meet the high bandwidth and power efficiency requirements of mobile systems.
  • MIPI CSI-3: Providing an advanced processor-to-camera sensor interface, MIPI CSI-3 enables mobile devices to deliver the bandwidth required to enable high resolution video and 3D.
  • MIPI Low Latency Interface (LLI): This interface cuts mobile device production cost by allowing DRAM memory sharing between multiple chips.
  • USB 3.0 On-The-Go (OTG): Providing 10x the performance of the previous USB specification, USB 3.0 OTG allows consumers to rapidly transfer data, such as video and audio content, as well as quickly and effortlessly charge devices.
  • Universal Flash Storage (UFS): A common flash storage specification for mobile devices, UFS, a JEDEC standard, is designed to bring higher data transfer speed and increased reliability to flash memory storage.
  • eMMC 4.5: Designed for secure, yet flexible program code and data storage, eMMC 4.5, a JEDEC standard, enables high bandwidth, low pin-count solutions that simplify system design.
  • cJTAG: With its support for reduced pin count, power management and simplified multichip debug, cJTAG enables efficient testing of mobile devices, a key requirement for delivering high volume, high quality mobile devices.

The secret sauce for supporting emerging standards, while these are still in development, is to actively participate in the standards committees, such as the MIPI Alliance. According to Joel Huloux, chairman of the board, “MIPI Alliance continues to advance mobile interface standards with processor and peripheral protocols that streamline system development and expand the sophistication of today’s mobile devices. By ensuring verification support for these protocols at the earliest stage possible, companies such as Cadence enable mobile designers to embrace the latest standards and deliver products that transform the consumer’s mobile experience.” Cadence was also the first company to add support for ARM Ltd.’s AMBA 4 Coherency Extensions protocol (ACE), speeding the development of multiprocessor mobile devices, and the DFI 3.0 specification, which defines an interface protocol between DDR memory controllers and PHYs.

Another important ingredient for cooking a successful recipe, on the other side of the spectrum, is collaboration with the system manufacturers. If you take a look at the member list on the MIPI Alliance website, you realize that this collaboration could be with companies like Ericsson, Nokia, Panasonic, RIM or Samsung, even if Cadence does not disclose this information. Being present on both sides of the spectrum, participating in a standard’s elaboration well before the protocol is released and working closely with the system integrators, the final users, is a good way to fine-tune the verification product and release it as early as possible, allowing SoC designers to cope with ever shorter time to market.

Eric Esteve from IPNEST


Apple Plays Saudi Arabia’s Role in the Semiconductor Market

by Ed McKernan on 09-27-2011 at 12:08 am

The retirement of Steve Jobs left most commentators wondering if Tim Cook could keep Apple marching ever onward and upward. In truth, Tim Cook’s contribution on the operations side has been just as instrumental in the destruction of Apple’s PC and consumer electronics competitors as Jobs’ product vision. Under Tim Cook’s guidance, Apple has increased its gross margins from 29% to 41% in the last five years, and it looks to increase them further. Cook is executing what I will call the “Swing Consumer” strategy, a takeoff on Saudi Arabia’s “Swing Producer” position within the OPEC cartel. What it means is that the companies competing with Apple will be operating off high-priced leftovers.

Saudi Arabia naturally took the role of “Swing Producer” within the cartel based on its vast oil reserves and its ability to easily lift over 12M barrels of oil a day, at a cost of $2 per barrel, in a country with a relatively small population. The net of it all is that Saudi Arabia for many years could financially support itself on less than half of its full capacity and therefore ensure a maximum oil price in the market in any given economic situation. Despite there being a dozen members in the OPEC cartel, and gross margins of over 90% in the case of Saudi Arabia, the oil market has not collapsed the way the DRAM market does in perpetuity off a base of less than a handful of suppliers. Furthermore, Saudi Arabia recognizes that the kingdom itself is valued more on what’s still in the ground than on what it produces in terms of current cash flow. 200B+ barrels of oil at $80 is more than $16T of assets, which is why short-term price fluctuations don’t matter as much as maintaining the long-term oligopoly. They always adjust supply to keep oil in an upper range.
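
A quick back-of-envelope check of the reserve valuation, using the figures in the text (200B+ barrels, $80 per barrel):

```python
barrels = 200e9         # 200B+ barrels of reserves (figure from the text)
price_per_barrel = 80   # $80 per barrel (figure from the text)

# Reserve valuation in trillions of dollars.
valuation_trillions = barrels * price_per_barrel / 1e12
print(f"${valuation_trillions:.0f}T")  # $16T
```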

Apple first moved into “Swing Consumer” mode in semiconductors when it based all of its new, high-growth products on NAND flash in combination with commodity ARM processors. By 2005, Apple was taking roughly 40-50% of Samsung’s NAND capacity and growing. To maximize their opportunity, Apple decided in July 2009 to write a $500M check to Toshiba as a prepayment for NAND flash capacity at a discounted price. As the largest consumer of flash, coupled with commodity ARM processors that could be built anywhere, Apple found itself all alone, able to dictate the lowest worldwide pricing. The only exception would be if Samsung subsidized their smartphone and tablet groups. All of Apple’s competitors were left negotiating the 2nd best price within a sometimes spot-shortage environment. There is no chance for them to overturn Apple’s component cost lead.

Apple’s premium brand combined with its “Swing Consumer” logistics has put it into a position similar to Saudi Arabia’s. At a moment’s notice it can shift not only NAND flash capacity but also DRAM suppliers, LCDs, wireless chips, etc. Last week, an article appeared that confirmed Apple’s plan to source more NAND and DRAM from Toshiba and Elpida and away from Samsung (see Apple Looks to Japan for DRAM, NAND Flash Supply). The article speculated that it was in response to the legal issues Apple is embroiled in with Samsung. It is that and more. It is a chance for Apple to lower Samsung’s prospects of being a serious competitor in the smartphone and tablet market, and it is a leveling of the playing field. Apple needs both Toshiba and Samsung around as competitors, always sharpening their pencils.

As a growth company with strong product margins, Apple is able to offer suppliers a guaranteed forecast that is always expanding, while its competitors like Dell, HP and others must remain tentative in their outlooks in the face of worldwide economic turmoil. As I discussed in an earlier article (Apple Will Nudge Prices Down in 2012: PC Market Will Collapse), margins for the PC OEMs and the retailers (e.g., Best Buy) are so thin that they are forced to under-build and under-forecast to suppliers, for fear of being left with too much inventory at the end of a selling season, which would wipe out any gains they accrued in the first few months. Thus PC OEMs are in a death spiral, losing more and more market share as time passes and thus losing cost leverage over suppliers.

Imagine the task at hand for new HP CEO Meg Whitman as she tries to salvage this quarter’s revenue by first having to reassure customers that they are staying in the PC business. At the time of the last earnings call, when Apotheker announced that the tablet was to be cancelled and the PC group spun out, HP lost immediate leverage with suppliers as they expected to receive order reductions and cancellations.

The near-term bottom for tech stocks, and in particular Dell and Apple, came the following day: August 18th. I would tend to speculate that Dell and Apple reached out to suppliers to take advantage of HP’s debacle. Now, as Whitman restarts the PC group’s engine, HP has to go back to suppliers to beg for parts at higher prices. And the worst part is that HP has to re-enter the 10% margin PC business in order to reassure customers who are looking at buying higher margin servers, networking and services. Apotheker’s critical mistake was that he believed dropping the PC group would not impact the other business units. The phones went quiet.

Apple’s complete domination of its supply chain through the “Swing Consumer” strategy will allow it to continue to squeeze suppliers and improve margins. They do, however, have one more threshold to cross, and that is with their processor strategy. Apple will be a dual-CPU house for a long time, but it needs cost leverage over Intel. Both are playing a slow elephant dance before the ultimate partnership is signed, one that could benefit both.

In the past year, both Intel and Apple have implemented communication and corporate strategies to try to gain an upper hand on each other. As everyone knows, Apple is good about keeping secrets for things that it truly doesn’t want the world to know. It goes to the extent of tracking down missing iPhone prototypes mistakenly left at bars, or cutting off suppliers who talk too much about upcoming and actual shipping products. In areas where it wants to frame public opinion, it communicates in ways that appear secretive: like the recent reports that TSMC has signed on to fab the A6 and future A7 processors at 28nm and 20nm respectively. It’s anybody’s guess if, and what, an A7 is, but it magically appears in print. One wonders where Intel fits in the scheme of things.

Perhaps we have been offered a hint at what is coming.

Midway through Intel’s IDF conclave, where their upcoming 22nm chips were on full display, there was a simple press release announcing that Intel was issuing debt for the first time in over 20 years (see Intel Announces Senior Notes Offering). The initial press release made no mention of the amount but did say the purpose was for stock repurchases. Given that Intel had just raised their dividend in August and have been aggressively buying back stock with their massive operating cash flows, the offering seemed out of place.

The Wall St. Journal later confirmed that the debt offering raised $5B in total: a number that is familiar to anyone looking to build an advanced 22nm or 14nm fab. Perhaps this is Intel’s way of saying they are ready to build an additional fab for a new customer. More on how this can play out in another column.


Semiconductor equipment spending beginning to decline

by Bill Jewell on 09-25-2011 at 7:41 pm

Semiconductor manufacturing equipment shipments have leveled off after a strong rebound from the 2008-2009 downturn. August 2011 three-month-average shipments based on combined data from SEMI (North American and European companies) and SEAJ (Japanese companies) were $2.9 billion, down from a peak of $3.2 billion in May 2011. Three-month-average bookings have dropped significantly to $2.2 billion in August, down 31% from the peak of $3.3 billion in August 2010. The book-to-bill ratio dropped to 0.78 in August, indicating continuing declines in billings.
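
For readers unfamiliar with the metric, book-to-bill is simply three-month-average bookings divided by three-month-average billings; a value below 1.0 means orders are running behind shipments, hence the expectation of declining billings. A minimal sketch using the rounded figures quoted above (the published averages are rounded, which is why this comes out slightly below the reported 0.78):

```python
def book_to_bill(bookings_3ma: float, billings_3ma: float) -> float:
    """Book-to-bill ratio: three-month-average bookings over billings.
    Below 1.0 signals that billings are likely to decline."""
    return bookings_3ma / billings_3ma

# Rounded figures from the text: $2.2B bookings, $2.9B billings (Aug 2011).
ratio = book_to_bill(2.2, 2.9)
print(f"{ratio:.2f}")  # 0.76 with these rounded inputs; SEMI reported 0.78
```
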
SEMI’s September 2011 forecast for semiconductor manufacturing equipment calls for a 23% increase in billings in 2011 followed by a decline of 3% in 2012. One of the largest capital spenders, TSMC, plans to cut spending by 19% in 2012 after a 25% increase in 2011, according to Taiwan press reports. What will be the impact of decreased spending on semiconductor capital equipment on semiconductor capacity and utilization? The latest available data from Semiconductor Industry Capacity Statistics (SICAS) for 1st quarter 2011 showed industry IC capacity utilization of 94.2%, the fifth consecutive quarter with utilization above 90%. Semiconductor shipments are currently sluggish. Recent forecasts for the 2011 market range from a decline of 2% to growth of 5%. Forecasters agree the semiconductor market will pick up to stronger growth in 2012, ranging from 5% to 10%. Our forecast at Semiconductor Intelligence remains at 4% for 2011 and 10% for 2012.

The worldwide economic outlook is very uncertain, causing semiconductor manufacturers to be very cautious. The downward trend in semiconductor manufacturing equipment spending will slow the rate of capacity growth. If the forecasts of improving semiconductor market growth in 2012 hold true, IC capacity utilization should remain above 90% through at least 2011 and probably into at least the first half of 2012. Utilization in the 90% to 95% range is generally healthy for the industry – high enough for semiconductor manufacturers to remain profitable but not high enough to result in significant shortages.

Semiconductor Intelligence, LLC can perform a variety of services to provide you and your company with the intelligence needed to compete in the highly volatile environments of the semiconductor and electronics markets.


A Verilog Simulator Comparison

by Daniel Payne on 09-22-2011 at 2:40 pm

Intro
Mentor, Cadence and Synopsys all offer Verilog simulators, but when was the last time you benchmarked your simulator against a tool from a smaller company?

I just heard from an RTL designer (who wants to remain anonymous) about his experience comparing a Verilog simulator called CVC from Tachyon against ModelSim from Mentor.



Benchmark Details

First, let me say that my primary use for the CVC tool is regressions on RTL designs, so it is not a gate-level design that I can give you benchmark data on. In my regressions, the test bench activity sometimes contributes as much as or more than the design itself to the total simulation time.

The test case that I was writing about when I first sent you the email was the regression testing for a relatively small digital design of about 150,000 gates in 0.35 micron UMC; however, as I mentioned before, the regressions were being performed on the RTL.

In that design, there are about 7,500 lines of RTL code, and the test bench is about 6,500 lines of Verilog.

In a regression that took Modelsim Questa 28 hours to complete, CVC completed the work in 10 hours. This regression consists of a bash script that calls the same test bench with different conditions repeatedly to test all the features and automatically verify the performance.

In a more recent test that I have done with a much smaller design, where the test bench takes more time to run than the actual DUT, I ran a 100 msec simulation in 6 minutes with CVC that took Modelsim Questa over 30 minutes. The test bench in this design does a state-space model simulation of an analog circuit that is connected to the DUT, and performs functional simulation not so much for regression testing but more for design analysis. We do small designs in large geometries due to the power-control nature of our business.
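
For reference, the speedups implied by the two runs described above (a back-of-envelope comparison from the quoted wall-clock times, not a controlled benchmark):

```python
# Speedup = reference runtime / candidate runtime, from the figures in the text.
regression_speedup = 28 / 10    # 28 h Questa vs. 10 h CVC, RTL regression
small_design_speedup = 30 / 6   # 30+ min Questa vs. 6 min CVC (a lower bound,
                                # since Questa took "over 30 minutes")

print(f"{regression_speedup:.1f}x")    # 2.8x
print(f"{small_design_speedup:.1f}x")  # 5.0x
```
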

In this case, the RTL for the DUT is 12,000 lines and the test bench is 8,000 lines long. As you can see, there are more lines of code here, but the design is about 1/10th the size of the previous example (about 13,000 gates).

We find that the greatest differences in speed between Modelsim Questa and CVC relate to the test bench part; however, we have also noticed that CVC is often 20 to 50 percent faster in the gate-level sims as well. Where it does NOT shine is SDF back-annotation, where we run into some trouble. We can get it to work, but it seems inconsistent.

We tend to use CVC more for functional verification and development, but we still use Questa for the back end validation steps.

I know some of this data is not as quantitative as the example you sent me, but we have now been using CVC for almost two years, and our results have been consistent across enough projects that our group is increasing its use over time because of the speed advantage it gives us, at least in our circumstances.

I have NOT had an opportunity to use it with large designs such as those found in much of the communications, graphics, and other DSP intensive applications where the gate counts get into the millions. So I cannot address that behavior with my current experience with it.


Apple’s Supply Chain

Apple’s Supply Chain
by Paul McLellan on 09-21-2011 at 5:48 pm

I am doing some consulting right now for a company that shall remain nameless, and one of the things I have had to look at is Apple’s supply chain. I came across an interesting article by someone whose goal was to “buy a MacBook Air that isn’t made by Apple.” He is in the UK, doesn’t like Apple’s UK keyboard, and doesn’t really want to run everything in a virtualization environment. So basically he wants to buy a MacBook Air that is actually a PC.

This is a market that Intel has announced it will support with $300M of its own money under the name Ultrabooks. After all, there is presumably a big market for a MacBook Air that is actually a PC Ultrabook. And how hard can it be? Apple’s industrial design is great but it isn’t that hard to copy. Most of the components are standard. Lots of people know how to put together a PC.

The answer turns out to be surprising. It is really hard to build a PCair for the same price as Apple does. And the reason is one of the secrets of Apple’s supply chain that I hadn’t really thought much about before. Apple hardly makes any products. Sure, it ships huge volume: it is the world’s largest semiconductor purchaser, much bigger than HP or Cisco or any other obvious candidates (expected to be $22B this year). But it has three iPhones (iPhone 3GS, iPhone 4 (GSM) and iPhone 4 (CDMA)), one iPad, two MacBook Airs, some iPods and some bigger notebook and desktop computers. Apple is not the Burger King of electronics, you don’t have it your way, you have it Steve’s way. Compare that to HP or Cisco’s product lines.

The PC market is predicated on have it your way. You go to HP or Dell’s website and decide what options you want: do you want wireless, which speed of processor, how much memory and so on. Also, they have broad product portfolios, so they are forced to use standard products such as screens, batteries, wireless daughter cards, power supplies and so on, since it doesn’t make sense to customize anything for a subset of a subset of the product line. So the PC industry is largely based on having a lot of components that are purchased in varying amounts and then clicked together to build the end product. They ship a large volume but of a broad product range, so not much of any particular model.

Apple, by contrast, can integrate as much as it wants and buys all components in the same quantities (for each product) since you don’t get that flexibility. This gives it a much higher volume and more predictable demand and it can leverage this into lower prices. And since it has so few products it can invest in specialized components for each one: the MacBook Air has a specially shaped battery that just fits in among all the other stuff in there, the iPhone and iPad contain a custom Apple SoC (A4 for the current iPhone, A5 for the iPad2 and presumably for the imminent iPhone 5). Famously, a few years ago, Apple bought Samsung’s entire flash memory output. Too bad if you are someone else.

With those greater volumes and greater purchasing leverage, Apple can build the MacBook Air for less than any of the PC competitors can build an Ultrabook. Plus it doesn’t fit their business model well: a premium product that you cannot customize. Where does that go on the Dell website?

As an aside, a lot of this supply chain optimization is not the aesthetic Steve Jobs side of Apple, but is what Tim Cook, the new CEO, worked to put in place.


Custom Signal Planning Methodologies

Custom Signal Planning Methodologies
by Paul McLellan on 09-20-2011 at 4:08 pm

It is no secret that custom ICs are getting larger and more complex and this has driven chip design teams to split up into smaller teams to handle the manual or semi-automated routing of the many blocks and hierarchical layers that go to make up such a design. These sub-teams don’t just need to handle the routing within their own block(s) but also integrate the routing between the blocks and also address the challenge of creating correct top-level routing (that overflies the block) within the assigned part of the die.

Using informal approaches, such as verbal and email status reports, is no longer enough and makes the routing of a large custom chip the long pole in the tent: very labor-intensive, with a schedule that determines the overall schedule of the entire chip. Once you add in congestion issues, advanced-node parasitic effects, and the fact that the design itself is probably not stable and still undergoing incremental change, the process becomes almost impossible. Even “industry standard” routers are unable to complete top-level routing challenges because they were not designed to fully address the complex combination of specialized topologies, hierarchical design rules and DFM requirements (via redundancy, via orientation, via enclosures, wire spreading, etc.) that are required to achieve successful on-time design closure for AMS and custom ICs.

What is needed is a fully automated approach to signal planning. The key is to integrate the process with the block placement tasks and the use of intelligent, routing aware pin-placement algorithms to address multi-topology routing problems. Providing a tool with tight integration of these tasks means that designers can explore the implications of different placement alternatives before deciding on an optimal solution. And in a much faster time than doing it manually or semi-automatically.

One critical consideration is the routing style required to handle these complex top-level and block routing tasks. A manhattan routing style minimizes jogs, reducing the number of vias required and the total wire length, which in turn reduces timing delay and power. Nets can be sorted during routing to avoid crossing routes, reducing crosstalk and other noise. Of course, users must be able to define constraints for the router, such as maximum and minimum widths on each layer, shielding requirements, and matched signal pairs.
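Pulsic's actual constraint syntax is not shown in the article, so the following is only a hypothetical sketch of the kinds of per-net constraints the paragraph lists, written as plain Python records:

```python
from dataclasses import dataclass, field

# Hypothetical per-net constraint record; the field names only mirror the
# constraint categories mentioned in the text (widths per layer, shielding,
# matched signal pairs), not any real tool's schema.
@dataclass
class NetConstraint:
    name: str
    min_width: dict = field(default_factory=dict)  # layer -> min width (um)
    max_width: dict = field(default_factory=dict)  # layer -> max width (um)
    shielded: bool = False
    matched_with: str = ""                         # name of the paired net

def matched_pairs(constraints):
    """Collect symmetric matched-signal pairs from a constraint list."""
    names = {c.name for c in constraints}
    pairs = set()
    for c in constraints:
        if c.matched_with and c.matched_with in names:
            pairs.add(tuple(sorted((c.name, c.matched_with))))
    return sorted(pairs)
```

A router consuming such records could, for example, route each matched pair with identical topology and apply the per-layer width limits during track assignment.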

Another way to optimize area and improve productivity is to use a router which supports multiple-bias routing as well as strictly biased X-Y routing. With its jumpered mode, designers can define complex schemes in which routes in both horizontal and vertical biases use the same metal layer efficiently, with a separate layer serving as a jumper layer for channels where a layer change is required to route effectively. Further, many semiconductor manufacturers use routers that support special optimization for bus routing and compact signal routing, allowing them to take advantage of specialized vias and via directions, resulting in still more compact routing.

More information on Pulsic’s Unity Signal Planner is here.
