It is no secret that SoC designs continue to increase in complexity while time-to-market windows shrink. There is room for debate on just how big a fraction of SoC design effort goes on verification, but there is no debating that it is a large part of the total. Simulation is increasingly too slow, especially when software has to be verified against the hardware, and while formal techniques can be attractive in some areas, they are not universally applicable. One attractive approach is to use FPGAs to build a prototype of all or part of the SoC and use this for verification. However, one big challenge is that synthesis, place and route for an SoC-sized FPGA can take as long as a day. It is impossible to observe every signal in a large SoC, so the design must be re-synthesized each time the set of signals being observed needs to change, and full re-creation of the layout on every such change is too slow. ProtoLink Probe Visualizer is a new approach to the problem. By understanding the placement and routing within the FPGA, it can make these changes incrementally and almost instantaneously. With the addition of an interface containing probe memory, the data that can be collected goes from tens of signals for a limited number of cycles, with a one-day turnaround on changes to the signal list, to thousands of signals for millions of cycles, with a few minutes to update the list. The whitepaper is here.
Fun Break
At DAC, SpringSoft had a couple of video games set up, one about functional verification and one about, surprise, physical layout. But you didn't need to go to DAC; you can play them now:
Liberate your Layout
Smartphone shipments: the sky is the limit…
…or a global recession, but that's not the point of this blog. As everybody knows, Apple designs and sells only smartphones. Does that mean that only smartphones generate profit in the mobile industry? As we have seen recently on SemiWiki, Apple makes two-thirds of the profit of the entire mobile industry.
Let’s have a look (below) at the complete picture of Phone revenue (yellow), profit (red), smartphone market share (green) and phone share (blue) for:
- RIM, Apple, HTC
- LG, Samsung
- Sony-Ericsson, Nokia, Motorola
The first remark is that the profit curve almost duplicates the smartphone share curve. Moreover, when a company does not market smartphones (LG, Sony-Ericsson or Motorola), the profit curve is flat… equal to zero! Nokia is an exception: they used to have the largest smartphone market share, now declining, and their profits are impacted by the company's overall strategy in the mobile phone industry (where they still have the largest share).
Click on the picture to see it larger; if you still can't see enough, go here.
This simply means that current profit, and future success, for the handset manufacturers is intimately linked to their involvement in smartphones, as if the largest part of the market (75% of shipments), feature phones and entry-level phones, did not count at all… at least in terms of profit.
I have spent some time surveying MIPI technology, and so far MIPI has only been used in smartphones, so I had to look carefully at this handset segment. When I started the survey back in September 2010, the available figure for 2009 was 173 million smartphones. At the beginning of 2011, we had access to the 2010 figure: 304 million units! 75% year-over-year growth in a market where you count by hundreds of millions is something we have never seen before in the electronics industry… That pushed me to build a forecast for the next five years, which I share with you today:
Interesting, but what is the impact on our industry, SC (and EDA, Foundry and IP)?
Just as a PC can be characterized by the processor it integrates, a smartphone can be characterized by the application processor it uses. If you have a look at OMAP5, you can see that it integrates two Cortex-A15 processor cores plus two Cortex-M4 cores (from ARM), a graphics processor core (SGX544 from Imagination Technologies), an audio processor, a DSP core, tons of interface IP (USB 2.0, USB 3.0, MIPI CSI, DSI, DigRF and LLI, SATA, LPDDRn and many more) as well as IP functions for memories, internal interconnect and much more. Such an SoC is also a challenge for EDA tools (you have to deal with a lot of clock domains and be able to manage the power-down states for all the functions), and has to be processed on the latest technology nodes – from a foundry or an IDM – hopefully in tens of millions of units when the SoC is successful. To summarize, the application processor is at the technology edge, both for EDA and for manufacturing, and requires building real partnerships with IP vendors… and it sells at a small fraction of a CPU ASP: say $15 to $25. But more than 400 million will be sold this year, which is more than the total number of CPUs sold in the PC segment.
As this market is served by a dozen companies (Qualcomm Inc., Texas Instruments Inc., Samsung Electronics Co. Ltd., Marvell Technology Group Inc., and Nvidia Corp. ranked as the top five sellers of smartphone application processors in the first quarter, according to a new report by Strategy Analytics, plus others, including yet-to-be-known Chinese start-ups), the direct sales impact for IP and EDA should be – mathematically – higher than for the duo of AMD and Intel alone. That is especially true if you take into account the time-to-market pressure, which forces the companies involved in application processor design to outsource as much IP as makes sense. I am far from minimizing their strengths when I say that their design know-how is based first on "IP integration" and "design for low power". If it was so easy to do, Intel would dominate this market too… which is not the case!
Another forecast worth looking at, made by ABI Research, covers the total application processor market evolution up to 2016:
If in Q2 2011 Apple made two-thirds of the profit of the entire mobile industry while shipping 20 million smartphones out of a market of slightly more than 100 million in the quarter, we can say that most of the profit in the handset industry comes from smartphones, even though they represent only 28% of handset shipments. This is only the beginning of the story: smartphone shipments grew last year (2010) by 70%, to a 20% share of the handset market, and will continue to grow during the next five years at a 26% CAGR. This means that in 2016 we can expect just less than 900 million smartphones to reach the market, or, in other words, almost half of all handsets will be smartphones by that date. This will generate a market for application processors growing at the same rate in units, and a bit less in value due to price erosion. There will be a direct impact on EDA and IP sales, as these SoCs are ever more challenging and complex to design, and the mobile industry's time-to-market demands force companies to outsource ever more IP.
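For readers who want to check forecasts like this themselves, the compound-growth arithmetic fits in two small helpers. This is a generic sketch; the function names and the sample numbers are mine, not taken from any cited report:

```python
def project(units_start, cagr, years):
    """Units after `years` of growth at a constant annual rate `cagr`."""
    return units_start * (1.0 + cagr) ** years

def implied_cagr(units_start, units_end, years):
    """Constant annual growth rate implied by a start and end volume."""
    return (units_end / units_start) ** (1.0 / years) - 1.0

# e.g. 100 million units growing at 26% per year for five years:
print(round(project(100, 0.26, 5), 1))   # 317.6 million
```

`implied_cagr` is just the inverse of `project`, handy for sanity-checking a quoted growth rate against published start and end volumes.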
Eric Esteve from IPnest
Samsung to Acquire AMD?
Don’t get me wrong, I’m a big fan of AMD, I buy AMD based products whenever possible to prevent an innovation stifling Intel monopoly. Unfortunately Silicon Valley coffee house conversations continue to paint a bleak picture for AMD, even with a recent stock surge on better than expected revenue guidance for the rest of 2011. I’m sure Wall Street coffee house conversations do not track with ours and here’s why:
Intel has always been the semiconductor manufacturing technology leader but throughout the years AMD was very clever and kept pace. Unfortunately AMD went fabless last year and as a result will not be able to compete in the discrete microprocessor market (my opinion). AMD also has no mobile strategy so where will they be when tablets and phones replace laptops?
Today AMD is in production at 32nm SOI in the ex-AMD fab in Dresden, which is very competitive with the current Intel 32nm HKMG process. Future AMD microprocessor generations, however, will use commercially available foundry processes, which track a process node or two BEHIND Intel. You should also know that microprocessor manufacturing is unique and may not adapt 100% to a more generic foundry process.
Source: TechConnect
In the middle of next year AMD will have CPU/GPUs on 28nm HKMG processes, but will they be price/performance competitive with Intel's 22nm Tri-Gate technology? The answer is probably NO, not in the discrete microprocessor market. Intel will also be the first to 450mm manufacturing, which will bring dramatic cost savings versus mainstream 300mm semiconductor manufacturing.
A glaring non-compete example is gross margins. How can AMD again achieve the 50%+ margin average required to compete profitably against Intel? AMD's gross margins fell below 40% in 2008 and have yet to recover. As an example, take the most recent AMD Wafer Supply Agreement amendment. A detailed analysis of this agreement is a blog in itself, especially with all of the redacted areas. Skip down to pages 8, 9, and 10, where the financial details are, and tell me how AMD will even hit 40% gross margins in 2012. If you disagree, let me know in the comment section and we can discuss in more detail.
To make a long blog short, AMD needs an exit strategy to be competitive with the Intel CPU dynasty. My first choice would be ATIC taking AMD private and integrating it with GlobalFoundries. Second choice would be for Samsung to buy AMD. Either way there would be a viable competitor to Intel. Or not, but it would certainly be fun to watch!
Don't get me wrong, I'm not bashing ANYBODY here, I do this blog out of love. The SemiWiki mission statement is "For the greater good of the semiconductor design ecosystem". I admire ATIC and what they have done for the semiconductor industry, I respect the accomplishments of AMD, and I'm the #1 fan of GlobalFoundries. But let's be honest, Intel is a force to be reckoned with and we had all better be prepared for the coming war of microprocessors!
Sentinel-PSI Webinar
The last of the current series of webinars is on Sentinel-PSI, the IC-package power and signal integrity solution. It will be at 11am Pacific time on Thursday, August 11th. It will be conducted by Dr. Tao Su, product manager of the Sentinel products. Dr. Su has many years of experience in the EDA industry and specializes in power integrity and signal integrity analysis for packages and PCBs. He received his Bachelor's degree in Electrical Engineering from Tsinghua University, Beijing, China, and his MS and PhD degrees from the University of Texas at Austin.
This is a 3D full-wave electromagnetic solver for power and signal integrity analysis of IC packages and PCBs, with the ability to perform DC (static), AC (frequency domain), and transient (dynamic) simulations from a single environment. Based on the fast finite element method (FFEM), Sentinel-PSI provides the accuracy of a conventional full-wave tool with the capacity to handle an entire package or board design. Sentinel-PSI is seamlessly connected to other Apache products for system-level analysis, and is linked with Sentinel-SSO to perform system-level I/O-SSO simulations.
Register for the webinar here.
Apple Roadmaps Intel to 14nm
Intel will not win the tablet market with any of the various Atom chips rolling out at 32nm, 22nm and even 14nm. They are too late to a game that Apple owns 90% of today and will continue to own. All of these ultra-low-power Atom versions are like the Saturn test rockets that preceded the Apollo 11 Moon landing: necessary test chips where Intel's engineers try out new circuit designs and architectural tradeoffs, tuning power vs. performance in preparation for a set of chips co-architected with Apple and appearing at 14nm.
Many threads lead me to the above conclusion, but from a business perspective there are huge benefits to Apple and Intel joining hands. The 90% share of the tablet market and the growth coming from the MacBook Air are key indicators of the future of the mobile market. Both are kissing cousins in terms of their form factors and hardware features. The clear driver for the form factors was economical flash memory substituting for HDDs, plus the elimination of DVD drives. The thickness shrank and the weight dropped to the point that you can hold them comfortably with one hand.
There are two main differences. First is the light iOS of the iPad versus the heavier Mac OS of the MacBook Air. Second is the $25 ARM A5 processor in the iPad vs. the $220 Intel i5 ULV in the MacBook Air. A merging of OS interoperability and lower x86 CPU pricing is coming.
With a co-architected family of processors running in Intel's fabs, the cost of today's $25 A5 could drop to as low as $15. The math is key here. Say at the low end Apple ships 50 million iPads in 2014 at a $10 saving using Intel (based on a smaller die): 50MU × $10 = $500M. Now factor in the market share gains Apple would accrue by adding a higher-performance iPad at similar battery life (something Intel can deliver). Apple stretches out its TAM while retaining a high market share, with Intel delivering the CPUs. But what would the high end of the CPU line look like for the iPad? Start by looking at the MacBook Air.
The low-end MacBook Air sells for $999 and uses the $220 Sandy Bridge i5 ULV processor. This ratio of CPU cost to system price is historically very high for Intel, and is one of the reasons the PC clones are having difficulty undercutting Apple's $999 price. Intel knows that if they don't offer a much lower cost CPU, the iPad cannibalizes the low-end, lightweight notebook PC market.
On the plus side for Intel, the attractiveness of the MacBook Air package and its ability to run all Mac OS apps makes it a competitive alternative to the iPad. If customers had the choice between a MacBook Air selling for $200 or $300 less and an iPad, then I believe a large majority would choose the MacBook Air. This is my cursory read, having played in this market a long time.
At the moment, and for the next 12 months, Intel's hands are tied. The MacBook Air uses an Intel ULV processor with a TDP of 17W. Intel cannot yield enough at this power level to sell a low-end processor (an i3 ULV for $75). With a roughly $150 CPU reduction, the system price could drop by $300 to $699.
With Ivy Bridge (using the 22nm Tri-Gate process), Intel will get an immediate 50% drop in TDP at the same performance level, cores and caches. Look for Intel to yield near 100% at 17W TDP, enabling a $75 CPU by mid-2012, and then to introduce a lower-power ULV at around 9-10W at the same yield and price points as today. So does Apple go with a more aggressive MacBook Air in mid-2012 at 10W TDP while the PC clones go with 17W? My guess is yes, and it will allow them to claim more battery life and retain higher pricing (maybe $899 at the lowest). The key here is that Apple wants the iPad to flourish and expand rapidly in the next 18 months so that it is effectively game over for tablet competitors. They can then decide how to balance iPad vs. MacBook Air volume.
Intel will turn the crank one more time with the Haswell architecture in 2013, where the TDP will be lowered again and Intel will offer aggressive price points on x86 processors with 3-7W TDPs. Standby power will be extremely low and the advantages of ARM's low power will be eliminated. Then it is a game of who can build things the cheapest.
There have been several speculative articles lately on Apple using a future multi-core A6 in the MacBook Air to lower cost and control its own destiny, and even, by 2016, moving to a 64-bit quad core and replacing x86 in the MacBook Pro and the desktop. I think these rumors are spread for a reason: Apple wants to see better pricing out of Intel so that it can gain market share with the MacBook Air and MacBook Pro. Apple cannot execute completely on its vision of winning the corporate market with products that use only Intel's mid-range and high-end processors. In other words, they are not using anything priced below $200 on Intel's price list.
As Apple scans the horizon, then, they see that their dominance in tablets and the iPhone can come to fruition in the PC space if they get a few things that are unique to them and not available to others. This is where, if they play the Intel relationship right, they can drive their 5% worldwide market share into high double digits. This may sound over the top… but there are changes on the way for the Mac Pro as well that complete Apple's frontal attack.
Part of Apple's brand is based on things they develop themselves, in some cases through co-development. The A4, A5 and A6 are seen as their gift to the mobile world in terms of performance at the right power. The Thunderbolt I/O, developed with Intel and released before the rest of the world had access to it, is a good example of technology co-marketing.
For Apple to retain its brand leadership, it needs to co-architect and be first to market with a new multicore processor that combines Intel x86 and process technology with a future A6 that includes Apple's specified graphics. The multi-core solution will run both iOS and Mac OS. These processors will be built at 14nm and out the door in late 2013 or 1H 2014. Intel will build a family of A6/x86 combo processors enabling low to moderately priced CPUs. Apple will have sole rights to the chips and can hide the true cost of the CPUs from its competitors. Apple's Mac-based PCs will drop in price. On Intel's side, they can continue to serve legacy x86 markets without a price conflict.
Finally, Intel will use the Apple partnership as leverage to get other large-volume customers to consider partnerships around x86-based CPUs that include customer proprietary IP and functions. But again, Apple has the lead on others. This will be a win for Apple in terms of performance, cost and technology leadership, and also a win for Intel as it increases its overall market share selling high-$$$ sand.
Yalta in EDA: Cadence stronger in VIP territory…
…while Synopsys gets the lion's share in interface IP. In Q2 2010, there were two major acquisitions in the EDA world: Synopsys bought Virage Logic (for more than $300M) while Cadence bought Denali for an equivalent amount. Synopsys bought a 100% IP-focused company, while Cadence bought a strongly VIP-focused company. Does it sound like Yalta in the EDA world? Let's have a look today at Cadence's positioning in the VIP market, and keep Synopsys' positioning in the IP territory for another blog.
Cadence's preference for the VIP market does not date from 2010. Back in 2008, Cadence acquired verification IP (VIP) assets from Yogitech SpA, IntelliProp Inc., and HDL Design House, expanding its existing VIP portfolio fivefold to include over 30 standard protocols for wireless, networking, storage, multimedia, automotive, and more. With the Denali acquisition, Cadence invested even more heavily in the VIP market, buying a strong VIP portfolio and competencies, mostly based on memory (the "memory models" for standard products and DDRn VIP), plus PCIe and USB. Some leading-edge IP was also in the basket, with PCIe 3.0 and, even more important, a DDRn memory controller, a segment where Denali has always had a good share, allowing Cadence to put a stone in Synopsys' garden.
When you consider that Synopsys integrates its VIP into DesignWare, which means it does not try to price each VIP individually, you may think that Cadence should extract most of the value from the VIP segment, as it markets one (functional verification) product per protocol standard. The list of supported protocols has recently been updated, and demonstrates Cadence's strong commitment to this market:
Protocol VIP:
- AMBA 4
- AMBA AHB
- AMBA AXI
- AMBA APB
- CAN
- Ethernet
- HDMI
- I2C
- JTAG
- LIN
- MIPI DSI
- OCP
- PCI
- PLB
- SAS

Memory VIP:
- DDR2
- DDR3
- DDR4
- DDR NVM
- EEPROM
- GDDR3
- GDDR4
- GDDR5
- LBA NAND
- LPDDR2
- LRDIMM
- MMC 4.41
- OneNAND
- QDR SRAM
- SDRAM
- SRAM
We have updated the VIP wiki to reflect Cadence's strong focus on VIP products. Unfortunately we are missing a key factor: the weight, in terms of revenue, of the VIP market. There is no market survey giving even an estimate of the dollar value of this segment. We have said previously on SemiWiki that "IP would be nothing without VIP" and we have tried to evaluate the size of this segment. The conclusion was that it could be anything between $200M and $500M… not a very precise evaluation, I agree. If anybody wants to challenge it, feel free to post a comment!
What we can say is that Cadence, with a current offering supporting more than 50 protocol standards, will certainly get a strong share of the VIP market, maybe in the same range as Synopsys' share of the interface IP segment (also based on protocol standards or specifications), or about 50%. If anybody from Cadence wants to comment on this assertion, feel free to post a comment!
Eric Esteve from IPnest
August 11th – Hands-on Workshop with Calibre: DRC, LVS, DFM, xRC, ERC
I’ve blogged about the Calibre family of IC design tools before:
Smart Fill replaced Dummy Fill Approach in a DFM Flow
DRC Wiki
Graphical DRC vs Text-based DRC
Getting Real time Calibre DRC Results with Custom IC Editing
Transistor-level Electrical Rule Checking
Who Needs a 3D Field Solver for IC Design?
Prevention is Better than Cure: DRC/DFM Inside of P&R
Getting to the 32nm/28nm Common Platform node with Mentor IC Tools
If you want some hands-on time with the Calibre tools then consider attending the August 11th workshop in Irvine, CA.
SNUG outside Silicon Valley
SNUG in Silicon Valley was in March, so either you were there or you missed it. But it is the summer (and fall) of SNUG in the rest of the world:
SNUG China (in Beijing, Shanghai, Shenzhen) on August 22nd-30th
SNUG Singapore on August 23rd
SNUG Taiwan (in Hsinchu) on August 25-26th
SNUG Japan (in Tokyo) on September 7th
SNUG Canada (in Ottawa) on September 19th
SNUG Boston on September 29th
SNUG Austin on October 3rd
Full details here.
Assertion-based Formal Verification
Formal verification has grown in importance as designs have grown, and it has become necessary to face up to the theoretical impossibility of achieving complete coverage with simulation, along with the practical impossibility of even simulating enough to get close.
There are a number of solvers for what is called satisfiability (SAT), but these work in a rather rarefied theoretical environment, different from the way designers work. So it is necessary to add a modeling layer to connect properties in the designer's world to the types of equations that the solvers can prove. Some properties require additional logic to be added to the design in order to convert, for example, a temporal property into one that a SAT engine can prove.
The modeling layer takes in the design description, the property/properties to be verified, the initial state of the design and any constraints. It then transforms these into the formal equations required by the SAT solver. The solver attempts to find a “witness” for each property. A witness is a sequence of input vectors that make the property true while satisfying all the constraints.
The SAT solver produces one of three outcomes:
1. It finds a witness: an input sequence, satisfying the constraints, that demonstrates the property.
2. It proves that no such witness exists.
3. It exhausts its time or memory budget before reaching either conclusion.
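The witness search described above can be illustrated with a deliberately tiny, brute-force sketch in Python. The "design", property, and constraint here are invented for illustration; a real flow compiles RTL and properties into CNF for a SAT engine rather than enumerating input sequences:

```python
from itertools import product

# Toy "design": a 2-bit counter that increments when inp = 1.
def step(state, inp):
    return (state + inp) % 4

# Property to find a witness for: the counter reaches the value 3.
def property_holds(state):
    return state == 3

# Environment constraint: the input may never be 1 on two consecutive cycles.
def constraint_ok(inputs):
    return all(not (a and b) for a, b in zip(inputs, inputs[1:]))

def find_witness(depth, init=0):
    """Bounded search for an input sequence (a 'witness') that satisfies
    the constraint and drives the design into a state where the property
    holds. Returns the sequence, or None if none exists within `depth`."""
    for n in range(1, depth + 1):
        for inputs in product([0, 1], repeat=n):
            if not constraint_ok(inputs):
                continue
            state = init
            for inp in inputs:
                state = step(state, inp)
            if property_holds(state):
                return list(inputs)
    return None

print(find_witness(8))   # shortest legal witness: [1, 0, 1, 0, 1]
```

Exhausting the search without finding a witness corresponds to a (bounded) proof that none exists; a SAT engine reaches the same two conclusions without enumeration, or runs out of resources, which is the third outcome above.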
As an aside, formal verification products are quite interesting to sell. Typically, to evaluate them, the customer will have an application engineer run an old design through the tool, one that is already in production. It is interesting when the design promptly fails and a sequence is found that causes it to do something it shouldn't. Of course, you don't tell the customer all the problems; they need to buy the tool to find that out.
Atrenta’s white papers on formal verification, which go into a lot more detail, are available here.