
PC Growth Latches on to the Parabolic Curve of Emerging Markets
by Ed McKernan on 11-04-2011 at 7:56 am

One of the interesting tidbits of information to come from Intel’s October earnings call was that Brazil, a country of nearly 200M people, has moved up to the #3 position in terms of PC unit sales. This was a shock to most people and, as usual, was brushed aside by those not familiar with the happenings of the emerging markets (i.e. the countries keeping the world out of a true depression). A few days ago I saw an article about Brazil’s economy posted on one of my favorite web sites, Carpe Diem. The picture to the left and the following article should put things in perspective (see Brazil to Surpass U.K. in 2011 to Be No. 6 Economy). Brazil’s economy (GDP) has increased 500% since 2002 and is expected to grow another 40% in the next 4 years. Does this not look like the Moore’s Law parabolic curve with which we are all familiar?

For the past year, Intel has reiterated on every conference call and analyst meeting that they conservatively saw an 11% growth rate for the PC market over the next 4 years. The Wall St. analysts scoffed that Intel was overly optimistic and used data from Gartner and IDC to back them up. Gartner and IDC were, in Intel’s words, not able to accurately count sell-through in the emerging markets. For those of you not familiar with the relationship between Intel and Gartner/IDC, let’s just say Intel NEVER shares processor data with analysts. It’s a guessing game at best, and therefore Gartner and IDC put together forecasts that are backward looking and biased towards the US and Western Europe. If these two regions are flat while the emerging markets are growing, then you get the picture.

The result of all this is that the understanding of the worldwide PC and Apple markets is skewed towards what sits on the analyst’s desk and not what is sold in the hinterlands. Intel knows better than anyone that there are three distinct markets in the world. There is the Apple growth story that is playing out in the US and Western Europe, cannibalizing the consumer/retail PC market at a fast clip. Then we have the corporate market that is tied to the Wintel legacy, and these machines are selling at an awesome rate. How do we know? Intel’s strong revenue and gross margins tell us this. Finally, there is the emerging market that is based almost solely on Intel or AMD with some fraction of Windows (real or imaginary). For this market, the iPad and Mac notebooks are too expensive. Given the growth rate of the emerging market economies, the PC will have a strong future.

Considering that Brazil is just surpassing the UK in GDP with three times the UK’s population, one can see several trends. First, income is rising to the point where PCs are affordable. Second, there is much more demand coming on stream over the next few years from younger countries with rising salaries. And finally, if, as one would expect, LCD prices continue to fall, DVD drives are discarded, and SSDs finally enter the mix as a cheaper alternative to HDDs in the next 24 months, then there is further room for notebooks to move lower in price. A $300 notebook today that trends to $200 and below may result in a new parabolic demand curve. Moore’s Law shows up again in another unexpected place.


Arteris vs Sonics battle…Let’s talk NoC architecture
by Eric Esteve on 11-04-2011 at 6:23 am

The text of this very first article about Arteris had disappeared from SemiWiki for an absolutely unknown reason… If you missed it, it is a pretty useful introduction to the NoC concept, as well as to the legal battle between Arteris and Sonics:

The Network on Chip (NoC) is a pretty recent concept. Let’s try to understand how it works. Anybody who has been involved in supercomputer design (like I was in the 80’s) knows that you need a “piece” between the multiple CPUs and memory banks, at that time a “crossbar switch”. To make it outrageously simple, you want to interconnect the M blocks on the left side with the N blocks on the right side; to do so you create a switch made of MxN wires.

The “crossbar switch” is as old as the phone industry! Were this type of architecture implemented in a multimillion-gate System on Chip, you can easily guess the kind of issues generated: routing congestion, overly long wires, and increasing delay and power consumption due to interconnect. Thanks to modern telecommunications, the “old” way to build networks has been replaced by “bit-packet switching over multiplexed links”. This simply means you can use a single physical link to support several functional interconnects, and that you apply the packet transport approach, very popular in newer interface protocols like PCI Express, serialized to again reduce the number of wires. The NoC concept was born in the early 2000’s, and the first dedicated research symposium on Networks on Chip was held at Princeton University in May 2007.
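
To make the wiring argument concrete, here is a minimal, hypothetical Verilog sketch of a naive M-initiator by N-target crossbar (my own illustration, not Sonics’ or Arteris’ actual RTL). Every target needs a full-width mux across all M initiators, so the wire count grows roughly as M x N x data width; a packet-based NoC instead serializes transactions over a small number of shared links, trading wires for arbitration and routing logic.

```verilog
// Hypothetical illustration only: a naive M x N crossbar.
// Each of the N targets muxes a full WIDTH-bit bus from all M initiators,
// so interconnect cost grows as roughly M * N * WIDTH wires.
module simple_crossbar #(
  parameter M     = 4,   // number of initiators
  parameter N     = 4,   // number of targets
  parameter WIDTH = 32,  // data width
  parameter SELW  = 2    // = ceil(log2(M))
) (
  input  wire [M*WIDTH-1:0] init_data,  // packed data from the M initiators
  input  wire [N*SELW-1:0]  sel,        // per-target select of one initiator
  output wire [N*WIDTH-1:0] targ_data   // packed data to the N targets
);
  genvar t;
  generate
    for (t = 0; t < N; t = t + 1) begin : g_target
      // Which initiator this target is currently listening to.
      wire [SELW-1:0] s = sel[t*SELW +: SELW];
      // Full mux from every initiator to this target.
      assign targ_data[t*WIDTH +: WIDTH] = init_data[s*WIDTH +: WIDTH];
    end
  endgenerate
endmodule
```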

Let’s come back to the fight of the day: on my left Sonics, fighting with Arteris, on my right. Sonics was founded in 1996; it’s important to mention this, as it was very early in NoC history. The first products launched by Sonics, like the Synapse 3220, were based on a “crossbar switch” topology, and the “significant benefit”, according to Sonics, was “its control of the interconnects power dissipation. In a classic bus-based architecture, during any given transaction, all of the wires in the bus are driven, not just those from a given initiator to a given target. This wastes a significant amount of power. Synapse 3220 addresses the problem by activating only the required segment of the interconnect for a given transaction, while the rest is kept deactivated.” As you can see, the product was not packet based, multiplexed, or serialized; it was a crossbar switch where you could deactivate the buses not used in a given transaction. If we look at the NoC product released in 2005 (Arteris was created in 2003), like SonicsMX, it was still based on a “crossbar switch” (just look at the dark blue blocks):

When Sonics came on the market, they were alone in a niche, enjoyed many design wins and grew their customer base. And they had to keep their legacy interfaces (based on OCP, for example) to satisfy existing customers when developing new products. When Arteris started business (in the mid 2000’s), they jumped to the most effective, modern NoC topology: “point-to-point connections rather than the more typical mixture of multiple fan-out meshes. While a more standard hybrid bus will contain a centralized crossbar and numerous wires that create difficult-to-route congestion points, an NoC’s distributed communication architecture minimizes routing congestion points to simplify the chip’s layout effort”.

What was the market answer? In 2005, Sonics was still enjoying prestigious design wins for NoC at many application processor chip makers in the consumer or wireless handset market (“Broadcom, Texas Instruments, Toshiba Corp., Samsung and several unnamed Original Equipment Manufacturers”). What we see today is that Arteris’ customer list includes Qualcomm, Texas Instruments, Toshiba, Samsung… and also LG, Pixelworks and Megachip! There is nothing like customer design wins to quantify an IP product’s success.

So, I don’t know if Sonics’ patents can be applied to Arteris’ NoC IP (I am not a lawyer, nor a NoC expert), but what I can see is that Sonics came very early to the NoC IP market, using a “crossbar switch” topology, and has enjoyed good success in a niche market… where it was the single player. About 10 years later, Arteris came to the same niche, but rich, market (complex SoCs for application processors in wireless handsets, or multimedia processors) with a more innovative product (see above) and won major IDM and ODM sockets pretty quickly (5 years is quick to design-in a new concept)… If your product is not good enough, is it time to go legal? I don’t say that’s the case, but it looks like it is!

Eric Esteve from IPNEST


Learning Verilog for ASIC and FPGA Design
by Daniel Payne on 11-02-2011 at 11:17 am

Verilog History
Prabhu Goel founded Gateway Design Automation and Phil Moorby wrote the Verilog language back in 1984. In 1989 Cadence acquired Gateway and Verilog grew into a de-facto HDL standard. I first met Prabhu at Wang Labs in 1982, where I designed a rather untestable custom chip named the WL-2001 (yes, it was named to honor 2001: A Space Odyssey) and was lectured about the virtues of testability, oh well.

Learning Verilog
Today you can learn Verilog by a variety of means:

  • Buy a book and self study
  • Browse the Internet and self study
  • Attend a class, seminar or workshop

    I’ve learned Verilog through self-study and kept in touch with a corporate trainer named Tom Wille, who operates TM Associates; we both worked at Mentor Graphics. Several years ago Tom asked me to update and deliver a Verilog class for Lattice Semiconductor to use:

    I’ve delivered the Verilog class to Lattice Semi and other companies in the US. Recently I updated the Verilog class again and trained a group of AEs at Lattice Semi in Hillsboro, Oregon using:

    Class Experience
    Each AE brought in their own laptop computer loaded with software, and I handed out a thick binder with lecture material and notes, plus a smaller binder for lab exercises. Most of the AEs used Aldec and Lattice Diamond on Microsoft Windows; however, one AE ran ModelSim on Linux. Some of the reasons for having an on-site Verilog training class are:

  • Convenient method for engineers to quickly come up to speed and learn Verilog by theory (lecture) and application (labs)
  • Interactive questions encouraged
  • Uses a tested process for learning
  • Learn by doing the labs


    In three days we covered 12 units of study, typically two units before lunch and two units after lunch. Here’s the general outline we followed:

    Day 1
    Unit 1: Introduction
    Coding and synthesizing a typical Verilog module to be used in the wireless chip.

    • Synthesis-friendly features of Verilog-2001
    • Migrating the module from an FPGA prototyping technology to a submicron ASIC technology
    • Wireless chip design spec

    Unit 2: Combinational Logic

    • Effective use of conditional constructs (if-else, case, casez, ?:); a short example follows this unit’s list
    • Decoding. Priority encoding
    • Code conversion. ROM usage
    • Multiplexing/demultiplexing
    • Iterative constructs (for, while, disable)
    • Signed/unsigned arithmetic
    • Using concurrency
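
    As a taste of the coding style covered in this unit, here is a minimal, hypothetical example (my own sketch, not the actual course lab code) combining a casez-based priority encoder with a ?: multiplexer, all as pure combinational logic:

```verilog
// Hypothetical Unit 2-style example: combinational logic only.
module prio_enc_mux (
  input  wire [3:0] req,      // request lines, bit 3 = highest priority
  input  wire [7:0] a, b,     // mux data inputs
  input  wire       sel,      // mux select
  output reg  [1:0] grant,    // encoded highest-priority request
  output reg        valid,    // any request present
  output wire [7:0] y         // mux output
);
  // Priority encoding with casez: '?' bits are don't-cares.
  always @* begin
    valid = 1'b1;
    casez (req)
      4'b1???: grant = 2'd3;
      4'b01??: grant = 2'd2;
      4'b001?: grant = 2'd1;
      4'b0001: grant = 2'd0;
      default: begin grant = 2'd0; valid = 1'b0; end
    endcase
  end

  // 2-to-1 multiplexing with the conditional operator.
  assign y = sel ? a : b;
endmodule
```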

    Unit 3: Sequential Logic

    • Sequential building blocks
    • Registers with synch/asynch reset and clock enable
    • Parallel/serial converter
    • Ring counter
    • Edge detector
    • Using blocking vs. non-blocking assignments (see the sketch after this list)
    • Non-synthesizable constructs and workarounds
    • ASM (algorithmic state machine) charts as an aid to sequential-machine design
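
    To illustrate the blocking vs. non-blocking point, here is a minimal, hypothetical sketch (again, not the course lab code) of a register with asynchronous reset and clock enable feeding a simple edge detector; non-blocking assignments keep the simulated behavior consistent with the synthesized flip-flops:

```verilog
// Hypothetical Unit 3-style example: sequential logic with non-blocking assigns.
module edge_detect (
  input  wire clk,
  input  wire rst_n,    // asynchronous, active-low reset
  input  wire en,       // clock enable
  input  wire d,        // input to monitor
  output reg  q,        // registered copy of d
  output wire rising    // one-cycle pulse on a 0 -> 1 transition of d
);
  reg d_prev;

  // Non-blocking (<=) assignments: both registers update with the pre-edge
  // values, so d_prev really is "d, one enabled clock earlier".
  always @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      q      <= 1'b0;
      d_prev <= 1'b0;
    end else if (en) begin
      q      <= d;
      d_prev <= q;
    end
  end

  assign rising = q & ~d_prev;
endmodule
```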

    Unit 4: Block Integration

    • Chip-level design and integration issues
    • Coding above the block level
    • Multiple clock domains
    • Partitioning an entire chip into modules
    • Separating blocks with different design goals
    • Registering outputs
    • Maximizing optimization
    • Instantiating IP blocks such as Synopsys DesignWare
    • Instantiating I/O cells using generate loops

    Day 2
    Unit 5: FSMs and Controllers

    • Coding FSM-oriented designs (see the FSM sketch after this list)
    • ASM (algorithmic state machine) chart usage
    • Mealy vs. Moore insights
    • Modified Mealy FSM with registered next-outputs
    • Hierarchical FSMs
    • Controller for wireless chip
    • Datapath/controller paradigm
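
    As a flavor of the FSM coding style discussed in this unit, below is a minimal, hypothetical Moore machine (not the wireless-chip controller from the class) using localparam state encodings and the common split between combinational next-state logic and a sequential state register:

```verilog
// Hypothetical Unit 5-style example: a tiny Moore FSM that detects two
// consecutive 1s on 'in_bit' and asserts 'found' for one cycle (non-overlapping).
module seq11_fsm (
  input  wire clk,
  input  wire rst_n,
  input  wire in_bit,
  output wire found
);
  localparam [1:0] S_IDLE = 2'd0,
                   S_ONE  = 2'd1,
                   S_HIT  = 2'd2;

  reg [1:0] state, next;

  // Next-state logic (combinational).
  always @* begin
    case (state)
      S_IDLE:  next = in_bit ? S_ONE : S_IDLE;
      S_ONE:   next = in_bit ? S_HIT : S_IDLE;
      S_HIT:   next = in_bit ? S_ONE : S_IDLE;
      default: next = S_IDLE;
    endcase
  end

  // State register (sequential).
  always @(posedge clk or negedge rst_n)
    if (!rst_n) state <= S_IDLE;
    else        state <= next;

  // Moore output: depends on the state only.
  assign found = (state == S_HIT);
endmodule
```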


    Unit 6: Getting the most out of your tools

    • Synthesizable HDL subset
    • Unsupported constructs
    • Excluding simulator-oriented code
    • Using parameters and localparam
    • Name-based redefinition
    • Text substitution
    • Managing code modules
    • Using include directives and definition.vh files
    • Coding for reuse and portability

    Unit 7: Coding for Area

    • Classic area/delay trade-off
    • Avoiding excess logic
    • Reducing ASIC gate count or FPGA LUT usage
    • Minimizing algebraic tree nodes
    • Sharing arithmetic resources
    • Sharing non-arithmetic logic like array indexing
    • Caching recomputed quantities
    • Scheduling over multiple clock cycles


    Unit 8: Coding for Performance

    • Parallelizing operations
    • Minimizing algebraic tree height
    • Resource implementation selection
    • Exploiting concurrency
    • Accommodating late input arrivals

    Day 3
    Unit 9: Verification

    • Verification definition, methodology
    • Testbench architecture
    • Clock generation
    • Timescale
    • Stimulus generation
    • Sampling response at regular intervals or on change
    • Comparison of responses
    • Using $random
    • Using fork and join (see the testbench sketch after this list)
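
    Here is a minimal, hypothetical self-running testbench (my own sketch, not the class lab) that pulls together several of these items: a `timescale directive, clock generation, $random stimulus, and parallel processes with fork and join. It drives the hypothetical Unit 2 module shown earlier:

```verilog
// Hypothetical Unit 9-style example: a tiny self-running testbench.
`timescale 1ns/1ps

module tb_prio_enc_mux;
  reg  [3:0] req;
  reg  [7:0] a, b;
  reg        sel;
  wire [1:0] grant;
  wire       valid;
  wire [7:0] y;
  reg        clk;

  // Device under test (the hypothetical Unit 2 example above).
  prio_enc_mux dut (.req(req), .a(a), .b(b), .sel(sel),
                    .grant(grant), .valid(valid), .y(y));

  // Clock generation: 100 MHz, used only to pace the stimulus.
  initial clk = 1'b0;
  always #5 clk = ~clk;

  // Stimulus and a safety timeout running in parallel.
  initial begin
    fork
      begin : stimulus
        repeat (20) begin
          @(posedge clk);
          req         = $random;  // random request pattern
          {a, b, sel} = $random;  // random mux inputs
        end
        $display("Stimulus done at %0t", $time);
        $finish;
      end
      begin : watchdog
        #10000;                   // safety timeout
        $display("Timeout!");
        $finish;
      end
    join
  end
endmodule
```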


    Unit 10: Testbench Techniques

    • Encapsulating tests within tasks
    • Self-checking testbenches
    • File-oriented testbenches
    • Using $readmem
    • Fixed vectors
    • Bus functional models
    • Synchronizing stimuli
    • Named events
    • Accessing the Verilog PLI

    Unit 11: Avoiding Simulation Pitfalls

    • Weaknesses of Verilog-2001
    • Truncation and other risky coding constructs
    • Timescale pitfalls
    • Avoiding race conditions during simulation
    • Avoiding simulation-synthesis mismatches
    • Avoiding simulator bottlenecks and improving performance

    Unit 12: Advanced Topics

    • Bottom-up vs. top-down verification methodology
    • Emergence of static verification
    • Coding guidelines for formal equivalence
    • Co-simulation using Vera
    • Scan-based testing
    • DFT guidelines
    • Future directions.

    The pace is fast and the group of AEs had many questions that were answered and clarified using the white board. More than half of the time is spent in the labs where students really get to apply the theory in a practical way by coding Verilog, debugging and then verifying correct results. We code both designs and test benches.

    In this particular class we did uncover one subtle difference in Verilog simulation results between ModelSim and Aldec. The student using ModelSim was able to tweak the one lab design to pass the test bench.

    Summary
    If you have a group of engineers that needs to learn Verilog for the first time, or just to increase their Verilog understanding, then consider contacting Tom Wille to find out if an on-site class might be of value. His company also offers VHDL training and has been in business for many years using a variety of freelance instructors.


    High-efficiency PVT and Monte Carlo analysis in the TSMC AMS Reference Flow for optimal yield in memory, analog and digital design!
    by Daniel Nenni on 11-01-2011 at 9:00 am

    Hello Daniel,
    I am very interested in the articles on PVT simulation. I have worked in that area in the past, when I worked in process technology development and SPICE modeling, and I also started a company called Device Modeling Technology (DMT) which built a SPICE model library of discrete components, such as bipolar/MOS/power MOSFET/analog switch/ADC/CDA/PLL, sold to companies like Fujitsu, Toshiba, etc.

    When I worked in R&D we had a project to simulate the process based on the device architecture and send the output data to a device simulator called PICE, whose output was in turn fed to the input of a SPICE simulator, so that the process simulator, the device simulator and the SPICE simulator were connected.

    We could easily define the performance of the targeted analog circuit under variations of the process recipe and device structures, and we could also predict the yield at each corner by running SPICE PVT simulations against the six-sigma SPICE models. However, as you know, performance always has to be traded off against reliability, and you can’t run the circuit simulation together with reliability models, because no such models are available.

    As a result I do not pay much attention to SPICE simulation results, because they can never tell you what the reliability will be, and I still believe real corner-lot wafers are the best way to verify performance, yield and reliability.

    Hi Edward,

    Process variation is of great interest at 28nm and even more at 20nm. In a recent independent survey, variation-aware custom IC design was ranked the number one area requiring advancement over the next two years. The survey revealed:


  • 53% of design groups missed deadlines or experienced respins due to variation issues
  • Designers experienced an average 2 month delay due to variation issues
  • Designers spent an average 22% of design time on variation issues

    For further information, see the Gary Smith EDA analyst report on variation design.

    Here is a recent webinar done by Solido and TSMC on High-efficiency PVT and Monte Carlo analysis in the TSMC AMS Reference Flow for optimal yield in memory, analog and digital design.

    Attendees of this webinar learned:


  • Variation challenges in custom IC design
  • Variation-aware solutions available in the TSMC AMS reference flow
  • Methods to develop and verify designs over PVT corners in less time
  • How to efficiently apply Monte Carlo techniques in design sign-off
  • How Monte Carlo is really possible up to 6 sigma (a rough sample-count estimate follows this list)
  • Customer case studies of the above methods
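
    As a rough back-of-envelope estimate (my own, assuming a normal distribution; not a figure from the webinar), here is why brute-force Monte Carlo breaks down at 6 sigma and smarter sampling is required:

    P(fail) = P(X > μ + 6σ) ≈ 9.9 × 10^-10
    N ≈ 10 observed failures / (9.9 × 10^-10) ≈ 10^10 simulations

    That is, on the order of ten billion SPICE runs just to see a handful of 6-sigma failures, which is what high-sigma Monte Carlo techniques are designed to avoid.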

    Solido customer case studies include:


  • NVIDIA for memory, standard cell, analog/RF design
  • Qualcomm for memory design
  • Huawei-HiSilicon for analog design
  • Qualcomm for I/O design
  • Anonymous for analog/RF design

    Presenters:


  • Nigel Bleasdale, Director of Product Management, Solido Design Automation
  • Jason Chen, Design Methodology and Service Marketing, TSMC

    Audience: Circuit Designers, Design Managers, CAD Engineers


    Meg Whitman Should Buy AMD and Take HP Back To Its Roots
    by Ed McKernan on 10-31-2011 at 11:16 am

    Back in the 2008 financial crisis, GM was finally brought to its knees and had to face a radical makeover. They asked for a bailout from the government that allowed the unions to swap lower compensation for equity, something no union would do unless the alternative was to shutter the doors. The bondholders and the shareholders got a major haircut in the process. For the management, what they wanted in return for running the company was the flexibility and cost structure to allow them to focus the future of the company in two areas: trucks and the fast growing China market. Meg Whitman has to come up with a strategy that gets HP out of the commodity box that is being slammed on many sides (i.e. by Oracle, IBM and Apple). One proposal would be for her to return HP to its roots by buying AMD and going vertical in the PC and server space to a degree that cannot be matched by Dell, IBM or Oracle.

    As I blogged a few months ago, HP’s unraveling began in the summer of 1994 when they decided that they no longer wanted to develop PA-RISC server processors and instead signed on with Intel to have them develop Itanium. HP quickly set out to acquire all available minicomputer architectures (Convex Computer, Tandem and DEC VAX and Alpha) under its umbrella to spread Itanium’s architecture into a market share leadership position. Itanium was late and the issue of porting software to the new architecture was a greater challenge than first imagined. As a result HP did not overcome IBM or Sun’s lead in the high-end server and workstation business. IBM continues to profit and Oracle’s acquisition of Sun SPARC allows it to execute on a very profitable razor/razor blade business model with some legacy customers. Meanwhile HP continues to lose share and revenue.

    Listening to AMD’s recent earnings conference call I was struck by how well they were doing in the emerging markets and the fact that they are increasing revenue along with Intel and nVidia in 2011. Clearly the emerging market is growing much faster than Gartner, IDC or the Wall St. analysts can get their hands around. All this is happening even with the economic slowdown in Europe, which still represents about 20-25% of the world economy. AMD said in the conference call that they believe they have 28% of the $200-$600 retail market, which is 45% of the total retail market. All the growth was in notebooks. They are executing on a <$30 CPU plan at 45% gross margins, a space Intel does not have the capacity or desire to play in.

    What we are witnessing is a massive shift of notebook price points (meaning 14” and 15” LCDs) into what was a 10” LCD based netbook space just 12-24 months ago. The crunch down in notebook prices, and not Apple’s iPad, is the reason netbooks are dead. The driving factors are lower cost LCD panels, nearly FREE DRAM and x86 CPUs that sell down to $20. I assume that in some of the emerging markets where notebooks approach $200, the O/S is Linux or a bootleg copy of Windows.

    Back to HP and the reasons for an acquisition of AMD. Some months ago, I made another prediction that AMD would get bought when its stock price dropped below $5. I speculated nVidia was the likely suitor. nVidia, though, has done much better than expected the past year, focused on >$700 notebooks, which is above AMD’s $200-$600 focus. In addition, nVidia is very focused on the ARM based Tegra. AMD today is under $6 and valued at less than $3.7B. They did drop below $4.50 in September. Either way, it is well within the range that HP could pull off rather easily, especially given their almost unlimited ability to raise debt.

    As HP looks around them, they see that all of their chief competitors with the exception of Dell have an internal processor group (i.e. IBM, Apple, Oracle) that can customize the processors for the target market. With an AMD acquisition, they can leverage themselves into the emerging markets at the expense of Lenovo, ASUS, Acer and Dell who would have to rely on more expensive Intel processors. HP could finally move their market share from around 20% today to a range of say 40-50%, where they would have the world’s lowest cost supply and operations infrastructure.

    Secondly, an AMD acquisition paired with a partnership with Calxeda, could allow HP to offer a broader data center product line. Calxeda is the Austin startup focused on low power, ARM based server processors. With Intel raising prices on Xeon and reaping 80%+ gross margins, it now becomes a major focus for Google, Amazon and data centers to figure out how to shift work loads off of x86 to other lower cost architectures like ARM. This is no different than IBM helping Fortune 500 companies move their workloads over to their PowerPC based mainframes and servers.

    Without a processor group, HP will continue to be commoditized in both the PC and the server markets by lower cost China based competitors and by IBM and Oracle. For the small price of buying AMD and investing in Calxeda, HP can go to a broad set of customers and offer a better value proposition than any of its competitors, with complete server and networking solutions that they designed. HP then would be back in the business of Inventing.


    EDA Company Selected as One of the Fastest Growing Companies in North America by Deloitte’s 2011 Technology Fast 500™!?!?!?!
    by Daniel Nenni on 10-31-2011 at 11:07 am

    Wow! We always hear semiconductor companies complain about the lack of innovation amongst the EDA leaders. Placing high on the Deloitte 500 list shows that innovation is alive and well in EDA and it IS possible to have a meaningful impact regardless of your overall size. It is worth noting that there are very few EDA companies that have ever won this award.

    Deloitte’s 2011 Technology Fast 500™ is a ranking of the 500 fastest growing technology, media, telecommunications, life sciences and clean technology companies in North America. The winners are selected based on percentage fiscal year revenue growth from 2006 to 2010. During this period, Berkeley Design Automation’s revenue grew 787%, while its customer base grew to over 110 semiconductor companies worldwide.

    There are more and more pain points cropping up as our customers try to close the gap between circuit design and actual silicon performance at ever-shrinking process nodes. Physics and statistics are becoming critical to understand electronics. The focus at BDA is on those problems that arise when analog, mixed-signal, RF, and custom digital circuit content grows rapidly. Technology that is directly targeted to solve these pain points, together with knowledgeable application expertise, timely responsiveness to customer issues, and the right business models are the key ingredients behind the company’s rapid revenue growth.

    It is an incredible honor for BDA to win this prestigious award. It validates their mission, strategy, and execution in solving some of the newest and most difficult problems for semiconductor design teams. BDA is widely recognized for its technology leadership via its Analog FastSPICE™ Platform and its growing market share in the electronic design automation industry. Berkeley Design Automation is the only EDA company selected for this year’s ranking.

    Ravi Subramanian, Berkeley Design Automation’s chief executive officer, credits the company’s incredible revenue growth to the combination of the:

    • Industry’s rapid move to nanometer mixed-signal design starts
    • Company’s breakthrough verification technology
    • Collaboration with key customers and partners
    • Widespread customer success with this new technology

    “This is a prestigious honor for Berkeley Design Automation, and we would like to share this honor with our customers. Our stellar team, strong technology base, innovative products, outstanding customer focus, and execution discipline have helped ensure our strong and consistent year-on-year revenue growth, even as we faced challenging economic times. I would like to thank our customers for fueling this growth via the strong demand for our products.”

    Berkeley Design Automation, Inc. is the recognized leader in nanometer circuit verification. The company combines the world’s fastest nanometer circuit verification platform, Analog FastSPICE, with exceptional application expertise to uniquely address nanometer circuit design challenges. More than 100 companies rely on Berkeley Design Automation to verify their nanometer-scale circuits. Berkeley Design Automation has received numerous industry awards and is widely recognized for its technology leadership and contributions to the electronics industry. The company is privately held and backed by Woodside Fund, Bessemer Venture Partners, Panasonic Corp., NTT Corp., IT-Farm, and MUFJ Capital.

    For more information, visit the BDA landing page on SemiWiki:

    http://www.semiwiki.com/forum/content/section/256-berkeley-design.html


    TSMC ASIC versus IBM ASIC!
    by Daniel Nenni on 10-30-2011 at 3:00 pm

    Lunch with Jim Lai, President of Global Unichip (GUC), was the highlight of my week; I had a very nice crab cake salad. As you may have read, GUC announced itself as the “Flexible ASIC Leader,” taking direct aim at the traditional ASIC market led by the likes of IBM, ST Micro, TI, Renesas, and Samsung. This will be like “shooting fish in a barrel” for two very simple reasons: 28nm/20nm design challenges and the incredibly complex IP and packaging that goes with them!

    • Established in 1998
    • Headquartered in Hsin-chu Science Park
    • IPO on TSE (symbol: 3443), 2006
    • Largest shareholder is TSMC, current share 36%
    • Worldwide presence: US, China, Europe, Japan and Korea

    After my time as Director of Strategic Foundry Relationships for Virage Logic, I spent two years with the eSilicon and Virage sales teams in Silicon Valley. My theory was that IP was key to competing in the ASIC market, so combining the two would be a perfect fit. My vision was for eSilicon and Virage to merge and create an ultra-competitive ASIC company, which of course did NOT happen, but I digress……

    The New GUC provides superior domain-specific design flows and a comprehensive IP portfolio, melded through an unparalleled bond with the manufacturing leader, to forge our uncompromising ASIC capabilities.

    During this time I identified 200+ fabless semiconductor companies in Northern California and profiled them based on product application, EDA methodology, IP, foundry, process node, etc…. I also attended weekly eSilicon and Virage Logic sales meetings to document why business was won and lost. Bottom line, IP absolutely was a key differentiator in why eSilicon lost to the ASIC guys, including GUC.

    GUC provides an unmatched combination of advanced technology, low power and embedded CPU design capabilities and production knowhow through close partnership with TSMC and major 3D IC packaging and testing companies that are ideal for advanced communications, computing and consumer electronics ASIC applications.

    Process technology is also now a differentiator since the ASIC guys have all but given up their fabs. TSMC has the only yielding 28nm processes, so which ASIC company has the inside track at TSMC? Well that would be GUC. The eSilicon guys often complained about the unfair advantage GUC had being the child of TSMC. When I relayed this story to Jim Lai he responded, “Yes, of course that is true!” GUC HQ in Taiwan is right across the street from TSMC Fab 12. TSMC executives are in Fab 12, including Dr. Cliff Hou, Vice President, Design and Technology Platform at TSMC, who is on the GUC board of directors.

    GUC is the Flexible ASIC Leader that communications, computing and consumer electronics companies turn to when low power AND high performance ASICs are “a must have.” The company provides superior domain-specific design flows and a comprehensive, proven IP portfolio, melded through an unparalleled bond with the manufacturing leader, to forge our uncompromising ASIC capabilities.

    GUC may live in the shadow of TSMC, but with 500+ employees worldwide, 100+ customers, and 2010 revenue of $327M, they are casting a much larger shadow on the traditional ASIC industry. It will start at 28nm and finish at 20nm: Global Unichip Corporation will be the number one ASIC company worldwide, believe it! I have been invited to GUC HQ during my trip to Taiwan later this month and I’m really looking forward to it!



    What’s New with Semiconductor Test and Failure Analysis at Mentor?
    by Daniel Payne on 10-28-2011 at 6:03 pm

    ISTFA
    Silicon Valley is a great location for trade shows and technical conferences, so if you have an interest in test and failure analysis then don’t miss out on the 37th annual International Symposium for Testing and Failure Analysis. This year ISTFA will be held from Sunday, November 13th thru Thursday, November 17th in San Jose at the McEnery Convention Center.

    I’ll never forget the first DRAM design that I worked on because we had a few percent yield issue caused by electromigration. I could look at my DRAM chip under the microscope and vary the VDD supply until an aluminum wire would start to bubble, melt and evaporate before my very eyes.

    You can visit all of the exhibitors on Tuesday and Wednesday, November 15-16.

    • New Tutorials including: Construction Analysis and Reverse Engineering, Package FA, Chip Access and Repackaging, Delayering Techniques, Photovoltaic FA

    • Technical Sessions including Counterfeit Electronics and Renewable Energy
    • Technology-Specific User Groups include: Package and Assembly FA, 3D and Finding the Invisible Defect
    • Panel Discussion “…But How Does One Find an ‘Invisible’ Defect?”
    • Pre- and Post-Conference Education Short Courses
    • North America’s Largest FA-Related Industry Show
    • Exhibitor AfterHours Demonstrations
    • Unlimited Networking Opportunities
    • Significant Early-Bird and Housing Discounts

    Mentor Graphics
    New in 2011 is a cell-aware flow where you can have user-defined fault models (UDFM) to generate test patterns for cell internals.

    Dave Macemon has written a White Paper on this topic of UDFM.

    Another new area for Mentor in 2011 is DFM + Yield analysis. For ramp-up of a new design you need to quickly identify the fundamental cause for low yield. The Tessent YieldInsight® tool gives you statistical analysis and data mining that work along with Tessent Diagnosis. With these tools you can identify the likely source of systematic defects prior to physical failure analysis.

    These approaches could save you days or weeks of effort.

    To register for after hours demos (by invitation only) Tuesday, Nov 15 from 5:45 pm – 7:15 pm, send email to: silicon_test@mentor.com

    Also, to get your free expo pass for the conference, simply enter MEN102 as your promo code here.

    More Details
    Here’s where to get more information about the ISTFA 2011 Conference and Exposition.


    Xilinx and Altera’s Summer At The Beach
    by Ed McKernan on 10-28-2011 at 11:01 am

    The “old saw” is “Sell in May and go away.” It’s a maxim that particularly applies to semiconductor stocks, as they typically drop from a post-April-earnings peak through the summer doldrums to a late-September nadir, only to be revived in the prelude to October earnings. It has happened again this year, although the path taken by the various big semiconductor players was quite different. In particular, the highly profitable, seemingly unstoppable Altera and Xilinx were coasting along until mid July, when their fate was sealed by the financial shenanigans of the US and European governments. More than other semiconductor companies, their revenue faucet appears to turn hard off when sovereigns play chicken and the bank dominoes tip over. No bank financing, economy stalls.

    August means the Hamptons and Martha’s Vineyard for the stressed out Wall St. wizards and the White House family. Instructions are usually left at the office to never disturb the sun tanners, unless it’s absolutely the end of the world. If the US government had just spent $100 billion less these past 3 years (which works out to less than 1% of the overall budget), Obama wouldn’t have had to pick a budget fight with Boehner until the fall and the S&P wouldn’t have downgraded the US debt in early August. And if the Greeks would just raise the retirement age a couple of years, the Europeans would have been able to kick the can down the road until the cold of winter reminds the Germans why they desperately need Greece after all is said and done.

    Now, with a synchronicity that one could swear was a conspiracy, the financial geniuses decided that the first weekend in August was to be the line in the sand. It worked to perfection: the banks swooned, and so the big communications equipment vendors like Ericsson and Alcatel-Lucent broke the “In Case of Fire” glass to implement the plans they should have had ready back in September 2008. Back then, nobody thought the experts would actually take the financial system over the cliff. Sitting on FPGA silicon inventory that rots while sales stall out is not a plan that companies like Ericsson, Alcatel-Lucent and Huawei can live with for long. The everyday cash flow paying salaries, rent and the electric bill quickly consumes what remains in the bank. And so purchasing gets a call from the CFO to cancel all open orders and return to distributors whatever is not yet soldered onto a PCB, until the all-clear.

    Altera and Xilinx reported stunning drop-offs in revenue for Q3 of 5-10% sequentially. In addition, they forecast that Q4 2011 will see another sequential drop of as much as 8-11%. In terms of drop-offs this is as bad as it gets without being 2008. And yet the morning after the earnings calls, all was forgiven as analysts cheered, seeing light at the end of the tunnel. Altera’s and Xilinx’s stocks have been on an absolute tear, because no one on Wall St. wants to be late to the party when the whisper goes out: “Customers have come back, book-to-bill is much greater than 1.”

    It has been such a treat to observe the comedy of errors of Wall St. analysts writing opinion pieces, quarter after quarter, about the great FPGA inventory bubble about to burst while Altera’s stock tripled and Xilinx’s more than doubled. This month’s earnings season was about Xilinx and Altera finally confessing that they had hit a wall, albeit one not of their own doing. No matter: sins were confessed, absolution delivered and the stocks have been anointed. For now the Wall St. analysts can go back to their bosses and claim, “See, I was right, the FPGA guys were overbuilding and they just entered an inventory correction.” My sense is that the big traders have stopped listening to Wall St. analysts.

    What does the new economic environment of stop-and-go sovereign debt crises mean for Altera and Xilinx? It means we will probably see more oscillation in their revenue; however, margins will still be strong (they didn’t budge this last quarter). Xilinx and Altera, unlike other semiconductor companies, have very long revenue tails, so inventory doesn’t go to waste. Secondly, both players have discovered that lost revenue in the short term is eventually made up in the long term with revived orders and with new products. Every new process node they seem to ratchet up prototype ASPs, which boosts margins as well. No need to cry for Altera and Xilinx, but I will remind myself that every debt crisis presents an opportunity.


    Think differentiation
    by Paul McLellan on 10-27-2011 at 5:01 pm

    Wally Rhines’s keynote at the ARM TechCon was about differentiation and how to use it to create measurable value. We all know what differentiation means in some intuitive sense, but how do you make it measurable? Wally’s answer was that differentiation is a measure of the difficulty of switching suppliers and is best measured by the gross profit margin (GPM). It doesn’t really work for pure software products because of the accounting rules we are forced to use, but it is fine if the software is embedded in something (think a smartphone).
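
For reference, this is the standard accounting definition of gross margin (my addition, not a formula from Wally’s slides):

    GPM = (revenue - cost of goods sold) / revenue x 100%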

The alternative to differentiation is commoditization, where the products are interchangeable and compete only on price. Sometimes this is quite deliberate (think DRAM) and sometimes it happens despite the companies involved trying, unsuccessfully, to differentiate (think PCs).

Sustainable differentiation depends on the three-legged stool of product, infrastructure and ecosystem, each successively more difficult for a competitor to recreate. ARM is actually a good example. They don’t have a monopoly on good processor design, but they have done a good job of creating the infrastructure of compilers and other tools that is required to make a processor usable. But where they excel is in the number of partners that they have built into an ecosystem around the ARM processor families, ranging from cell-phone manufacturers to semiconductor partners, RTOS suppliers and many other categories.

So once something is commoditized, is it possible to escape from the pit? Wally’s example was bottled water. It sells for 30,000 times the price of a glass of tap water, although it is basically the same product. And how big is that market? Embarrassingly, at $65B it is 13X the size of the EDA market. However, I’m not sure that this is really a good example of a product where differentiation is measured by the difficulty of changing suppliers: switching from Evian to Calistoga doesn’t have a lot of switching cost.

    So what about semiconductors? Where is the differentiation? The most differentiated (based on GPM) is FPGA, followed by analog. Memories and discretes bring up the rear.


FPGAs fit the three-legged stool model, where there is product (the FPGA itself), infrastructure (tools) and an ecosystem. Once you have a piece of IP working on, say, Xilinx, then it is pretty certain that the next product in that family will be on Xilinx too. Why requalify it on Altera when the underlying product (the FPGA itself) isn’t that much different? And at the other end of the scale, memories are pin compatible and switching is easy. Indeed many manufacturers use multiple sources anyway (for example, teardowns of the latest iPhone seem to show Samsung memory chips in the A5 SoC in the US, but Elpida in Europe).

    Foundry looked at as a group has low GPM too. But that hides what is really happening which is that TSMC has margins in the 50% range (and increasing) whereas the other foundries struggle around 20%. Again this partially reflects the strong ecosystem that TSMC has built up around its processes with IP and reference design flows.

Wally had an interesting chart listing the semiconductor companies not by their revenue but by their gross margin. Linear Technology leads the pack with 77%, followed by Altera, PMC-Sierra, National, Qualcomm, Silicon Labs, Intel, ADI, Xilinx and Conexant. Analog, FPGA and microprocessors.

Going forward Wally reckons that there will be less differentiation by process, the traditional edge for a semiconductor company, and more by design. The infrastructure will be built around proprietary IP. For everyone except Intel I think I’d agree, but Intel being a process generation or two ahead of everyone else gives them differentiation even where their design is not the best.

    But how to build an ecosystem?

    First, luck with good followup. Both ARM and Intel have done this, riding the cell-phone and the PC industries when they took off. They were in the right place at the right time but then executed very well.

Another approach is to sacrifice a key capability, as Adobe did with Acrobat by giving up the revenue from Acrobat Reader. Or as IBM did with the PC through their extreme openness (although they rather screwed that one up: a company with the best semiconductor technology, a great RISC processor and world-leading operating system developers gave all the money to Intel and Microsoft).

    TSMC in the early days had a problem. They were second sourcing designs that were made to run in different fabs and different processes so constantly had to tweak their fab to get good yield. So they took their design rules, traditionally the biggest secret in any semiconductor company, and published them. At VLSI/Compass we created libraries and sold them so suddenly people were doing designs directly in TSMC’s process and all that design tweaking went away.

So there you have it. GPM is the best measure of differentiation, driven by the difficulty of switching suppliers. And the hardest thing for a competitor to reproduce is an ecosystem.