

Carl Icahn Blinks in Bid for Mentor Graphics
by Daniel Payne on 05-02-2012 at 3:42 pm


One year ago activist investor Carl Icahn started a hostile takeover bid for Mentor Graphics and managed to place three new members on the board. Yesterday, however, we read in the proxy that:

  • Mentor Graphics will hold its annual shareholder meeting on May 30th
  • Two of Icahn's board members are not on the roster for renewal
  • Mr. Icahn has no new board members to offer up

Read the complete proxy here.

To me this spells defeat for Carl Icahn in taking over Mentor Graphics because he is not offering up any new replacements on the board of directors. Had he been able to get just 5 out of 8 board members to agree with him, then he could’ve controlled the company. Now it appears that Carl’s three board members will be reduced to just 1, a very noticeable minority.

Mentor has a strong poison-pill provision in place, and only one Icahn board member will probably remain after the votes are tallied on May 30th.


Carl Icahn, AP Photo

Mr. Icahn does still own 14.6 percent of MENT shares, which he acquired at prices between $8 and $9 per share, so the present share price of $14.45 gives him a paper profit of over 50%, not too shabby.


If history is any indicator, then buy MENT in August, sell in April

I’ll attend the May 30th annual shareholder meeting at Mentor Graphics and let you know if there is any more drama left in this story. Hopefully we can then all turn our attention back to creating value for customers through EDA tools that enable the SoC revolution.



IC design at 20nm with TSMC and Synopsys
by Daniel Payne on 05-02-2012 at 10:25 am


While the debate rages on about 28nm yield at foundry juggernaut TSMC, on Monday I attended a webinar on 20nm IC design hosted by TSMC and Synopsys. Double Patterning Technology (DPT) becomes a requirement for several layers of your 20nm IC design, which in turn impacts many of your EDA tools and much of your methodology.
Continue reading “IC design at 20nm with TSMC and Synopsys”
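The webinar covers the details, but the core reason DPT ripples through the tool flow can be sketched simply (illustrative only, not TSMC's or Synopsys' algorithm): features closer than the single-exposure pitch must be assigned to different masks, which is a graph 2-coloring problem on the conflict graph.

```python
# Toy DPT decomposition sketch: 2-color the conflict graph of layout features.
from collections import deque

def assign_masks(conflict_edges, features):
    """BFS 2-coloring of the conflict graph; returns None if no legal split."""
    adj = {f: [] for f in features}
    for a, b in conflict_edges:
        adj[a].append(b)
        adj[b].append(a)
    color = {}
    for start in features:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in color:
                    color[nbr] = 1 - color[node]
                    queue.append(nbr)
                elif color[nbr] == color[node]:
                    return None   # odd cycle: a DPT conflict that needs a layout fix
    return color

# Three metal shapes where A-B and B-C are too close: A and C share a mask.
print(assign_masks([("A", "B"), ("B", "C")], ["A", "B", "C"]))
```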



ARM Models: Carbon Inside
by Paul McLellan on 05-01-2012 at 10:00 pm

ARM used to build their own models. By hand. They had an instruction-set simulator (ISS) called ARMulator that was largely intended for software development, and cycle-accurate models that were intended to run within digital simulators for development of the hardware of ARM-based systems.

There were two problems with this approach. Firstly, ARMulator was built using interpretive technology and consumed approximately 1000 host instructions to simulate each ARM instruction so ran at a few MIPS. Modern virtual platform models use Just-In-Time (JIT) compilation, cross-compiling the ARM code into host instructions and so avoiding any interpreter overhead (the compiler creates overhead but that is minimal for instruction sequences executed many times). They thus run hundreds of times faster, fast enough to boot operating systems and debug industrial-sized loads of embedded software.
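To make the overhead argument concrete, here is a toy sketch in Python (purely illustrative, nothing to do with ARM's actual code): an interpretive ISS pays a fetch/decode/dispatch cost on every single instruction, while a JIT translates a block once into host-native code and then reruns it cheaply.

```python
def interpret(program, regs):
    """Interpretive ISS: decode and dispatch on every single instruction."""
    pc = 0
    while pc < len(program):
        op, dst, a, b = program[pc]          # fetch + decode every time
        if op == "add":
            regs[dst] = regs[a] + regs[b]
        elif op == "sub":
            regs[dst] = regs[a] - regs[b]
        pc += 1                              # per-instruction dispatch overhead
    return regs

def jit_compile(program):
    """'JIT': translate the block once into a host-native callable."""
    body = ["def block(regs):"]
    for op, dst, a, b in program:
        sign = "+" if op == "add" else "-"
        body.append(f"    regs[{dst!r}] = regs[{a!r}] {sign} regs[{b!r}]")
    body.append("    return regs")
    namespace = {}
    exec("\n".join(body), namespace)         # one-time translation cost
    return namespace["block"]                # repeated runs skip decoding entirely

program = [("add", "r0", "r1", "r2"), ("sub", "r3", "r0", "r1")]
regs = {"r0": 0, "r1": 5, "r2": 7, "r3": 0}
print(interpret(program, dict(regs)))
print(jit_compile(program)(dict(regs)))
```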

ARMulator was reasonably cheap to maintain since it simulated the instruction set which didn’t change much, apart from its half-hearted attempt at cycle-counting. The same could not be said for the cycle accurate models. These were expensive to develop and had to be re-implemented for each separate ARM processor. Verification, in particular, was a huge challenge. It is comparatively easy to get the basic model to work, but handling all the corner cases (pipeline interlock, delayed writeback, branch prediction, bus contention etc) is a real challenge.

In 2004, ARM acquired Axys Design in an attempt to leverage their models more as true virtual platforms. They renamed it SoC Designer. At the time I was working for VaST (so competing with Axys) and predicted that this would fail. And it was nothing to do with the technology, which was perfectly fine.

I had watched at VLSI Technology as customers balked at using our design tools because they didn't want to get locked into VLSI-only solutions, and I felt ARM customers wouldn't want to get locked into ARM-only solutions either. If you were using, say, a CEVA (then still DSP Group, I think) Teak DSP, how were you going to get that into the platform? And what sort of support would you get from ARM if there were problems?

In 2008 ARM gave up, and sold SoC Designer to Carbon. This fixed the ARM-only issue since Carbon is an independent company. Further, at the same time, they gave up trying to hand-build their own cycle accurate models. Instead, Carbon used existing technology to build the models automatically from ARM’s RTL. ARM shifted their modeling focus to concentrate on their own JIT technology which they introduced as ARM Fast Models. These TLM models don’t attempt to model cycle accuracy and therefore execute much faster.

Since then Carbon has moved from being an RTL acceleration company to a true virtual platform company, with a complete set of ARM models (and later MIPS, Imagination and others) and some powerful technology for either bringing in transaction-level models such as ARM's Fast Models, where they exist, or creating models automatically from RTL where they do not. In some cases where both models exist, such as with ARM's Cortex-A processors, Carbon has even introduced dynamic swap technology to enable a virtual platform to start running with TLM models (to do the operating system boot, for example) and then switch to accurate models at a point of interest.



RedHawk: On to the Future
by Paul McLellan on 05-01-2012 at 6:00 am

For many, maybe most, big designs, Apache’s RedHawk is the signoff tool for analyzing issues around power: electromigration, power supply droop, noise, transients and so on. But the latest designs have some issues: they are enormous (so you can’t just analyze them naively any more than you can run a Spice simulation on them) and increasingly there are 3D designs with a whole new set of electrical and thermal effects that result from stacking very thin die very close on top of each other.

Apache has announced the latest version of RedHawk called RedHawk 3DX (the previous versions were SD for static-dynamic, EV for enhanced version, NX for next version and now 3DX for 3D extensions — Apache didn’t spend a lot on branding consultants!). This attacks some of the big issues connected with 3D, in particular the thermal problem (how much heat is there and what happens when you don’t get it all out) and issues concerned with how you can analyze designs which are too large to analyze the old way.

There are 3 main changes:

  • in keeping with its name, RedHawk 3DX is indeed ready for 3D and 2.5D ICs
  • there is a gate-level engine in RedHawk now, enabling the RTL based analysis of PowerArtist to be pushed further
  • there are changes in capacity and performance to keep up with Moore’s law and take RedHawk down to 20nm designs

In 3D there are two big problems (apart from the sheer capacity issue of analyzing multiple die at once). The first is die-to-die power and die-to-die thermal coupling, and the other is modeling TSVs and interposers (which obviously you don’t have unless you are doing 3D).

With a multi-pane GUI you can now look at the different die and the interposer and see the thermal effects. TSVs are obviously an electrical connection between adjacent die, but they are (often) made of copper and are a good thermal connection too. This can be good (the next die is a sort of heatsink) or bad (DRAM doesn't like to get hot). You can look at voltage drop issues to make sure you don't have power supply integrity problems, and also at current density and thermal maps.



The second big change is that the RTL analysis approach of PowerArtist is pushed into RedHawk. PowerArtist does RTL analysis and prunes the simulation vectors down to perhaps a few hundred critical ones where power transitions occur or where power is very high (for example, inrush current when a powered-down block is re-activated). These few vectors need more detailed analysis down at the level at which RedHawk operates. There is also a vectorless mode. You can see in the pictures below how well the RTL-level analysis matches the gate-level analysis, both in the numbers, which differ by only a few percent, and visually in the voltage drop color maps.
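The pruning idea can be sketched in a few lines (a hedged illustration, not Apache's actual algorithm): given per-cycle power estimates from an RTL run, keep only the cycles near the peak or with large cycle-to-cycle swings, and hand just those to the detailed analysis.

```python
def prune_vectors(power_per_cycle, peak_frac=0.9, swing_frac=0.5):
    """Keep cycles that are near peak power or show a large power transition."""
    peak = max(power_per_cycle)
    keep = set()
    for i, p in enumerate(power_per_cycle):
        if p >= peak_frac * peak:                           # near-peak power
            keep.add(i)
        if i > 0 and abs(p - power_per_cycle[i - 1]) >= swing_frac * peak:
            keep.add(i)                                     # big transition (di/dt)
    return sorted(keep)

# e.g. inrush when a powered-down block wakes up around cycle 4
trace = [1.0, 1.1, 0.2, 0.2, 7.5, 6.8, 2.0, 2.1]
print(prune_vectors(trace))   # a handful of critical cycles, not all of them
```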

Of course on a large chip you want the capability to analyze different blocks at different levels, some at gate, some at RTL, some vectorless, and you can do that.

The third aspect of the new generation of RedHawk is keeping up with design size. There is a new sub-20nm electromigration signoff engine which is current direction aware, metal topology aware and temperature aware (all things that affect EM).
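Apache hasn't published the engine's internals, but the industry-standard Black's equation shows why temperature awareness matters so much for EM signoff. The sketch below (illustrative parameter values, my own assumption) computes relative MTTF as MTTF = A * J^(-n) * exp(Ea / kT).

```python
import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV/K

def em_mttf(j_current_density, temp_c, a=1.0, n=2.0, ea_ev=0.9):
    """Relative mean-time-to-failure from Black's equation (arbitrary units)."""
    t_kelvin = temp_c + 273.15
    return a * j_current_density ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * t_kelvin))

# Same current density, 20 C hotter: lifetime drops by roughly 4-5x.
print(em_mttf(1.0, 85) / em_mttf(1.0, 105))
```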


Plus, adding an extraction reuse view (ERV) enables a reduced model of part of the power delivery network. This gives up to a 50% reduction in the simulation node count (occasionally more) without reducing accuracy, which in turn enables full-chip simulation including the package impact, with blocks of interest analyzed in detail and other blocks reduced using the ERV approach.
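The ERV format itself isn't public, but the general idea behind reduced power-grid models can be sketched with a generic Schur-complement node elimination (a stand-in technique for illustration, not necessarily what Apache does): internal nodes of the conductance matrix are folded away, leaving a smaller model that behaves identically at the ports you care about.

```python
import numpy as np

def reduce_grid(G, keep):
    """Return the reduced conductance matrix seen at the 'keep' (port) nodes."""
    keep = list(keep)
    drop = [i for i in range(G.shape[0]) if i not in keep]
    G_kk = G[np.ix_(keep, keep)]
    G_kd = G[np.ix_(keep, drop)]
    G_dd = G[np.ix_(drop, drop)]
    return G_kk - G_kd @ np.linalg.solve(G_dd, G_kd.T)   # Schur complement

# 3-node chain: node 1 is internal; the reduced 2x2 model behaves the same
# at nodes 0 and 2 but with one less node to simulate.
G = np.array([[ 2.0, -2.0,  0.0],
              [-2.0,  5.0, -3.0],
              [ 0.0, -3.0,  3.0]])
print(reduce_grid(G, keep=[0, 2]))
```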



Book Review – Quantum Physics: A Beginner’s Guide
by Daniel Payne on 04-30-2012 at 8:00 am

It’s been 34 years since I graduated from the University of Minnesota with a degree in Electrical Engineering so I was curious about what has changed in quantum physics since then. Alastair Rae is the UK-based author who wrote the book – Quantum Physics: A Beginner’s Guide. I read this on my Kindle Touch e-book reader and iPad, preferring the Kindle for actual reading and iPad for better looking images.

In nine chapters I refreshed my memory on:

  1. Quantum physics is not rocket science
  2. Waves and particles
  3. Power from the quantum
  4. Metals and insulators
  5. Semiconductors and computer chips
  6. Superconductivity
  7. Spin doctoring
  8. What does it all mean?
  9. Conclusions

Since 1978, physicists have given names to the quarks and found most of them.

    I found it interesting to see the surface of Silicon through a scanning tunnelling microscope:

Fusion (the sun and stars) and fission were explained.

When a neutron enters a nucleus of the uranium isotope U235, the nucleus becomes unstable and undergoes fission into two fragments along with some extra neutrons and other forms of radiation, including heat. The released neutrons can cause fission in other U235 nuclei, which can produce a chain reaction.
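As a back-of-the-envelope illustration of the chain-reaction idea (my own toy numbers, not from the book): each fission frees a few neutrons, and if on average more than one of them goes on to cause another fission (multiplication factor k > 1), the reaction grows generation by generation.

```python
def fission_generations(k, generations, start=1.0):
    """Expected number of fissions in each generation for multiplication factor k."""
    counts = [start]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

print(fission_generations(k=0.9, generations=6))   # dies out (sub-critical)
print(fission_generations(k=2.5, generations=6))   # grows rapidly (super-critical)
```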

Chapters 4 and 5 pertain most directly to our semiconductor industry, with explanations of conductors, insulators and semiconductors. When talking about transistors the author writes about bipolar instead of MOS devices, so if he ever does another edition I'm hoping that he will add the dominant MOS device, because it is so central to all consumer and industrial electronics today.

Quantum bit, or qubit, was a new term that I hadn't really heard before: a quantum object that can exist in either one of two states, or in a superposition made up from them.
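A tiny sketch of that definition (my own illustration, not from the book): a qubit state a|0> + b|1> with |a|^2 + |b|^2 = 1 gives the probabilities of measuring 0 or 1.

```python
import math

def measurement_probabilities(a, b):
    """Probabilities of reading 0 or 1 from the (normalized) state a|0> + b|1>."""
    norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    a, b = a / norm, b / norm          # normalize the state
    return abs(a) ** 2, abs(b) ** 2

# An equal superposition: a fair quantum coin, 50/50 on measurement.
print(measurement_probabilities(1, 1))
```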

    Chapter 8 delves into some philosophical discussions on quantum behavior where the act of measuring an object erases its past state or spin. The popular sci-fi concept of multiple universes comes into play based on the notions of objects being in a superposition of two states at once and then applying that to people or worlds.

    My Conclusions

    I found the book easy to read and it was almost like I was back at the university again, although this time with no homework to turn in. If you want a physics refresher or have avoided physics entirely and are curious, then this book will be a help to your learning.



    Such a small piece of Silicon, so strategic PHY IP
    by Eric Esteve on 04-30-2012 at 6:05 am

How could I talk about the various interface protocols (PCIe, USB, MIPI, DDRn…) from an IP perspective and miss the PHY IP! Especially these days, when the PHY IP market has been seriously shaken, as we will see in this post, and will probably continue to be shaken… but we will have to wait and watch the M&A news over the next few weeks to know for sure.

Before looking at these business-related movements, let's do some quick evangelization about what exactly a PHY is. The acronym comes from "PHYsical Media Attachment" (PMA), which describes the part of the function dealing with the "medium" (PCB or optical). As of today, the vast majority of the protocols define high speed differential serial signaling where the clock is "embedded" in the data, with the notable exception of the DDRn protocols, where the clock is sent in parallel with the (non-differential, parallel) data signals. The first reaction when seeing the layout view of an IC including a PHY function is that it's damn small! A nice picture is always more efficient than a long talk, so I suggest you look at the figure below (please note that the chip itself is a mid-size IC, in the 30-40 sq. mm range).

If we zoom into the PHY area, we will see three functional blocks: the PMA, usually mixed-signal; the Physical Coding Sublayer (PCS), pure digital; and the block interfacing with the controller, named PIPE for Physical Interface for PCI Express in the PCI Express specification, SAPIS for SATA PHY Interface Specification, and so on for the various specifications. Whatever the name, the functionality is always the same: allowing the high-speed, mixed-signal and digital functions usually located in the pad ring (PMA and PCS) to interface with the functional part of the specification (the controller), which is always digital and located in the IC core. The picture below shows a layout view of a x4 PCIe PHY. The Clock Management Unit (CMU) is a specific arrangement where the PLL and clock-generation blocks are shared by the four lanes.

Zooming again within the above picture, we come to the PLL and the SerDes, the two functions representing the heart of the PHY and requiring the most advanced know-how to design. When sending data from the chip, you receive digital information from the controller, say a byte (parallel), then encode it (adding 2 bits for coding purposes) and "serialize" it to send 10 serial bits; this is the case for 8b/10b encoding, valid for SATA or PCIe gen-1 or gen-2. For PCIe gen-3 the encoding becomes 128b/130b, while for 10 Gigabit Ethernet it's 64b/66b. Precisely describing the various mechanisms used to build the different SerDes would take pages. What is important to note here is that both the PLL and the SerDes design require high-level know-how, and that Moore's law allows designing faster and faster SerDes (but not smaller): in 2005, designing a 2.5 Gbit/s SerDes on a 90 nm technology was state of the art; now we are talking about 12.5 Gbit/s PHYs designed on 28nm, and the 25 Gbit/s PHY able to support 100 Gigabit Ethernet using only four lanes (instead of ten 10G lanes today) is not that far off!
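To put rough numbers on the coding overhead mentioned above (simple arithmetic using the standard encoding ratios), the payload rate you get from a given line rate depends directly on the scheme:

```python
# Standard coding ratios: 8b/10b (SATA, PCIe gen-1/2), 64b/66b (10G Ethernet),
# 128b/130b (PCIe gen-3).
ENCODINGS = {"8b/10b": (8, 10), "64b/66b": (64, 66), "128b/130b": (128, 130)}

def payload_gbps(line_rate_gbps, encoding):
    """Usable payload rate after removing the line-coding overhead."""
    data_bits, line_bits = ENCODINGS[encoding]
    return line_rate_gbps * data_bits / line_bits

print(payload_gbps(2.5, "8b/10b"))       # PCIe gen-1 lane: 2.0 Gbit/s of payload
print(payload_gbps(8.0, "128b/130b"))    # PCIe gen-3 lane: ~7.88 Gbit/s of payload
print(payload_gbps(10.3125, "64b/66b"))  # 10G Ethernet lane: 10.0 Gbit/s
```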

If we look at the business side, it's now easy to understand that only a few teams around the world are able to design PHY functions, and an even smaller number of companies are able to sell advanced PHY IP. Not because the market is small: in fact the massive move from parallel to serial interface specifications has created a very healthy Interface IP segment weighing in at more than $300M in 2011 (see this blog). It's because it requires highly specialized design engineers, the type of engineers who start to do a decent job after five or ten years of practice, and become good after fifteen or twenty years of experience!

Until very recently, the end of 2011, the PHY IP vendor landscape consisted of one large EDA and IP vendor, Synopsys, selling PHY + controller IP to the mainstream market but not so comfortable with the most advanced PHY products like PCIe gen-3 (8G) or 10 Gigabit Ethernet, competing with a handful of companies: Snowbush (the IP division of Gennum), MoSys (after the acquisition of Prism Circuit in 2009) and V Semiconductor, founded in 2008 by former Snowbush employees, as well as Analog Bits.

Snowbush, founded in 1998 by Prof. Ken Martin (Professor of Microelectronics at the University of Toronto, the best location to hire young talent!), was considered by the market as the most advanced PHY IP vendor when the company was bought in 2007 by Gennum, a fabless company. Gennum realized that they had acquired a nugget, and quickly grew sales based on the existing portfolio. They also realized that they were missing controller IP to be in a position to offer an integrated solution, and bought Asic Architect in 2008 for a very low price. At the same time, they fired the founders, including Prof. Ken Martin, and tried to develop and sell an integrated solution, competing with Synopsys. Although Snowbush's revenues quickly moved into the $10-12M range in 2008 and 2009, the company was still at the same level of revenue in 2011, in a market which grew 40% over the same period… As a matter of fact, while the PHY IP was still competitive, the controller IP never reached the same level of market acceptance.

Prism Circuit was founded in 2007 and competed in the same market as Snowbush (the most advanced PHYs developed in the latest technology nodes). The company was already doing well, with revenue in the $5-6M range, when it was bought (for $25M, more than Snowbush!) in 2009 by MoSys. The rationale was for MoSys to use the SerDes technology from Prism Circuit to build an innovative product, the Bandwidth Engine IC, moving to a fabless positioning while still selling PHY IP… As we will see in a minute, this strategy was not as fruitful as expected.

In fact, both companies were having success in a niche but lucrative segment (very high speed PHY) when a newcomer, V Semiconductor, was started in 2008 by former Snowbush employees. V Semi was able to grab a good market share, as their revenue (not publicly disclosed) could be estimated at around $10M in 2011. Among other design-ins, they designed the multi-standard SerDes on Intel 28nm technology for an FPGA start-up. This type of sale represents a multi-million dollar deal. Not that bad for business development people who were supposedly not efficient enough!

Until the end of 2011, we had one large EDA and IP vendor enjoying the largest market share, challenged by a few PHY IP vendors, so customers still had a choice, especially when they needed above-10G PHY solutions designed in advanced technology nodes.

Then, at the end of 2011, Gennum was acquired by Semtech, another fabless company. A month or so ago (March 2012), it became clear that Semtech had decided to keep the Snowbush PHY for their internal use, which is a good strategy, as such IP can be a strong differentiator. The result: PHY IP vendors, minus one. Then, still in March, came the announcement that Synopsys (see the 8-K form) had acquired an unlimited license to MoSys' PHY IP for a mere $4.2M (the former Prism Circuit was bought for $25M in 2009), preventing MoSys from granting new licenses. The result is now: PHY IP vendors, minus two!

There are still one or two vendors (V Semi and Analog Bits), you'd say… But Analog Bits is more of a design-services company than a pure IP vendor, supporting various mixed-signal products, from SerDes to SRAM/TCAM, passing through PLLs. And, last but not least, the rumors about an acquisition of V Semi are multiplying day after day. As I don't want to relay an unverified rumor, I will not disclose the latest company name I have heard, but such an acquisition would not go in favor of more competition in the PHY IP market…

By Eric Esteve from IPnest




    GSA 3DIC and Cadence
    by Paul McLellan on 04-29-2012 at 10:00 pm

At the GSA 3D IC working group meeting, Cadence presented their perspective on 3D ICs. Their view will turn out to be important since the new chair of the 3D IC working group is going to be Ken Potts of Cadence. Once GSA decided the position could not be funded, an independent consultant like Herb Reiter had to bow out, and the position needed to be taken by someone funded by the company they work for. So thanks, Cadence. And thanks for the beer and wine after the meeting too.

Ken would have been there but unfortunately he has broken both his elbows (get well soon, Ken), and driving over Highway 17 from Santa Cruz is dangerous enough with both arms working. I know Ken from both Compass and my own Cadence days. He also owns a car repair shop in Santa Cruz with his family, but somehow the gravitational attraction of EDA pulled him back in.

John Murphy presented Cadence's perspective. First, Murph pointed out that Moore's law has been going on for a century. I've pointed this out before; indeed, we both used the same graphic stolen from Ray Kurzweil that covers 5 technologies (mechanical, relay, tube, transistor, IC). From the system performance point of view, if not from the purist view of lithographic advance, this is likely to continue for the foreseeable future.

    Cadence sees two “killer apps” for 3D-IC in the short term.

The first is yield enhancement. The same defect density has a huge impact on yield depending on die size, so there is a lot of upside to building a system out of several smaller die versus one huge one. Indeed, this is the motivation for Xilinx's excursion into 3D (2.5D on a silicon interposer). eSilicon used their own cost models to estimate that Xilinx is getting a huge bump in yield. And as Liam Madden of Xilinx had pointed out earlier in the day, they don't need to put all the slowest die on one interposer, so there is a performance gain too, since all 4 slices are never worst case, as might happen with a single large die (processed with slightly too long gate lengths). This is shown graphically below (obviously exaggerated; nobody is making 4″ chips, which are too big for the reticle anyway).
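As an illustration of the yield argument (my own toy numbers and a simple Poisson yield model, not eSilicon's actual cost model): a die a quarter of the size yields far better, and because slices are tested before assembly you only stack known-good die instead of discarding one huge die for a single defect.

```python
import math

def poisson_yield(area_cm2, d0_defects_per_cm2):
    """Fraction of good die for a given area and defect density (Poisson model)."""
    return math.exp(-area_cm2 * d0_defects_per_cm2)

D0 = 0.5                 # defects per cm^2 (assumed value)
big_die = 6.0            # cm^2, one monolithic die (assumed value)
slice_die = big_die / 4  # cm^2 per interposer slice

print(f"monolithic die yield: {poisson_yield(big_die, D0):.1%}")   # ~5%
print(f"per-slice yield:      {poisson_yield(slice_die, D0):.1%}") # ~47%
```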


The second killer app is the memory subsystem, where by putting memory on logic (or stacks of memory on logic) the power can be significantly reduced while the performance is increased.


Cadence claims that they have 8 test chips and 1 in production (Altera). In fact they said there is a second production design but they cannot say with whom. One thing that is clear is that 3D requires a lot of cooperation between a lot of different partners, not just in EDA. In fact, designing a 3D IC is not that different from designing several separate ICs. Yes, some new stuff for TSVs is required, and test is complicated. But it is nothing compared to what the OSATs have to do: putting a 3D chip stack together is nothing like just putting a chip on a BGA, with a whole new set of challenges to do with alignment, not breaking the very thin die, bonding and debonding them from something that makes them handleable, making all the microball connections and so on.

    Like everyone else, Cadence sees 3D coming as a series of steps. First 2.5D silicon interposer designs, then logic on memory with true 3D some way off.



    Smart mobile SoCs: Apple
    by Don Dingee on 04-29-2012 at 9:00 pm

    Apple sells devices. Lots of them. Their success is due to many things related to design and tech religion, and an important part is the SoC inside those devices which creates the experience people want. The official Apple information on their parts is minimal. Their SoCs have been dissected with more fervor than Roswell aliens. We know some but not every detail, but connecting the dots of some history tells more of the story. Continue reading “Smart mobile SoCs: Apple”



    A Simple, Scalable LDE Optimization Flow for 28/20nm Custom/AMS Design
    by Eric Filseth on 04-29-2012 at 9:00 pm

At 28nm and below, a number of electrical variation effects become significant which depend not only on individual devices, but on the physical interaction between neighboring devices, wells, etc. during the manufacturing process. Some of these effects have come to be collectively referred to as "Layout Dependent Effects" (LDE), including the well-proximity effect, shallow trench isolation stress and a few others. A similar concern is poly density, whose magnitude and variation can also affect device performance.

    These effects are not easily predicted in circuit design, because they depend on detailed shape and layer interactions which don’t become known until the physical placement is complete. Yet they are also hard to manage in layout, because in any given context there are likely to be multiple interactions, and it’s often not obvious how they all net out electrically. Furthermore, EDA tool support to analyze LDE has so far been limited, and electrical performance data is really needed in circuit design, not layout.

    The result is that while most advanced-node design groups are aware of LDE and other such context-sensitive effects, there are not currently well-accepted design flows to cope with them. Most 28/20nm groups today take one of two approaches, neither optimal:

  • Approach 1: Ignore LDE altogether. Principle: assume the design is not critically sensitive to LDE. Advantage: simple. Disadvantage: risk that the chip may yield poorly or even fail.
  • Approach 2: Apply conservative rules to all devices. Principle: increase die area until LDE effects are minimal on all devices. Advantage: mitigates LDE. Disadvantage: large area penalty, since most devices are not critical; may not solve density issues.

    What everybody would like is a “selective” LDE solution: a design flow in which the devices which really matter — those whose LDE-influenced behavior would materially impact the circuit’s performance — are identified early, and those devices and their surroundings are handled appropriately in layout; while the rest of the design is treated conventionally.

    Unfortunately, this is difficult to implement within a traditional custom IC design flow, for several reasons:

    • A standardized way to communicate LDE intent between circuit design, simulation and layout does not yet exist. If it’s done at all, it’s generally ad hoc.

    • Laying out a region for LDE and Density is a complex optimization, with many potential interactions between devices, layers, wells, etc. And, it needs to be done in context of all the other nanometer design rules as well: grids, restricted pitches, poly rules etc. To do all this manually, with different rules for the same device in different contexts, is difficult and error-prone. Local changes as the layout and design rules evolve mean the optimization will likely be repeated multiple times. And a manual optimization is heavily dependent upon robust interactive LDE layout-vs-intent checking flows, which so far have not been widely available.

    • Serious exploration at the physical level is essentially impossible, since traditional layout takes so long. No 28nm circuit designer gets to see his or her layout both with and without LDE optimization; there simply isn’t time to do two (or more) layouts.

    As a result, robust and repeatable LDE optimization flows have yet to find their way into mainstream design. Standard 28/20nm practice today is either to risk LDE-related yield losses, or else to incur large area penalties.

    A Simple LDE Optimization Flow

    In order to address the 28/20nm LDE optimization problem, a practical and robust flow is needed that lets the circuit designer make design decisions which automatically propagate correctly through the rest of the implementation. We propose a simple flow, scalable to extremely advanced geometries, with four main elements:

  1. A straightforward way for a front-end circuit designer to identify the important devices in his or her design and set the appropriate LDE conservatism. The key components are designer judgment, Spice with LDE-aware models, and links to foundry characterization data.
  2. An open method to automatically map the appropriate characterization data into instance-specific physical design constraints for layout (see the sketch after this list).
  3. Automated, correct-by-construction and context-aware placement which obeys the LDE constraints, density targets, and all other design rules together. All instances are placed at once, enabling concurrent control of devices and their context, and maintaining correctness during layout iterations and ECOs; manual device-by-device optimization is eliminated.
  4. Batch verification that the circuit meets its electrical requirements as laid out. Such a circuit-centric batch flow exists today: extraction and Spice. If run post-place but pre-route, the loop is fast; routing can always be added later.
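Here is a minimal sketch of element 2 of the flow, under assumed data formats (neither Ciranova's nor any foundry's actual format): designer LDE flags on a netlist are turned into instance-specific layout constraints drawn from a characterization table. Everything below (names, fields, values) is hypothetical and for illustration only.

```python
# Assumed characterization data: conservatism level -> layout constraints.
LDE_RULES = {
    "default": {"well_edge_spacing_um": 1.0, "sti_dummy_fill": True},
    "relaxed": {"well_edge_spacing_um": 0.4, "sti_dummy_fill": False},
}

def map_constraints(devices, rules=LDE_RULES):
    """Turn designer LDE flags into per-instance placement constraints."""
    constraints = {}
    for name, props in devices.items():
        if props.get("lde_sensitive"):
            level = props.get("lde_level", "default")
            constraints[name] = dict(rules[level], reason="designer-flagged LDE")
    return constraints

# Hypothetical annotated netlist fragment: only M1 on the critical path is flagged.
devices = {
    "M1": {"lde_sensitive": True, "lde_level": "default"},
    "M2": {"lde_sensitive": False},
}
print(map_constraints(devices))
```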

    Below is a design constructed by a customer to pipeclean such a flow. It is a small, high-speed circuit. In this case two placements were done – in an automated flow it’s very easy to make multiple layouts. One layout has all devices set to “LDE-sensitive” and the other has all devices set “not LDE sensitive.”

    The LDE conservatism used “default” data. The extraction was done with Mentor Calibre xRC and the simulation with Berkeley Design Automation’s Analog FastSPICE.

Three differences stand out. The first is that the LDE-optimized design is faster. Not hugely so, but enough that you might well use it on a critical path, for example.

The second difference is that the LDE-compensated devices here took about 40% more area (and a manually optimized layout would likely be larger still). Fortunately, in a larger design it's unlikely the designers would want to apply LDE conservatism to more than a fraction of the devices. If only 10% of the devices were designated "be careful with these, and their regions," then the overall area penalty would be only a few percent (roughly 10% × 40% ≈ 4%), not 40%. A flow that can treat LDE selectively = good!

    The third and possibly most interesting is that generating alternate tradeoff layouts with different combinations of devices designated “LDE on” and “LDE off,” or even LDE with more or less conservative parameters, is rapid and straightforward in this flow. Most circuit designers never get to see more than one layout for each circuit. Yet in environments where the layout has a material impact on the electrical performance, this kind of visibility can be immensely helpful.

    Summary

    As silicon geometries continue to shrink, designers can expect to see more and more sources of electrical variation which depend not just on individual devices, but on those devices’ proximity and relation to their surrounding context. Managing these designs will be greatly assisted by standardized methods of capturing LDE intent, and correct-by-construction layout which optimizes an entire device-level placement simultaneously for all nanometer silicon requirements at once; including LDE, Density, and future context-related concerns yet to be identified.

    Eric Filseth,
    Ciranova




    Intel says fabless model collapsing… really?
    by Daniel Nenni on 04-28-2012 at 7:00 pm

There is an interesting discussion in the SemiWiki forum in response to the EETimes article "Intel exec says fabless model 'collapsing'". Definitely an interesting debate, and one worth our time since the advertising-click-hungry industry pundits will certainly jump all over it. Clearly I'm biased since I helped build the fabless semiconductor ecosystem. I will certainly try to be open minded here, but probably not.

    Kirk Skaugen, the new general manager of Intel’s client PC group, moderated a Q&A with Mark Bohr, a 33+ year Intel alum, and Brad Heaney, the Ivy Bridge program manager. This was clearly a scripted Intel PR piece, but also an opportunity for additional hyperbole and commentary. Here are the key quotes from my point of view:

“Being an integrated device manufacturer really helps us solve the problems dealing with devices this small and complex,” Bohr said. “The foundries and fabless companies won’t be able to follow where Intel is going.”

    This is complete nonsense. This is not a David versus Goliath situation, this is hundreds of Davids versus Goliath. This is crowd sourcing, not unlike Twitter and Facebook where millions of people around the world collaborated and toppled ruthless dictators. This is the entire fabless semiconductor ecosystem (Synopsys, Cadence, Mentor, ARM, TSMC, UMC, GlobalFoundries, QCOM, BRCM, NVDA, AMD, and hundreds of other companies) against Intel. Hundreds of billions of dollars in total R&D versus Intel’s billions.

“Bohr claims TSMC’s recent announcement it will serve just one flavor of 20 nm process technology is an admission of failure. The Taiwan fab giant apparently cannot make at its next major node the kind of 3-D transistors needed to mitigate leakage current, Bohr said.”

Not true of course. TSMC has a 20nm FinFET process coming (my opinion); Morris mentioned it in the most recent conference call:

“Now FinFET for significant performance case, we’re going to introduce FinFET after the 20-nanometer planar. We’ve been working on FinFET for more than 10 years. We’re quite confident that we will have a robust FinFET technology.” Morris Chang, Taiwan Semiconductor Manufacturing Company Ltd. (TSM) Q1 2012 Earnings Call, April 26, 2012, 8:00 AM ET

I honestly believe TSMC will have BOTH planar and FinFET 20nm versions. Why? Because the crowd (customers and partners) requested it. Intel will only have FinFETs at 22nm. Why? Because Intel is Intel’s #1 customer and that will never change.

    Intel has stated many times that they will not compete with TSMC in the open foundry market. Mark Bohr repeated it again, “Intel does not want to be in the general foundry business, but it makes its technology available to a few strategic partners.” Does everybody get that? A FEW strategic partners? TSMC is open to all customers. TSMC does not compete with customers. TSMC is customer driven. By definition, TSMC crowd sources and my bet is on the crowd every time!

    Speaking of crowd sourcing, according to LinkedIn there are about 500,000 people in the semiconductor ecosystem now. Since going online in January of 2011 over 250,000 people (unique visitors) have viewed more than 2,000,000 pages on SemiWiki. Now that’s a crowd!

Either way, I do not see this as a zero-sum game; both TSMC (foundry) and Intel (IDM) will thrive at the new geometries. The fabless model has brought us many new innovations and a very rich ecosystem which will be very hard to break. Too much money is at stake here, and Silicon Valley is full of entrepreneurs who thrive on challenge and doing the impossible. Me, for example.