Qualcomm Meets Jerry Sanders at 28nm
by Ed McKernan on 04-19-2012 at 8:26 pm

First the good news: 4G LTE design-in activity is off the charts, as OEMs building smartphones, tablets and Ultrabooks are buying into the capability for product rollouts starting in September. Now the bad news: there’s not enough 28nm capacity to go around, probably until well into 2013. For a company sitting on over $26B in cash, twice as much as Intel, this is a disaster that didn’t have to happen. Qualcomm is now in panic mode, as it must spend engineering resources and dollars taping out designs at alternative fabs (likely GlobalFoundries and Samsung). For this misstep, it will probably pay the price of throttling back the 28nm Snapdragon design-win effort and handing over market share to Intel.

As mentioned in previous blogs I have written, there really are only four players left in the semiconductor game outside of memory: Intel, Samsung, Apple and Qualcomm. Of the four, Qualcomm has played the most risk-averse game of poker, unwilling to make bets beyond a single penny ante. Qualcomm was satisfied for many years as TSMC’s largest customer. What could go wrong? Plenty. Like having to share the same leading-edge fab capacity, several times over, with other sizeable fabless players (e.g. Altera, Xilinx, Nvidia and Broadcom), including some who are your current and future competitors.

TSMC can’t be faulted for tallying up every customer’s wafer forecast and dividing by three, four or even five to arrive at a reasonable expected market demand. But then nobody expected the “end of the world” economic situation of 2008-2009, followed by the Apple-driven mobile tsunami of iPhones and iPads that drove right through the downturn. Apple, though, had its supply chain covered with well-managed capacity build-outs at Samsung and Toshiba. Vertical integration is where the industry has landed, and Qualcomm is the only one of the four who hasn’t figured it out.

Intel overbuilt 22nm capacity knowing that a circuit breaker was going to trip, with all of its competitors tied to the same single fab source called TSMC. Malcolm Penn of Future Horizons has a great pitch on this, which I highly recommend. The only way to avoid this trap is to return to Jerry Sanders’ “real men have fabs” strategy. It is the way Qualcomm can break away from Broadcom, Marvell and Mediatek. It is also the only way Qualcomm has a shot at going mano a mano with Intel as the end game plays out over the next 3-5 years.

Intel’s greatest leaps forward, as I witnessed in the 1990s, came when competitors stumbled at the very moment Intel was making its own transition to a new process with a new product. The market jumped on the new product in a step function and demand went through the roof; anything a day old was immediately obsolete, rotting in the channel. The transition from Pentium to Pentium MMX in the mid-1990s is one example.

In the earnings conference call, there was a moment when Steven Mollenkopf, President and COO of Qualcomm, said: “Now in some cases also, our OEM partners are, of course, working with us very closely to try to help us accelerate our own supply.” I take this to mean that Apple is stepping in to open doors at Samsung so that Qualcomm can tape out a part that will only go in the iPhone 5. It is a weak position to be in when your customer is needed to open the doors to new capacity, and it is likely to be paid back with a pound of flesh.

For those who thought a year ago that the ARM camp was on its way to dethroning Intel and that all the pieces were in place, it is time to adjust to the reality that having a fab matters, now more than ever. Qualcomm, at roughly half the sales of Intel, needs to write a $5B check for a new fab, starting immediately.

FULL DISCLOSURE: I am long AAPL, INTC, QCOM and ALTR


"Mechanics of Creativity" at DAC 2012: Oxymoron?

"Mechanics of Creativity" at DAC 2012: Oxymoron?
by Holly Stump on 04-19-2012 at 8:13 pm

A perennial DAC highlight for me is the panel session sponsored by Women in Electronic Design. This year, it is called “The Mechanics of Creativity: What does it take to be an idea machine?”

Is this an oxymoron?

I interviewed panelist Dee McCrorey, Chief Risk Guru and Innovation Catalyst at Risktaking for Success LLC, to find out.

“Mechanics/Creativity” and “Idea/Machine” seem like oxymorons, but are they?

I think it is an interesting juxtaposition of concepts, a creative one! Mechanics can mean:
• The functional side of mechanics (“mechanics of the brain”)
• The physical science that deals with energy and its effect on bodies
• The practical application of machines or tools

I like to think of creativity as energy. But creativity by itself is fleeting; we can no longer wait for our muse to visit us. The mechanics of creativity can develop an “on-demand muse,” and we do this by developing a creative mindset that provides us with a continuous source of energy. A creative mindset “feeds” us on a regular basis and eliminates the need to unblock creatively. By integrating the components of a creative mindset, these “packets of energy” become part of your creative DNA—you always have something in the creative hopper.

Ron Adner’s book The Wide Lens says: “Invention used to be 1 percent inspiration and 99 percent perspiration. These days, it’s probably 50 percent collaboration. Companies trying to commercialize innovations won’t succeed unless suppliers, distributors, and other partners can and will do their parts.”

Is creativity the province of “artistic/intuitive” people, as opposed to “analytical” people, or can everyone be creative? Might analytical people even have special strengths when they allow themselves to be creative?

Contemplation is not just for introverts. Success in the new world of business demands “whole brain” thinking: people who use the full functionality of their brain and cross back and forth between the roles of “thinker” and “creative.” These “whole brain adaptives” will model the best in “flexible mindset” thinking. (An ambivert, by the way, is a person who is intermediate between an extrovert and an introvert.)

All energy begins with us, but when we expand energy it gains traction and grows stronger. Think about the last time you experienced a “collaborative high,” that buzz you got when you produced something greater than anything you could have done alone. That’s expansion of energy.

Collaboration is a timing thing: bring it in too soon and you risk vetting an idea before you’ve had a chance to fully juice it. This is why contemplation and alone time are so important.

“Time to market /Creative flow” are challenging to balance. Thoughts?

At a time when we’re being called on to solve big, complex problems and to innovate at a faster clip, we can’t afford to scatter our energies.

In my book, Innovation in a Reinvented World: 10 Essential Elements to Succeed in the New World of Business, Steve Todd, EMC Distinguished Engineer, shares his advice for professionals preparing to succeed in the new world of business: “Learn how to think—just think. Set aside time to just be still… The most complex problems will be solved by the ‘thoughtful ones,’ people who just put their feet on the desk and think about solving big problems. The thoughtful employees will survive and thrive in future.”

For more information on Dee McCrorey
For more information on the “Mechanics of Creativity” Pavilion Panel

Join panelists Dee McCrorey of Risktaking for Success LLC, Lillian Kvitko of Oracle, Sherry Hess of AWR, and moderator Karen Bartleson of Synopsys on Monday, June 4, at DAC 2012!



The Carbon Decade
by Paul McLellan on 04-19-2012 at 6:00 am

Carbon Design Systems celebrates its 10th anniversary this month. It is a celebration that the company has survived a decade, but also bittersweet that it hasn’t been acquired for a juicy premium. But we just have to accept that EDA is not a business where you can throw together a company in 18 months and sell it for $1B before it makes its first dollar of revenue.

Like any company that has survived for ten years, its mission has changed somewhat. Carbon started life with technology for taking RTL, throwing away detail, and producing C-based models that ran much faster; the models were described as “carbonized.” I think it is great marketing to have a company name that can be used as a verb, although of course there are risks with trademarks, since they are only meant to be used as adjectives (“I copied it using a Xerox brand photocopier”… hmm).

But that initial technology has evolved to include the entire system validation ecosystem including embedded software, microprocessors and the rest of the underlying system.

In 2008, Carbon redefined itself as a virtual prototyping company when it acquired SoC Designer from ARM (which had itself purchased Axys four years earlier). Using the carbonizing technology, it could compile ARM’s RTL code into 100% accurate virtual models, replacing the previous approach (ARMulator etc.) of hand-written models.

In 2010 they unveiled Carbon IP Exchange. The Achilles’ heel of virtual platforms has always been the availability of models, since many of the economic and time-to-market benefits of virtual platforms evaporate if you have to spend too many dollars and too much time creating models. Now, through portals like IP Exchange and the Synopsys TLMCentral, that problem is starting to be solved, at least for the most common processors and their peripheral families.

The virtual platform space used to be quite crowded, but Synopsys purchased Virtio, VaST and CoWare, and Intel/Wind River purchased Virtutech. Carbon and Imperas are the only independent companies left standing. Bill Neifert, the original founder, is still there too, ten years later, now as CTO. Here’s his blog on the anniversary.


Introduction to FinFET technology Part I
by Tom Dillinger on 04-18-2012 at 6:00 pm

This is the first of a multi-part series introducing FinFET technology to SemiWiki readers. These articles will highlight the technology’s key characteristics and describe some of the advantages, disadvantages, and challenges associated with this transition. Topics in this series will include FinFET fabrication, modeling, and the resulting impact upon existing EDA tools and flows. (And, of course, feedback from SemiWiki readers will certainly help influence subsequent topics as well.)

Scaling of planar FETs has continued to provide performance, power, and circuit density improvements up to the 22/20nm process node. Although active research on FinFET devices has been ongoing for more than a decade, they have only recently been adopted for production fabrication.

The basic cross-section of a single FinFET is shown in Figure 1. The key dimensional parameters are the height and thickness of the fin. As with planar devices, the drawn gate length (not shown) separating the source and drain nodes is a “critical design dimension”. As will be described in the next installment in this series, the h_fin and t_fin measures are defined by the fabrication process, and are not design parameters.


Figure 1. FinFET cross-section, with gate dielectric on fin sidewalls and top, and bulk silicon substrate

The FinFET cross-section depicts the gate spanning both sides and the top of the fin. For simplicity, a single gate dielectric layer is shown, abstracting the complex multi-layer dielectrics used to realize an “effective” oxide thickness (EOT). Similarly, a simple gate layer is shown, abstracting the multiple materials comprising the (metal) gate.

In the research literature, FinFETs have also been fabricated with a thick dielectric layer on top, limiting the gate’s electrostatic control of the fin silicon to just the sidewalls. Some researchers have even fabricated independent gate signals, one for each fin sidewall – in this case, one gate is the device input and the other provides the equivalent of FET “back bias” control.

For the remainder of this series, the discussion will focus on the gate configuration shown, with a thin gate dielectric on three sides. (Intel denotes this as “Tri-Gate” in its recent Ivy Bridge product announcements.) Due to the more complex fabrication steps (and costs) of “dual-gate” and “independent-gate” devices, the expectation is that these alternatives will not reach high-volume production, despite some of their unique electrical characteristics.

Another fabrication alternative is to provide an SOI substrate for the fin, rather than the bulk silicon substrate shown in the figure. In this series, the focus will be on bulk FinFETs, although differences between bulk and SOI substrate fabrication will be highlighted in several examples.



Figure 2. Multiple fins in parallel spaced s_fin apart, common gate input

Figure 2 illustrates a cross-section of multiple fins connected in parallel, with a continuous gate material spanning the fins. The Source and Drain nodes of the parallel fins are not visible in this cross-section – subsequent figures will show the layout and cross-section view of parallel S/D connections. The use of parallel fins to provide higher drive current introduces a third parameter, the local fin spacing (s_fin).

Simplistically, the effective device width of a single fin is (2*h_fin + t_fin), the total measure of the gate’s electrostatic control over the silicon channel. The goal of the fabrication process is to enable a small fin spacing, so that the FinFET exceeds the device width that a planar FET process would otherwise provide in the same footprint:

s_fin < (2*h_fin + t_fin)

Subsequent discussions in this series will review some of the unique characteristics of FinFETs, which result in behavior that differs from the simple (2*h + t) channel surface current width multiplier.

The ideal topology of a “tall, narrow” fin for optimum circuit density is tempered by the difficulty and variation associated with fabricating a high-aspect-ratio fin. In practice, an aspect ratio of h_fin/t_fin ~ 2:1 is more realistic.

One immediate consequence for FinFET circuit design is that device width increments are limited to (2*h_fin + t_fin), achieved by adding another fin in parallel. Actually, due to the unique means by which fins are patterned, a common device width increment will be 2*(2*h_fin + t_fin), as will be discussed in the next installment in this series.
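To make the width arithmetic concrete, here is a minimal Python sketch; the fin dimensions and pitch are illustrative assumptions (respecting the ~2:1 aspect ratio above), not values from any particular process.

```python
# Illustrative FinFET width arithmetic -- dimensions are assumptions
# for this example only, not from any specific process.
h_fin = 30.0   # fin height, nm (assumed, ~2:1 aspect ratio)
t_fin = 15.0   # fin thickness, nm (assumed)
s_fin = 40.0   # fin-to-fin pitch, nm (assumed)

# Effective electrical width of one fin: two sidewalls plus the top.
w_eff = 2 * h_fin + t_fin
print(f"W_eff per fin: {w_eff:.0f} nm")                 # 75 nm

# Density condition: the FinFET beats a planar device of the same
# footprint when s_fin < (2*h_fin + t_fin).
print(f"Beats planar width density: {s_fin < w_eff}")   # True

# Width quantization: total width grows in whole-fin increments, and
# the fin-patterning approach makes a *pair* of fins the common step.
for pairs in (1, 2, 3):
    print(f"{2 * pairs} fins -> W = {2 * pairs * w_eff:.0f} nm")
```

With these example numbers, each added pair of fins steps the device width by 150nm; there is no way to ask for, say, a 100nm-wide device.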

The quantization of device width in FinFET circuit design is definitely different from the continuous values available in planar technology. However, most logic cells already use a limited set of device widths, and custom circuit optimization algorithms typically support “snapping” to a fixed set of available width values. SRAM arrays and analog circuits are the most impacted by the quantized widths of FinFETs – especially SRAM bit cells, where high layout density and robust readability/writeability criteria both need to be satisfied.

The underlying bulk silicon substrate from which the fin is fabricated is typically undoped (i.e., a very low impurity concentration per cm³). The switching threshold voltage (Vt) of the FinFET device is set by the workfunction potential differences between the gate, dielectric, and (undoped) silicon materials.

Although the silicon fin is effectively undoped, the process needs to introduce impurities under the fin as a channel stop, to block “punchthrough” current between the source and drain nodes from carriers not controlled electrostatically by the gate input. The optimum means of introducing the punchthrough-stop impurity region below the fin, without substantially perturbing the (undoped) concentration in the fin volume itself, is an active area of process development.

Modern chip designs expect to have multiple Vt device offerings available – e.g., a “standard” Vt, a “high” Vt, and a “low” Vt – to enable cell-swap optimizations that trade off performance against (leakage) power. For example, the delay of an SVT-based logic circuit path could be improved by selectively introducing LVT-based cells, at the expense of higher power. In planar fabrication technologies, multiple Vt device offerings are readily available, using a set of threshold-adjusting impurity implants into masked channel regions. In FinFET technologies, different device thresholds would instead be provided by alternative gate metallurgies, with different workfunction potentials.

The availability of multiple (nFET and pFET) device thresholds is a good example of the tradeoffs between FinFETs and planar devices. In a planar technology, the cost of additional threshold offerings is relatively low: an additional masking step and implant. However, the manufacturing variation in planar device Vt due to channel random dopant fluctuation (RDF) from those implants is high. For FinFETs, the cost of the additional gate metallurgy processing for multiple Vts is higher – yet no impurity introduction into the channel is required, and thus little RDF-based variation is measured. (Cost, performance, and statistical variation comparisons will come up on several occasions in this series of articles.)

The low impurity concentration in the fin also results in less channel scattering when the device is active, improving the carrier mobility and device current.

Conversely, FinFETs introduce other sources of variation not present in planar devices. Fin edge “roughness” results in variation in device Vt and drive current. (Chemical etch steps that are selective to the specific silicon crystal surface orientation of the fin sidewall are used to help reduce roughness.)

The characteristics of both planar and FinFET devices also depend upon gate edge roughness (GER). The fabrication of the gate traversing the topology over and between fins will increase GER variation for FinFET devices, as shown in Figure 3.



Figure 3. SEM cross-section of multiple fins. Gate edge roughness over the fin is highlighted in the expanded inset picture. From Baravelli et al., “Impact of Line Edge Roughness and Random Dopant Fluctuation on FinFET Matching Performance,” IEEE Transactions on Nanotechnology, vol. 7, no. 3, May 2008.

The next entry in this series will discuss some of the unique fabrication steps for FinFETs, and how these steps influence design, layout, and design for manufacturability:

Introduction to FinFET technology Part II


Linley Tech Mobile Conference
by Paul McLellan on 04-18-2012 at 2:14 pm

I went to part of the Linley Tech Mobile Conference. This is the current incarnation of what started life as Michael Slater’s Microprocessor Report and the twice-yearly Microprocessor Forum. These very technical analyst organizations seem to work well as a small group of analysts covering an area of technology together, but they don’t seem to scale well once they are bought by bigger companies, with their high overhead of vice-presidents and sales teams. Microprocessor Report had its own history, moving into Ziff-Davis, then Cahners/Reed and In-Stat, and now back to its roots at the Linley Group. And as if to emphasize my point, a week ago NPD Group apparently shut down In-Stat completely and laid off 30 analysts.

We all know the background to mobile microprocessors: smartphones and tablets are growing like crazy. Smartphones are expected to grow at a 25% CAGR from 2011-15, and tablets even faster, at a 54% CAGR, although from a much smaller base (which means they more than triple over the period). By 2015, smartphone growth should start leveling off in the classic S-curve. Everyone seems to be predicting that tablets won’t replace smartphones (I agree) nor PCs (I’m not so sure; I use mine more and more).

The rise of the smartphone has created a huge change in vendors: Samsung and Apple have pretty much taken all the money. Nokia has shrunk from #1, and RIM is in trouble. The up-and-comers are Huawei, ZTE and LG.

Then there is Motorola, which is about to become part of Google. My expectation is that Google will sell Motorola as soon as it can. Google really bought it for the patents (to defend its Android licensees against Apple and others), and Wall Street will hate it if Google keeps it; the Street hates businesses that mix very different margins, such as hardware and software. When I was running Compass and we were trying to sell the company, Bala Iyer, the CFO of VLSI, told me, “Wall Street will give me credit just for shutting you down; if we get any money for you, it’s icing on the cake.” I’m sure they are telling Google the same thing. But how much Motorola is worth without the patents is unclear (of course it would have a patent license to everything, but not the rights to sublicense). After all, it is not in any sense the market leader in smartphones or even Android phones. So I’m not sure who would buy them. Chinese companies are the ones rumored to be interested, but I don’t quite see why, say, Huawei would want it.

In terms of semiconductor suppliers, it is the story of the rise of Qualcomm and Samsung and the decline of TI (which has exited the baseband business). A smartphone involves two primary sub-systems: the application processor and the baseband chip (which runs the radio interfaces). There has been a trend towards integrating these on the same chip, but that trend has been interrupted since neither Samsung nor Apple does so, and they are such a large part of the market. Apple, for example, builds its own application processors (A4, A5) and uses Qualcomm for baseband. The trend towards integration, plus Apple and Samsung rolling their own, means there is only perhaps 20% of the market available for a merchant baseband processor. One big advantage of keeping the two subsystems in separate chips is that the whole radio interface (which doesn’t change as fast) doesn’t need to be requalified each time a new version of the application processor is created.

Apple’s new iPad uses the A5X application processor. This is a huge chip with a quad-core PowerVR GPU. The GPU alone is 60mm² of die area, which is larger than the whole of Nvidia’s Tegra 2. But it has to do more than HD, and it can maintain a frame rate of 50fps on the iPad’s 3-megapixel display. And Apple has enough margin on the new iPad to bury the cost.

Intel and MIPS are trying to challenge ARM’s dominance of the application processor. In principle, Android allows alternative architectures painlessly, since apps are distributed as Java bytecode (which is architecture-neutral). In practice, many Android apps, especially games, incorporate native ARM code, making things rather more painful. The solutions are not attractive:

  • pay developers to port (OK for Angry Birds, but this doesn’t really scale to the whole ecosystem)
  • use JIT emulation (as Apple itself did to get legacy PowerPC code to run on Intel-based Macs), but since the reason for using ARM code is usually performance, this might not work
  • get a virtuous cycle going whereby developers don’t want to miss out as Intel/MIPS phones grow. Chickens and eggs come to mind.

That day, EETimes reported rumors that MIPS had engaged Goldman to help find a buyer for the company. At the small exhibition that evening, the MIPS employees manning their table looked a bit glum. “Want to license a microprocessor? Or how about buying… like… the whole company?”


Analog Circuit Optimization
by Daniel Payne on 04-18-2012 at 2:06 pm

Gim Tan of Magma gave a webinar on analog circuit optimization, and I watched it today to see what I could learn about their approach. Gim is a staff AE, so there is not much marketing fluff to wade through in this webinar.

The old way of designing custom analog circuits involves many tedious and error-prone iterations between the front end (schematic capture, circuit simulation) and the back end (layout, DRC/LVS, extraction):


The Magma-based custom IC design flow uses:

  • Model-based cells, called FlexCells, in Titan ADX
  • Circuit simulation: FineSim SPICE
  • Floorplanning: Titan AVP
  • Automated routing: Titan SBR
  • Automated IC layout migration: Titan ALX


Titan AMS has the following tools:

  • Schematic Capture
  • Analog Simulation Environment (waveforms, cross-probing)
  • Schematic Driven Layout (SDL), uses iPDK or pCells or IPL
  • Layout Editor
  • Process Verification (violation analysis in a GUI)

Foundry support for Titan AMS:

  • TSMC: 180nm, 65nm, 40nm, 28nm. AMS Reference flow 2.0
  • TowerJazz: 180nm, AMS Reference flow
  • LFoundry: 150nm

Titan Analog Design Accelerator (ADX)

  • Optimize, re-size schematics
  • Process porting
  • Feasibility studies

The design flow with Titan ADX is:

With this flow you can start by importing your existing schematics and transistor-level netlists. What’s unique about this flow is the use of FlexCells, which add a mathematical view of a circuit, written as MATLAB equations, alongside the traditional views: schematic, layout and testbench.

There are ready-made FlexCells for you to start using right away, which also help you learn how to model behavior and intent and how to set constraints. Here’s an example two-stage PMOS op-amp FlexCell:

A predecessor to Titan ADX was technology from Barcelona Design, which used a proprietary modeling language called Flamingo. The learning curve for MATLAB should be much shorter than for the old Flamingo code.

Once you’ve defined your analog design as a FlexCell then you can do analog IP optimization with Titan ADX:

Titan ADX is model-based optimization, not simulation-based, which makes it quite unique in the EDA industry. Synopsys also has a simulation-based optimizer (from its acquisition of Analog Design Automation), so it will be interesting to see the new product roadmap and whether Titan ADX is carried forward.
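To illustrate the distinction, here is a hypothetical Python sketch of model-based sizing. It is not Titan ADX’s API or a real FlexCell (those equations are written in MATLAB); it is a toy first-order op-amp model whose closed-form equations are swept directly, with no simulator in the loop.

```python
import math

# Toy first-order amplifier model -- illustrative assumptions only,
# not a real FlexCell. Device physics is reduced to crude estimates.
def amp_model(ibias_ua, cc_pf):
    vov = 0.2                                   # assumed overdrive, V
    lam = 0.05                                  # assumed channel-length modulation, 1/V
    gm = 2 * (ibias_ua * 1e-6) / vov            # gm = 2*Id/Vov, siemens
    ro = 1.0 / (lam * ibias_ua * 1e-6)          # ro ~ 1/(lambda*Id), ohms
    gain = gm * ro                              # stage DC gain, V/V
    gbw = gm / (2 * math.pi * cc_pf * 1e-12)    # unity-gain bandwidth, Hz
    power_uw = 1.8 * ibias_ua                   # P = Vdd * Ibias, microwatts
    return gain, gbw, power_uw

# Specs (assumed): gain >= 100 V/V and GBW >= 100 MHz; minimize power.
best = None
for ibias_ua in range(10, 201, 10):             # candidate bias currents
    for cc_pf in (0.5, 1.0, 2.0):               # candidate compensation caps
        gain, gbw, power_uw = amp_model(ibias_ua, cc_pf)
        if gain >= 100 and gbw >= 100e6 and (best is None or power_uw < best[0]):
            best = (power_uw, ibias_ua, cc_pf)

# Each candidate costs a few floating-point operations, versus a full
# SPICE run per point in a simulation-based optimization loop.
print("best (power_uW, Ibias_uA, Cc_pF):", best)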

Magma has a library of FlexCells that you can use:

TSMC is using FlexCells to retarget its own IP for customers as needed. Fraunhofer is also offering FlexCells.

Titan ADX flow:

  1. Choose a FlexCell (un-sized schematic)
  2. Choose a technology
  3. Define your specifications and scenarios
  4. Optimize
  5. Output a sized schematic netlist


PLL Migration

A traditional migration to a different 65nm process node takes 7 weeks (circuit design, functional verification, physical design, physical verification), while with ADX the same task is done in 1 week, a 7X improvement in time. This kind of improvement assumes that you have all the required FlexCells in place beforehand; if you had to write and verify new FlexCell models, the time savings would shrink.


Real Customers

So far this model-based optimization approach sounds unique and powerful, but who is really using it? Panasonic and TSMC, plus the following:

One Model, Multiple Results

Using FlexCells and Titan ADX you can optimize for either power or area, or something in between:

Summary

Magma offers model-based optimization in Titan ADX, which is a different approach from simulation-based optimization. The model-based approach is certainly more elegant than the brute-force simulation-based approach, and you’ll have to decide whether you can quickly use the existing FlexCells off the shelf or have to invest in writing your own MATLAB equations for new FlexCells.



Changing your IC Layout Methodology to Manage Layout Dependent Effects (LDE)
by Daniel Payne on 04-18-2012 at 12:38 pm

Smaller IC nodes bring new challenges to the art of IC layout for AMS designs, like Layout Dependent Effects (LDE). If your custom IC design flow looks like the diagram below, then you’re in for many time-consuming iterations, because where you place each transistor will impact the actual Vt and Idsat values, which are now a function of proximity to a well:


Source: EE Times, Mentor Graphics

Analog designs are most sensitive to variations in Vt and current levels, especially for circuit designs that need precise matching.

Engineers at Freescale Semiconductor wrote a paper about Layout Dependent Effects, presented at CICC, quantifying how much Vt and Idsat change based on the location of MOS devices relative to the edge of a well.


Well Proximity Effect (WPE). Source: Freescale Semiconductor

What they showed was that Vt becomes a function of proximity to the well edge, and its value can shift by 50mV:


Vt variation. Source: Freescale Semiconductor

Drain current levels can vary by 30% based on proximity to the well edge:


Id variation. Source: Freescale Semiconductor

EDA developers at Mentor Graphics created a different IC design methodology to give the IC design team earlier visibility into how LDE impacts circuit performance. Here’s the new flow:


Source: EE Times, Mentor Graphics

Design constraints about matching requirements are entered at the schematic design stage, then fed forward into an LDE estimator module for use during placement. A constraint defines the maximum allowed change in Vt or Id levels between transistors that require matching.

While layout placement is being done, the LDE estimator module can quickly determine how each MOS device’s Vt and Id values are impacted, then compare that to the design constraints provided by the circuit designer, all before routing is started. The layout designer can continue to rearrange transistor placement until all constraints pass.

Notice that no extraction or SPICE circuit simulation is required during this LDE estimation phase: the layout designer is interactively placing MOS devices and verifying whether the layout passes or fails the constraints set by the circuit designer.
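As a sketch of what such an estimator might check during placement, here is a hypothetical Python example; the 1/distance Vt-shift model, its coefficient, and the device names are illustrative assumptions, not Mentor’s algorithm or Freescale’s measured data.

```python
# Hypothetical LDE (well proximity) constraint check during placement.
# The model form and coefficients are illustrative assumptions only.

def wpe_delta_vt_mv(dist_um):
    """Crude well-proximity model: Vt shift decays with distance from
    the well edge, capped near 50 mV for devices placed very close."""
    k = 25.0                                     # mV*um, assumed coefficient
    return min(50.0, k / max(dist_um, 0.5))

# Matching constraints from the circuit designer (entered at the
# schematic stage): (device A, device B, max allowed delta-Vt in mV).
constraints = [("M1", "M2", 5.0)]

# Current placement: each device's distance to the nearest well edge, um.
placement = {"M1": 0.8, "M2": 4.0}

for dev_a, dev_b, limit_mv in constraints:
    mismatch = abs(wpe_delta_vt_mv(placement[dev_a])
                   - wpe_delta_vt_mv(placement[dev_b]))
    status = "PASS" if mismatch <= limit_mv else "FAIL"
    print(f"{dev_a}/{dev_b}: delta-Vt mismatch {mismatch:.1f} mV -> {status}")

# A FAIL tells the layout designer to move the pair to similar well
# proximity (or further from the edge) before routing ever starts.
```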

Test Results

A two-stage Miller OTA amplifier circuit was designed and put through the new methodology.

Schematic capture and layout were done with Pyxis, extraction using Calibre, and circuit simulation with Eldo. The target gain and bandwidth specs were first met by transistor sizing and circuit simulation, with results shown below:


The first layout iteration was done using the traditional IC flow shown earlier: no LDE estimation was used, and the extracted netlist failed both the gain and bandwidth specs:

Next, layout was done with the LDE estimator module active during placement, giving the layout designer early feedback on MOS device constraints. The new layout is slightly different from the previous one, and most importantly it meets the gain and bandwidth specifications:

Here’s a table that summarizes the change in Vt and Id values for each MOS device, comparing the first placement to the final device placement:

Using LDE estimation during placement produced an analog op-amp with Vt variations that were up to 10X smaller and Id variations that were up to 9X smaller.

Summary

Analog circuits are the most sensitive to LDE, so you should consider a new methodology that quickly provides feedback on the quality of your layout while you are still interactively placing MOS devices, instead of waiting until routing, extraction and circuit simulation are complete. This new methodology is all about early feedback, which will actually speed up analog design closure.



ARM Seahawk
by Paul McLellan on 04-17-2012 at 8:27 pm

I wrote on Monday about ARM’s Processor Optimization Packs (POPs). Yesterday in Japan, ARM announced the Seahawk hard macro implementation in the TSMC 28HPM process. It is the highest-performance ARM to date, running at over 2GHz. It is a quad-core Cortex-A15.

The hard macro was developed using ARM Artisan 12-track libraries and the appropriate Processor Optimization Pack announced a couple of days ago. Full details will be presented at the Cool Chips conference in Yokohama, Japan today. It delivers three significant firsts for the ARM hard macro portfolio: it is the first quad-core hard macro, the first hard macro based on the highest-performance ARMv7 architecture Cortex-A15 processor, and the first hard macro based on a 28nm process.

The ARM press release is here. A blog entry about the core is here.