AMS Design using Dongbu HiTek foundry and Tanner EDA Tools
by Daniel Payne on 10-27-2011 at 12:00 pm

Every analog designer needs a foundry PDK (Process Design Kit) and EDA tools to design, lay out and verify their AMS chip or IP. This week I had a chance to conduct an email interview with Taek-Soo Kim, VP of Technical Engineering at Dongbu HiTek in Korea. This specialty foundry supplies analog silicon worldwide.

Interview
Q: Tell me about your background and how long have you been at Dongbu HiTek?

A: I have been in the semiconductor industry for 26 years. My main experience is in EDA: during my first 15 years I developed tools and set up design methodologies. Then I was responsible for design services operations in the ASIC business. I have now been with Dongbu HiTek for 4 years, and my main responsibility is all of the design infrastructure.

Q: How long have Dongbu HiTek and Tanner EDA been working together on PDKs?

A: We have been working with Tanner EDA for 2 years now. We started with the 0.35um BCD process PDK and then worked on the 0.18um BCD node.

Q: What was the deciding factor to make the PDK for the BD180LV process (instead of the Medium-High Voltage, Ultra-High Voltage or Analog CMOS)?
A: The main reason is that this process happened to be the one our customer selected to use.

Q: Can you mention your first Tanner EDA customers that are using this PDK?

A: Unfortunately, we cannot disclose the name of this customer.

Q: Can you mention the end-product or industry that the first customers are using this PDK for?

A: DC-DC converter

Q: What other PDKs do you create?

A: PDKs for Cadence, Mentor, Synopsys and SpringSoft tools, as well as the iPDK.

Q: How would you compare the effort of creating the PDK for Tanner EDA versus other PDKs?

A: For the first project, done on the 0.35um node, the Tanner EDA engineers needed to get used to Dongbu HiTek technology, so there was a lot of communication back and forth; but as things progressed, it got better.

Q: What would be the next PDK project that you will work with Tanner EDA on?

A: We plan to work on a 60V extension of the 0.18um BCD process.

Q: What do you like most about working with Tanner EDA?

A: Since Tanner EDA is not a big company, they are very active and respond very quickly.

Q: At the 180nm node about how long do most customer designs take to go from concept to tape-out?

A: Within 6 months.

Q: How many silicon re-spins does it take on average for your customers to get silicon designs ready for volume production?

A: It differs case by case, but on average approximately 1 year.

Q: What is your Analog technology roadmap?
A:

Q: What is new between Dongbu HiTek and Tanner EDA?
A: We just announced a foundry-certified 0.18 micron Analog CMOS PDK.

Q: When did Dongbu HiTek first offer the 0.18 micron node for analog designers?
A: Back in June 2008.

Q: What is unique with the BD180LV process node?
A: It has many analog components to choose from and can operate above 5V. The process also includes bipolar transistors for high-performance power devices.

Q: In my PDK for Tanner EDA tools, what do I get?
A: You get schematic symbols, simulation models, layout rules and verification structures.

Summary
Users of Tanner EDA tools are all set to design and fab with Dongbu HiTek for their analog and mixed-signal IC designs. PDKs for the 0.18 micron node and 0.35 micron node are ready now and more nodes are planned.


Interview with Eric Esteve of IPnest, conducted by Synopsys
by Eric Esteve on 10-27-2011 at 11:15 am

Introduction from Hezi Saar: Eric's latest viewpoints and reports are hosted on IPnest as well as on SemiWiki, and you can find information related to various interface IP: USB 3.0, PCIe, SATA, DDRn, MIPI, HDMI and more.

Q: Eric, give us a quick introduction about your background as it relates to interface IP
A: I spent 20 years working as a designer, then an FAE, then in marketing for TI and Atmel, before working as Worldwide Marketing Director for PLDA, where I launched their PCIe controller IP.
Working day to day in marketing for interface IP, I was missing key information about the market size and trends, the vendors, and so on. Thus, when I started IPnest three years ago, I decided to focus on the interface IP market and to provide comprehensive market surveys dedicated to IP for USB 3.0, PCIe, SATA, MIPI, DDRn, etc. Now, I can say that IPnest is the leading analyst in this niche segment. IPnest has customers all over the world; the list includes Sony, Inventure, KSIA, Cadence, Cast, Evatronix, PLDA, Rambus, Mentor Graphics, Arasan, Denali, Snowbush, MoSys, Mixel, Intel, Fujitsu, LSI Logic, nSYS, HDL DH and Synopsys!
Q: What are your high level thoughts about the semiconductor industry in general and mobile segment in particular?
A: The semiconductor industry is still growing, with an 8% CAGR over the last 20 years or so, but it is a fact that there is consolidation, and ASIC and ASSP design starts are slightly declining. Along with this decline, we can see two major trends: production volume per ASIC is growing, and, even more important, chip complexity (gate count, number of functions) is increasing. If you look at the mobile segment, taking for example the latest platform from TI, the OMAP5 (see: blog), you realize that this device is extremely complex.

The chip architecture is based on no less than FIVE CPU cores, the related cache memory, and several dozen IP functions, including a "shopping list" of the major interface IP (several USB 2.0, USB 3.0 OTG, SATA 2.0, HDMI 1.4, LPDDR2 x2, and almost the full MIPI specification list: CSI-2, CSI-3, DSI, LLI, HSI…). Such a design start is probably equivalent, in terms of design effort, to a dozen ASIC design starts from the 2000s. So, yes, there are fewer design starts, but these are in general more complex designs, especially for applications like wireless smartphones or set-top boxes. In fact, these are so complex that the only way to comply with time-to-market requirements is to rely massively on design reuse, or IP.

Q: What do you believe are the challenges facing the mobile electronics industry?
A: Even though I worked for TI for 7 years, I am not necessarily an expert on the mobile electronics industry. I think some of the challenges the mobile electronics industry is facing are much the same as for the other electronics segments. First, close the "design gap": design larger and larger chips with almost the same headcount and design resources, and do it while always using the most advanced technology nodes (today 28nm, tomorrow 22nm). These needs push designers toward the latest techniques, like Design For Manufacturability, keeping in mind the huge production volumes expected of a single device, ASIC or ASSP. The requirements unique to the wireless industry are: how to design ever more complex applications (like 3D video) while keeping system battery life long enough, and how to meet the incredible time-to-market pressure of handset applications, probably the most stringent in the industry. On the end-user market, the typical delay between two product launches from the same OEM is about 6 months; that is not the delay from concept to engineering samples, which is much longer, but it obviously pushes for as short a design cycle as possible, for products that are incredibly more complex!

Q: You raise a very interesting point, what is a typical design cycle for SoC targeting mobile electronics (From concept to tape-out and to engineering samples)?
A: See above.

Q: Since you (and Synopsys) are focused on interface IP what do you see as the overarching trends for interface IP?
A: Being strongly focused on interface IP since 2005, I have seen the massive adoption of differential, high-speed serial communication techniques inside and outside the box (whatever "the box" is). This has been true for PCI Express replacing PCI, initially at 2.5 Gbps and now up to 8 Gbps, and for SATA replacing ATA at speeds moving from 1.5 up to 6 Gbps. Outside the box, HDMI is now a standard used in PC, consumer and wireless handsets, and USB is finally closing the gap and moving to 5 Gbps with USB 3.0. Amazingly, the memory controller interface to memory devices is still based on parallel communication, even though this physical interface is at the edge of feasibility at 3200 MT/s for DDR4. I don't know when this interface will move to the same type (high-speed, differential, serial) as the others, but I don't see how it could stay the exception in the future! This will probably be the next "hot topic" for the interface IP market.
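To put those line rates in perspective, here is a quick back-of-the-envelope sketch, purely illustrative Python arithmetic, of effective per-lane payload bandwidth once encoding overhead is counted; the 8b/10b and 128b/130b encodings are the publicly documented ones for these standards.

```python
# Back-of-the-envelope: effective per-lane payload bandwidth for the
# serial interfaces mentioned above. Line rates and encodings are the
# publicly documented ones; everything else is just arithmetic.

links = {
    # name: (line rate in Gbit/s, encoded bits, payload bits)
    "PCIe Gen1": (2.5, 10, 8),     # 8b/10b encoding
    "PCIe Gen2": (5.0, 10, 8),     # 8b/10b encoding
    "PCIe Gen3": (8.0, 130, 128),  # 128b/130b encoding
    "SATA 6G":   (6.0, 10, 8),     # 8b/10b encoding
    "USB 3.0":   (5.0, 10, 8),     # 8b/10b encoding
}

for name, (gbps, enc, payload) in links.items():
    effective = gbps * payload / enc  # Gbit/s of real data
    print(f"{name:10s} {gbps:4.1f} Gbps raw -> {effective:5.2f} Gbps effective"
          f" ({effective / 8:4.2f} GB/s per lane)")
```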

I am also watching closely the different MIPI Interface specifications, as using a standardized communication technique in the mobile industry certainly makes sense, not only from a technical point of view but also as this is a more rational approach.

Q: Which are the most promising interfaces used in semiconductor SoCs targeting mobile market segments, and why?
A: Let's take TI's OMAP5 as an example; it gives us pretty much the list of the most promising interfaces for an application processor SoC targeting mobile market segments:
· LPDDRn to access external DRAM
· USB 2.0 and USB 3.0 to exchange data with your system, as well as for battery charging
· HDMI to display video with the system being the source
· SATA, freshly introduced, to store data coming through the system on an external SSD
· MIPI functions, where the list of supported specifications is long:
· LLI/UniPort to interface with a companion device and/or with a modem, in order to share the same external memory and save a couple of dollars on each handset
· CSI to interface with one or more cameras (one or more CSI-3 and CSI-2 functions)
· DSI, the Display Serial Interface
· SLIMbus, a low-performance, low-power serial interface to audio chips
· UFS, the MIPI interface for mass storage devices

There are also other interfaces (UART, SDIO and many more) used in mobile as well as in other segments, which I would qualify as a second type: they can be reused internally or acquired from an IP vendor for a fraction of the price of the interfaces listed above. If we look at the market for the "first type" interface IP listed above, we can see that it is expected to grow to almost $500M by 2015.

This is the end of Part 1… not the end of the interview. More to come later!

Eric ESTEVE from IPNEST


Parasitic Extraction—My Head Hurts!
by glforte on 10-27-2011 at 10:08 am

By Carey Robertson, Director of Product Marketing, Mentor Graphics

IC physical verification requires a number of different types of checking, the most familiar being design rule checking (DRC), layout vs. schematic (LVS) checking, and parasitic extraction combined with circuit simulation. Fundamentally, it does not matter whether you are designing an analog, digital or memory circuit, and it does not matter if you are at the cell, block or full-chip level, you still have to meet the manufacturing requirements for that particular process, and you still need to verify that your logical representation matches your physical design. While EDA vendors have been successful in providing DRC and LVS platforms that can address all these different design styles and flows, parasitic extraction, on the other hand, does not fit well into a “one size fits all” solution.

Because extraction is a "means to an end," you first need to consider what "end" you are trying to address, which means understanding what circuit simulation task you would like to perform. That simulation goal may be timing, noise analysis, signal integrity, IR drop, clock tree analysis, or some other static or dynamic simulation. Next, you need to factor in the design style (memory, analog, RF, digital ASIC, custom digital, SoC, cell libraries, etc.) and the abstraction level (transistor-level, cell-level, block-level, full-chip). One additional point to consider is that extraction models also vary substantially by process node, because as critical dimensions drop below 65 nm, the underlying electrical characteristics are increasingly sensitive to the interactions among adjacent devices. Once all of those criteria are defined, then you are ready to set up the parasitic extraction tool.
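To see why this feels like a matrix rather than a checklist, here is a toy enumeration in Python; the axes are the ones listed above, while the values and the counting are purely illustrative and not any vendor's actual option set.

```python
from itertools import product

# Hypothetical enumeration of the extraction decision matrix described
# above. The axes mirror the text; the mapping from a combination to an
# actual tool setup is vendor-specific and not modeled here.
sim_goals    = ["timing", "noise", "signal integrity", "IR drop", "clock tree"]
design_style = ["memory", "analog", "RF", "digital ASIC",
                "custom digital", "SoC", "cell library"]
abstraction  = ["transistor", "cell", "block", "full-chip"]
node_nm      = [180, 130, 90, 65, 45, 28]  # below 65 nm, adjacency effects dominate

combos = list(product(sim_goals, design_style, abstraction, node_nm))
print(f"{len(combos)} distinct extraction scenarios before accuracy/TAT tradeoffs")
# e.g. ('noise', 'analog', 'transistor', 28) calls for a very different
# setup than ('timing', 'SoC', 'full-chip', 65).
```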

But wait, there’s more! As with any engineering problem, there is the standard tradeoff between performance and accuracy. How accurate do your results need to be, and how long are you willing to wait to attain that level of accuracy? What CPU resources are you able to utilize? What frequency are you running at, and do you need to consider all RCLK parasitics or a subset? This accuracy/performance tradeoff applies to downstream simulation as well. A very accurate simulation may require a very detailed netlist. As the detail and complexity of the netlist increases, so will turnaround time (TAT) of the subsequent simulation. Therefore, understanding netlist size and what level of parasitic reduction to apply is critical.
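As a concrete (and deliberately simplified) example of what parasitic reduction trades away, here is a sketch of one common heuristic: coupling capacitors below a threshold are lumped to ground, shrinking the netlist the simulator must solve at the cost of crosstalk detail. The data structures and numbers are hypothetical.

```python
# A toy parasitic-reduction pass: coupling caps below a threshold are
# "grounded" (lumped onto each net's cap to ground), shrinking the
# netlist the simulator has to solve. Data structures are hypothetical.

def reduce_coupling_caps(coupling_caps, ground_caps, threshold_f):
    """coupling_caps: {(netA, netB): farads}; ground_caps: {net: farads}."""
    kept = {}
    for (a, b), c in coupling_caps.items():
        if c < threshold_f:
            # Standard approximation: treat the other net as AC ground,
            # so the small coupling cap is added to each net's ground cap.
            ground_caps[a] = ground_caps.get(a, 0.0) + c
            ground_caps[b] = ground_caps.get(b, 0.0) + c
        else:
            kept[(a, b)] = c  # keep large aggressor/victim couplings intact
    return kept, ground_caps

cc = {("clk", "data"): 5e-15, ("data", "en"): 0.2e-15, ("clk", "en"): 0.1e-15}
gc = {"clk": 20e-15, "data": 15e-15, "en": 8e-15}
kept, gc = reduce_coupling_caps(cc, gc, threshold_f=1e-15)
print(f"kept {len(kept)} coupling caps; the netlist is smaller,"
      " but crosstalk detail is lost")
```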

Does your head hurt yet? Does this decision matrix seem a bit daunting?

As you can imagine, the industry has evolved to the point where we have different extraction tools, engines, flows, and sub-flows to address the specialized solutions for both extraction and simulation. The problem is that having so many specialized tools incurs designer overhead (aka headache): higher learning curve costs, more effort to set up tools for multiple scenarios, and various data and use model mismatches.

What extraction users need is a very flexible and intelligent extraction environment that can adapt to all of their requirements with minimum effort. While there may be different models, engines, and data formats "under the covers," users would prefer to access these capabilities and options through a uniform, parameterized interface, with the ability to easily adjust tradeoffs (such as speed and accuracy) to give them the best overall solution for their immediate needs. Such an environment should not require users to duplicate setup work, such as creating multiple rule decks when switching from one design style to another or from one node to another. For example, designers might want to do a "quick and dirty" extraction run to check block interconnects, then later refine accuracy on specific critical nets. Ideally, this would be accomplished without extra setup time or redefining inputs and outputs.

Imagine a set of golf clubs—that’s how extraction should perform. Good golfers tell me they swing each club the same way and let the club do the work (this doesn’t work for me, but that’s what I’m told). In golf, the golfer considers several variables, such as desired height, desired distance, ball position, etc. From there, a club selection is made to meet those requirements. Every club has the same user interface, so the golfer does not have to learn new techniques for each club, but simply employs the same tried and true methods to be successful (If you don’t believe me, it does work well on TV). That’s the use model we should strive for in parasitic extraction, where the user considers the desired outcome first, then selects the best engine or mode of extraction to match that outcome. It should not require learning several new tools.
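In code, that golf-bag use model might look something like the following sketch. The engine names and selection rules are purely illustrative, not an actual product API; the point is the single, outcome-first interface.

```python
# Hypothetical sketch of an outcome-first extraction interface: one
# front-end call, with the "club" (engine) chosen from the desired
# result. Engine names and selection rules are illustrative only.

def select_engine(goal: str, accuracy: str, abstraction: str) -> str:
    if accuracy == "signoff" or goal in ("noise", "signal integrity"):
        return "field-solver"        # slowest swing, most accurate
    if abstraction == "full-chip":
        return "pattern-matching"    # fast, full-chip capacity
    return "table-lookup"            # the everyday mid-iron

def extract(layout: str, goal: str, accuracy: str = "typical",
            abstraction: str = "block") -> None:
    engine = select_engine(goal, accuracy, abstraction)
    print(f"extracting {layout} with the {engine} engine for {goal} analysis")
    # ...same inputs and outputs regardless of which engine runs...

extract("block_a.gds", goal="timing")                     # quick-and-dirty pass
extract("block_a.gds", goal="noise", accuracy="signoff")  # refined, critical nets
```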

Click here for more on parasitic extraction.

About the Author
Carey Robertson does hit a golf ball from time to time. When he is not out losing golf balls, he is the Director of Product Marketing for Calibre's Circuit Verification product line (LVS, PERC, and Parasitic Extraction products). He has been with Mentor Graphics for eleven years in various product and technical marketing roles. Prior to Mentor Graphics, Carey was a design engineer at Digital Equipment Corp., working on microprocessor development. Carey holds a BS from Stanford University and an MS from UC Berkeley.


ARM TechCon 2011 Trip Report and Sailing Semiconductors!
by Daniel Nenni on 10-26-2011 at 9:37 pm

This was my first ARM TechCon; they cordially invited me as media, but it certainly was not what I expected. Making matters worse, I had literally just flown in from a very long weekend sailing in Mexico, which was much more interesting and certainly made me much less tolerant of sales and marketing nonsense. My Uncle Jim lives on a sailboat which is currently in Mexico for the winter. I've sailed on the Esmeralda before, but she has just been sold, so this was a momentous occasion. Uncle Jim has some health issues, so he will be landlocked for the rest of his days. Sailing up and down the coast is very hard work, believe it!

On the semiconductor side, sailing has come a long way since Esmeralda was first launched. The marine electronics available today are amazing and the ability to run those low-power semiconductor devices via the wind and sun is simply incredible. There should be an ARM Inside sticker on every sailboat! Even the shower is 100% solar and let me tell you that water gets hot! Esmeralda can also desalinate saltwater faster than we could drink it! As the picture suggests we were 3G enabled so, yes, I sailed the internet! Uncle Jim did his best to keep Esmeralda up to date but now technology moves much faster than he can.

I chose Tuesday for ARM TechCon to see the keynotes by TSMC’s Dr. Shang-Yi Chiang, my favorite EDA CEO Dr. Wally Rhines, and Cadence Sr VP Dr. Chi-Ping Hsu. Somebody from the conference called me tonight (Wednesday) and asked why I didn’t attend. Well, you gave me a one-day pass that’s why! But seriously, why the strong ARM tactic? The place was jam PACKED with semiconductor professionals. Having 99.99% market share must be nice!

Shang-Yi's presentation was similar to the one at OIP last week, which I blogged about HERE. According to Shang-Yi, the biggest problems facing the semiconductor industry in the years to come will be more economic than technical, citing the increasing cost of wafers as geometries shrink and density increases. He also stated that FinFETs will keep semiconductors scaling through the 14nm and 7nm nodes. I certainly hope he is right. I have 4 kids to put through college.

Wally's presentation was again by far the best. I expected a rehashed version of his OIP speech, "Accelerating Innovation Through Collaboration", which I blogged about HERE, but no, he pulled out another excellent presentation, "Creating Measurable Value Through Differentiation". Every CEO in the semiconductor ecosystem should memorize this one! Why have I not seen any press on this? SemiWiki blogger Dr. Paul McLellan did a more thorough blog on it HERE.

Chi-Ping's presentation was the biggest disappointment; I actually walked out. I know Chi-Ping from the Avanti days and can tell you that material did not come from him. Cadence marketing people clearly possessed him, infomercials and all! He even mentioned EDA360!?!?!? Richard Goering's blog on it, "ARM TechCon Address: High Stakes at Low Process Nodes", was much better than the presentation itself.

ARM did not feed the media but thankfully Jim Lai, President of Global Unichip, invited me to lunch so I did not starve. I will blog about our lunch conversation this weekend but let me tell you this, the semiconductor design ecosystem is about to change once again!


Synopsys Journal, now on iTunes
by Paul McLellan on 10-24-2011 at 9:42 am

Synopsys Journal is a quarterly publication for management dedicated to covering the latest issues facing designers today. It has been published now for two and a half years. Of course, you can go here and, once registered, get a copy of the journal.

But people don't have a lot of time to read a journal like this, so since the start of last year it has also been available in audio form. And you can now subscribe to it on iTunes, so it will simply appear in the podcast section of your iTunes library and (if you have things set up right) get synced to your iPod, iPhone and/or iPad. That way you can listen to it in the gym or in your car, times when there isn't much else you can do besides listen to something.

To subscribe to the Synopsys journal on iTunes click here.

The current issue covers intellectual property (IP). IP is no longer a "nice to have" for chip design. You cannot hope to design a complex chip, and be competitive, without using IP. In this issue, Synopsys takes a look at how the IP market has matured and how design teams are deploying IP today, and puts down some markers for the future. Rich Wawrzyniak, analyst with Semico Research Corporation, sees a future where design teams integrate complete SoCs as subsystems in much the same way as they use IP blocks today. Dr. Seh-Woong Jeong, Executive Vice President for Systems LSI at Samsung Electronics Co., explains how his teams balance the tensions between time-to-market and quality with the knowledge that it now takes almost as long to develop a complex IP block as it does the chip itself. And Joachim Kunkel, Senior Vice President and General Manager of the Synopsys Solutions Group, puts forward a case for IP as an enabling technology for businesses that want to innovate through SoC design. In iTunes, each of these interviews is a separate track (or "tune" in iTunes parlance, which in this context is quite amusing).


Noise Coupling
by Paul McLellan on 10-24-2011 at 8:47 am

One of the challenges of designing a modern SoC is that the digital parts of the circuit are really something that in an ideal world you’d keep as far away from the analog as possible. The digital parts of the circuit generate large amounts of noise, especially in the power supply and in the substrate, two areas where it is impossible to completely keep the analog and digital apart. Ideally, from a noise point of view, we’d continue to put the digital and analog on separate chips (so no shared substrate and minimal power supply coupling) but from a cost point of view we have to put them on the same chip and analyze the consequences.

The existing static approaches, which model the problem as IR drop in the power supply, have run out of steam for these SoCs with their layers of power reduction techniques, and are lacking in both accuracy and capacity for analyzing the effects of transient power supply noise. Trying to do dynamic analysis using SPICE simulators runs into capacity limitations, and using a simplified netlist reduces accuracy unacceptably. Substrate noise injection is a chip-wide phenomenon. Getting the analysis wrong can lead to expensive re-spins.

The shift to consumer, and especially to mobile, applications has meant that modern SoCs are designed with low power taken into consideration from the start, with multiple power islands at different voltages. This makes the verification of transient noise even trickier, especially through the substrate.

Power gating, whereby blocks are completely powered down for extended periods, makes verification more complex still due to the in-rush current when a block is turned back on. Turning power domains on and off needs to be modeled not just at the die level but also taking the package into account. Low ambient but high transient current, which results from this type of architectural approach, is almost a worst case.
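To see the magnitude of the problem, here is a first-order sketch of the in-rush event: simple RC physics with made-up numbers, not a substitute for the full die, grid and package analysis discussed below.

```python
import math

# First-order sketch of power-gating in-rush: when the sleep switches
# close, the block's discharged decap C charges through the total
# switch resistance R. Peak in-rush is V/R; it decays with tau = R*C.
# Package inductance is ignored; all numbers are illustrative.

V    = 1.0     # supply voltage (V)
R    = 0.5     # total on-resistance of the power switches (ohm)
C    = 10e-9   # block decoupling capacitance (F)
Rpkg = 0.05    # shared package/grid resistance (ohm)

tau = R * C
for t in [0.0, tau, 2 * tau, 5 * tau]:
    i = (V / R) * math.exp(-t / tau)  # in-rush current at time t
    droop = i * Rpkg                  # IR droop seen by neighbors on the shared rail
    print(f"t={t*1e9:5.1f} ns  i={i:4.2f} A  shared-rail droop={droop*1e3:5.1f} mV")
```

Even in this toy model, a 2 A in-rush spike puts a 100 mV droop on the shared rail, which is exactly the kind of disturbance an analog victim circuit cannot tolerate.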

A simulation-based approach with large capacity and intelligent modeling is required to verify the power supply noise on this type of SoC. The validation methodology must verify power grid connection issues, investigate voltage sag, identify noise coupling between the various power domains and isolate EM bottlenecks. The entire substrate must be modeled to see the impact of noise on victim circuits, especially analog ones. What is required is nothing less than full SPICE accuracy and full-chip capacity for transient power-ground and EM analysis, along with modeling capabilities for system-level analysis.

Read Arvind Shanmugavel's full blog entry here.
The Totem white papers are here.
The Apache webinar on mixed-signal power noise analysis is here.


TSMC 2011 Open Innovation Platform Ecosystem Forum Trip Report
by Daniel Nenni on 10-23-2011 at 3:00 pm

The TSMC OIP conference was Monday and Tuesday of last week. You have probably NOT read about it, since it was invitation-only and press was not invited. Slides were not made available (except for Mentor's), and no photos or video were allowed; it was a very private affair. Given that, I won't be able to go into great detail, but I will give you the impression it left on me, and I will share slides from the best vendor presentation given on the second day.

TSMC OIP day 1 was for ecosystem partners (EDA, IP, design services) and I would say there were about 200 of us. My badge was courtesy of Solido Design (I do the foundry work for Solido). Presentations were made by Cliff Hou, Vice President of Design Enablement, LC Lou, Senior Director of IP Development, and a couple of other TSMC guys that I did not know. I have worked with both Cliff and LC over the years and have great respect and trust for them.

28nm and 20nm were discussed in great detail in regards to design enablement and IP. It was very clear that TSMC is finished with 28nm, which ramped 3 times faster than 40nm. All 28nm process nodes: 28HP, 28HPL, 28LP and 28HPM (M=mobile) are in production, with thousands of wafers already shipped to customers. This tracks with what I have heard from TSMC's top customers: 28nm silicon is out and working. The first 20nm production wafers are scheduled for mid 2012. This also tracks with what customers, who are finishing up 20nm PDKs in time for Christmas, have told me.

The technical deep dive was on RDRs (restricted design rules), which are new at 28nm. TSMC said it took customers about a month to adjust to RDRs, which may be a little optimistic. The 28nm DRM (design rule manual) is significantly larger than the 40nm one, meaning the rules are more difficult to describe. The feedback I got from customers, however, was that RDRs made their life easier, and that without RDRs 28nm would not have yielded well at all.

3D IC was discussed in great detail which is a blog in itself. The takeaway here is that TSMC is leading the way in 3D IC, believe it. The other interesting topic was LDEs (layout dependent effects). New effects are coming at 20nm so you can bet LDE will be a big part of the next round of TSMC reference flows (13.0) you will see at the 2012 Design Automation Conference in San Francisco. These reference flows will probably be at 20nm since DAC is mid 2012, same as TSMC 20nm availability. Early access to process technology by both partners and customers was mentioned throughout the two days and I can tell you TSMC is doing much better with early access than other foundries, which was a clear differentiator for the top fabless companies at 28nm.

Day 2 was for customers, which I would guess was close to a thousand people. Rick Cassidy, President of TSMC North America, did the keynote. Side note: Rick is a West Point graduate, which may explain his no-nonsense speaking style. Shang-Yi was next, then Cliff Hou. The hot topic here was FinFETs. I have blogged about this before, but the message that day was that FinFETs would have delayed 20nm, so TSMC stuck with planar transistors. The FinFET design ecosystem challenge was discussed (3D extraction, modeling, etc.) and TSMC flat out asked customers if they wanted FinFETs for 14nm (2015). The customers I talked to are still weighing the technical versus time-to-market trade-offs of FinFETs.

Vendor presentations were next, from Mike Inglis (ARM), Aart de Geus (Synopsys), Lip-Bu Tan (Cadence), and Wally Rhines (Mentor). Mentor was the only vendor "open enough" to send me slides, so that is the only presentation I will mention. According to Wally, 28/20nm will be a "Golden Era" for foundries. Massive capital investment by foundries will yield (pun intended) very cost-effective wafers that will absorb existing products from the higher nodes. 28/20nm cost and capability will also drive new applications and accelerate semiconductor industry growth for years to come. Absolfreakinlutely!

Wally's presentation has 45 slides and several important points which should be independent blogs. His last slide is my favorite, however: it is his personal collaboration ecosystem. My personal collaboration ecosystem is much larger, of course, since it includes all of you.


Intel’s Incredible Semiconductor Machine
by Ed McKernan on 10-21-2011 at 8:15 am

It is hard not to be impressed by Intel's stunning financial performance since the 2008 downturn. They are on track to post revenue of $55B this year, 50% higher than 2008, while nVidia and AMD will be flat to less than 10% better. More significantly, earnings will be 3X those of 2008. Even more significantly, in the past 12 months they have funded a $10.5B CapEx budget, bought back $10B in stock and distributed roughly $4B in dividends. As for the stock: it is right around where it was on September 1, 2008.

ARM continues to get the glory in the processor world at the expense of all other semiconductor vendors. Their P/E levitates at 69, a place that Intel occupied in the summer of 2000. I worked at Intel in the early 1990s and competed against them off and on at Cyrix and Transmeta until 2002. They are not only a tough competitor; they always have a backup plan that relies on another crank of the process technology to get them out of jams. Moore's Law is devastating to upstarts, and Intel is about to turn the crank one more time with 22nm. I listened to the earnings conference call with Paul Otellini and company 3 times. It is amazing how matter-of-fact confident they are with their current execution and with what is coming down the pike.

Here are the facts that I found to run counter to the current thinking on Wall St. First, client computing is up 20%+ year over year. Data Center is up only 15%, but a new processor, Romley, looks to kick in soon. ASPs in the client space are flat, and this is huge. It means that integrated graphics is allowing Intel to hold up ASPs while at the same time minimizing the revenue of competitors (i.e., AMD and nVidia).

With regard to the coming Windows 8 O/S release, Otellini clearly communicated the strategy as it rolls out in consumer and enterprise. For consumer, Intel is willing to drop CPU+graphics and chipset down to $30 vs. $20 for AMD, a price premium the market is currently willing to pay. It is hard to see how ARM processors would get more than AMD's price. Furthermore, Intel is funding the ultrabook effort, which I am sure will result in some exclusivity over the next year or so.

As panels and other components drop in price, Intel will grow its percentage of the system BOM, something that today is occurring big time in the notebook market. In addition, from the sound of the call, I am guessing Intel will ramp their NAND joint venture with Micron and begin to build SSDs that implement a proprietary bus between the x86 mobile processor and the ultrabook-fitted drives. Some ultrabook vendors are looking at a combo SSD and HDD to try to win customers over the Apple MacBook Air on greater capacity. The SSD would contain the Windows O/S in order to provide faster O/S and Office boot and run-time performance. As I read it, Intel says ultrabooks are $899 this year, $699 next Q4, and with a 22nm Celeron in 2013 they drop into the $500s.

In the enterprise market, Intel said that corporations are only 50% of the way through their WinXP to Windows 7 upgrade cycle, and it looks like there are another 18 months to go. In essence, Otellini is saying Intel's client business has clear sailing through 2012 and into 2013. Although Windows 8 comes out in 2012, SP1 (Service Pack 1) will not be available until sometime in 1H 2013. Historically, corporations hold off on PC upgrade cycles until SP1 is released. By then Intel's complete line of 22nm Ivy Bridge processors will be deployed. Look for Intel to leverage the McAfee DeepSafe security technology to keep corporations on board with Intel x86.

Finally, Otellini got around to addressing the competitive threat of ARM in a way that I thought should have been handled at a much earlier time (say 24 months ago). The assumption across the broad analyst community was that ARM's low power was inherently a function of architecture. My experience at Transmeta was eye-opening: when you dig down into the details of power and performance, you find that every workload has an optimum processor architecture. ARM has come from the bottom up and seeks to implement a Clayton Christensen version of the Innovator's Dilemma. If they were to have access to Intel's process technology at the same time as Intel, then their momentum and the desire of customers to be free of Intel could be overwhelming. As Otellini stated, the competition with ARM comes down to physics, not architecture. When the workloads are equal, the winning processor is the one that gets the work done with the best transistors. Intel wins on transistors.

Otellini has crafted an internal business model that is becoming more leveraged on the value of process technology and the lead Intel enjoys over TSMC and other Foundries. As a baseline it appears that Intel will continue to serve the client market with x86 processors (including chipsets) with 50%+ gross margins. For enterprise it is 60%+ and for Xeon based servers it is 80%+. The alternative business model that is waiting in the wings is the Leading Edge Foundry model that Apple is best primed to take advantage of. If Apple utilizes 22nm at Intel while they are simultaneously at 28nm with TSMC, then Apple will pay Intel a 60%+ gross margin for the benefit of a 50% die size and power reduction.

There is one more alternative path to the above model that is even more attractive to Intel and Apple. Apple agrees to prepay Intel for a fab expansion outside the US in return for parts with lower ASPs. The reason it is outside the US is because that is where Apple has most of its $81B in cash to avoid US taxes on repatriation. Imagine the competitiveness of an Intel 22nm fab partially funded by Apple based in Israel vs. a TSMC 28nm Fab based in Taiwan.

Full Disclosure: I own Intel and Apple Stock


Oct 27 – Hands-on Workshop with Calibre: DRC, LVS, DFM, xRC, ERC (Fremont, California)
by Daniel Payne on 10-20-2011 at 9:56 am

I’ve blogged about the Calibre family of IC design tools before:

Smart Fill replaced Dummy Fill Approach in a DFM Flow

DRC Wiki

Graphical DRC vs Text-based DRC

Getting Real time Calibre DRC Results with Custom IC Editing

Transistor-level Electrical Rule Checking

Who Needs a 3D Field Solver for IC Design?

Prevention is Better than Cure: DRC/DFM Inside of P&R

Getting to the 32nm/28nm Common Platform node with Mentor IC Tools

If you want some hands-on time with the Calibre tools then consider attending the October 27th workshop in Fremont, California.


AMS Design at AnSem
by Daniel Payne on 10-19-2011 at 3:40 pm

AnSem has been in the AMS design business since 1998 and uses a variety of commercial EDA tools along with internally developed tools and scripts to automate the process of analog design and technology porting. Their IC designers have completed some 40 AMS projects in diverse areas like:

  • RF CMOS

    • LNA, VCO, Mixers
    • Synthesizers
    • Low-IF/Zero-IF
  • Low Power / Low Voltage

    • 1V wireless TRx
    • Power management
    • Battery operation
  • Data Acquisition

    • Sensor interfacing
    • A/D
    • D/A
  • High Speed Data Communication

    • SerDes
    • Line driver/Receiver
    • PLL, CDR


Commercial EDA Tools

AnSem follows a best-in-class tool selection methodology, using the following EDA tools for specification, design, optimization and verification:

High level modeling and verification – Matlab Simulink

Top down modeling, bottom up design – VHDL-AMS using Mentor’s Questa ADMS (Eldo, Eldo RF, ADiT). This can be used for models of a receiver, demodulation and PLL circuits.

Digital Simulation – Mentor’s Questa (ModelSim)

Transistor layout and sizing – Internal tools, Tanner EDA (L-Edit, HiPer Layout)

IC Layout – Cadence Virtuoso

Internal EDA Tools
One of the internally developed tools is called the AnSem Advanced Proprietary Synthesis Tool (APST); it fits into the design flow to develop analog cells for design re-use and technology porting. Building IP that can be re-used at new nodes is achievable with APST. The transistor sizing done by APST is used in the Tanner EDA tools.

PLL designs are optimized with a tool called PLLOP. RX noise gain and blocking analysis is performed by a tool called MREX. Finally, there are application-specific Matlab toolboxes created to automate tasks like FSK demodulation.
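To give a flavor of what such a toolbox automates, here is a minimal non-coherent FSK demodulator sketched in Python, standing in for the Matlab original; all signal parameters are invented for illustration.

```python
import numpy as np

# Minimal non-coherent binary-FSK demodulator: correlate each symbol
# against the two tone frequencies and pick the stronger one. A toy
# stand-in for the kind of Matlab toolbox described above; all
# parameters below are illustrative assumptions.

fs, baud = 48_000, 1_200      # sample rate and symbol rate (assumed)
f0, f1   = 2_200.0, 1_200.0   # space/mark tone frequencies (assumed)
sps      = fs // baud         # samples per symbol

def modulate(bits):
    # Phase-continuous FSK: integrate the instantaneous frequency.
    freqs = np.repeat([f1 if b else f0 for b in bits], sps)
    return np.sin(2 * np.pi * np.cumsum(freqs) / fs)

def demodulate(sig):
    bits = []
    n = np.arange(sps) / fs
    for k in range(len(sig) // sps):
        s = sig[k * sps:(k + 1) * sps]
        # Tone energy via complex correlation (phase-insensitive).
        e0 = abs(np.sum(s * np.exp(-2j * np.pi * f0 * n)))
        e1 = abs(np.sum(s * np.exp(-2j * np.pi * f1 * n)))
        bits.append(1 if e1 > e0 else 0)
    return bits

tx = [1, 0, 1, 1, 0, 0, 1, 0]
rx = demodulate(modulate(tx) + 0.3 * np.random.randn(len(tx) * sps))
print("sent", tx, "got", rx)
```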

Automated IP
AnSem engineers have created many IP blocks with an automated design approach:

  • VCO with on-chip integrated inductors
  • LNA with on-chip inductors
  • Delta-Sigma A/D: switched-cap
  • Comparator: High Speed, Flash
  • OTAs and OpAmps for low-frequency applications
  • OTA-C filters for active-RC type circuits
  • Bandgap reference circuits

Experience with Tanner EDA Tools

AnSem started using the Tanner EDA tools right from the start in 1998, because they could design their IP blocks and use the tools at a reasonable cost. Today the designs can target nodes down to 40nm, and design complexity has increased significantly.

Using schematics and layout from Tanner in Cadence Virtuoso is now possible with the OpenAccess (OA) database. Interoperability is an important trend for EDA tools and has given AnSem some flexibility. Customers of AnSem use different IC tools, so working with industry standards like OA meets that need.

Learning the Tanner tools was intuitive as they are based on the popular Windows operating system and GUI. Designers can extend the features of the tools by scripting, all without having a specialized CAD department.

DRC and LVS are done with the Tanner tools, with the rule decks imported from Mentor's Calibre.

Conclusion
AnSem has assembled both commercial and internal EDA tools to automate their AMS design for both ASIC and custom IC design work. Customers of AnSem include: Tyco Electronics, Oce Technologies, Phonak, Cochlear, Kawasaki Microelectronics, National Semiconductor and NXP.