Did Qualcomm and Apple Attempt to Bribe TSMC?
by Daniel Nenni on 08-30-2012 at 7:45 pm

It’s amazing how these rumors start and go viral. People are literally laughing here in Taiwan. I once wrote something I thought was obviously satire, and less than a week later someone repeated it back to me as fact because they had “read it on the internet somewhere.”

According to Bloomberg: Apple Inc. (AAPL) and Qualcomm Inc. (QCOM) were rebuffed in separate attempts to invest cash with Taiwan Semiconductor Manufacturing Co. (2330) in a bid to secure exclusive access to smartphone chips, people with knowledge of the matter said.

Both proposals included investments, each of more than $1 billion, for the world’s largest custom maker of chips to set aside production dedicated to making chips exclusively for them, said the people, who declined to be identified because the details are not public.

Sounds like a bribe to me. No wonder they don’t want to be identified. Dozens of variations have spawned from this rumor, so now it is an “internet fact”:

TSMC Spurns Apple, Qualcomm Bids for Guaranteed Fab Capacity
PC Magazine

Apple, Qualcomm failed to buy TSMC chip exclusive rights [report]
ZDNet

Apple, Qualcomm won’t get to hog TSMC chip fab capacity
Ars Technica

TSMC Doesn’t Want Apple, Qualcomm’s Money for VIP Chip Access
DailyTech

Money can’t buy everything: TSMC denied Apple exclusive mobile chips access
GigaOM

Apple, Qualcomm tried to purchase exclusive access to TSMC chip production
tuaw.com

TSMC rejected billions from Apple and Qualcomm for exclusive access
Inquirer

Semiconductor Producer TSMC Turns Down Bids From Apple, Qualcomm
The Mac Observer

Even IEEE Spectrum got in on it: TSMC’s Morris Chang Says No to Apple, Qualcomm

Would TSMC build fabs for Qualcomm and Apple? Of course, if it were the responsible thing to do for the TSMC stakeholders; Samsung already does that for Apple, for example. Would TSMC accept money from a customer for preferential treatment? Of course not, and anybody who thinks otherwise does not know TSMC or how the fabless semiconductor ecosystem works. And I can assure you that Qualcomm and Apple know.

From the TSMC website:

Our mission is to be the trusted technology and capacity provider of the global logic IC industry for years to come.

Now let’s look at the authors of the rumor: Tim Culpan, Ian King, and Adam Satariano. Sorry, I’ve never heard of them, and there’s not an ounce of actual semiconductor experience between them. Just my opinion of course.



Smart mobile SoCs: Made in China
by Don Dingee on 08-29-2012 at 2:00 pm

One of the comments on previous installments of this series was that there isn’t much left for the merchant suppliers of smart mobile SoCs, considering Apple and Samsung have majority share and design their own parts. The theory is that this makes it hard for many suppliers to continue investing at the resource levels needed to bring a complex SoC to market.

Unless the market we’re talking about is China. Continue reading “Smart mobile SoCs: Made in China”


Mixed-Signal Methodology Guide
by Daniel Payne on 08-29-2012 at 11:14 am

Last week I reviewed Chapter 1 of the new book, Mixed-Signal Methodology Guide, and today I finish my review with Chapters 2 through 11. You can read the entire book chapter by chapter, or jump directly to the chapters most relevant to your design role or project needs. Given the multiple authors, I was impressed with the wide range of AMS topics covered, from theory to practice. The last sentence in this blog contains a free offer, and you can enter to win a free book here.

Chapter 2: Overview of Mixed-signal Design Methodologies
The AMS methodology you choose really depends on what you are designing, so there is no single flow that fits all design styles. Approaches can be top-down, bottom-up, or meet in the middle. There’s a continuum of AMS designs from Analog-centric to Digital-centric.

Historically there were two sets of EDA tools and databases for AMS design: analog and digital. Today, however, there are common databases and tools for IC design that allow concurrent AMS design.

Based on the amount of analog or digital content in your design, you can choose the appropriate design methodology.

Chapter 3: AMS Behavioral Modeling
SPICE circuit simulation can be used on any transistor-level netlist to predict the analog behavior of blocks in your design, though at the expense of long run times. If you write a behavioral model of an analog block instead, it can simulate orders of magnitude faster than SPICE. This chapter has plenty of examples of how to start writing AMS behavioral models (a minimal sketch of one follows the list):

  • Programmable Gain Amplifier (Verilog-AMS)
  • Analog PGA (Verilog-A)
  • Real PGA Model
  • Digital PGA Model
  • Operational Amplifier (Verilog-A)
  • Digital to Analog Converter (Verilog-AMS)
  • Low-pass filter (Verilog-AMS)
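To give a flavor of these models, here is a minimal Verilog-AMS sketch of a programmable gain amplifier. This is my own illustration rather than the book’s code; the port names and gain mapping are invented:

    // Minimal Verilog-AMS PGA sketch: a digital code selects an analog gain.
    `include "disciplines.vams"

    module pga (in, out, gain_sel);
      input  in;
      output out;
      input  [1:0] gain_sel;     // digital gain-select code
      electrical in, out;

      real gain = 1.0;

      // Map the 2-bit select code onto a gain value in the digital domain.
      always @(gain_sel)
        case (gain_sel)
          2'b00:   gain = 1.0;
          2'b01:   gain = 2.0;
          2'b10:   gain = 4.0;
          default: gain = 8.0;
        endcase

      // Ideal amplification; transition() smooths gain steps to help the
      // analog solver converge.
      analog V(out) <+ transition(gain, 0.0, 1n) * V(in);
    endmodule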

The concept of Real Number Modeling (RNM) is introduced as a method for representing analog voltages as a time-varying sequence of real values.
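As a hypothetical wreal fragment (mine, not the book’s) shows, the “analog” value is just a real number on a digital net, so the event-driven engine simulates it at digital speed:

    // Real-number model of a clipping buffer; no analog solver involved.
    `include "disciplines.vams"

    module rnm_buffer (in, out);
      input  in;
      output out;
      wreal  in, out;

      // Clip to an assumed 0 V to 1.2 V range each time the input changes.
      assign out = (in > 1.2) ? 1.2 : ((in < 0.0) ? 0.0 : in);
    endmodule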

Chapter 4: Mixed-Signal Verification Methodology
At 72 pages this is the longest chapter in the book, reflecting that verification consumes more development time and engineering effort than AMS design does, just as in digital designs. Simulation is the primary tool used to verify an AMS design, unlike digital, where you also have formal methods and Static Timing Analysis.

Assertions are being used in AMS tool flows to capture design intent and report violations. The UVM-MS (Universal Verification Methodology – Mixed-Signal) approach offers a direction for metric-driven verification. Examples using PSL (Property Specification Language) with Verilog-AMS show how analog events can be coded for assertion clocking (a minimal sketch of the idea follows).
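As a sketch of the analog-event idea (my own Verilog-AMS monitor, not the book’s PSL code), a check can be clocked each time a supply crosses a threshold:

    // Assertion-style monitor clocked by an analog cross() event.
    `include "disciplines.vams"

    module supply_droop_check (vdd);
      input vdd;
      electrical vdd;
      parameter real vmin = 0.9;  // assumed minimum supply voltage (V)

      analog begin
        // Fires whenever V(vdd) crosses vmin in the falling direction.
        @(cross(V(vdd) - vmin, -1))
          $strobe("ASSERTION: VDD droop below %g V at t=%g s", vmin, $abstime);
      end
    endmodule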

SystemVerilog Assertions (SVA) are also demonstrated by a mixed-signal Sigma Delta ADC example. Mixed-signal coverage is defined along with mixed-signal metric-driven verification, so engineers can start to measure the effectiveness of their AMS verification.

The Common Power Format (CPF) syntax is used to show how power domains are modeled, along with the challenges of designing for low power in mixed-signal chips.

Chapter 5: A Practical Methodology for Verifying RF Chips
Jess Chen of Qualcomm wrote a math-filled chapter showing how to verify a direct-conversion wireless OFDM link. His SystemVerilog code implements a low-noise passband amplifier, IQ modulator, IQ demodulator and baseband amplifier.
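The chapter’s models are written in SystemVerilog and avoid simulating the carrier directly; purely to show the underlying math, here is a naive passband Verilog-AMS sketch of an IQ modulator (names and carrier frequency are my assumptions):

    // Textbook IQ (quadrature) modulator: s(t) = I(t)cos(wt) - Q(t)sin(wt).
    `include "disciplines.vams"
    `include "constants.vams"

    module iq_modulator (i_bb, q_bb, rf_out);
      input  i_bb, q_bb;          // baseband I and Q inputs
      output rf_out;              // passband RF output
      electrical i_bb, q_bb, rf_out;
      parameter real fc = 2.4e9;  // assumed carrier frequency (Hz)

      analog V(rf_out) <+ V(i_bb) * cos(`M_TWO_PI * fc * $abstime)
                        - V(q_bb) * sin(`M_TWO_PI * fc * $abstime);
    endmodule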

Simulation waveforms called “pretzels” show the behavior of the RF models.

Chapter 6: Event-Driven Time-Domain Behavioral Modeling of Phase-Locked Loops
Verilog-A and Verilog-AMS were used to model a PLL circuit by decomposing the design into several blocks:

  • Phase Detector
  • Charge Pump
  • Loop Filter
  • Voltage-Controlled Oscillator

Refinements are shown that add jitter and frequency slewing to the PLL. The benefit of the behavioral model is that you can simulate PLL lock time in seconds, instead of running SPICE and waiting up to a week for results. A minimal sketch of the VCO block follows.
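Here is a minimal Verilog-A-style VCO sketch of my own (parameter values invented) showing the core trick: integrate the instantaneous frequency to get phase:

    // VCO core: phase is the integral of the instantaneous frequency.
    `include "disciplines.vams"
    `include "constants.vams"

    module vco (ctrl, out);
      input  ctrl;
      output out;
      electrical ctrl, out;

      parameter real fc   = 1.0e9;   // assumed center frequency (Hz)
      parameter real kvco = 100.0e6; // assumed VCO gain (Hz/V)
      parameter real vamp = 0.5;     // output amplitude (V)

      real phase;

      analog begin
        // idtmod() integrates modulo 1 so the phase variable stays bounded.
        phase = `M_TWO_PI * idtmod(fc + kvco * V(ctrl), 0.0, 1.0, -0.5);
        V(out) <+ vamp * sin(phase);
      end
    endmodule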

Chapter 7: Verifying Digitally-Assisted Analog Designs
The concept of using digital circuits to tune an analog block is gaining adoption, and Art Schaldenbrand of Cadence provides several examples with calibration to show a new verification methodology:

  • VCO with calibration
  • Multi-bit Delta-Sigma ADC with dynamic element matching
  • Active-RC filter

Chapter 8: Mixed Signal Physical Implementation Methodology
Two IC layout flows are highlighted: custom design and constraint-driven.

The constraint-driven flow uses PCells (or PyCells if you prefer non-Cadence tools) to introduce more automation and allow process migration of analog blocks. The idea is to get from schematic to layout quickly, then feed the layout parasitics back into simulation, where performance can be measured and the transistors re-sized or the layout changed.

Physical verification is also touched on:

  • LVS (Layout versus schematic)
  • DRC (Design Rule Checking)
  • ERC (Electrical Rule Checking)
  • DFM (Design For Manufacturing)

Chapter 9: Electrically-Aware Design Methodologies for Advanced Process Nodes
Layout Dependent Effects (LDE) can dominate the performance of an AMS design at 28nm and below, so using an electrically-aware design (EAD) methodology can reduce iterations and minimize design time.

During IC layout the EAD flow takes into account several layout-dependent effects that impact variability and performance:

  • Shallow Trench Isolation (STI)
  • Well Proximity Effects (WPE)
  • Length of Diffusion (LOD)

Reliability concerns like Electromigration (EM) can be analyzed while doing IC layout, instead of waiting for final block assembly.

Chapter 10: IC Package Co-Design for Mixed Signal System
Taranjit Kukal of Cadence wrote about System in Package (SiP) as a way to integrate multiple dies and discrete components into a single package. Our smartphones are probably the most successful example of SiP technology in use today. Multiple ways of implementing SiP were shown:

  • Single die in package
  • Multi Chip Module (MCM)
  • RF module
  • 2.5D IC
  • 3D IC
  • 3D Package

Co-design of the dies and the package allows for trade-off analysis early in the system design process, optimized I/O locations, and Power Delivery Network (PDN) analysis.

Chapter 11: Data Management for Mixed-Signal Designs
Michael Henrie and Srinath Anantharaman of ClioSoft described how Design Management (DM) for ICs differs from Software Configuration Management (SCM) because of the broad spectrum of design data:

  • Specifications
  • HDL design files
  • Verification test benches
  • Timing and power analysis
  • Synthesis constraints
  • Place & Route
  • Parasitic Extraction
  • Standard Cells
  • Analog Design
  • PDK (Process Design Kits)
  • Custom Layout
  • GDS II
  • Packaging
  • Scripts & Customizations

An AMS design management system should enable:

  • Collaboration across team members
  • Version control
  • Release management
  • Variant development
  • Security and access control
  • Audit trail of changes
  • Integration with bug tracking
  • Design flow integration
  • Checkin / Checkout to control access to cells, blocks and modules
  • Quick response, low disk space use
  • Composite design objects
  • Shared workspaces
  • Visual change analysis
  • Hierarchical design
  • Reuse of PDKs

ClioSoft offers one of the few DM tools that work within the Cadence environment; it operates on the concept of a repository.

A DM methodology enables an AMS team to collaborate efficiently, avoid data loss, automate version control, analyze changes visually, and reuse IP and PDKs across the organization.

Summary
Even if you are a specialist in AMS design and verification, you will benefit from the big picture presented in this new book, Mixed-Signal Methodology Guide. The eleven chapters cover a wide range of relevant topics, and there are ample references for exploring each topic further. I enjoyed the numerous examples and code snippets provided; they are a great way to learn.

As a reward for those who read this blog to the very end I am offering a free copy of the book to the first two people who post a comment and request their copy.

Also Read

Book Review: Mixed-Signal Methodology Guide

Interview with Brien Anderson, CAD Engineer

Managing Differences with Schematic-based IC design


TSMC 28nm Update Q3 2012!
by Daniel Nenni on 08-28-2012 at 7:30 pm

Reports out of Taiwan (I’m in Hsinchu this week) have TSMC more than doubling 28nm wafer output in Q3 2012 due to yield improvements and capacity increases, while spending only $3.6B of the $8.5B forecasted CAPEX! Current estimates put TSMC 28nm capacity at 100,000 300mm wafers (+/- 10%) per month, versus the 25,000 wafers reported in the second quarter. Wow! Talk about a process ramp! As I mentioned before, 28nm may be the most profitable process node the fabless semiconductor industry will ever see!

Continue reading “TSMC 28nm Update Q3 2012!”


Assertion Synthesis
by Paul McLellan on 08-28-2012 at 2:46 pm

In June, Atrenta acquired NextOp, the leader in assertion synthesis. So what is it?

Depending on whom you ask, verification is a huge fraction, 60-80%, of the cost of an SoC design, so obviously any technology that can reduce the cost of verification has a major impact on the overall cost and schedule of a design. At a high level, verification is checking that the RTL corresponds to the specification. It follows that without an adequate specification, the debugging cycle will just drag out, and the design and verification teams will be unable to have confidence that there are no bugs that will cause the chip to fail.


Assertion-based verification helps teams using simulation, formal verification and emulation methodologies accelerate verification sign-off. The RTL and test specifications are enhanced to include assertions and functional coverage properties: statements that define the intended behavior of signals in the design. Assertion synthesis automates the manual process of creating adequate assertions and functional coverage properties, and so makes assertion-based verification more practical.

Modern verification includes a mixture of directed simulation, constrained-random simulation, formal verification and emulation. Directed simulation, where the output is explicitly tested for each feature, does not scale for large designs. Constrained random, where an external checker model of the design is used and the output of the RTL is checked against the checker, suffers from incompleteness of the checker (since it is so hard to write). Assertion-based verification enhances all these approaches, supplementing the existing checkers with assertions about the internal logic, thus injecting observability into the RTL code. Features that are hard to verify looking only at the outputs of the RTL are often easy to check using assertions about the internals.

But the challenge of assertion-based verification is creating enough assertions when they must be created manually. Generally one assertion is needed for every 10 to 100 lines of RTL code, but each assertion can take hours to create, debug and maintain. Assertion synthesis is a technology that automatically creates high-quality assertions to capture design constraints and specifications, and creates functional coverage properties that expose holes in the testbench.


Here’s how it works. Engineers feed the RTL and test information into BugScope, which automatically generates properties that are guaranteed to hold for the given stimulus set. The new coverage properties provide evidence of holes that the verification does not cover, and the assertions provide observability into each block’s targeted behavior; if an assertion is violated, it indicates a design bug. The assertions are then used with verification tools such as simulators, formal verification engines and emulators. Additional stimulus can be generated to patch the coverage holes. A hand-written sketch of the kinds of properties such a tool emits follows.
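To make that concrete, here is my illustration of the two kinds of properties such a tool emits, expressed as plain Verilog checkers with invented FIFO signal names (the actual tool generates SVA or PSL):

    module fifo_auto_checks (input clk, full, empty, wr_en);
      // Candidate invariant: if this ever fails, it points at a design bug.
      always @(posedge clk)
        if (full && empty)
          $display("ASSERT FAIL: full && empty at %0t", $time);

      // Candidate coverage property: if this never fires, the testbench has
      // a hole -- a write while the FIFO is full was never stimulated.
      always @(posedge clk)
        if (wr_en && full)
          $display("COVER HIT: write while full at %0t", $time);
    endmodule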

As these steps are repeated, the number of coverage properties (indicating holes in coverage) decreases until, at verification sign-off, they have been replaced entirely by assertions.

The Atrenta whitepaper on Assertion Synthesis, which includes more detail and a case study, is here.


Power, Signal, Thermal and EMI signoff
by Paul McLellan on 08-28-2012 at 1:55 pm

Increasingly the challenge with SoCs, especially for mobile, is not achieving high performance, but doing so in a power-efficient manner. Handheld devices running multiple apps need high-speed processors that consume extremely low levels of power in both operating and standby modes. In the server farm, the limit is often getting power into the datacenter and getting the heat out again, so even in the highest-performance part of the market, energy efficiency is paramount. In addition, electronic systems are now subject to regulatory and design requirements such as EMI emission guidelines and surviving ESD tests.

All of this, of course, in an environment where chip area remains important, since it drives both cost and form factor (especially for mobile, where devices are physically small and so components need to be small too). But there is more to cost than chip area: for example, minimizing the package cost and the amount of decoupling capacitance required.

Optimizing all of these conflicting requirements simultaneously requires an inclusive multi-physics approach, rather than designing and analyzing each domain (the chip, the package, the board) separately. This is what Apache calls chip-package-system, or CPS.

The biggest bang for the buck in reducing power is to reduce the supply voltage, since dynamic power scales with its square (see the worked example below). But supply voltages are now getting close to the threshold voltage of the transistors, which shrinks the noise margin needed to keep everything functional. In addition, in standby mode, we need to control the amount of sub-threshold leakage. This all puts a lot of pressure on keeping the power supply clean all the way from the regulators, through the PCB, into the package, and around the power grid on the chip. This is the power delivery network, or PDN. To ensure reliable power, the whole PDN needs to be optimized and validated together.
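The leverage is quadratic: dynamic power follows

$$P_{dyn} = \alpha\, C\, V_{DD}^{2}\, f$$

so, as a rough worked example of mine (not from Apache’s material), dropping V_DD from 1.0 V to 0.9 V at constant frequency scales dynamic power by (0.9)^2 = 0.81, roughly a 19% saving, before counting any reduction in leakage.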

For high-performance designs, thermal analysis is another important aspect of the design. For very high performance designs, such as servers, there may be heat sinks and fans that add to the overall cost of the design. In physically small, lower-performance systems such as a smartphone there are obviously no fans, but heat dissipation, and the thermal analysis that goes with it, remains a challenge. The transistors on the chip are affected by temperature, so it is not just a reliability issue but can be a performance issue too. Integrated thermal analysis of chip, package and system therefore becomes a necessary design step.

Apache has a good overview of all these issues in their white paper on Chip-Package-PCB Design Convergence. It is just one of the white papers collected on a microsite that brings together all of Apache’s CPS material, including the CPS User Group. The Apache CPS micro-site is here.


Apple’s Victory Will Re-Shuffle the Semi Industry
by Ed McKernan on 08-27-2012 at 2:00 pm

Apple’s legal victory over Samsung has been analyzed in thousands of articles and tweets since last Friday’s announcement, and surely more will follow. Most of the commentary has focused on the first-order impact on handset manufacturers. It is not entirely clear how it will all settle, but there are sure to be secondary ramifications for semiconductor suppliers as it becomes the top discussion in the executive suites at Microsoft, Intel, Qualcomm, Broadcom, nVidia and others. A great shift in strategy might take place that until now no one could have foreseen.

Apple’s legal war is in its early stages, and the victory over Samsung is likely to be followed by action against other Android vendors. Along with the legal war will come a FUD (Fear, Uncertainty and Doubt) campaign, as Apple executives send an army of well-trained lawyers to their competitors’ doorsteps with a message that it is time to square accounts with a long-term, stiff royalty contract or vacate the Android ecosystem. What are the alternatives for handset vendors?

Apple knows it can extract higher payments from the handset vendors than from Google because, in the end, Android is FREE and handsets are not. Furthermore, handset makers are working off margins that are not as tight as PCs’ but are trending that way over time. A carefully structured royalty strategy will put some out of business and consolidate the industry around a few players. Samsung was target #1 because they were the largest supplier of smartphones and the favorite of carriers looking for a low-cost alternative to selling Apple.

In the near term we should expect handset makers to turn their attention to smartphones based on a Microsoft O/S. Handset makers are in a quandary, though, because they don’t know if they will be skinned by Apple, or when the skinning will take place. Whether it is a hefty royalty agreement or a Microsoft O/S tax, the result in both cases will be a significant cost adder.

I look for Intel to step in and become a larger supplier of smartphone chips by being lower cost than nVidia, Mediatek, Broadcom and Qualcomm in 3G solutions. Qualcomm will still hold the high terrain with new 4G LTE solutions; Qualcomm wins with 4G LTE baseband no matter who ships. Intel will also align more closely with Microsoft this fall as they promote a high-performance story against Apple’s ARM-based A6 iPhone 5 and A5X-based iPads. Microsoft still has the huge corporate legacy platform to live off of, which it will use to try to maintain a presence in the tablet space. Look for Microsoft to begin using a razor/razor-blade model with tablets, where the hardware is FREE with a full load of O/S and Office. With high-end Ivy Bridge-based tablets, Microsoft and Intel will show a significant performance advantage over Apple iPads while reminding corporate CFOs that they need to tell the CIO that whatever ecosystem is bought, it must meet a 4-5 year ROI. Apple will realize it needs to build a much higher-performing processor for the tablet than for the iPhone; expect a split in its processor roadmap in 2013.

Intel and Microsoft’s re-found love affair will enable cost reductions for Apple’s competitors that lessen Apple’s royalty impact but do not eliminate it. Apple’s victory, in the end, is about disrupting the marketplace and causing chaos among handset suppliers and carriers. If it buys Apple six months of breathing room, the impact could be tremendous, as market share shifts could be dramatic following the introduction of the iPhone 5 in September.

One of the scenarios I had drawn up earlier this year now appears to be off the table. I expected that Apple would engage with Intel on a foundry agreement in order to outflank Samsung on the cost and performance front. Now Apple will have the freedom to go its own way with TSMC, and the question shifts to whether Samsung will break from its vertically integrated NIH semiconductor model and partner with Intel on x86-based smartphone and tablet solutions. This may seem far-fetched, but the Wintel model has always worked best when there are multiple hardware players fighting it out in the marketplace. Intel and Microsoft both have an interest in having Samsung survive, and Windows on x86 in smartphones and tablets offers a far broader range of price-performance than Windows on ARM. That advantage will grow over the next two years as Intel brings Atom to the front of the process line in 2014 at 14nm.

Full Disclosure: I am Long AAPL, INTC, QCOM and ALTR


A Brief History of FPGAs
by Daniel Nenni on 08-26-2012 at 7:30 pm

From the transistor to the integrated circuit to the ASIC, next come programmable logic devices on the road to the mainstream fabless semiconductor industry. PLDs started in the early 1970s from the likes of Motorola, Texas Instruments, and IBM, but it wasn’t until Xilinx brought us the field programmable gate array (FPGA) in the mid-1980s that PLDs crossed paths with the ASIC world. Today FPGA design starts far outnumber ASIC design starts, and that number is climbing every year. Xilinx also brought us one of the first commercial foundry relationships, which would transform the semiconductor industry into what it is today: fabless.

It is a familiar Silicon Valley story: Xilinx co-founder Ross Freeman wanted to create a blank semiconductor device that could be quickly programmed based on an application’s requirements. Even back then semiconductors cost millions of dollars to design and manufacture, so FPGAs were not only a cost savings, they also dramatically reduced time to market for electronic products. Fortunately for us, Ross’s employer Zilog did not share this vision, and Xilinx was created in 1984.

To minimize start-up costs and risk, the Xilinx founders decided to leverage personal relationships with Japan-based Seiko Epson’s semiconductor division. Seiko started manufacturing the first FPGAs for Xilinx in 1985 using a very mature 1.2µm process. The first Xilinx FPGA was a 1,000 ASIC-gate equivalent running at 18MHz. Xilinx also pioneered second sourcing for the fabless semiconductor market segments, using multiple IDMs for manufacturing to keep costs and risks in check. One of the more notable second sources was AMD, whose CEO at the time, Jerry Sanders, made the infamous statement “Real men have fabs!” AMD is now fabless, of course.

In 1995 Xilinx moved production to pure-play foundry UMC, which was the start of a very long and very intimate relationship. Xilinx and UMC pioneered what is now called the simulated IDM relationship, where the fabless company has full access to the process technology and is an active development partner. I remember visiting UMC and seeing Xilinx employees everywhere; in fact, one floor of UMC’s corporate headquarters was reserved for Xilinx employees. The relationship ended at 40nm, as Xilinx moved to TSMC for 28nm in 2010. Rumor has it the relationship ended as a result of 65nm yield problems and delays at 40nm, which allowed rival Altera, which works exclusively with TSMC, to gain significant market share.

Early Xilinx customers were computer manufacturers Apple, H-P, IBM, and Sun Microsystems (now Oracle). Today “Xilinx is the world’s leading provider of All Programmable FPGAs, SoCs and 3D ICs. These industry-leading devices are coupled with a next-generation design environment and IP to serve a broad range of customer needs, from programmable logic to programmable systems integration.” Xilinx serves a broad range of end markets including: Aerospace/Defense, Automotive, Broadcast, Consumer, High Performance Computing, Industrial/Medical, Wired, and Wireless.

Currently Xilinx has about 3,000 employees, 20,000 customers, 2,500 patents, and more than 50% share ($2.2B) of the $4B programmable logic market. The other notable programmable companies are Altera ($1.8B), Actel (now part of Microsemi), and Lattice ($300M), all of which are fabless. Newcomers to the FPGA market include Achronix and Tabula, both of which will be among Intel’s first fab customers at 22nm.

Also Read

A Brief History of Semiconductors
A Brief History of ASICs
A Brief History of Programmable Devices
A Brief History of the Fabless Semiconductor Industry
A Brief History of TSMC
A Brief History of EDA
A Brief History of Semiconductor IP
A Brief History of SoCs


SpringSoft Laker vs Tanner EDA L-Edit
by Daniel Nenni on 08-26-2012 at 7:00 pm

Daniel Payne recently blogged about some of the integration challenges facing Synopsys with its impending acquisition of SpringSoft. On my way back from San Diego last week I stopped by Tanner EDA to discuss an alternative tool flow for users who find themselves concerned about the Laker Custom Layout roadmap.

Design of the analog portion of a mixed-signal SoC is routinely cited as a bottleneck in getting SoC products to market. This is primarily attributed to the iterative and highly artistic nature of analog design, and to the failure of analog design tools to keep pace with the productivity tools for digital circuit design. Fortunately, there is a well-known, time-proven tool for analog and mixed-signal design that offers compelling features and functionality. What’s more, with several upcoming enhancements this tool is well positioned to be a top choice for leading SoC designers who don’t have time to wait and see how the Synopsys Custom Designer / Laker Custom Layout integration plays out.

L-Edit from Tanner EDA has been around since 1987. It was the seminal EDA software tool offered by Tanner Research. John Tanner, a Caltech grad student and Carver Mead advisee, originally marketed L-Edit as “The Chip Kit,” a GUI-driven, PC-based layout editor. Several of the core principles Dr. Tanner embraced when he first started the company continue to be cornerstones of Tanner EDA twenty-five years later:

Relentless pursuit of Productivity for Design Enablement
The tool suite from Tanner can be installed and configured in minutes. Users consistently cite their ability to go from installing the tools to having a qualified design for test chips in weeks. And we’re not talking about some vintage PLL or ADC designed at 350nm: Tanner has L-Edit users actively working at 28nm and 22nm on advanced technologies and IP for high-speed I/O and flash memory.

In addition to improving L-Edit organically, Tanner has embraced opportunities to add functionality and capability with partners. L-Edit added a powerful advanced device generator, HiPer DevGen, back in 2010. It automatically recognizes and generates common structures that are typically tedious and time-consuming, such as differential pairs, current mirrors, and resistor dividers. The core functionality was built out by an IC design services firm and is now an add-on for L-Edit. More recently, Tanner has announced offerings that couple their tools with offerings from BDA, Aldec and Incentia. This is a great sign of a company that knows how to “stick to their knitting” while also collaborating effectively to continue to meet their users’ needs.

Tanner L-Edit v16 (currently in beta, due out by year-end) offers users the ability to work in OpenAccess, reading and writing design elements across workgroups and across tool platforms. Tanner EDA CTO Mass Sivilotti told me: “Our migration to OpenAccess is the biggest single capability the company has taken on since launching L-Edit. This is a really big deal. We’ve been fortunate to have a strong community of beta testers and early adopters that have helped us to ensure v16 will deliver unprecedented interoperability and capability.”

Collaboration with leading foundries
Certified PDKs have been a key focus area for Tanner, and it shows. With foundry-certified flows for Dongbu HiTek, TowerJazz and X-Fab, and a robust roadmap, it’s clear that this is a priority. Greg Lebsack commented: “Historically, many of Tanner EDA’s users had captive foundries or other means to maintain design kits. Over the past several years, we’ve seen an increasing interest by both the foundries and our users to offer certified PDKs and reference flows. It just makes sense from a productivity and design enablement standpoint.”

Maniacal focus on customer service and support
Tier II and Tier III EDA customers (companies with more modest EDA budgets than a Qualcomm or Samsung) often cite lackluster customer service from the “big three” EDA firms. This is understandable, as much of the time, attention and resources spent by the big three EDA companies are directed towards acquiring and keeping large customers. Tanner EDA has many users in the Tier I accounts, but those users tend to be in smaller research groups or advanced process design teams. Tanner’s sweet spot has been Tier II and Tier III customers, and they’ve done a great job of serving that user base. One of the keys to this, John Tanner believes, is having a core of the support and development teams co-located in Monrovia. “It makes a tremendous difference,” says Dr. Tanner, “when an FAE can literally walk down the hall and grab a development engineer to join in on a customer call.”

Features and functions that are “just what you want” – not “more than you need”

John Zuk, VP of Marketing and Business Strategy, explained it to me this way: “Back in 1987, the company touted that L-Edit was built by VLSI designers for VLSI designers. Inherent in that practice has been a very practical and disciplined approach to our product development. Thanks to very tight coupling with our user base, we’ve maintained a keen understanding of what’s really necessary for designers and engineers to continue to drive innovation and productivity. We make sure the tools have the essential features, and we don’t load them up with capabilities that can be a distraction.”


While Tanner may have kept a humble presence over the past quarter century, the quality of their tools and their company is proven by Tanner’s impressive customer set. A look at the selected customer stories on their website and the quotes in several of their datasheets reveals some compelling endorsements. From FLIR in image sensors, to Analog Bits in IP, to Knowles in MEMS microphones, to Torex in power management, Tanner maintains a very loyal user base.

The $100M question is: Will Tanner EDA pick up where SpringSoft Laker left off?