TSMC 28nm Update Q3 2012!
by Daniel Nenni on 08-28-2012 at 7:30 pm

Reports out of Taiwan (I’m in Hsinchu this week) have TSMC more than doubling 28nm wafer output in Q3 2012 due to yield improvements and capacity increases while spending only $3.6B of the $8.5B forecasted CAPEX! Current estimates put TSMC 28nm capacity at 100,000 300mm wafers (+/- 10%) per month versus the 25,000 wafers reported in the second quarter. Wow! Talk about a process ramp! As I mentioned before, 28nm may be the most profitable process node the fabless semiconductor industry will ever see!



Assertion Synthesis
by Paul McLellan on 08-28-2012 at 2:46 pm

In June, Atrenta acquired NextOp, the leader in assertion synthesis. So what is it?

Depending on who you ask, verification is a huge fraction, 60-80%, of the cost of an SoC design, so any technology that reduces the cost of verification has a major impact on the overall cost and schedule of a design. At a high level, verification is checking that the RTL corresponds to the specification. It follows that without an adequate specification the debugging cycle will just drag out, and the design and verification teams will be unable to have confidence that there are no bugs that will cause the chip to fail.


Assertion-based verification helps teams using simulation, formal verification and emulation methodologies accelerate verification sign-off. The RTL and test specifications are enhanced to include assertions and functional coverage properties: statements that define the intended behavior of signals in the design. Assertion synthesis automates the manual process of creating adequate assertions and functional coverage properties, making assertion-based verification more practical.

Modern verification includes a mixture of directed simulation, constrained random simulation, formal verification and emulation. Directed simulation, where the output is explicitly tested for each feature, does not scale for large designs. Constrained random simulation, where an external checker model of the design is used and the output of the RTL is compared against it, suffers from incompleteness of the checker (since it is so hard to write). Assertion-based verification enhances all these approaches, supplementing the existing checkers with assertions about the internal logic. Assertions thus inject observability into the RTL code. Features that are hard to verify by looking only at the outputs of the RTL are often easy to check using assertions about the internals.

But the challenge of assertion-based verification is creating enough assertions when they must be created manually. Generally one assertion is needed for every 10 to 100 lines of RTL code but it can take hours to create, debug and maintain each assertion. Assertion synthesis is technology that automatically creates high quality assertions to capture design constraints and specifications, and creates functional coverage properties that expose holes in the testbench.


Here’s how it works. Engineers use the RTL and test information as input to BugScope, which automatically generates properties that are guaranteed to hold for the given stimulus set. The new coverage properties provide evidence of holes that the verification does not cover, and the assertions provide observability into each block’s targeted behavior and, if triggered, indicate design bugs. The assertions are then used with verification tools such as simulators, formal verification engines and emulators. Additional stimulus can be generated to patch coverage holes.
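
For intuition, here is a minimal, purely illustrative Python sketch of the underlying idea: mine candidate properties from a simulation trace and keep only those the stimulus never falsifies. This is not BugScope or Atrenta’s algorithm; the signal names, the trace format and the restriction to simple one-cycle implications are assumptions made for the example.

```python
from itertools import combinations

def mine_candidate_assertions(trace):
    """trace: list of dicts mapping signal name -> 0/1 value, one entry per cycle."""
    signals = list(trace[0].keys())
    # Start by assuming every ordered pairwise implication a -> b holds,
    # then discard any candidate that some cycle in the trace falsifies.
    candidates = set()
    for a, b in combinations(signals, 2):
        candidates.add((a, b))
        candidates.add((b, a))
    for cycle in trace:
        for a, b in list(candidates):
            if cycle[a] == 1 and cycle[b] != 1:  # a=1 should imply b=1
                candidates.discard((a, b))
    # Whatever survives held for the entire stimulus set; a tool (or engineer)
    # then promotes it to an assertion, or flags it as a coverage hole if the
    # stimulus never exercised the antecedent.
    return [f"assert property ({a} |-> {b});" for a, b in sorted(candidates)]

# Tiny example: in this trace 'req' high always coincides with 'gnt' high.
trace = [
    {"req": 0, "gnt": 0, "busy": 1},
    {"req": 1, "gnt": 1, "busy": 0},
    {"req": 0, "gnt": 1, "busy": 0},
]
print(mine_candidate_assertions(trace))  # -> ['assert property (req |-> gnt);']
```

Real assertion synthesis works on far richer temporal properties and on RTL rather than toy traces, but the flow is the same: generate candidates, filter them against the stimulus, and hand the survivors to the engineer for review.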

As these steps are repeated, the number of coverage properties (indicating holes in coverage) will decrease and, at verification sign-off, will have been replaced exclusively by assertions.

The Atrenta whitepaper on Assertion Synthesis, which includes more detail and a case-study, is here.


Power, Signal, Thermal and EMI signoff
by Paul McLellan on 08-28-2012 at 1:55 pm

Increasingly the challenge with SoCs, especially for mobile, is not getting the performance high enough but doing so in a power-efficient manner. Handheld devices running multiple apps need high-speed processors that consume extremely low levels of power both in operating and standby modes. In the server farm, the limit is often getting power into the datacenter and getting the heat out again, and so even in the highest performance part of the market, energy efficiency is paramount. In addition, electronic systems are now subject to regulatory and design requirements such as EMI emission guidelines and surviving ESD tests.

All of this, of course, in an environment where chip area remains important since it drives both cost and form-factor (especially for mobile, where the devices are physically small and so the components need to be small too). But there is more to cost than chip area: the package and the amount of decoupling capacitance, for example, also need to be kept to a minimum.

Optimizing all these conflicting requirements simultaneously requires a more inclusive multi-physics approach, rather than domain-specific design and analysis of the chip, the package and the board in isolation. This is what Apache calls chip-package-system, or CPS.

The biggest bang for the buck in terms of reducing power is to reduce the supply voltage. But supply voltages are now getting close to the threshold voltage of the transistors, which means that the noise margin available to keep everything functional shrinks. In addition, in standby mode, we need to control the amount of sub-threshold leakage. This all puts a lot of pressure on keeping the power supply clean all the way from the regulators, through the PCB, in through the package and around the power grid on the chip. This is the power delivery network, or PDN. To ensure reliable power, the whole PDN needs to be optimized and validated together.
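
As a back-of-the-envelope illustration of why that shrinking headroom matters, the sketch below checks a static IR-drop number against a supply-dependent budget. The voltages, the 10% budget rule and the three-segment PDN model are all hypothetical; a real flow uses dynamic, vector-based analysis across the full chip-package-board model.

```python
def ir_drop_budget(vdd, vth, margin_fraction=0.1):
    """Allow the PDN to consume only a fraction of the VDD-to-VTH headroom."""
    return (vdd - vth) * margin_fraction

def pdn_drop(current_a, resistances_ohm):
    """Total static IR drop through series PDN segments at a given current."""
    return current_a * sum(resistances_ohm)

VTH = 0.35  # hypothetical transistor threshold voltage, volts
# hypothetical series resistances: PCB plane, package, on-die grid (ohms)
SEGMENTS = [0.005, 0.010, 0.015]

for vdd in (1.2, 1.0, 0.9):
    budget = ir_drop_budget(vdd, VTH)
    drop = pdn_drop(current_a=2.0, resistances_ohm=SEGMENTS)
    status = "OK" if drop <= budget else "VIOLATION"
    print(f"VDD={vdd:.2f}V  budget={budget*1e3:.0f}mV  drop={drop*1e3:.0f}mV  {status}")
```

The same 60mV of drop that is comfortably within budget at 1.2V becomes a violation at 0.9V, which is why the whole PDN, not just the on-die grid, has to be analyzed together as supplies scale down.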

For high performance designs, thermal analysis is another important aspect of the design. For very high performance designs, such as servers, there may be heat sinks and fans that affect the overall cost of the design. For lower performance designs where the system is physically small, such as a smartphone, there are obviously no fans, but heat dissipation and the thermal analysis that goes along with it are still a challenge. The transistors on the chip are affected by temperature, so it is not just a reliability issue but a performance issue too. Integrated thermal analysis of chip, package and system therefore becomes a necessary design step.

Apache has a good overview of all these issues in their white paper on Chip-Package-PCB Design Convergence. This is just one of the white papers that have been pulled together into a microsite that brings together all Apache’s material to do with CPS, including the CPS User Group. The Apache CPS micro-site is here.


Apple’s Victory Will Re-Shuffle the Semi Industry
by Ed McKernan on 08-27-2012 at 2:00 pm

Apple’s legal victory over Samsung has been analyzed in thousands of articles and tweets since last Friday’s announcement, and surely more will follow. Most of the commentary has focused on the first-order impact on handset manufacturers. It is not entirely clear how it will all settle, but there are sure to be secondary ramifications for semiconductor suppliers as it becomes the top discussion in the executive suites at Microsoft, Intel, Qualcomm, Broadcom, nVidia and others. A great shift in strategy that until now no one could have foreseen might take place.

Apple’s legal war is in its early stages, and the victory over Samsung is likely to be followed by action against other Android vendors. Along with the legal war will come a FUD (Fear, Uncertainty and Doubt) campaign, as Apple executives send an army of well-trained lawyers to their competitors’ doorsteps with a message that it is time to square accounts with a long-term, stiff royalty contract or vacate the Android ecosystem. What are the alternatives for handset vendors?

Apple knows it can extract higher payments from the handset vendors than from Google, because in the end Android is free and handsets are not. Furthermore, handset makers are working off margins that are not as tight as those of PC makers but are trending that way over time. A carefully structured royalty strategy will put some out of business and consolidate the industry around a few players. Samsung was target #1 because it was the largest supplier of smartphones and the favorite of carriers looking for a low-cost alternative to selling Apple.

In the near term we should expect handset makers to turn their attention to Microsoft OS-based smartphones. Handset makers are in a quandary, though, because they don’t know whether they will be skinned by Apple, or when the skinning will take place. Whether it is a hefty royalty agreement or a Microsoft OS tax, the result in either case will be a significant cost adder.

I look for Intel to step in and become a larger supplier of smartphone chips by being lower cost than nVidia, Mediatek, Broadcom and Qualcomm in 3G solutions. Qualcomm will still hold the high terrain with new 4G LTE solutions; Qualcomm wins with 4G LTE baseband no matter who ships. Intel will also align more closely with Microsoft this fall as they promote a high-performance story against Apple’s ARM-based A6 iPhone 5 and the A5X-based iPads. Microsoft still has the huge corporate legacy platform to live off, which it will try to use to maintain a presence in the tablet space. Look for Microsoft to begin using a razor and blades model with tablets, where the hardware is free with a full load of OS and Office. With high-end Ivy Bridge based tablets, Microsoft and Intel will show a significant performance advantage over Apple iPads while reminding corporate CFOs that they need to tell the CIO that whatever ecosystem is bought, it must meet a 4-5 year ROI. Apple will realize it needs to build a much higher performing processor for the tablet than for the iPhone. Expect a split in its processor roadmap in 2013.

Intel and Microsoft’s re-found love affair will enable cost reductions for Apple’s competitors that lessen Apple’s royalty impact but do not eliminate it. Apple’s victory, in the end, is about disrupting the marketplace and causing chaos among handset suppliers and carriers. If it buys Apple six months of breathing room, the impact could be tremendous, as market share shifts could be dramatic following the introduction of the iPhone 5 in September.

One of the scenarios that I had drawn up earlier this year now appears to be off the table. I expected that Apple would engage with Intel on a foundry agreement in order to outflank Samsung on the cost and performance front. Now Apple will have the freedom to go its own way with TSMC, and the question shifts to whether Samsung will break from its vertically integrated, NIH semiconductor model and partner with Intel on x86-based smartphone and tablet solutions. This may seem far-fetched, but the Wintel model has always worked best when there are multiple hardware players fighting it out in the marketplace. Intel and Microsoft both have an interest in having Samsung survive, and Windows on x86 in smartphones and tablets offers a far broader range of price performance than Windows on ARM. This advantage will grow over the next two years as Intel brings Atom to the front of the process line in 2014 at 14nm.

Full Disclosure: I am Long AAPL, INTC, QCOM and ALTR


A Brief History of FPGAs
by Daniel Nenni on 08-26-2012 at 7:30 pm

From the transistor to the integrated circuit to the ASIC, next come programmable logic devices on the road to the mainstream fabless semiconductor industry. PLDs started in the early 1970s with the likes of Motorola, Texas Instruments, and IBM, but it wasn’t until Xilinx brought us the field programmable gate array (FPGA) in the late 1980s that PLDs crossed paths with the ASIC world. Today FPGA design starts far outnumber ASIC design starts, and that number is climbing every year. Xilinx also brought us one of the first commercial foundry relationships that would transform the semiconductor industry into what it is today: fabless.

It is a familiar Silicon Valley story: Xilinx co-founder Ross Freeman wanted to create a blank semiconductor device that could be quickly programmed based on an application’s requirements. Even back then semiconductors cost millions of dollars to design and manufacture, so this was not only a cost savings; FPGAs also dramatically reduced time to market for electronic products. Fortunately for us, Ross’s employer Zilog did not share this vision, and Xilinx was created in 1984.

To minimize start-up costs and risk, the Xilinx founders decided to leverage personal relationships with the Japan-based Seiko Epson Semiconductor Division. Seiko started manufacturing the first FPGAs for Xilinx in 1985 using a very mature 1.2µm process. The first Xilinx FPGA was a 1,000 ASIC-gate equivalent running at 18MHz. Xilinx also pioneered second sourcing for the fabless semiconductor market segment, using multiple IDMs for manufacturing to keep costs and risks in check. One of the more notable second sources was AMD, whose CEO at the time, Jerry Sanders, made the infamous statement “Real men have fabs!” AMD is now fabless, of course.

In 1995 Xilinx moved production to pure-play foundry UMC, which was the start of a very long and very intimate relationship. Xilinx and UMC pioneered what is now called the simulated IDM relationship, where the fabless company has full access to the process technology and is an active development partner. I remember visiting UMC and seeing Xilinx employees everywhere. In fact, one of the floors of UMC’s corporate headquarters was reserved for Xilinx employees. The relationship ended at 40nm as Xilinx moved to TSMC for 28nm in 2010. Rumors had the relationship ending as a result of 65nm yield problems and delays at 40nm, which allowed rival Altera, which works exclusively with TSMC, to gain significant market share.

Early Xilinx customers were computer manufacturers Apple, HP, IBM, and Sun Microsystems (now Oracle). Today “Xilinx is the world’s leading provider of All Programmable FPGAs, SoCs and 3D ICs. These industry-leading devices are coupled with a next-generation design environment and IP to serve a broad range of customer needs, from programmable logic to programmable systems integration.” Xilinx serves a broad range of end markets including: Aerospace/Defense, Automotive, Broadcast, Consumer, High Performance Computing, Industrial/Medical, Wired, and Wireless.

Currently Xilinx has about 3,000 employees, 20,000 customers, 2,500 patents, and more than 50% share ($2.2B) of the $4B programmable market. The other notable programmable companies are Altera ($1.8B), Actel (now part of Microsemi), and Lattice ($300M), all of which are fabless. Newcomers to the FPGA market include Achronix and Tabula, both of which will be amongst Intel’s first fab customers at 22nm.

A Brief History of Semiconductors
A Brief History of ASICs
A Brief History of Programmable Devices
A Brief History of the Fabless Semiconductor Industry
A Brief History of TSMC
A Brief History of EDA
A Brief History of Semiconductor IP
A Brief History of SoCs


SpringSoft Laker vs Tanner EDA L-Edit
by Daniel Nenni on 08-26-2012 at 7:00 pm

Daniel Payne recently blogged some of the integration challenges facing Synopsys with their impending acquisition of SpringSoft. On my way back from San Diego last week I stopped by Tanner EDA to discuss an alternative tool flow for users who find themselves concerned about the Laker Custom Layout road map.

Design of the analog portion of a mixed-signal SoC is routinely cited as a bottleneck for getting SoC products to market. This is primarily attributed to the iterative and highly artistic nature of analog design, and to the failure of analog design tools to keep pace with the productivity tools available for digital circuit design. Fortunately, there is a well-known, time-proven tool for analog and mixed-signal design that offers compelling features and functionality. What’s more, with several upcoming enhancements, this tool is very well suited to be a top choice for leading SoC designers who don’t have time to wait and see how the Synopsys Custom Designer / Laker Custom Layout integration is going to play out.

L-Edit from Tanner EDA has been around since 1987. It was the seminal EDA software tool offered by Tanner Research. John Tanner, a Caltech grad student and Carver Mead advisee, originally marketed L-Edit as “The Chip Kit,” a GUI-driven, PC-based layout editor. Several of the core principles Dr. Tanner embraced when he first started the company continue to be cornerstones of Tanner EDA twenty-five years later:

Relentless pursuit of Productivity for Design Enablement
The tool suite from Tanner can be installed and configured in minutes. Users consistently cite their ability to go from installing the tools to having a qualified design for test chips in weeks. And we’re not talking about some vintage PLL or ADC designed at 350 nanometers: Tanner has L-Edit users actively working at 28nm and 22nm on advanced technologies and IP for high-speed I/O and flash memory.

In addition to improving L-Edit organically, Tanner has embraced opportunities to add functionality and capability with partners. L-Edit added a powerful advanced device generator, HiPer DevGen, back in 2010. It automatically recognizes and generates common structures that are typically tedious and time-consuming, such as differential pairs, current mirrors, and resistor dividers. The core functionality was built out by an IC design services firm and is now an add-on for L-Edit. More recently, Tanner has announced offerings that couple their tools with offerings from BDA, Aldec and Incentia. This is a great sign of a company that knows how to “stick to their knitting” while also collaborating effectively to continue to meet their users’ needs.

Tanner L-Edit v16 (currently in Beta – due out by year-end) offers users the ability to work in Open Access; reading and writing design elements across workgroups and across tool platforms. Tanner EDA CTO Mass Sivilotti told me “Our migration to Open Access is the biggest single capability the company has taken on since launching L-Edit. This is a really big deal. We’ve been fortunate to have a strong community of beta testers and early adopters that have helped us to ensure v16 will deliver unprecedented interoperability and capability.”

Collaboration with leading foundries
Working with leading foundries on certified PDKs has been a key focus area for Tanner, and it shows. With foundry-certified flows for Dongbu HiTek, TowerJazz and X-Fab and a robust roadmap, it’s clear that this is a priority. Greg Lebsack commented: “Historically, many of Tanner EDA’s users had captive foundries or other means to maintain design kits. Over the past several years, we’ve seen an increasing interest by both the foundries and our users to offer certified PDKs and reference flows. It just makes sense from a productivity and design enablement standpoint.”

Maniacal focus on customer service and support
Tier II and Tier III EDA customers (companies with more modest EDA budgets than a Qualcomm or Samsung) often cite lackluster customer service from the “big three” EDA firms. This is understandable, as much of the time, attention and resources spent by the big three EDA companies are directed towards acquiring and keeping large customers. Tanner EDA has many users in Tier I accounts, but those users tend to be in smaller research groups or advanced process design teams. Tanner’s sweet spot has been Tier II and Tier III customers, and they’ve done a great job of serving that user base. One of the keys, John Tanner says, is having a core of the support and development teams co-located in Monrovia. “It makes a tremendous difference,” says Dr. Tanner, “when an FAE can literally walk down the hall and grab a development engineer to join in on a customer call.”

Features and functions that are “just what you want” – not “more than you need”

John Zuk, VP of Marketing and Business Strategy explained it to me this way: “Back in 1987, the company touted that L-Edit was built by VLSI designers for VLSI designers. Inherent in that practice has been the embracing of a very practical and disciplined approach to our product development. Thanks to very tight coupling with our user-base, we’ve maintained a keen understanding of what’s really necessary for designers and engineers to continue to drive innovation and productivity. We make sure the tools have essential features and we don’t load them up with capabilities that can be a distraction.”


While Tanner may have had a humble presence over this past quarter century, the quality of their tools and their company are proven by Tanner’s impressive customer set. A look at the selected customer stories on their website and the quotes in several of their datasheets reveal some compelling endorsements. From FLIR in image sensors, to Analog Bits in IP, to Knowles for MEMS microphones, to Torex for Power Management, Tanner maintains a very loyal user base.

The $100M question is: Will Tanner EDA pick up where SpringSoft Laker left off?


IP Wanna Go Fast, Core Wanna Not Rollover
by Don Dingee on 08-23-2012 at 8:15 pm

At a dinner table a couple years ago, someone quietly shared their biggest worry in EDA. Not 2GHz, or quad core. Not 20nm, or 450mm. Not power, or timing closure. Call it The Rollover. It’s turned out to be the right worry.

The best brains spent inordinate hours designing and verifying a big, hairy, heavy-breathing processor core to do its thing at 2GHz and beyond. They spent person-months getting memory interfaces tuned to keep the thing fed at those speeds. They spent more time getting four cores to work together. They lived with a major customer or two making sure the core worked in their configuration. They sweated out the process details with the fab partners, making sure the handoff went right and working parts came out the other end. All this was successful.

So, the core has been turned loose on the world at large, for anyone to design with. On some corner of the globe, somebody who has spent a lot of money on this aforementioned processor core grabs an IP block, glues it and other IP functions in, and the whole thing starts to roll over and bounces down the track to a crunching halt.

The EDA tools can’t see what’s going on, although they point in the general direction of the smoke. The IP function supplier says that block works with other designs with that core.

The conclusion is it’s gotta be the core, at least until it’s proven innocent. Now, the hours can really start piling up for the design team and the core vendor, and more often than not it will turn out the problem isn’t in the processor core, but in the complexity of interconnects and a nuance here or there.

Been here? I wanna go fast, and so do you, and the IP suppliers do too, and there are open interfaces like MIPI to help. The harsh reality is that IP blocks, while individually verified functionally, haven’t seen everything that can be thrown at them from all possible combinations of other IP blocks at the most inconvenient times. Enter the EDA community and the art of verification IP in preventing The Rollover.

Problematic combinations aren’t evident until a team starts assembling the exact IP for a specific design, and the idea of “verified” is a moving target. Synopsys has launched VIP-Central.org to help with this reality. Protocols have gotten complex, speeds have increased, and verification methodologies are just emerging to tackle the classes of problems these very fast cores and IP are presenting.

Janick Bergeron put it really well when he said that Synopsys is looking to share learning from successful engagements. No IP supplier, core or peripheral, wants to be the source of an issue, but some issues are fixed more easily than others. Understanding errata and a workaround can often be more expedient than waiting for a new version with a solution from the IP block supplier, and a community can help discuss and share that information quickly.

Recorded earlier this month, a new Synopsys webinar shows an example of how this can work: verifying a mobile platform with images run from the camera to a display.

The idea of verification standards in EDA wasn’t so that functional IP suppliers could say they’ve checked their block; it was to give designers a way to check that block within a new design alongside other blocks of undetermined interoperability. “Verified” IP catalogs are a starting point, but the real issues have only started to surface, and verification IP can help avoid your next design being a scene in The Rollover.


A Brief History of ASIC, part II
by Paul McLellan on 08-23-2012 at 8:00 pm

All semiconductor companies were caught up in ASIC in some way or another because of the basic economics. Semiconductor technology allowed medium sized designs to be done, and medium sized designs were pretty much all different. The technology didn’t yet allow whole systems to be put on a single chip. So semiconductor companies couldn’t survive just supplying basic building block chips any more since these were largely being superseded by ASIC chips. But they couldn’t build whole systems like a PC, a TV or a CD player since the semiconductor technology would not allow that level of integration. So most semiconductor companies, especially the Japanese and even Intel, started ASIC business lines and the market became very competitive.

ASIC turned out to be a difficult business to make money in. The system company owned the specialized knowledge of what was in the chip, so the semiconductor company could not price to value. Plus, the system company knew the size of the chip and thus roughly what it should have cost with a reasonable markup. The money turned out to be in the largest, most difficult designs. Most ASIC companies could not execute a design like that successfully, so that market was a lot less competitive. The specialized ASIC companies that could, primarily VLSI Technology and LSI Logic again, could charge premium pricing based on their track record of bringing in the most challenging designs on schedule. If you are building a skyscraper you don’t go with a company that has only built houses.

As a result of this, and of getting a better understanding of just how unprofitable low-volume designs were, everyone realized that there were fewer than a hundred designs a year being done that were really worth winning. It became a race for those hundred sockets.

Semiconductor technology continued to get more powerful and it became possible to build whole systems (or large parts of them) on a single integrated circuit. These were known as systems-on-chip or SoCs. The ASIC companies all started to build whole systems such as chipsets for PCs or for cell-phones alongside their ASIC businesses which were more focused on just those hundred designs that were worth winning. So all semiconductor companies started to look the same, with lines of standard products and, often, an ASIC product line too.

One important aspect of the ASIC model was that the tooling, the jargon word for the masks, belonged to the ASIC company. This meant that the only company that could manufacture the design was that ASIC company. Even if another semiconductor company offered them a great deal, they couldn’t just hand over the masks and take it. This would become important in the next phase of what ASIC would morph into.

ASIC companies charged a big premium over the raw cost of the silicon that they shipped to their customers. ASIC required a network of design centers all over the world staffed with some of the best designers available, obviously an expensive proposition. Customers started to resent paying this premium, especially on very high volume designs. They knew they could get cheaper silicon elsewhere but that meant starting the design all over again with the new semiconductor supplier.

Also, by then, two other things had changed. Foundries such as TSMC had come into existence. And knowledge about how to do physical design was much more widespread and, at least partially, encapsulated in software tools available from the EDA industry. This meant that there was a new route to silicon for the system companies: ignore the ASIC companies, do the entire design including the semiconductor-knowledge-heavy physical design, and then get a foundry like TSMC to manufacture it. This was known as customer-owned tooling, or COT, since the system company, as opposed to the ASIC company or the foundry, owned the whole design. If one foundry gave poor pricing the system company could transfer the design to a different manufacturer.

However, the COT approach was not without its challenges. Doing physical design of a chip is not straightforward. Many companies found that the premium that they were paying ASIC companies for the expertise in their design centers wasn’t for nothing, and they struggled to complete designs on their own without that support. As a result, companies were created to supply that support, known as design services companies.

Design service companies played the role that the ASIC companies’ design centers did, providing specialized semiconductor design knowledge to complement the system companies’ knowledge. In some cases they would do the entire design, known as turnkey design. More often they would do all or some of the physical design and, often, manage the interface with the foundry to oversee the manufacturing process, another area where system companies lacked experience.

One company in particular, eSilicon, operates with a business model identical to the old ASIC companies except in one respect. It has no fab. It actually builds all of the customers’ products in one of the foundries (primarily TSMC).

Another change has been the growth of field-programmable gate-arrays (FPGAs) which are used for many of the same purposes as ASIC used to be.

So that is the ASIC landscape today. There is very limited ASIC business conducted by a few semiconductor companies. There are design services companies and virtual ASIC companies like eSilicon. There are no pure-play ASIC companies. A lot of what used to be ASIC has migrated to FPGA. A Brief History of ASIC Part I is HERE.

A Brief History of Semiconductors
A Brief History of ASICs
A Brief History of Programmable Devices
A Brief History of the Fabless Semiconductor Industry
A Brief History of TSMC
A Brief History of EDA
A Brief History of Semiconductor IP
A Brief History of SoCs


Book Review: Mixed-Signal Methodology Guide
by Daniel Payne on 08-23-2012 at 4:00 pm

Almost every SoC has multiple analog blocks, so AMS methodology is an important topic for our growing electronics industry. Authored by Jess Chen (Qualcomm), Michael Henrie (ClioSoft), Monte Mar (Boeing) and Mladen Nizic (Cadence), the book is subtitled: Advanced Methodology for AMS IP and SoC Design, Verification and Implementation. Cadence published the book. I’ve just read the first chapter and deemed the tome worthy of review because it shows the challenges of both AMS design and verification and discusses multiple approaches, while not favoring one particular EDA vendor’s tools.

A review of Chapters 2 through 11 is here.

Mladen Nizic
I spoke by phone with Mladen Nizic on Thursday afternoon to better understand how this book came to be.

Q: Why was this book on AMS design and verification methodology necessary?
A: Technology and business drivers are demanding changes in products and therefore methodology. We need more discussion between Digital and Analog designers, collaborating to adopt new methodologies.

Q: How did you select the authors?
A: I knew the topics that I wanted to cover, then started asking customers and people that I knew. We gathered authors from Boeing, Cadence, Qualcomm and ClioSoft.

Q: What are the obstacles to adopting a new methodology?
A: Organizational and technical barriers exist. Most organizations have separate digital SoC and analog design groups; they just do design differently and are somewhat isolated. You are starting to see engineers with the title of AMS Verification Engineer appearing now. The complexity of AMS designs is increasing with more blocks being added. Advanced nodes bring challenges which require even more analysis.

Q: Will reading the book be sufficient to make a difference?
A: The whole design team needs to read the book, then discuss their methodology, and start to adapt some or all of the recommended techniques. Analog designers need to learn what their Digital counterparts are doing for design and verification.

Q: Why should designers spend the time reading about AMS methodology?
A: To become better rounded in their approach to design and verification. You can also just read the chapters that are of interest to each group.

Q: Where do I buy this book?
A: Right now you can pre-order it at LuLu.com, and soon afterwards on Amazon.com.

Q: Can I buy an e-Book version?
A: There will be an e-Book version coming out after the hard-copy.

Q: Is there a place for designers to discuss what they read in the book?
A: Good idea, we’re still working on launching that, so stay tuned.

Chapter 1: Mixed-Signal Design Trends & Challenges

The continuous-time output of analog IP blocks is contrasted with the binary output of digital IP blocks, and various approaches to mixed-signal verification are introduced:

To gain verification speed you must consider abstracting the analog IP into behavioral models, instead of simulating only at the transistor level with a SPICE or Fast-SPICE circuit simulator.
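
As a rough illustration of what such an abstraction looks like (a minimal sketch only; production flows use Verilog-AMS or real-number models rather than Python, and the comparator’s offset and hysteresis values below are invented):

```python
def comparator_behavioral(vin_p, vin_n, state, offset_mv=2.0, hysteresis_mv=5.0):
    """Return the digital output of a comparator with made-up offset/hysteresis."""
    diff_mv = (vin_p - vin_n) * 1000.0 - offset_mv
    threshold_mv = -hysteresis_mv / 2 if state else hysteresis_mv / 2
    return 1 if diff_mv > threshold_mv else 0

# Sweep the positive input past a 0.50V reference. The behavioral model
# evaluates in microseconds per call, whereas a transistor-level SPICE
# transient of the same block could take minutes, which is the whole point
# of abstracting analog IP for full-chip mixed-signal verification.
state = 0
for vp in (0.40, 0.50, 0.51, 0.49, 0.40):
    state = comparator_behavioral(vp, 0.50, state)
    print(f"vin_p={vp:.2f}V  out={state}")
```

The model only captures the terminal behavior the SoC testbench cares about; accuracy-critical analyses still go back to the transistor-level netlist for the block in isolation.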

Topics raised: mixed-signal verification, behavioral modeling, low-power verification, DFT, chip planning, AMS IP reuse, full-chip sign-off, substrate noise, IC/package co-design, design collaboration and data management.

Other chapters include topics like:

  • Overview of Mixed-Signal Design Methodologies
  • AMS Behavioral Modeling
  • Mixed-Signal Verification Methodology
  • A Practical Methodology for Verifying RF Designs
  • Event-Driven Time-Domain Behavioral Modeling of Phase-Locked Loops
  • Verifying Digitally-Assisted Analog Designs
  • Mixed-Signal Physical Implementation Methodology
  • Electrically-Aware Design Methodologies for Advanced Process Nodes
  • IC Package Co-Design for Mixed-Signal Systems
  • Data Management for Mixed-Signal Designs.



Also Read

Interview with Brien Anderson, CAD Engineer

Managing Differences with Schematic-based IC design

ClioSoft Update 2012!