
What I Didn’t Know about Electronic Design Automation

by Daniel Payne on 03-04-2014 at 7:36 pm

I started using internal EDA tools at Intel beginning in 1978 and have worked in the commercial EDA industry since 1986, so it was a delight to read a chapter about EDA in Nenni and McLellan’s newest book: Fabless – The Transformation of the Semiconductor Industry. Starting in the 1970’s the authors talk about EDA, Phase One and how painfully manual the whole process of designing an Integrated Circuit was. I’ll never forget working at Intel at the time and performing manual Design Rule Checks (DRC) on an IC layout, when I stopped to ask my manager, “Hey, what about using a software program to automate this tedious task?”


Continue reading “What I Didn’t Know about Electronic Design Automation”


Dr. Walden Rhines Vision on Semiconductor & India

by Pawan Fangaria on 03-04-2014 at 11:00 am

Last month, the India Electronics & Semiconductor Association (IESA) held its Vision Summit in Bangalore, where luminaries from across the semiconductor and electronics industry presented their views on the industry's future and India's progress. Dr. Walden C. Rhines, Chairman and CEO of Mentor Graphics, presented interesting facts and trends about the semiconductor industry in his keynote speech. Dr. Rhines, a technologist, strategist and visionary whom I admire, talked in particular about what makes India best suited to embrace the fabless opportunity in the overall semiconductor ecosystem. I'm moved by his insight into Indian semiconductor business dynamics, strengths and weaknesses, and how he rightly identifies a sustainable opportunity for India to focus on. So what are the trends? Which segment is gaining traction?


If we look at the semiconductor market, the fabless design segment is showing the highest growth rate (16% CAGR), accounting for 29% of total IC revenue at $78B in 2013. Among the top 50 semiconductor companies, 13 are fabless, including Qualcomm, Broadcom, AMD and Nvidia at 4[SUP]th[/SUP], 11[SUP]th[/SUP], 13[SUP]th[/SUP] and 16[SUP]th[/SUP] rank respectively. Another interesting fact is that fabless revenue is highly concentrated, with the topmost company garnering 19% and the top 5 companies together making up 48% of total fabless revenue.


Look at the rise of fabless semiconductor companies from the start of the new millennium until 2008, followed by a moderation. As of today, according to a GSA estimate, 1011 of a total of 1284 semiconductor companies are fabless.


The semiconductor IP (SIP) business is another segment seeing consistent growth. Again, revenue in this market is highly concentrated, with the single topmost company (ARM) taking the lion's share at 34% and the top 5 companies together (ARM, Synopsys, Rambus, Tessera and Imagination) making up 73% of total SIP revenue. Dr. Rhines also talks about rising tape-outs at leading-edge technology nodes (28nm and below); however, there is still a large opportunity at older technologies (65nm and above), which account for 43% of IC production. IoT (Internet of Things) was cited as the catalyst for yet another transformation of the semiconductor industry. While that takes time, it's a ripe opportunity for India to raise its stake in the fabless design and SIP businesses, where it is showing strength. While new fabless start-ups are declining in the west, they are growing in India.


Let's look at where India stands in the fabless universe. India is among the top 5 semiconductor design locations in the world, with 18 of the top 20 U.S. semiconductor companies and 20 European companies having R&D centers there. Considering semiconductor & electronics overall, 1031 MNCs have R&D centers in India. Looking at the right-side pie chart, a considerable 5.3% of SIP companies are headquartered in India. All this data shows good potential for India to grow in the fabless semiconductor business. Dr. Rhines cites the example of Qualcomm, which started with R&D services and became a fabless powerhouse that revolutionized the wireless communications industry. It was interesting to note from Dr. Rhines' slides the statistics on how the top talent in India is churned through the IIT (Indian Institute of Technology) JEE (Joint Entrance Examination). While that is definitely a benchmark in India, I would add that there are other excellent and effective regional engineering colleges in India; professionals from some of these colleges have set examples on the world map.


Dr. Rhines goes further, citing the young and creative workforce with rising experience levels that is driving rapid economic gains for India. It's among the top 5 destinations for foreign investment; in 2012, India received $25.5B of FDI (Foreign Direct Investment). There were nice examples of business-model innovations. He cited foreign-"flagged" Indian companies, e.g. Beceem Communications, Redpine Signals and HelloSoft, which have their headquarters in the U.S. and design centers in India, with the teams working with system architects over virtual networks.


Dr. Rhines then talks about "out-of-the-box" architectural innovations and how Mentor's Veloce2 was developed in collaboration between U.S. and India teams; its virtual stimulus concept and testbench acceleration were conceived and developed in India. This is a nice example of users and tool developers collaborating in close proximity. The complete set of keynote slides is posted on the IESA website here. My concluding personal opinion is that India must seize this opportunity in the fabless world. It's okay to have a fab, but if close to 45% of IC production is still at 65nm and above, it may be beneficial to remain fabless because, in my opinion (I may be proved wrong), older fabs can cut prices faster than a new fab can recover its ROI. Of course, one needs to keep advancing towards leading-edge technologies.

More Articles by Pawan Fangaria…..



Synopsys Announces Verification Compiler

by Paul McLellan on 03-04-2014 at 8:00 am

Integration is often an underrated attribute of good tools, compared to raw performance and technology. But these days integration is differentiation (try telling that to your calculus teacher). Today at DVCon Synopsys announced Verification Compiler which integrates pretty much all of Synopsys’s verification technologies (including the technology acquired in the SpringSoft acquisition) into a single tool. Verification Compiler is a complete portfolio of integrated, next-generation verification technologies that include advanced debug, static and formal verification, simulation, verification IP and coverage closure. Together these technologies offer a 5X performance improvement and a substantial increase in debug efficiency, enabling SoC design and verification teams to create a complete functional verification flow with a single product.


Existing methodologies using disparate tools are not very efficient, involving, as they do, duplicate steps, incompatible databases, multiple debug environments and inconsistent coverage metrics, all of which impact ease-of-use and productivity. And not in a good way.

Verification compiler ties together the big pieces of verification: static and formal verification, simulation, VIP, debug and coverage. The goal is to “shift left” and find more problems earlier in the design cycle and achieve coverage goals earlier.

Under the hood there is a huge amount of raw technology in the various engines, and then the uniform way of accessing it, debugging it and sharing data means that it is easy to use and the performance of the engines is not lost in translation and other inefficiencies. Verification Compiler really is a single product with native interfaces and consistent databases.

There is more than just integration of existing Synopsys technology. Probably the biggest new raw technology is that the formal analysis has been completely rebuilt from scratch with a lot more power, more performance and more capacity, an increase of 3-5X with full support for low power and clock domains.

There are big changes in debug too, focused on the challenges of large SoC development which has a large software component: interactive testbench debug, transaction debug, accelerated time to waveform for Zebu, AMS debug, power-aware debug, HW/SW debug. All with a common user-interface, way of displaying waveforms etc.

So how big a difference does this all make? Overall, a 5X performance increase (of course it is design dependent, ymmv, in some cases much more, occasionally less).

  • 3-5X improvement on static and formal verification
  • 4X on constraint runtime during simulation
  • 10X+ compile turnaround time with partition compile
  • 2X native power simulation
  • 2X faster verification IP
  • 2X with native FSDB
  • 4X with native Siloti

In addition to raw performance increase, Synopsys and their early partners reckon an increase of 3X in productivity due to concurrent verification, automated setup, and integrated flows and methodology. The concurrent verification methodology is supported by the licensing approach Synopsys have taken. One Verification Compiler license actually gives you three keys so you (or your team) can concurrently run static/formal, simulation, and debug. All from a single license key. The component parts are also available for separate licensing (so you can still, for example, have lots of VCS licenses for regressions).


So in summary:

  • Next-generation verification technologies, including static and formal verification, provide 5X performance improvement
  • Native integration of simulation, static and formal verification, VIP, debug, and coverage technologies into a single product boosts performance and productivity
  • New advanced SoC debug capabilities built on the easy-to-use Verdi[SUP]3[/SUP] debug platform enhance debug efficiency
  • Complete low power verification with native low power simulation, X-propagation simulation, next generation low power static checking and low power formal verification
  • A broad portfolio of VIP (AMBA, Ethernet, MIPI, PCIe and more), integrated with simulation and debug for the highest performance and productivity
  • Concurrent verification licensing enables 3X productivity improvement overall

Verification Compiler is in limited customer availability and will be in full release in Q4.

Much more detail on the Synopsys website here.


More articles by Paul McLellan…


Does Multiprotocol-PHY IP really boost TTM?

by Eric Esteve on 03-04-2014 at 4:33 am

I have often written on SemiWiki about high-speed PHY IP supporting interface protocols (see for example this blog), a SoC cornerstone almost as crucial as the CPU, GPU or SDRAM memory controller. When you architect a SoC, you first select the CPU(s) and/or GPU(s) to support the system's basic functionality (processor for mobile applications, networking, set-top-box, etc.), then you define the various protocols this SoC must support to interface with the functional system and the outside world. For an enterprise system, you will have to select one or several protocols from Ethernet (10G KR & KR4, 40G or 100G), PCI Express 3.0/2.1/1.1, SATA 6G/3G/1.5G or OIF CEI-6G and CEI-11G, to name a few. If you have read this previous blog (or if you have been exposed to high-speed PHY design), you know that a 12 Gbps PHY IP design is complex, resource-intensive and time-consuming, as it may sometimes require several silicon test chips before being 100% functional.

Chip makers have to face another challenge. As the cost of IC design rapidly increases due to the reduction in feature sizes, companies are no longer designing products that target just a single application. The SoC design must be architected to utilize multi-protocol physical layer (PHY) IP which can be connected to multiple different protocol controllers/MACs. Here comes the multi-protocol PHY concept. We may have to take a look at the picture below before going further:

The physical layer is made of two sub-functions: the Physical Media Attachment (PMA), essentially analog and hard-wired, and the Physical Coding Sublayer (PCS), digital and soft-coded. If you architect the physical layer in such a way that the PMA is common to various (say N) interface protocols and you mux N protocol-specific PCS blocks, you can optimize analog design resources and cost, and certainly accelerate the schedule compared with designing N completely different PHYs (unless you have an infinite supply of analog designers, but does that really happen in any company?).

We can see the benefit in terms of cost and schedule for the PHY IP vendor, but does it also benefit the chip maker, the IP vendor's customer? When a chip maker targets multiple applications and needs to optimize cost by designing only one SoC (or one SoC platform) that can be configured to address these various segments, integrating a single multi-protocol PHY will certainly improve the IP's "cost of ownership". The workload of integrating the PHY into the SoC will be minimized (one PHY IP instead of N). PHY qualification, which uses expensive lab hardware and probably validation boards, will be simplified. Nevertheless, the two most important benefits are lower NRE expenses (divided by N in theory) and, even more important, better time-to-market (TTM). In fact, the chip maker benefits from the same TTM improvement as the PHY IP vendor! In the real world, designing and validating a multi-protocol PHY IP probably takes longer than a single-protocol PHY… but the diversity of protocols is such that no IP vendor could bring N protocol-specific PHYs to market as quickly as one multi-protocol PHY.
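To make the "divided by N in theory" claim concrete, here is a toy arithmetic sketch in Python. The cost units and the count of four protocols are made-up illustrations, not figures from the article; only the ratio matters:

```python
# Toy arithmetic behind "NRE divided by N": integrating N separate PHYs
# multiplies design and qualification cost; a single multi-protocol PHY
# pays it roughly once. Cost units are hypothetical placeholders.
def integration_cost(n_phys, nre_per_phy, qual_per_phy):
    """Total NRE + lab-qualification cost for integrating n_phys PHY IPs."""
    return n_phys * (nre_per_phy + qual_per_phy)

n = 4   # e.g. Ethernet, PCIe, SATA and CEI on one enterprise SoC
separate = integration_cost(n, nre_per_phy=1.0, qual_per_phy=0.5)
combined = integration_cost(1, nre_per_phy=1.0, qual_per_phy=0.5)
print(f"{n} separate PHYs: {separate} units; one multi-protocol PHY: {combined} units")
```

In practice the multi-protocol PHY costs somewhat more than a single-protocol one to design and qualify, so the real saving is less than 1/N, but the direction of the argument holds.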

Does a multi-protocol PHY IP improve TTM for enterprise SoC chip makers? Certainly yes… provided that the PHY IP supports the protocols you need for your SoC interfaces. Let's take a look at the protocol list supported by Synopsys' 12G PHY:

  • IEEE 802.3 10G and 40G backplane (XAUI, KR & KR4), port side 40G, 100G (CR4 & CR10), and 10G (XFI, SFF-8431/SFI)
  • IEEE 802.3az Energy Efficient Ethernet
  • SGMII, and QSGMII
  • PCI-SIG PCI Express (PCIe) 3.0/2.1/1.1
  • SATA 6G/3G/1.5G (Rev 3.2)
  • OIF CEI-6G and CEI-11G
  • CPRI, OBSAI, JESD204B

This multi-protocol PHY targets the enterprise SoC market, and most of the relevant protocols are supported. We have not yet mentioned the PHY-specific, complex features and up-to-date PHY design techniques, like CTLE and DFE, PRBS or in-situ testing:

  • Multi-featured (CTLE and DFE) receiver and transmitter equalization: adaptive equalizers have many different settings, and in order to select the right one there needs to be some measure of how well a particular equalization setting works. The result is improved Rx jitter tolerance, easier board layout design, and better immunity to interference.
  • Mapping the signal eye and outputting signal statistics via the JTAG interface shown: this allows simple inspection of the actual signal. This in-situ test method can replace very expensive test equipment (when a simple idea gives the best results!)
  • The pseudo-random bit sequence (PRBS) generator sends patterns to verify the transmit serializer, output driver, and receiver circuitry through internal and external loopbacks (keep in mind that wafer-level test equipment is limited in frequency range; such circuitry allows running tests at functional speed on standard testers).
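To see how little hardware such a PRBS generator needs, here is a software model of a PRBS-7 sequence (polynomial x^7 + x^6 + 1). The article does not say which PRBS lengths this particular PHY implements, so the choice of PRBS-7 here is purely illustrative:

```python
# Software model of a PRBS-7 generator (polynomial x^7 + x^6 + 1), a
# standard short pseudo-random pattern used for at-speed serializer and
# loopback tests. Sketch only; real PHYs implement this in hardware and
# typically also support longer sequences (PRBS-15, PRBS-31).
def prbs7(seed=0x7F, nbits=127):
    state = seed & 0x7F                       # 7-bit LFSR state, must be non-zero
    bits = []
    for _ in range(nbits):
        fb = ((state >> 6) ^ (state >> 5)) & 1   # feedback from stages 7 and 6
        bits.append(state & 1)
        state = ((state << 1) | fb) & 0x7F
    return bits

seq = prbs7()
# A maximal-length 7-bit LFSR cycles through all 127 non-zero states,
# so the pattern repeats with period 2^7 - 1 = 127
assert prbs7(nbits=254)[127:] == seq
```

In hardware this is just a 7-bit shift register and one XOR gate, which is why building the pattern generator on-die is so much cheaper than driving patterns from an external high-frequency tester.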

If you are interested in eye-diagram measurement, and more specifically want to know how to reduce PCI Express 3 "fuzz" with multi-tap filters, you should definitely read this blog from Navraj Nandra (Marketing Director, PHY & Analog IP at Synopsys). This very instructive article explains how adaptive equalization works and what inter-symbol interference (ISI) is, and helps you understand how signals contain different frequency content, illustrated by four examples of forty-bit data patterns. Navraj manages to explain advanced signal-processing concepts in simple words, and that is anything but simple to do!

From Eric Esteve from IPNEST

More Articles by Eric Esteve…..



ARM Lab in a Box

by Paul McLellan on 03-02-2014 at 5:57 pm

St. Francis Xavier said "Give me the child until he is seven and I'll give you the man." ARM is not going for them quite that young, but this week they announced their "lab in a box" for participating universities worldwide. It is actually a joint launch between the ARM University Program (which is not new) and various partners. Since ARM doesn't actually make any silicon, they can't supply everything needed themselves even if they wanted to.

So what is in the “lab in a box” (LiB)?

The LiB package includes hardware boards from ARM partners, software licenses from ARM, and complete teaching materials ready to be immediately deployed in classes. Current partners supplying hardware boards include Freescale and NXP. The full contents of the box are as follows:

  • 10 x ARM-based development boards
  • 100 x ARM Keil MDK-ARM Pro 1-year, renewable software tools licenses
  • A complete suite of teaching materials from ARM, including lecture note slides, demonstration codes, lab manuals and projects with solutions in source.

Not surprisingly, Cambridge University is one of the first participants to get a LiB package. After all, they are only a couple of miles away from ARM HQ, and many of the founders of ARM graduated from the Cambridge Computer Laboratory (as did I). So you might expect that it would be the Computer Laboratory that loves the LiB. But the quote in the press release comes from Dr. Boris Adryan, who appears to be in the Department of Genetics. He said: "We were delighted to be one of the first institutions to receive the ARM University Program's Lab-in-a-Box on Embedded Systems. It has immediately proven itself to me as an excellent resource for our research and teaching activities. The ARM-based materials it contains are helping us to connect our teaching of systems biology with the world's latest embedded computing and sensing technology."

On April 14th and 15th, Xilinx and ARM are hosting two one-day workshops specifically for training faculty and researchers on the ARM SoC Lab-in-Box. The LiB is based on the ARM Cortex-M0 DesignStart processor core and Xilinx Vivado Design Tools. The one day workshop comprises lectures, hands-on exercises, and opportunities to network with experts from ARM and Xilinx. Details, including a link for registration, are here.

If you are not a “participating university” there are other ways to learn about ARM, program one, build applications and so forth. Probably the most accessible today is Raspberry Pi. This is a credit-card sized computer. It went on sale almost exactly 2 years ago (on February 29th so it is hard to say “two years ago today” this year) and sold 100,000 units on the first day. Since then over 2.5 million units have shipped.

The idea behind a tiny and cheap computer for kids came in 2006, when Eben Upton, Rob Mullins, Jack Lang and Alan Mycroft, based at the University of Cambridge’s Computer Laboratory, became concerned about the year-on-year decline in the numbers and skills levels of the A Level students applying to read Computer Science. There isn’t much any small group of people can do to address problems like an inadequate school curriculum or the end of a financial bubble. From 2006 to 2008, they designed several versions of what has now become the Raspberry Pi. The project started to look very realisable. Eben (now a chip architect at Broadcom), Rob, Jack and Alan, teamed up with Pete Lomas, MD of hardware design and manufacture company Norcott Technologies, and David Braben, co-author of the seminal BBC Micro game Elite, to form the Raspberry Pi Foundation to make it a reality. Three years later, the Raspberry Pi Model B entered mass production and within a year it had sold over one million units.

Buy a Raspberry Pi on Amazon (eligible for Prime), without Ethernet here or with Ethernet here, at $29.99 or $39.99 respectively.

Details of the Lab in a Box announcement here.

But wait, there's more. At Embedded World in Nuremberg last week, Freescale announced the smallest ARM-based chip ever. Yes, that is a golf ball in the picture, for scale. It is a 1.6mm by 2mm package using wafer-level chip-scale packaging. Internet-of-Things-ready.


More articles by Paul McLellan…


Baskin-Robbins Only has 31 Flavors. Atmel has 505 Microcontrollers

by Paul McLellan on 03-02-2014 at 4:59 pm

Actually, these days even Baskin-Robbins has more, but not 505. As the title says, Atmel has 505 different microcontrollers. That's a lot. Some are AVR, both 8-bit and 32-bit, and some are various flavors of ARM (all 32-bit), ranging from older parts like the ARM9 to various flavors of Cortex, from the M0 (a tiny microcontroller with no pipeline or cache) up to the A5. Of course, the ARM product line goes all the way up to the 64-bit Cortex-A57 and so on, but those are not in any sense of the word microcontrollers and are really only used in SoCs, not standalone products.

But with 505 choices, how do you pick one? It turns out that Atmel has made it relatively easy for you. They have a microcontroller product finder that allows you to put in your hard constraints and narrows down the choices. For example, if you want your microcontroller to have at least 64 Kbytes of flash, then only 257 of the 505 will suit you. For each parameter you can set minimums and maximums (except for the yes/no choices).

The things that you can constrain the selection on are:

  • how much flash memory (0 to 2Mbytes)
  • pin count (6 to 324)
  • operating frequency (1 to 536MHz)
  • CPU architecture (pick from 8-bit AVR, 32-bit AVR, ARM 926 and 920, ARM Cortex M0, M3, M4, A5)
  • SRAM (30 bytes to 256 Kbytes)
  • EEPROM (none to 8 Kbytes)
  • Max I/O pins (4 to 160)
  • picoPower (yes or no)
  • operating voltage (various ranges from 0.7V to 6V)
  • operating temperature (various from -20[SUP]o[/SUP]C to 150[SUP]o[/SUP]C)
  • number of touch channels (none to 256)
  • number of timers (1 to 10)
  • watchdog (yes or no)
  • 32KHz real time clock (yes or no)
  • analog comparators (0 to 8)
  • temperature sensor (yes or no)
  • ADC resolution (8 to 16 bits)
  • ADC channels (2 to 28)
  • DAC channels (0 to 4)
  • UARTs (0 to 8)
  • SPI (1 to 12)
  • TWI (aka I[SUP]2[/SUP]C) interface (none to 6)
  • USB interface (none, device only, host+OTG, host and device)
  • PWM channels (0 to 36)
  • Ethernet interfaces (none to 2)
  • CAN interfaces (none to 2)

Wow, that is a lot of options. But with a couple of dozen constraints you can narrow things down to something manageable. The interface looks like this:


Let's pick a microcontroller. I want an ARM Cortex of some flavor; already the choices are down to 189. I want 32K to 128K of flash (now down to 73 choices). I want an operating frequency of at least 64 MHz (now down to 10). I want 4K of SRAM (it turns out all 10 remaining choices already have that much). I need 4 timers. I am now down to 2 choices:
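This narrowing-down process is just successive filtering on min/max bounds. Here is a toy sketch of the idea in Python; the part numbers and specs in the catalog are hypothetical, not Atmel's real catalog or API:

```python
# Toy version of a parametric product finder: filter a catalog on
# min/max constraints. Parts and specs below are made up for illustration.
catalog = [
    {"part": "MCU-A", "flash_kb": 64,  "freq_mhz": 64, "sram_kb": 16, "timers": 4},
    {"part": "MCU-B", "flash_kb": 128, "freq_mhz": 84, "sram_kb": 32, "timers": 4},
    {"part": "MCU-C", "flash_kb": 16,  "freq_mhz": 48, "sram_kb": 4,  "timers": 2},
]

def find(parts, **constraints):
    """Keep parts meeting every 'field__min' / 'field__max' bound."""
    def ok(p):
        for key, bound in constraints.items():
            field, _, kind = key.partition("__")
            if kind == "min" and p[field] < bound:
                return False
            if kind == "max" and p[field] > bound:
                return False
        return True
    return [p for p in parts if ok(p)]

hits = find(catalog, flash_kb__min=32, flash_kb__max=128,
            freq_mhz__min=64, timers__min=4)
print([p["part"] for p in hits])   # → ['MCU-A', 'MCU-B']
```

Each added constraint only removes candidates, which is why a couple of dozen bounds can cut 505 parts down to a shortlist of two.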


They are the ATSAM3S1C and the ATSAM3S2C. Both are ARM Cortex-M3s; the first has 64K of flash and the second 128K. I can click on the little PDF icon and get a full datasheet for these microcontrollers. If I don't like the choices and I have some flexibility on specs, then obviously I can go back, play with the parameters and get some new options.

Or I can click on the “S” to order samples (you have to already have an account with Atmel to do this).

Or click on the shopping cart to get a list of distributors in various parts of the world where I can actually place an order. It even tells me how many each of them have in stock.

The Atmel Microprocessor Product Finder is here.


More articles by Paul McLellan…


Xilinx vs. Altera DSP

by Luke Miller on 03-02-2014 at 1:00 pm

Did you know that in the Xilinx Virtex 28nm series you can REALLY run the DSP at 741 MHz? I say 'really' because, as you know, dear reader, not all FPGA claims of speed and usage live up to reality. I cannot stand marketing games where you can run at a GHz 'but…' and then comes the list of gotchas. Don't believe me? Whaaat? Well, let's take a journey through a REAL example of a parallel FIR filter that runs in REAL silicon TODAY with the DSP at 741 MHz. By the way, my kids laugh every time I say FIR filter.

Why did the digital filter designer get attacked by 'People for the Ethical Treatment of Animals'? "He was selling FIRs!"

So let us begin: what is a FIR filter? 'FIR' stands for Finite Impulse Response. DSP 101. FIR filters are at the heart of all digital systems; they cannot live without them. All a FIR does is convolve a stream of data (the signal) with weights, or coefficients, to shape or filter the data. Below is a diagram of the operations: multiplies and adds, better known as MACs.
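The convolution in the diagram can be sketched in a few lines of Python. This is the textbook direct-form FIR, one MAC per tap per output sample, not Xilinx's systolic DSP48 implementation:

```python
# Direct-form FIR: each output is the dot product of the most recent
# samples with the tap (coefficient) vector -- one multiply-accumulate
# (MAC) per tap per output sample.
def fir(samples, taps):
    out = []
    for n in range(len(samples)):
        acc = 0
        for k, h in enumerate(taps):        # one MAC per tap
            if n - k >= 0:
                acc += h * samples[n - k]
        out.append(acc)
    return out

# A 3-tap smoothing filter turns an impulse into the taps themselves
print(fir([1, 0, 0, 0], [0.25, 0.5, 0.25]))   # → [0.25, 0.5, 0.25, 0.0]
```

In a fully parallel (systolic) hardware implementation, every MAC in the inner loop gets its own multiplier, which is how one output per clock is achieved at the cost of one DSP slice per tap.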

The need for parallel FIR filtering arises when the data rate is the same as the clock rate; this structure is often called a 'systolic FIR'. What allows the designer to run at 741 MHz is the architecture of the DSP48 slices in the Xilinx FPGAs. Xilinx's DSPs are the widest and fastest, with no games or tricks. I encourage you to read the following DSP user guides from Xilinx.

· FIR Compiler v7.1 product guide, PG149
· 7 Series DSP48E1 Slice, UG479

The results of the parallel 80-tap FIR filter are: target device: V7 160T -3; resources: 0 LUTs, 2528 FFs, 332 DSP48; timing: 1.348ns (=742MHz). If you did not need a parallel FIR, you would only use 40 DSPs. Fun stuff!

Let me remind you once again: you can do this today; this is not UltraScale, this is Xilinx 28nm. Now, there is a perception that Xilinx and Altera are fighting it out, that it does not matter who is leading, all is well, let's get some coffee. That is not the case. For me, one of the most important jobs I do is recommend which FPGA to use. The silicon choice is extremely relevant, and any competent designer knows this. Choosing the wrong FPGA vendor can mean a board redesign that will cost another $300k and 3 more months of integration time. Having a design fit in one FPGA versus spilling out into two FPGAs means winning or losing a design bid. Below is a simple DSP comparison of Xilinx and Altera at 28nm.

Xilinx Virtex-7 has up to 3600 DSP, 18×25 width @741 MHz. = 2.7 TMACS potential
Altera Stratix-V has up to 3926 DSP, 18×18 width @500 MHz. = 1.9 TMACS potential

(Note: in symmetric mode, the above numbers would be 2X. Xilinx 5.3 TMACs, Altera 3.9 TMACs)
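The TMACS figures above are just DSP count times clock rate; a quick Python check using the numbers from the comparison:

```python
# Peak MAC throughput = DSP blocks x clock rate (x2 when symmetric
# coefficients let each DSP cover two taps). DSP counts and clock
# rates are the ones quoted in the comparison above.
def tmacs(dsp_blocks, freq_mhz, symmetric=False):
    macs_per_s = dsp_blocks * freq_mhz * 1e6
    return macs_per_s * (2 if symmetric else 1) / 1e12

print(f"Virtex-7:  {tmacs(3600, 741):.2f} TMACS")   # ~2.7 as quoted
print(f"Stratix-V: {tmacs(3926, 500):.2f} TMACS")   # ~1.9 as quoted
```

These are of course paper-peak numbers; sustained throughput depends on keeping every DSP fed every cycle, which is exactly what the systolic FIR structure is designed to do.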

Now, I was conservative here on two fronts. The first is the DSP bit widths: if you need wider than 18×18, Xilinx has 38% more bit width than Altera, plus another 800 GMACs; compared to Xilinx, Altera then effectively has 1.2 TMACS. Can you tell me that is not important or not relevant? My designs need to run in silicon, not on paper.



Sir Hossein Yassaie, CEO of Imagination Technologies, Keynote!

by Daniel Nenni on 03-02-2014 at 12:00 pm

Semiconductor IP is a focus of this year's Design Automation Conference and I'm excited to see a keynote by one of the leaders of this market segment. Even more interesting, Dr. Hossein Yassaie was knighted by the Queen in Her Majesty's New Year Honours 2013; the award was given in recognition of his services to technology and innovation. Imagination Technologies also collaborated on the IP chapter in our book Fabless: The Transformation of the Semiconductor Industry, and Hossein answered the question "What is Next for the Semiconductor Industry?" in Chapter 8. It really is an honor to work with Imagination Technologies and Sir Hossein Yassaie, absolutely.

The Great SoC Challenge (IP to the Rescue!)

The system-on-chip (SoC) has revolutionized the semiconductor and electronics industries, providing ever-more compact designs that incorporate increasingly large amounts of functionality and performance. This integration, together with process technology scaling, has led to mass availability of affordable, low-power mobile and consumer products.

As the number of discrete IP blocks on a typical SoC continues to escalate, and as the complexity of each of those blocks is also on the rise, SoC developers are challenged to meet not only integration demands but also tight schedules, power and thermal constraints, aggressive cost targets, support for a dizzying array of standards and more. This is compounded by an insatiable consumer appetite for always-on global connectivity, long battery life and of course access to the latest technologies, such as ultra-high definition video and photorealistic graphics.

Silicon IP providers have stepped in to ensure continued innovation by providing high-performance, low-power technologies that help SoC companies meet their design requirements in this ever-more complex environment. IP will play an increasingly vital role going forward. By offering a comprehensive IP portfolio and market driven platforms across CPU, graphics, video and vision technologies, IP providers will help companies overcome their design challenges and address a growing number of exciting new applications and market opportunities.

The Design Automation Conference (DAC) is recognized as the premier event for the design of electronic circuits and systems and for electronic design automation (EDA) and silicon solutions. Since 1964, a diverse worldwide community of many thousands of professionals has attended DAC. They include system designers and architects, logic and circuit designers, validation engineers, CAD managers, senior managers and executives as well as researchers and academicians from leading universities. Close to 60 technical sessions selected by a committee of electronic design experts offer information on recent developments and trends, management practices and new products, and methodologies and technologies.

A highlight of DAC is its exhibition and suite area featuring leading and emerging EDA, silicon, intellectual property (IP), automotive, security and design services providers. The conference is sponsored by the Association for Computing Machinery (ACM), the Electronic Design Automation Consortium (EDA Consortium), and the Institute of Electrical and Electronics Engineers (IEEE), and is supported by ACM's Special Interest Group on Design Automation (ACM SIGDA).

More Articles by Daniel Nenni…..




Effect of Inductance on Interconnect

by Daniel Nenni on 03-02-2014 at 11:00 am

In previous design generations, interconnect could safely be modeled by extraction using just R and C values. Parasitics in interconnect are important because they can affect the operating frequency or phase error in circuits like VCOs. The need to model parasitics properly in wires is just as applicable to PAs, LNAs, clock lines, or any other critical interconnect in high-speed analog or RF circuits. Several things have changed that now compel designers to look more closely at interconnect parasitics. Until now, inductance could be ignored, but at higher frequencies even simple wires inside circuits start to look like transmission lines. The rule of thumb has been that when the length of the signal path reaches some percentage of a wavelength, the line itself starts to become a signal-integrity concern. The question is: what is the critical length in designs?

Let’s first look at operating frequency to see at what scales we need to examine interconnect effects more closely. At 900MHz the wavelength in a metal conductor is ~192mm, a very large dimension relative to IC circuit interconnect. One percent of that is a whopping 1923 microns. But if we move up to 12GHz the wavelength shrinks to ~14.4mm, and one percent of that is 144 microns. Now we see we might actually encounter circuit elements that are around this size.
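These rule-of-thumb numbers can be reproduced with a few lines of Python. The effective permittivity of ~3 below is an assumption chosen to match the article’s ~192mm figure at 900MHz, not a value from any particular process:

```python
from math import sqrt

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def critical_length_um(freq_hz, eps_eff=3.0, fraction=0.01):
    """Return the 'one percent of a wavelength' critical length in microns.

    eps_eff is an assumed effective permittivity of the on-chip
    dielectric environment; ~3 reproduces the ~192 mm figure at 900 MHz.
    """
    wavelength_m = C0 / (freq_hz * sqrt(eps_eff))
    return wavelength_m * fraction * 1e6  # metres -> microns

print(round(critical_length_um(900e6)))  # ~1923 um at 900 MHz
print(round(critical_length_um(12e9)))   # ~144 um at 12 GHz
```

The same function makes it easy to check any operating frequency against a planned wire length before deciding whether an RC-only extraction is safe.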

What is needed are some objective data on what happens when wires are modeled with and without inductance as signal paths grow beyond one percent of the wavelength.

We have run some test cases using a straight metal line to determine the magnitude of the discrepancy between predicted and actual signal performance that comes from ignoring inductance when wires exceed one percent of a wavelength. Phase error specs for signal lines are often capped at 1 degree, and signal amplitude accuracy usually needs to be better than +/-5%.

Here is what we see from simulation using PeakView to generate electromagnetic models for a 200um line running at 25GHz. This line length is 3% of a wavelength.

This is the added impact seen by considering inductance in our analysis. It is evident that even when the line length is just 3% of a wavelength significant effects can show up.
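The effect can be illustrated with a crude single-segment lumped model. The R, L and C values below are round illustrative assumptions for a short on-chip line, not numbers from the PeakView study; the point is only to show how the inductive term shifts phase and amplitude at 25GHz:

```python
import cmath
import math

# Illustrative lumped parasitics for a ~200 um on-chip line
# (assumed round numbers, NOT values from the PeakView analysis):
R = 10.0      # ohms,  series resistance
L = 0.1e-9    # henry, series inductance
C = 40e-15    # farad, load/line capacitance

f = 25e9                      # operating frequency, Hz
w = 2 * math.pi * f

# Single lumped segment driving the capacitance:
# H = Zc / (Zseries + Zc), with and without the inductor in Zseries.
Zc = 1 / (1j * w * C)
H_rc  = Zc / (R + Zc)                 # RC-only extraction
H_rlc = Zc / (R + 1j * w * L + Zc)    # RLC extraction

phase_err = math.degrees(cmath.phase(H_rlc) - cmath.phase(H_rc))
amp_err = (abs(H_rlc) - abs(H_rc)) / abs(H_rc) * 100

print(f"phase shift from L:      {phase_err:.2f} deg")  # ~ -0.39 deg
print(f"amplitude change from L: {amp_err:.1f} %")      # ~ +10.9 %
```

Even this toy model pushes the amplitude well outside a +/-5% spec at 25GHz, which is consistent with the article’s point that inductance cannot be ignored once the line is a few percent of a wavelength long.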

It is recommended that design flows for high-speed analog and RF circuits include an accurate EM-based method for including inductive effects in circuit performance analysis. PeakView offers this capability in its HFD option, which allows inductive effects to be easily included in circuit simulation runs with no manual work and full integration with the LVS and LPE flow. PeakView is a high-performance full-wave electromagnetic solver providing accurate resistance, capacitance and inductance information that designers can use directly in transient circuit simulation.

About Lorentz Solution, Inc.
Lorentz Solution, Inc. is the industry leader in supplying electromagnetic (EM) design capabilities to the RF, high-speed analog and high-speed digital design community. PeakView™ EM Design Platform, Lorentz’s flagship product, is widely adopted by top IDM, fabless companies and semiconductor foundries. Based in Santa Clara, California, USA with initial funding from US-based VC firms, Lorentz Solution is continuing its multi-year profitable growth.



Gobi, the Jewel in Qualcomm’s Crown

Gobi, the Jewel in Qualcomm’s Crown
by Paul McLellan on 03-01-2014 at 5:19 pm

Back in the 1990s, in the middle of the 2G GSM era, cell-phone manufacturers would display a “triangle of difficulty” with a large base labeled radio, a smaller middle section labeled baseband, and a little triangle on top labeled software. The idea was that the radio was incredibly difficult, the baseband chip somewhat less so, and software barely registered since there wasn’t much of it in a phone of that era. But they would point out that that was then, and now the triangle was inverted. Radio was basically a solved problem, baseband was still a challenge, but more and more manpower was being consumed by software. It was true. The radios didn’t change much, a new baseband arrived at each process node with different speed/power tradeoffs, and more and more of the differentiation was moving into software. Plus nobody was yet trying to integrate the radio onto the pure-CMOS baseband chip; that challenge lay in the future.


But that was then. Then came 3G and, not that long after, LTE. Radio went back to being an unsolved problem. Building the radio interface, also known as the modem, was right on the limit of what was possible given the silicon performance. To make things worse, the power budget could not go up much even though the computational load was much higher.

Qualcomm had distinguished themselves in the 1990s by commercializing CDMA but their claim to success in that era was not that they were better at designing modems than anyone else. Then suddenly they were ahead by a long way. The Gobi series of modems started by supporting both CDMA and GSM on the same modem, along with GPRS and EDGE (data standards).


Several teams tried to design LTE modems. When ST-Ericsson was shut down, its LTE modem was one of the few things of value and was kept by Ericsson in the divorce. Intel needed an LTE modem and it had one in design with the Infineon acquisition, and then went and got another one through an acquisition of Fujitsu’s LTE team. As an indication of how difficult it is, Intel has not managed to get its LTE modem onto its own process and has to manufacture it via TSMC. Almost all the other LTE modems were late.

The current version of Qualcomm’s Gobi handles LTE at speeds up to 300 Mb/s, with backward compatibility to 3G and EV-DO (data), plus GPS.

Initially LTE modems were standalone chips. For some applications that is still desirable. Apple designs its own application processors (A4…A7) but it uses a Qualcomm Gobi modem as a separate chip. One theory is that designing an LTE modem is too hard even for Apple (although with their cash they could buy a modem design team or a whole company). The other is that getting approved by all the different operators is something they would rather leave to Qualcomm who have to do it anyway and if they integrated the modem on the Ax chips they would have to go through that certification process separately. With a separate chip, modem certification goes on in parallel with application processor design.

Very early on, Qualcomm had the Gobi modem integrated onto its own Snapdragon processors as a single chip. They were way out in front doing this. Nvidia is just getting there now. Broadcom (after acquiring Renesas’ LTE line) now has an integrated product. Intel still isn’t there and won’t be any time soon. MediaTek still has separate chips but is apparently sampling an integrated product.

The result: Qualcomm is far and away the market leader in baseband application processor chips (at least the merchant market, both Apple and Samsung create their own). In LTE-based baseband chips, analysts give it 95% market share. After all the other guys didn’t have anything when it was needed and only now (literally, there are daily announcements from MWC in Barcelona this week) are announcing handsets incorporating their products.

Presumably the handset manufacturers will want to make at least one or two of these competitors viable, for security of supply and price negotiation. So Qualcomm’s business will probably grow but the percentage share will probably shrink. But Qualcomm already have Snapdragon sampling in TSMC’s 20nm process, out ahead of everyone else again. Correction: it looks like just the Gobi modem is in 20nm, not yet Snapdragon (although the modem is far harder to port than a pure digital SoC so it shouldn’t be far behind).

Even the competitors are not bullish. As a senior marketing manager at Nvidia said: “We are never going to be at the level of Qualcomm. We have to take baby steps.”



Recently Qualcomm announced octa-core versions of Snapdragon with 64-bit processors, plus a specialized automotive chip built in 20nm.

In the 1990s, Qualcomm’s secret weapon was that they understood CDMA better than anyone else, having invented it. In the last 7 or 8 years it has been their Gobi modem technology, first in 3G but especially for LTE along with the fact that they have had a single chip integrated application processor and modem. They have out-executed the rest of the industry.

As always, mobile is a fascinating industry to watch.


More articles by Paul McLellan…