
A Closer Look at the QCOM $40M Investment in China!

A Closer Look at the QCOM $40M Investment in China!
by Daniel Nenni on 12-14-2014 at 7:00 am

Last Thursday night was the 20th annual GSA Awards Dinner, which probably hosts one of the largest gatherings of semiconductor executives anywhere. Think of a movie or music awards show with all of the trimmings, including Jay Leno as the keynote. I don’t know the exact head count but there were 160 dinner tables with 10 plates per table. The GSA (Global Semiconductor Alliance) used to be called the FSA (Fabless Semiconductor Association), which means the IDMs can play with us too, but it was mostly fabless.

About GSA:
The Global Semiconductor Alliance (GSA) mission is to accelerate the growth and increase the return on invested capital of the global semiconductor industry by fostering a more effective ecosystem through collaboration, integration and innovation. It addresses the challenges within the supply chain including IP, EDA/design, wafer manufacturing, test and packaging to enable industry-wide solutions. Providing a platform for meaningful global collaboration, the Alliance identifies and articulates market opportunities, encourages and supports entrepreneurship, and provides members with comprehensive and unique market intelligence. Members include companies throughout the supply chain representing 30 countries across the globe. www.gsaglobal.org

Earlier in the day Qualcomm announced the China investment with Walden International, which is interesting because the CEO of QCOM (Steve Mollenkopf) and the founder of Walden International (Lip-Bu Tan) were both there last night and very friendly. Even more interesting, the founder of Walden International is also the CEO of Cadence, and QCOM is a very big customer of Cadence. ARM and TSMC are also investors in Walden and both of their CEOs were there as well. Cadence is also very active with ARM and TSMC (I see Lip-Bu in Taiwan more than any other EDA CEO), so it is indeed a very small pond we swim in. Given that China consumes roughly half of the semiconductors we produce, I think you will see more of these types of deals amongst “fabless friends”, absolutely.

Qualcomm Ventures, the investment arm of Qualcomm, was formalized in 2000 and currently has a portfolio of more than 120 active companies across the globe. Qualcomm Ventures also invests in seed stage companies through its QPrize Mobile Internet Startup Competition, which evaluates and awards seed funding for promising early stage Chinese entrepreneurs.

If you look at their portfolio page you will see some very interesting and innovative companies that no doubt integrate QCOM technology (FitBit, Xiaomi). If you hover over the 100+ logos you will see the future of mobile and IoT especially in India and China.

Dr. Morris Chang Exemplary Leadership Award
The Dr. Morris Chang Exemplary Leadership Award recognizes individuals, such as its namesake, Dr. Morris Chang, for their exceptional contributions to drive the development, innovation, growth and long-term opportunities for the semiconductor industry.

  • 2013 Dr. Sehat Sutardja, Chairman, CEO and Co-Founder and Ms. Weili Dai, President and Co-Founder, Marvell Technology Group Ltd. (Marvell)
  • 2012 Sir Robin Saxby, Founding CEO, ARM
  • 2011 Dr. Henry Samueli, Co-founder and Chief Technology Officer, Broadcom Corporation
  • 2010 Dr. John Hennessy, president of Stanford University
  • 2009 Dr. Aart de Geus, Chairman & Chief Executive Officer, Synopsys, Inc.
  • 2008 Dr. Eli Harari, Chairman and CEO, SanDisk Corporation
  • 2007 Gordon Campbell, Executive Director, TechFarm Ventures
  • 2006 Kamran Elahian, Chairman, Global Catalyst Partners
  • 2004 Jen-Hsun Huang, CEO, President and Co-Founder, NVIDIA Corporation
  • 2003 Dr. Irwin Mark Jacobs, Chairman of the Board and Chief Executive Officer of QUALCOMM Incorporated
  • 2002 Bernard V. Vonderschmitt, Co-Founder and Chairman of the Board, Xilinx, Inc.
  • 2001 Michael L. Hackworth, Chairman of Cirrus Logic
  • 2000 Dr. Robert S. Pepper, former President and CEO, Level One Communications
  • 1999 Dr. Morris Chang, Chairman and CEO, TSMC

The most interesting award given out last night was the Dr. Morris Chang Exemplary Leadership Award. Interesting because it went to Dr. Morris Chang for the second time (Dr. Mark Liu accepted for Morris). This will probably keep the semiconductor rumor mill churning for a while and you read it here first!

Also Read: Fab-U-Less! The 2013 Global Semiconductor Awards!


Virtual Emulation Extends Debugging Over Physical

Virtual Emulation Extends Debugging Over Physical
by Pawan Fangaria on 12-13-2014 at 7:30 am

Amid the burgeoning complexity of SoC verification, with ever increasing hardware, software and firmware content, verification engineers are hard pressed to learn multiple tools, technologies and methodologies and still complete SoC verification with full accuracy on time. The complexity, size and diversity of SoCs have increased while time-to-market has decreased, thus compressing the time to design and verify. Engineering teams scattered across the globe need access to limited common verification resources, especially hardware emulators, in order to gain confidence in complete SoC verification. Emulators are generally kept in labs and accessed physically, which limits their availability to a larger team. What if the emulator environment is virtualized and made available to the verification team across the globe? It would unleash the resource capability, scale up verification and improve design and verification team productivity to a large extent.

Lauterbach’s hardware assisted debug toolset TRACE32 and Mentor’s Veloce emulator provide a unified debugging environment that can be used in virtual (for models and simulators) as well as physical (for FPGA prototypes and silicon) mode.

The Veloce OS3 emulation system provides a complete verification environment (integrated with the Questa simulation environment) for software and hardware engineers through a unique architecture that enables simulation-like interactive debug and fast turnaround for SoC configurations that can scale up to 2 billion gates. It is SCE-MI 2 standards compliant, providing interoperability with other software-based hardware simulators. Providing visibility of all nodes at all times, it can be used for pre- as well as post-silicon debug.

Through VirtuaLAB environment, multiple users across the globe can simultaneously run verification remotely on the same machine. The Veloce OS3 VirtuaLAB peripherals are reconfigured instantly to support multiple projects and rapidly shifting priorities. This enables global emulation resource management which maximizes engineer productivity and verification ROI. The verification environment is flexible enough to allow hardware stimulus through physical connection, software stimulus through virtual connection, or a combination of the two.

In a physical connection, the difference in voltage levels and speeds between actual silicon and the Veloce emulation model is bridged by Veloce iSolve products (which include iSolve ARM Cortex JTAG Trace) through a range of solutions including Ethernet, USB, multimedia, and debug.

Veloce offers a co-modelling environment where a user can use virtual environments (e.g. C models) on a host computer connected to the actual hardware design (RTL) implemented in Veloce. This approach can be used to replace hardware solutions (e.g. a physical debug probe) with a software model, thus enabling concurrent users to access a single build of the design. In this setup, untimed transactions are transferred over the TBX (Veloce TestBench Xpress) link to Veloce, with the vJTAG transactor translating them into a stream of JTAG pin level transitions. This improves performance compared to a co-simulation environment, which transfers the individual JTAG pin level transitions over the TBX link.

Veloce iSolve allows users to connect the real debug hardware used on a FPGA prototype or silicon platform. Virtual Probe replaces the physical solution, enabling a virtual connection of the JTAG debugger to the design through the TBX co-modelling interface. Both iSolve and Virtual Probe support the same debugger UI environment.

A Virtual Probe is implemented within the co-model host using both Lauterbach and Mentor software. The T32MCIServer replaces the debug hardware; the vJTAG transactor interface takes the JTAG sequence transactions from the T32MCIServer and sends them over TBX to Veloce, which recreates the I/O pin signals at the SoC boundary.
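
To picture what a transactor like vJTAG does, here is a minimal, illustrative sketch (my own simplification, not the Lauterbach or Mentor implementation): it accepts one untimed shift request from the co-model host and replays it as pin-level JTAG activity inside the emulator.

// Illustrative only: a transactor-style interface that turns a single JTAG
// shift transaction into per-cycle TMS/TDI/TDO pin activity on the DUT side.
interface vjtag_bfm (input bit tck);
  logic tms, tdi;
  logic tdo;

  // Called once per transaction from the co-model host side; the loop below
  // is where the pin-level JTAG sequence is recreated cycle by cycle.
  task automatic jtag_shift(input  logic [63:0] data_in,
                            input  int          num_bits,
                            output logic [63:0] data_out);
    for (int i = 0; i < num_bits; i++) begin
      tdi <= data_in[i];            // serialize the request onto TDI
      tms <= (i == num_bits - 1);   // leave the shift state on the last bit
      @(posedge tck);
      data_out[i] = tdo;            // capture the response bit by bit
    end
  endtask
endinterface

The point is that only the single jtag_shift call crosses the TBX link; the per-cycle pin wiggling stays inside the emulator, which is where the performance gain over pin-level co-simulation comes from.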

TRACE32 contains a debug probe, a real-time trace tool, and a logic analyzer which are architecture independent and can be operated either standalone or integrated together through a system controller. It supports all common microprocessor architectures in the embedded world.

A detailed description of connecting TRACE32 and iSolve ARM Cortex JTAG Trace, as well as connecting TRACE32 and Virtual Probe, along with other details about this methodology, is provided in a white paper at the Mentor website.

Both the physical and virtual methods have distinct advantages: the physical connection, for final testing of a design before tape-out, provides a very high level of confidence that the real hardware after manufacturing will work with the real debugger; the virtual connection provides a highly flexible enterprise-level solution for multiple concurrent remote users to develop software and testbenches that use TBX.

More Articles by Pawan Fangaria…


Will 3DIC Ever Be Cheap Enough for High Volume Products?

Will 3DIC Ever Be Cheap Enough for High Volume Products?
by Paul McLellan on 12-12-2014 at 8:00 pm

More news from the 3D ASIP conference. Chet Palesko of SavanSys Solutions had an interesting presentation with the same title as this blog (although this blog draws from several other presentations too). Chet took a look at which aspects of 3D are likely to get cheaper going forward. His starting point was that anything not used exclusively for 3D is probably already close to as cheap as it is going to get. For example, 3D chips involve bumping, as does flip-chip. So although there may be some small amount of incremental improvement in cost, most of the improvement has already happened, driven by flip-chip. And, as an interesting aside, flip-chip is still more expensive than wire-bond, which is why it is only used for about 17% of chips. You only use flip-chip if you need it for reasons other than cost (smaller package, higher performance etc).

Another thing that I had not thought of is that 2.5D is expensive compared to true 3D. Of course you have to manufacture the interposer, which is done in a non-leading-edge process such as 65nm. But the TSVs and bumping it requires are costly, and since the interposer is large you don’t get many interposers per wafer. There are some savings since the die don’t need TSVs, but they still all need bumping. Qualcomm, in particular, maintains that interposers are the wrong way to go (not that they have any 3D chips in production, though). At a conference in Europe recently someone had numbers showing 2.5D was 6 times the cost of true 3D, although that seems high to me (but I’ve not seen those numbers).

So 3DIC is only the best choice if no other technology meets the product requirements, typically things like memory bandwidth, physical package size, ultra-low power and so on. Most technologies are ones of last resort in that they are more expensive but have other advantages (embedded passives, multi-chip modules, even SoCs). A few are cheaper and thus displace the existing technology: surface mount displaced through-hole assembly and, until recently, each new process node displaced the previous one (at least for digital logic).

Earlier, Jan Vardaman had listed the big-picture areas where improvement is needed:

  • EDA tool availability (especially pathfinding and thermal analysis)
  • Assembly: stacking of die
  • Wafer thinning and the temporary attach and debond process
  • Thermal, especially logic on memory (since hot spots may take DRAM out of spec)
  • Test methodology and the need for known-good-die (KGD) making wafer sort so much more important and costly
  • Yield of the whole process
  • Setting up the whole supply chain and who does what

To keep it simple, let’s assume a 3DIC with just two die, a bottom die and a top die. The bottom die cost drivers are TSV creation, TSV reveal (basically CMP), TSV creation yield loss, thin wafer yield loss, and testing/known-good-die costs. The top die cost drivers are the RDL process cost and the wafer bumping cost. Then assembling the two die has silicon-to-silicon process cost, silicon-to-silicon yield loss, silicon-to-substrate cost and substrate cost. So which of these has big improvement potential? See the tables below; the column on the right summarizes whether improvement is likely.
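
For a back-of-the-envelope view of how these drivers stack up (my own sketch with hypothetical groupings, not Chet’s actual cost model), the cost of the two-die stack is roughly the yielded sum of the three groups above:

C_3D ≈ (C_bottom + C_TSV + C_reveal + C_test) / (Y_TSV × Y_thin) + (C_top + C_RDL + C_bump) + (C_assembly + C_substrate) / Y_stack

The yield terms in the denominators are why known-good-die testing looms so large: a bad die that gets stacked anyway throws away the cost of everything bonded to it.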




Chet looked at some alternatives, but to wrap up he told us we were asking the wrong question. The right question is “Will the markets that require 3DIC grow fast enough to drive volume manufacturing?” After all, performance, power and miniaturization trends are continuing. And there will be some cost reduction for 3DIC, which can open up new markets. One caveat is that the cost of 2.5D packaging (such as the Xilinx FPGA) should not be extrapolated to 3D. Even so, 3D is unlikely to ever be the cheapest solution, and if you can use package-on-package (like the Apple Ax chips) or fan-out wafer-level packaging (FOWLP) you will.

But in the same way people design SoCs if they cannot use an FPGA or do a board-level design, people will use 3D when nothing else will do. Like the memory stacks announced by Micron, Samsung and SK-Hynix, the recently announced graphics integrations by Nvidia and AMD/ATI, the Xilinx high-end FPGAs and so on. These are all (or soon will be) in production but at comparatively low volumes.


More articles by Paul McLellan…


Benefits of Using Schematic Driven Layout

Benefits of Using Schematic Driven Layout
by Daniel Payne on 12-12-2014 at 12:00 pm

Most IC designs are developed by a team of professionals, often separated into distinct groups like front-end and back-end, logical and physical designers. Circuit designers use tools like schematic capture at the transistor-level to create a topology, then begin simulating the netlist with a SPICE simulator. Layout designers can manually place transistors, contacts, vias, cells and interconnect between cells. How should the circuit designers communicate their layout preferences to the layout designers?

A recommended approach is to use Schematic Driven Layout (SDL), a technique where on the front-end you can start to control where the initial placement of physical transistors and cells will happen, all without having to be an IC layout expert. Every schematic device gets a corresponding layout device, reducing the amount of time spent in LVS (Layout Versus Schematic). Time to market is the familiar driver in all electronics companies, so saving time by using an SDL flow can really help out.

There’s a webinar on this topic scheduled by Tanner EDA for Tuesday, December 16th at 11:00 AM Pacific time. Expect to learn the following concepts:

  • Schematic Capture with the S-Edit tool
  • Layout editing with the L-Edit tool
  • Schematic Driven Layout (SDL) starting with S-Edit
  • Using assisted routing in an SDL design flow
  • Eliminating manual routing
  • ECO tracking

Related – IC Place and Route for AMS Designs

Both circuit design engineers and layout designers would benefit from attending this SDL presentation. From past Tanner EDA webinars I know that the presenters use the EDA tools live, instead of just canned screen shots. Thuong U is the AE doing the presentation, and he has 13 years of experience with these EDA tools.

Related – Adding a Digital Block to an Analog Design

Manual routing is a technique often used in AMS designs in order to get tighter control over matched device requirements; however, by adding assisted routing you can get results more quickly while maintaining the accuracy. Schematic Driven Layout isn’t just offered by the big three in EDA, so having this at Tanner EDA gives you some choice in buying tools for your next AMS project.

Webinar

Register for this free webinar online.

Related – Affordable AMS EDA Tools at DAC


3D, The State of the State

3D, The State of the State
by Paul McLellan on 12-11-2014 at 8:00 am

I have been at the 3D ASIP conference that is held every year in Burlingame. It is far and away the best place to get a snapshot of what is going on in 3D (and 2.5D) IC design each year. One of the presentations was by the guys from Yole on where the industry is right now. Other presentations were on pathfinding, power reduction (did you know Jerry Frenkil likes Lone Star Beer?), building TSVs, planarity during assembly and much more.


In the past the conference has been all about how 3D is going to happen “real soon now”, but each time a 3D design was announced it would turn out, once the teardown was done, that it didn’t use through-silicon vias (TSVs) or any true 3D technology at all. It would be another year before something happened. But this year things started to happen.

3D and 2.5D are still too expensive for most applications, but the ones that can take advantage of the benefits and sell them to their customers are moving. Don’t expect 3D chips to show up in your cell-phone any time soon but in routers, and servers and high-end stuff it is starting to move. Samsung did a study that showed that compared to package on package designs (like the current Apple Ax designs) package size was down, power was down and bandwidth was up insanely. Designs where those are valued make sense.

There are some pretty high profile 3D things happening. The earliest was the Xilinx high end FPGA but that ships in very low volume at very high prices so isn’t really generating enough learning to get prices down. Samsung has announced DDR4 3D DRAM DIMM modules. The Micron hybrid memory cube is about to start shipping in volume: 4 memory chips on top of a logic chip with all the control logic. SK Hynix and Samsung have also announced 3D memory products moving into high volume manufacturing.

Announced but not yet shipping: AMD (ATI) will use 3D-stacked high bandwidth memory (HBM) on 20nm in 2Q next year. Nvidia will do the same in 2016. Intel has said they have the technology but haven’t announced anything, but then they are Intel. Even at the lower end of the market, Matrox has announced next generation GPU modules powered by AMD 3D stacks. If you want to be in graphics, get with the program.

There are other designs too. The common characteristic is that they are at the high end of the market, so that although 3D is still expensive, it has higher performance/bandwidth, and for those markets that need it the benefits justify the cost. I have no idea of the cost of Micron’s HMC but they admit it is a lot more than just buying the same amount of regular DRAM. But the performance is so much higher that if you are building high end servers or routers then you can justify it.

The big driver in SoC these days is mobile, but it is too cost-sensitive, so nobody seems to expect 3D to appear there any time soon. Qualcomm is on record as saying that the cost of interposers is too high for now and, given that all the growth in mobile is at the low end, you shouldn’t expect to see it any time soon.

For really big chips the savings from 2.5D can be big. A couple of years ago I was at a 3D workshop where eSilicon ran their cost model on the Xilinx parts and reckoned they were saving 80% by doing a 2.5D interposer with 4 small chips versus trying to run a maximum (reticle limit) sized die on an immature process.


So 2015 is the year of 3D. It is coming. Yes, only in high priced products for now, but as the learning from running in volume starts to percolate through the supply chain it will get more economical. There don’t seem to be any major technical problems (we know how to build TSVs, even though they will get better; we know how to attach and debond a backing material so that when we thin the die it is manageable; the stress issues with different thermal expansions seem manageable). The big issues seem to be to run a lot of volume and to decide who does what in the supply-chain.

IoT is a 3D market. They are not going to go to 16nm. They need sensors, RF, analog and digital in the same package at low cost. 3D is clearly the way that will eventually get there. Not next year, but eventually.


More articles by Paul McLellan…


A Functional Verification Framework Spanning Simulation to Emulation

A Functional Verification Framework Spanning Simulation to Emulation
by Daniel Payne on 12-11-2014 at 2:00 am

Software engineers and firmware designers can find bugs, update their code and re-distribute it to users. In the consumer electronics world this means that my smart phone apps get updated, and my Android OS gets updated on a somewhat regular basis. On the hardware side, however, the design and verification of an SoC must be close to perfect, because there’s no easy way to do a field upgrade and the cost of a product recall can put your company out of business. Performing functional verification on an SoC is the focus of my blog today, and there are several tasks required during the verification process:


Verification Activities, source: Wilson Research Group and Mentor Graphics

Our semiconductor industry has created and adopted the UVM (Universal Verification Methodology) as a standardized approach that can improve electronic product quality during verification. A SystemVerilog simulator used during verification can run maybe 10-1,000 clock cycles per second, depending on the design size, and at that speed it just isn’t fast enough to boot an operating system or paint a screen with pixels.

Related – Coverage Driven Analog Verification

In typical fashion, where there’s a need, there’s often a product to meet that need, and for chip designers there are emulators which can run a few million clock cycles per second, allowing us to boot an operating system and fill a screen. The initial usage of UVM was focused on SystemVerilog simulators, so what about writing UVM testbenches that can also run on emulators?

Anoop Saha of Mentor Graphics wrote a detailed, 16-page white paper on this topic: From Simulation to Emulation – A Fully Reusable UVM Framework, and you may download it online. The idea is that verification engineers should be able to run tests in either a SystemVerilog simulator or an emulator, support SystemVerilog in emulation, and get faster test results in emulation than in simulation.

To make your testbench compatible with an emulator, split it into two components:

  • An HDL (Hardware Description Language) static component with DUT (Device Under Test), which runs on the emulator
  • The dynamic testbench running on the simulator, which contains the HVL (Hardware Verification Language) behavioral code

Here’s what these two blocks look like where yellow is the HVL, blue is the HDL, and green is the top-level:


Two top UVM Testbench
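
To make the dual-top idea concrete, here is a minimal sketch of what the two tops might look like (module and signal names are my own illustration, not code from the Mentor white paper; the bus_bfm interface is sketched further down):

// hdl_top is static and synthesizable and runs on the emulator;
// hvl_top holds the dynamic, untimed UVM testbench on the simulator side.
module hdl_top;                          // HDL side: clock, BFM interface, DUT
  bit clk;
  initial forever #5 clk = ~clk;         // free-running clock lives on the HDL side

  bus_bfm bus_if (clk);                  // synthesizable BFM interface (assumed)
  my_dut  dut (.clk(clk), .bus(bus_if)); // the design under test (assumed)
endmodule

module hvl_top;                          // HVL side: class-based, untimed code only
  import uvm_pkg::*;
  initial run_test("my_base_test");      // kicks off the UVM test by name
endmodule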

The interaction between HVL and HDL can further be refined by using a virtual interface construct where testbench components access SystemVerilog interfaces through a BFM (Bus Functional Model) interface. Shared parameters between HVL and HDL are defined in a common params_pkg file:


Acceleration ready two-top UVM architecture
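
A minimal sketch of such a shared package (the contents are my own illustration, not the white paper’s):

// params_pkg is compiled into both hvl_top and hdl_top so that widths,
// counts and other constants can never drift apart between the two tops.
package params_pkg;
  parameter int DATA_WIDTH = 32;   // width of the bus driven by the BFM
  parameter int NUM_PORTS  = 4;    // number of BFM instances on the HDL side
endpackage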

The HVL code is strictly untimed in this approach, because timed testbench code would not map onto the emulator and would slow emulation down. If you really need delays in a testbench then add them only inside an HVL to HDL call.

Related – Improving Verification by Combining Emulation with ABV

To keep communication between HVL and HDL to a minimum, it happens only at the transaction level; testbench objects on the left-hand side use a virtual interface handle to access signals in the HDL on the right-hand side.


Communication between HVL and HDL

Testbench components on the HVL side can:

  • Call tasks and functions inside the BFMs
  • Drive and sample DUT signals
  • Start BFM threads
  • Configure BFM parameters
  • Read BFM status

Virtual interface connection between HVL and HDL
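
Pulling the capabilities listed above together, here is a hedged sketch of the pattern: a synthesizable BFM interface on the HDL side that exposes a task, and a UVM driver on the HVL side that calls it through a virtual interface handle. All names, and the bus_item sequence item, are illustrative assumptions rather than code from the white paper.

import uvm_pkg::*;
`include "uvm_macros.svh"

// HDL side: the BFM owns the timed, pin-level behaviour and runs on the emulator.
interface bus_bfm (input bit clk);
  logic        valid;
  logic [31:0] data;

  // One transaction in, a few clock cycles of pin activity out.
  task automatic drive_word(input logic [31:0] payload);
    @(posedge clk);
    valid <= 1'b1;
    data  <= payload;
    @(posedge clk);
    valid <= 1'b0;
  endtask
endinterface

// HVL side: the driver is untimed and only makes remote task calls into the BFM.
class bus_driver extends uvm_driver #(bus_item);     // bus_item assumed defined elsewhere
  `uvm_component_utils(bus_driver)
  virtual bus_bfm vif;                                // handle set via uvm_config_db

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      vif.drive_word(req.data);    // the transaction crosses the HVL/HDL boundary here
      seq_item_port.item_done();
    end
  endtask
endclass

Because drive_word consumes clock cycles on the HDL side, any delays live in the BFM, which is exactly the rule mentioned earlier about keeping the HVL code untimed.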

Transaction-level modeling allows communication in both directions between HVL and HDL:


Inbound and Outbound Communication

Make sure that all of your function arguments are synthesizable to enable this two-way communication.

Transaction objects can use dynamic arrays or queues, so that you can model activity like Ethernet transactions. Streaming communication is accomplished through SCE-MI (Standard Co-Emulation Modeling Interface) pipes, where the receiving and sending ends run in independent threads. Here’s an example of a SCE-MI pipe operation where the HVL sends a whole packet, then the HDL receives it across multiple clock cycles.


SCE-MI Pipes example

It is now possible to reuse a UVM testbench across both simulation and emulation by following the methodology of:

  • Creating an untimed HVL testbench, and a timed HDL with the DUT and BFM logic
  • The untimed HVL code has no delays
  • HDL code must be synthesizable as defined by the emulator platform
  • Remote function calls are used to communicate between HVL and HDL, with no direct HDL signal access from any HVL class

Emulation is attractive because of its high speed, providing speedups of 50 to 5,000X over pure simulation. By using this approach you gain the verification benefits of using an emulator in simulation acceleration mode:

  • Includes assertions
  • Functional coverage
  • Power aware
  • Replay-based debug
  • Reporting
  • Profiling

The complete 16-page white paper, including coding examples, is online now.


IDT bolsters RF portfolio amid LTE boom

IDT bolsters RF portfolio amid LTE boom
by Majeed Ahmad on 12-10-2014 at 7:00 pm

The global rollout of fourth-generation wireless (4G) infrastructure requires new architectural frameworks for RF devices with demands like high linearity. Integrated Device Technology (IDT) Inc. is confident that its high-performance RF solutions for high-bandwidth communications will open a new window of opportunity in the continuing evolution of Long Term Evolution (LTE) technology.

The base station hardware now represents almost 50 percent of the wireless infrastructure market. And according to Earl Lum, President of EJL Wireless Research, LTE technology could comprise 90 percent of all RF base station system shipments by 2018.

According to Greg Waters, IDT’s President and CEO, the wired and wireless communications infrastructure business now represents two-thirds of IDT’s revenue. He particularly mentioned the RF business tied to the burgeoning LTE networks as an exciting opportunity. “Fourth-generation base station manufacturers make up around a US$50 to US$70 billion business,” Waters said. “At the same time, however, RF components in the 4G base stations need higher precision because of size and power constraints.”

Base stations traditionally generate a lot of power, but that has to change because the size of base stations is shrinking in the LTE-centric 4G networks. Another crucial challenge is the co-existence of 2G, 3G and 4G signals within the same spectral band. It is now imperative for base stations in the 4G environment to scale from macros to small cells while they improve in reliability as well as data throughput.


RapidIO is used in 100 percent of 4G rollouts

Here, Waters said, IDT’s expertise in RapidIO backplane communications will be highly valuable in reducing overall network latency. “New LTE-Advanced and C-RAN designs are now adopting the RapidIO interconnect standard and it’s being used 100 percent in 4G rollouts.” Another prominent advantage that IDT claims to bring to its RF portfolio is expertise in signal integrity. “Signals from prepaid 2G users interfere with 3G and 4G users and that degrades quality of service (QoS) for the high-ARPU business,” he added. “IDT’s expertise in signal integrity is critical in countering noise and interference.”

RF switch launch

IDT’s RF product portfolio includes mixers, digital step attenuators, modulators/demodulators and RF timing devices that encompass the design footprint from antenna to data converters. The San Jose, California–based chipmaker says it has over 40 RF products in production or sampling. IDT also claims to have captured 50 percent RF market share in China’s 4G network build-out.

The RF chips now represent the company’s fastest growing business. “We started from zero only five short years ago,” said Dave Shepard, VP and General Manager of IDT’s RF & Timing Division. “We are expanding our RF operations by 50 percent over the next 15 months.”

IDT has recently made an entry into the RF switch market with the launch of the F2912 device that offers low insertion loss and high isolation and linearity. It is aimed at 2G, 3G and 4G base stations, microwave backhaul and front haul, test equipment, CATV head-end, WiMAX radios, and general switching applications.


IDT’s F2912 RF switch

The F2912 chip supports a frequency range of 300 kHz to 8 GHz in order to achieve broad bandwidth without sacrificing performance across the entire frequency range. It boasts high isolation of 60 dB at 2 GHz to reduce signal leakage between adjacent RF port paths. It also features a high OIP3 of +64 dBm to reduce intermodulation distortion. Moreover, the F2912 RF switch offers an operating temperature range of -55 to 125°C for high reliability in harsh thermal environments.

More product details about the F2912 RF switch are available at http://bit.ly/1IiF9y3

Image credit: IDT Inc.

Majeed Ahmad is author of Age of Mobile Data: The Wireless Journey To All Data 4G Networks that chronicles the evolution of mobile data technology and how that eventually led to pure data LTE network architecture.


TSMC Gets Ready for IoT

TSMC Gets Ready for IoT
by Paul McLellan on 12-10-2014 at 11:36 am

With all the talk about 14/16nm and 10nm it is easy to forget that older processes still matter. Eventually 16nm may end up being cheaper than 28nm, but for the time being 28nm seems to be a sort of sweet spot: not just cheaper than every process that came before it (which was true for every new node) but also cheaper than every process that will come after it (which is new territory for the semiconductor industry). If you are designing an application processor for a smartphone then you will move to the new nodes as fast as you can. But other markets, in particular products for the internet of things (IoT), don’t need that. They need low power, digital/analog/RF integration and so on. This creates new opportunities in the non-bleeding-edge process geometries.

With the explosive growth phase of smartphones over, IoT is expected to provide a lot of the high-growth consumption of semiconductors for the coming few years. The PC market is nearly flat, and smartphone growth will mostly be at the low end of the market, with the high end now being mostly a replacement market.


TSMC has introduced ultra-low power versions of some of its mature processes. The current status is that ultra-low power versions of 0.18um and 90nm are in production, and 55nm, 40nm and 28nm ULP processes will enter risk production in 2015. There are also integrated RF and flash options. These are especially attractive for IoT designs that need extremely low power and connectivity. Some IoT applications (such as automotive) are not all that power sensitive since there is a large battery available, but others such as wearables require very long periods between recharges, and still others are expected to need a battery that lasts for the life of the product, or to scavenge power from their local environment.


Some details of the processes: first, they operate at a lower Vdd, which reduces both standby and active power (and leakage). They are optimized for the 0.5-0.7V range, and the tailored eHVT device enables an over 70% reduction in standby power. However, they can also work at the higher voltages of 1.1V (40LP) and 1.2V (55LP).

Most IoT designs don’t seem to need really high performance or billions of transistors, since both would consume too much power. But they need the combination of very low power operation, especially in standby where they will spend most of their life, and RF (since they need connectivity through cellular, WiFi, Bluetooth or some other radio interface).


So the bottom line is that the new processes are compatible with the existing ecosystem at 28HPC, but the operating voltage is reduced by over 20%, active power by over 30% and standby power by over 70%, with the capability to build an SoC that includes RF and embedded flash, which is perfect for the IoT market.
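
As a rough sanity check on those numbers (my arithmetic, not TSMC’s): dynamic power scales roughly as P ∝ C·Vdd²·f, so cutting Vdd by 20% gives 0.8² = 0.64, or about a 36% reduction in active power at the same frequency, which lines up with the “over 30%” claim. The much larger 70% standby saving comes mostly from the tailored eHVT device attacking leakage rather than from the voltage reduction alone.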


Intel has Another First for 14nm Production!

Intel has Another First for 14nm Production!
by Daniel Nenni on 12-10-2014 at 7:00 am

An interesting thing happened while I was researching a slide from Bill Holt’s “Advancing Moore’s Law” presentation at last month’s analyst meeting. Slide #19 mentioned that Intel was the first to use “air gap” dielectric spaces to improve performance in a digital logic flow for microprocessors. I know a certain foundry that is actively researching air gapping, but for production this is a pretty major announcement that did not get the proper accolades in my opinion. The benefit of using the “ultimate” low-K dielectric can be huge; however, the devil is in the details.

To clarify I’m hoping the esteemed members of SemiWiki can help with the following questions:

  • What (additional) design restrictions does this impose?
  • What are the metal width and space options, to enable the “sealing” dielectric to surround the air gap?
  • What happens around vias?
  • What % of a typical wire could have a surrounding air gap?
  • Are general SoC designers ready for the additional tradeoffs (less wiring flexibility for reduced RC interconnect delays and noise coupling)?

The first mention of this I found was in a paper at the 2010 International Interconnect Technology Conference, where researchers from Intel confirmed that the design constraint of a fixed spacing between interconnect lines allows air gaps to be used in manufacturing to increase circuit speeds. Coincidentally, one of the other references that came up when Googling around is a blog on the Coventor website, “Got Air Gaps?” by Ryan Patz of Applied Materials, which is definitely worth a read. “Coincidentally” because I will be moderating a panel with Coventor at IEDM next week in San Francisco on process variation:

Survivor: Variation in the 3D Era
It’s a jungle out there. The era of 3D semiconductors, 3D NAND Flash, FinFETS and unprecedented process complexity introduces new pitfalls for the cunning engineer to overcome. Find out how the best and the brightest are outwitting the competition with creative ways to navigate the treacherous landscape of advanced IC design and manufacturing. They know the key to survival in dealing with process variation is to …

Reduce It. Contain It. Understand It.
Join a group of rugged survivors at an interactive panel discussion, moderated by one of the original castaways from the IC island, Daniel Nenni of SemiWiki, and featuring:

  • Rich Wise, Lam Research
  • Jeffery Smith, TEL America
  • Tomasz Brozek, PDF Solutions
  • Tom Dillinger, Oracle Corporation
  • Jan Hoentschel, GlobalFoundries
  • David Fried, Coventor

Location: “Carmel Room” at Hotel Nikko, San Francisco
Date: Tuesday, December 16, 2014
Time: 5:30pm - 8:30pm (Cocktails and hors d’oeuvres) Panel begins at 6:00pm

MORE INFO HERE

You do not need an IEDM badge for this so please stop by if you can and meet the people behind the semiconductors that make modern life, well, modern.

About Coventor

Coventor, Inc. is the market leader in automated design solutions for developing semiconductor process technology, as well as micro-electromechanical systems (MEMS). Coventor serves a worldwide customer base of integrated device manufacturers, memory suppliers, fabless design houses, independent foundries, and R&D organizations. Its SEMulator3D modeling and analysis platform is used for fast and accurate ‘virtual fabrication’ of advanced manufacturing processes, allowing engineers to understand manufacturing effects early in the development process and reduce time-consuming and costly silicon learning cycles. Its MEMS design solutions are used to develop MEMS-based products for automotive, aerospace, industrial, defense, and consumer electronics applications, including smart phones, tablets, and gaming systems. The company is headquartered in Cary, North Carolina and has offices in California’s Silicon Valley, Waltham, Massachusetts, and Paris, France. More information is available at http://www.coventor.com.


CTO Interview with Dr. Wim Schoenmaker of Magwel

CTO Interview with Dr. Wim Schoenmaker of Magwel
by Daniel Payne on 12-09-2014 at 7:00 pm

I visited the Magwel booth at DAC in June and chatted with Dundar Dumlugol, the CEO, about their EDA tools that enable 3D co-simulation and extraction. Since then I’ve made contact with their CTO, Dr. Wim Schoenmaker, to better understand what it’s like to start up and run an EDA company. Magwel’s history goes back to 2003, when Wim Schoenmaker and Peter Meuris founded the company based on their research work at IMEC to analyze and simulate the entire layer stack of a semiconductor structure.


Wim Schoenmaker, Ph.D.

Related: EDA for Power Management ICs at DAC

Q&A

Q: Why did you start Magwel and what challenges were you trying to solve?

Around 2000 at IMEC I was involved in the characterization of on-chip passive structures and it was not clear (in those days) what impact semiconductor junctions might have on the quality factors of integrated passives. Partial solutions existed in which the semiconducting regions were replaced by metals with high permittivity and moderate conductance, or by insulating regions. During the last three decades (1970-2000) technology CAD, or TCAD, the discipline of modeling semiconductor processes and devices, became a mature field. The situation at that time was that there existed two worlds of modeling: the TCAD world for devices (and processes) and the Maxwell field solvers world, which was originally a spin-off of accelerator and waveguide design. My ambition was to merge both views. I decided to rethink the TCAD device modeling approach from scratch, but this time to include the full electromagnetic picture and not merely the electric-only view. Being the TCAD team leader at IMEC and having a background in lattice gauge theories, I was well equipped to take on such a job.

Moreover, resolving the problem of on-chip passives was strongly supported by several leading persons at IMEC, such that seed funding could be accessed to start MAGWEL.

Q: What do you like best about being a CTO?

Being a CTO at an SME (Small or Medium Enterprise) of the scale of MAGWEL, the title is more a formality than a manual for filling daily activities. In practice I am still very much involved in the actual development of our products and this is the kind of work I enjoy the most. A serious part of my activity goes into setting up collaborative research and development projects. These projects serve as stepping stones towards mature products. For example, we currently have a running EU-funded FP7 project to explore novel routes for extracting SPICE and Spectre compatible models for electron charge and heat transport. This project also addresses uncertainty quantification (variability).

The common denominator of such projects is that advanced research results from the mathematical and computer science communities are applied in microelectronic engineering. More recently, together with ON Semiconductor, we are cooperating in a research project on ESD network verification, funded by the national institute of science and technology (IWT). In general, my job is to perform feasibility studies of novel approaches to deal with as yet unsolved design challenges. Such exploratory activities consume budget that is not recovered in the short term by product sales, and therefore these funding channels must be tapped. This requires writing convincing and well structured research proposals, for which I am the key person responsible.

Q: What trends in semiconductor design and EDA are you most concerned about these days?

Over the last years I have witnessed a decline of generic in-house scientific expertise within the large-scale IDMs. I mean, nowadays there is less room for research by looking over the fence. The cut between core business and the scientific developments that take place in neighboring fields has become sharper. There is an outsourcing of tool development, and the consequence is that SMEs will take over the job of entering the unknown territory of adapting novel developments, as illustrated above. It is not really a concern since it opens possibilities. Needless to say, a substantial risk is involved since not all ideas turn out to work as nicely as one could forecast at the start. Nevertheless, over the years one develops a sixth sense for what has a good chance to succeed and what is doomed to fail.

Q: What kind of advice would you give to someone starting up a technology company today?

This is a hard one.

First of all, convince yourself that you have a product that is really different from what is already out there and that the product deserves to be put on the market. If you are not really enthusiastic about it, nobody else will be. It may turn out that you are the only one who is enthusiastic; so be it. The next advice is to be prepared for good times as well as bad times.

The worst motivation to start a technology company is the ambition to become rich overnight. Next, of course, it is crucial to listen to the customers. They should tell you what is important and needed. Do not take the attitude of telling customers what they should need.

Q: What is your best accomplishment at Magwel so far?

One of my qualities is probably ‘persistence’. I keep hammering on a problem until it crumbles and a solution is found. With STMicroelectronics we have developed a version of our electromagnetic TCAD solver in the transient regime (the results were published in IEEE Transactions on CAD and IEEE Transactions on Electron Devices in June 2014). It was applied to deal with fast transients and high current surges in ESD protection devices. I completed the inclusion of Lorentz force effects in the solver at ST’s request. The results were the cherry on the pie after many years of research. The permanent confrontation with measurement results has been decisive for the quality of the tool. This is the guideline for all our products. We are not satisfied if our products merely match the outcomes of other simulation tools. The true comparison is matching silicon data.

Q: Why is it that start-ups tend to have more innovation than the big three in EDA?

I think it is intrinsic to the EDA business: start-ups are like evolutionary experiments in nature: some species survive and others perish. When they do survive and turn out to be valuable, a big load of marketing and sales costs comes with it. Such costs have not much to do with the intrinsic technology, and it is a better business model to share these costs. Here the big three come into the picture. It is a matter of efficiency to distribute the marketing, sales and other costs over a large product portfolio.

Related: Ensuring ESD Integrity

Q: How do the advances in foundry processes create new challenges for Magwel?

Processes get more complex and down-scaling makes devices more vulnerable. Therefore, physical mechanisms that were harmless in coarse devices and layouts can be damaging in down-scaled devices. As a consequence, rule-based design must be replaced by physical design that takes into account more subtle aspects of the underlying physics. Building simulation tools that are capable of doing so is a challenge, especially if you want to apply them to big layouts. New foundry processes are a blessing for MAGWEL since they trigger the need for our products. MAGWEL is rooted in the physical approach to design problems.

Looking back we can say that sometimes it was overkill to incorporate so much physics, but we learned important lessons. There is a healthy friction between accuracy (the physics approach, to understand what is going on) and efficiency (the engineering approach, to make things work). Our product development team is a mixture of engineers, physicists and computer scientists, complemented by collaboration with mathematicians, leading to the best results.

Q: What will success look like at Magwel in twelve months?

This year (2014) turns out to be very successful in terms of revenue growth thanks to product sales. We have new products coming up that are now in development with core partners, such as SPX, our product for extracting substrate parasitic bipolar transistors. SPX is a unique tool that can simulate and extract substrate parasitic bipolars at the chip level, which is a quantum leap over what TCAD tools can do. Our ESDi tool, which includes the modeling of the highly non-linear response of protection devices, clearly fills a need. Our product PTM-ET, the power transistor modeler with electro-thermal transient simulation, will be upgraded in the next release to deal with much larger (full) chip heat capabilities. Our electromagnetic TCAD solver, which was originally developed for the frequency regime, is now also available in the transient regime to simulate ultra-fast large signals.

My expectations are high for next year in terms of revenue growth and additional hiring to realize the developments that are requested by our customers. We expect growth of over 50% in 2014, following 55% growth in 2013. Magwel has entered an accelerating growth phase thanks to the success of its products in the marketplace.

     


SPX: Substrate Parasitic Bipolar Extraction, chip-level 3D substrate noise integrity analysis

Related: Electro-Thermal Simulation of Power Transistors

Also Read:

IROC Technologies CEO on Semiconductor Reliability

CEO Interview: Jens Andersen of Invarian

CEO Interview: Jason Xing of ICScape Inc.