
IEEE S3S Rump Session: “What Does IoT Mean for Si Technology?”

by khaki on 09-20-2015 at 12:00 pm

For the second year in a row, Gartner’s Emerging Technologies Hype Cycle puts the Internet of Things (IoT) at the Peak of Inflated Expectations. Not only are many online forums filled with debates on IoT-related topics, but, more importantly, virtually all semiconductor companies have made announcements pertaining to their plans to address this potentially massive market. With the internet of people reaching a plateau, IoT is considered by many to be a new wave for the continued growth of the semiconductor industry.

However, apart from vague discussions of system cost or long battery-life requirements, little is said about the implications of IoT for wafer manufacturing and chip design. Legacy CMOS technologies in fully amortized fabs are advertised as the solution for the IoT market because of their low leakage current, at the same time that leading-edge technologies, mainly 28nm, are being tweaked to lower cost and active power; this is testimony to the wide range of opinion on IoT requirements. The divergence is due in part to the extremely wide range of complexity one can imagine for IoT: from a simple egg counter in the fridge to a system that understands human emotions.

A smartphone plus its user (even a toddler) can be viewed as a smart IoT node, where the user provides a very wide range of computing capability. However, this level of smart computing power is beyond the capability of many of the classical IoT nodes being discussed today. The ultimate applications of IoT can be unlocked once such a level of smartness is implemented either on the server side or, better yet, at each node.

At the 2015 edition of the IEEE S3S Conference, we are fortunate to have a diverse panel of experts to cover “What does IoT mean for Si technology” at the Rump Session. The open atmosphere of the session, and the participation of an audience that drives semiconductor technology in R&D, manufacturing, and design as well as academia, provide a unique opportunity for debate. I am chairing this year’s panel session and the panelists are:

  • Christophe Chevallier, Ambiq Micro
  • Stanley S.C. Song, Qualcomm Technologies Inc.
  • Ali Niknejad, University of California, Berkeley
  • Norikatsu Takaura, Hitachi

The conference is held Oct. 5-8, 2015 at The DoubleTree by Hilton Sonoma Wine Country. The Rump Session will be held in the evening of Wednesday, Oct. 7th, after the conference cook-out. I hope to see many of you at the conference, but if you cannot make it, I am soliciting questions for the panelists. Please feel free to drop me an email or post your questions as a comment.


7 Deadly Sins in Product Strategy for EDA Startups

by Bernard Murphy on 09-20-2015 at 7:00 am

If you google “7 deadly sins of startups” you get lots of hits on mistakes for social networking ventures, only a few of which are relevant to EDA startups. In EDA you have to demonstrate real growth quickly with a very tech-savvy audience in a handful of bluechip accounts. So throw away the research you did on the web because it isn’t going to help. Success is still possible though if you avoid some of the following blunders. I’ve committed a few myself and have seen up close the anguish of the others.

1. The “me-too” product
It’s amazing how often would-be entrepreneurs look at a successful product in the industry and think “hey, I could do that too”. This is a huge warning sign that you don’t have an original idea and that your differentiation will at most be in quality of result. The first problem is obvious. The second is a delusion unless you can prove 10X superiority over market leaders. That requires a fundamentally different approach (aka an original idea). Why isn’t 3X good enough? Because everyone knows at least some of that advantage will evaporate in a production environment, and 1.5X – 2X isn’t enough to justify a switch from a leader who’ll almost certainly catch up and who already provides bundled pricing and local support. (And don’t even think you’re going to compete on cost, unless you’re a Walton.)

If you don’t have an original idea, think harder or find a partner who does have an original idea.

2. The “one-customer” product
This is deceptively easy to fall into but equally dangerous. You have a good contact at a major account, you build a custom product and they’re happy with the result. They swear everyone in the industry is going to need this. But any single customer is a terrible judge of the general market because they’ve seen very few flows. What you built is unlikely to translate to another design team, much less another account. Above all, do not get dragged into “integration products” (build me a flow). These never translate, even within a company.

Service companies often fall into this trap. Some products developed in services make the transition to general application, but not many.

Check with multiple prospects in multiple companies and dig deep before you decide you’ve found a trend. Don’t cling to the first positive thing you hear as validation – make sure you get the whole response, warts and all.

3. The “nice-to-have” product
You found general agreement that your product would be useful but is it essential? Is solving this problem their organization’s priority 1 or 2? Unless you hear a resounding YES, they’re grading this a nice-to-have product, which they’ll never buy. Avoid building stuff to improve productivity. Vitamins are always nice-to-have. And avoid solving short-term problems. If the big guys are promising a solution in a year, forget it (sometimes they don’t come through, but the risk is still too high).

Get multiple independent viewpoints; the majority should grade this as must-have. Don’t be afraid to ask about solutions from competitors. Disqualifying bad ideas quickly is a good thing and you may uncover new ideas.

4. The “one license per account” product
OK, so you found a product that has broad appeal and is an absolute must-have. But only one engineer in the whole company needs it and then maybe only for a couple of months each year. Your total available market just shrank to maybe 40 licenses. Design companies are prepared to pay up to a few tens of $K for an EDA product, even if it cures cancer. That may give you an interesting lifestyle company, but an exit is out of reach given lack of room to grow.

A product has to be able to grow to at least multiple tens of seats, if not hundreds of seats, per account. Check that your customers agree your product could do this. (Engineers are often scared to ask customers this kind of question. Don’t be. Reasonable customers expect well-run startups to size the opportunity.)
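The arithmetic behind this sin is worth making explicit. A back-of-envelope sketch; both the account count and the price are my own illustrative assumptions, not market data:

```python
# Back-of-envelope TAM for a "one license per account" product.
accounts = 40               # assumed accounts that each need one license
price_per_license = 30_000  # assumed price: "a few tens of $K"

tam_single_seat = accounts * price_per_license
print(tam_single_seat)      # 1200000 -- a lifestyle business at best

# The same product at 50 seats per account is a different company
tam_multi_seat = accounts * 50 * price_per_license
print(tam_multi_seat)       # 60000000
```

Even generous pricing cannot rescue a single-seat product; only seat growth moves the TAM into exit territory.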

5. The “revolutionary as long as you radically change your design flow” product
This was one of my sins – I built a product around IP-XACT design creation. Looked good because IP-XACT is an industry standard and several other companies were already on-board. But for others adoption required ditching RTL-based design at the SoC level and switching to IP-XACT-based design. That’s a huge change. Design teams look for incremental flow changes; tight schedules make major changes too risky. There are exceptions (when a team can start from scratch) but you can’t build a viable business on exceptions.

Run, don’t walk away from anything that requires a radical shift in methodology. Even SystemVerilog allowed for incremental adoption.

6. Uncritical love for your product
Passion for your creation is essential but it can also blind you to problems, especially deep problems in translating the concept to user needs. You can’t rely on your team or prospects to tell you your baby is ugly; your team wouldn’t dare and prospects would rather avoid conflict (at least before they have bought). Absent critical feedback you fail to see the need for major course corrections and your business mysteriously evaporates.

You need to decide early on whether you want to be right or you want to be successful. You probably can’t have both. Recruit an independent advisor whose job is to tell you what you don’t want to hear and who can contribute to discussions on corrective action.

7. Confusing cost with opportunity cost
It’s important to be frugal when running a startup, but there are limits. Reinventing commodity components (netlist/RTL readers and GUIs are two obvious examples) will tie up resources for much longer than you think, and will ultimately cost more, in slower progress on real differentiators and in lost goodwill, than you would have spent on off-the-shelf solutions.

Never forget you are in a race to demonstrate differentiated product traction with key customers within 5 years. Wasting any of this effort on reinventing commodity functionality you could have bought (or partnered to get) is crazy.

It’s possible you could defy one or more of these guidelines and still have a winner. But then you’re betting against the house. Maybe you’ll win, but I doubt you’ll find investors.



We’re Number Two, We Try Harder

by Paul McLellan on 09-19-2015 at 7:00 am

One of the big surprises I got at Synopsys’ ARC conference is that ARC is #2 in terms of share of licensed microprocessor shipments. I think most readers of Semiwiki would know ARM is #1 but would guess that MIPS (now owned by Imagination Technologies) is #2. But you’d be wrong, ARC is over twice as big.

Last week Synopsys held the ARC Processor Summit. The first keynote was by Aart, who I guess I should officially describe as one of Synopsys’ two Co-CEOs. His presentation was titled IoT: From Silicon to Software, which is the tagline Synopsys has adopted as the acquisition of Coverity (and other companies) has put them firmly in the software space as well as the EDA and IP spaces. Aart said that in semiconductors there have really been just a few big waves. First the PC wave of computation. Next the mobility wave, when mobile phones first came on the scene. The third wave was the combination of mobility and computation, which gave us smartphones, and the next wave is IoT, which is really “smart everything.”

The second keynote was from Linley Gwennap about the licensed microprocessor market. His forecast is a CAGR of 13% through 2018, faster than the overall IP (or EDA) market. In 2014 there were 15.3B chips shipped containing one or more licensed microprocessor cores. The share broke out like this:

| Company | 2014 share |
| --- | --- |
| ARM | 72% |
| Synopsys ARC | 10.6% |
| Imagination MIPS | 4.7% |
| Cadence Xtensa (Tensilica) | 3.9% |
| Cortus | 1.4% |
| Andes | 1.2% |
| Other | 0.2% |

Linley pointed out that process technology no longer offers a free lunch for raising transistor budgets. The Apples and Qualcomms of the world are pushing down to 14/16nm and on to 10nm, while lots of others are holding back at 28nm, the sweet spot of price per transistor. Core counts are still growing, from 2.9 cores per chip in 2014 to a forecast 4.6 in 2019. ARM owns the main application-processor socket (where the architecture is visible to the apps programmers), but there are lots of other sockets up for grabs. Integration reduces the number of chips but not the number of cores; indeed, many “peripherals” such as SERDES or Bluetooth also contain a microprocessor core. There is a trend for the biggest suppliers to do more of their own IP, although in all cases they still license the main application-processor architecture from ARM:

| All Standard | One Key IP in-house | Almost all IP in-house |
| --- | --- | --- |
| Mediatek, Spreadtrum, Marvell, AllWinner, Rockchip | Apple, Huawei (HiSilicon), Samsung | Qualcomm, Intel, Nvidia |

After the keynotes I sat down with John Koeter (who I interviewed just a few weeks ago) and Mike Thompson, who is the marketing guy for ARC processors and subsystems.

Synopsys acquired ARC through the Virage Logic acquisition 5 years ago. I can honestly say I expected Synopsys to largely ignore it, let it die on the vine, and focus their attention on partnering with ARM. But they did not. This made for a rocky relationship with ARM at first, since the two companies clearly need each other at the high end (where ARC doesn’t have an offering) and for their most leading-edge customers in the most leading-edge processes. Since acquiring the ARC business, Synopsys has tripled its size. They now have around 250 people working on ARC. They reckon the TAM for low- and mid-range microprocessor cores is $400M, so there is lots of room for growth.


They gave me the big picture view of their product line. There are 4 lines:

  • the low-end EM cores compete with ARM’s Cortex-M line (and cores from other vendors). There are 7 cores, with a three-stage pipeline and no cache. The lowest end of all is the 650LE, a fixed core that doesn’t require a per-unit royalty
  • the mid-range HS family (HS34, HS36, HS38) for high-performance embedded, with the capability to add instructions
  • subsystems for embedded vision (one out now, a second in development for ADAS) which contain 2-4 HS cores plus special object detection (convolutional neural networks, CNN)
  • roll-your-own-processor tools to create a custom ASIP (application-specific instruction-set processor), based on the Target compiler technology and the CoWare Processor Designer (both acquired a few years ago)

Since acquiring ARC they have upgraded the instruction set from v1 to v2 (added instructions, increased code density). The cores are very popular in storage and high-end networking, and Broadcom’s home entertainment division has standardized on ARC. Currently Synopsys estimate they ship 1.7B units per year containing ARC cores; with perhaps 2.5-3 cores per chip, that is perhaps 4B cores per year. Another interesting statistic: about 90% of their customers add their own instructions, customizing the core to their own needs.

See also John Koeter: How To Be #1 in Interface IP

The ARC Processor page on the Synopsys website is here.


TSMC OIP: What to Do With 20,000 Wafers Per Day

by Paul McLellan on 09-17-2015 at 4:42 pm

Today is TSMC’s OIP Ecosystem Innovation Forum. This is an annual event, but it is also a semi-annual update on TSMC’s processes, investment, volume ramps and more. TSMC have changed the rules for the conference this year: they have published all the presentations by their partners/customers. Tom Quan of TSMC told me that they will also provide a subset of the presentations TSMC gave to open the day.

The semiconductor business is driven by several large markets, the biggest of which is mobile. Fun statistics of the day: mobile grew 26% from 2014 to 2015, to shipments of 1.9B units. Since there are 4.3B mobile users worldwide, the annual replacement rate is close to 50%. Global mobile traffic is forecast to go up 10X in 5 years, from 30EB/yr in 2014 to 292EB/yr in 2019 (EB is exabyte).
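The arithmetic behind those statistics checks out; a quick sanity pass over the figures quoted above:

```python
# Checking the headline numbers (all figures from the paragraph above).
units_shipped = 1.9e9          # mobile units shipped per year
mobile_users = 4.3e9           # worldwide mobile users
replacement_rate = units_shipped / mobile_users
print(f"{replacement_rate:.0%}")   # 44%

# Implied CAGR of global mobile traffic, 30 EB/yr (2014) -> 292 EB/yr (2019)
traffic_cagr = (292 / 30) ** (1 / 5) - 1
print(f"{traffic_cagr:.0%}")       # 58% per year
```

A "10X in 5 years" forecast is thus equivalent to traffic growing almost 60% every year, which puts the base-station upgrade cycle discussed below in context.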

For the future, the three big markets other than mobile are Internet of Things (IoT), Automotive, and High-performance Computing (HPC).

Let’s start with IoT: the market has a forecast CAGR of 21% from 2013 to 2018. But the market is ripe in that 99.4% of devices are not connected, and by 2022 the average house is forecast to have 500 smart devices. Of course, every time you blink the IoT forecast goes up by a billion units, but for sure it is real.

The big opportunity in automotive in the medium term is driverless cars or, before that, advanced driver assist systems (ADAS). Google’s driverless cars have done over 2M miles (with 16 minor accidents, all the fault of the other vehicle). Delphi/Audi drove a vehicle across the US from SF to NY (which I wrote about during DAC). Tesla will have autopilot in all their cars. One interesting potential change that autonomous vehicles might bring is to ownership: if you could have a car on-demand whenever you wanted one, would you own your own vehicle at all? Your car plan in a decade might be like your cellphone plan today, with various options depending on usage.

HPC is required to provide the back-end for all those mobile devices, typically in large datacenters, aka cloud computing. The need for low latency and location awareness means that the mobile device needs to provide local intelligence, but a low-latency connection to the datacenter will be required too. This means there will be upgrade cycles for all the base stations, of which there are (literally) millions.

TSMC provides a wide range of processes for different types of silicon. The process nodes mentioned here are where TSMC is working on bringing the process up; volume production is one (or sometimes two) nodes behind.

| Application | Technology |
| --- | --- |
| MEMS | 0.13um |
| Image Sensor | 40nm |
| Embedded flash | 28nm |
| RF | 16nm |
| Logic | 7nm |
| Analog | 16nm |
| High voltage | 40nm |
| Embedded DRAM | 40nm |
| BCD/power | 0.13um |

R&D overall is up 19% year-on-year from 2014 to 2015. It was $1.9B in 2014 and will be $2.2B in 2015. OIP has grown and now has over 200 PDKs, 7500 technology files and 8500 IP blocks. The wafers enabled by this IP grew at a CAGR of 22% from 2005-14. Capex is up 10-16% from 2014 to about $10.5B to $11B, compared to $9.5B last year. Total capacity is 1.6M 8″ equivalent wafers per month, over 20,000 per day, up 12% year-on-year.

UPDATE: I totally messed up the title of this blog and the computation. It is over 50,000 wafers per day, or over 2,000 per hour.
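Working the stated capacity through makes the correction easy to verify:

```python
# Capacity arithmetic (monthly figure from the paragraph above).
wafers_per_month = 1.6e6             # 8" equivalent wafers per month
wafers_per_day = wafers_per_month / 30
wafers_per_hour = wafers_per_day / 24
print(round(wafers_per_day))         # 53333 -- "over 50,000 per day"
print(round(wafers_per_hour))        # 2222
```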

New processes are ramping faster than ever. N40 ramped in 35 months. N28 ramped in 22 months. N20 ramped in 3 months. N16 is ramping even faster. At this rate volume production will be faster than qual!


The second presentation was by Jack Sun, TSMC’s CTO. I tried to take notes on the processes but there was too much information. I’ll revisit this once I get some slides to work from. But in the meantime, here are a few highlights.

  • N10 will be risk production in Q4 of 2015. Development is on-track.
  • N7 will be risk production Q1 of 2017. SRAM test-chip is functional.
  • 16FFC will be risk production in Q2 2016
  • 16FF+ is in volume production, with a couple of dozen tapeouts and 50 more expected before end of year

The key new process coming soon is 16FFC, the third generation of 16nm process. Speedup is 65% vs 28nm and 40% vs 20nm, or a power saving of 70% vs 28nm and 60% vs 20nm. It can go down to 0.55V. TSMC have repeatedly stated that 16FFC will be a long-lived node, which I take to mean that 16FFC will be cheaper per transistor than N28. The design rules are the same, so migrating designs and IP should be fairly straightforward. There is a new library coming that will allow operation down to 0.4V, with a focus on minimizing non-Gaussian variation.
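Part of why 0.55V (and eventually 0.4V) operation matters is that dynamic power scales roughly as CV²f, so supply voltage is the biggest power lever. A rough illustration of the voltage term alone; the 0.8V reference is my own assumed nominal, not a TSMC figure:

```python
# Dynamic power scales roughly as C * V^2 * f, so voltage scaling is the
# biggest lever. The nominal voltages here are illustrative assumptions.
def dynamic_power_ratio(v_new, v_ref):
    """Ratio of dynamic power at v_new vs v_ref, iso-frequency, iso-capacitance."""
    return (v_new / v_ref) ** 2

saving = 1 - dynamic_power_ratio(0.55, 0.8)
print(f"{saving:.0%}")   # ~53% saving from the voltage term alone
```

Voltage scaling alone gets you most of the way to the quoted power savings; the rest comes from device and capacitance improvements.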

N10 has a scale factor of 50% versus 16FF+, with a performance improvement of 20% or a power saving of 40%. There are 3 different Vts and gate-length biasing, covering a wide range of leakage/speed envelopes. N10 SRAM is yielding well, and SERDES runs at 56Gbps with 22% better power efficiency than 16FF+.

N7 has a further speed improvement of 10-50% versus N10, or a power saving of 25-30%, at 1.6X the density. Risk production will be Q1 of 2017. Initially there are libraries for mobile, with new second-generation libraries with taller cells coming for HPC, plus special SRAM for HPC with 25% better performance. An ARM Cortex-A57 test chip shows 40-45% area reduction.

But the roadmap doesn’t end there. TSMC is doing research on Ge FinFET, III-V NFET, gate-all-around nanowires, 2D crystal, directed self-assembly, multi-e-beam direct write, inverse computational lithography. And, of course, EUV. TSMC have achieved 90W source power in-house. ASML have demonstrated 130W. They are working jointly to get all the settings worked out for 125 wafer/hour production.

Other segments. CMOS Image Sensor (CIS):

  • FSI front image sensor
  • BSI back image sensor (the die is thinned and the light comes through the back)
  • BSI/ISP back image sensor flipped onto an image signal processor
  • NIR near-infra-red

MEMS

  • accelerometer
  • pressure sensor
  • motion sensor
  • microphone
  • new gas sensors
  • new biometric sensors

Emerging new memories:

  • eRRAM
  • eMRAM

This is all from my handwritten notes. If you spot errors then correct me in the comments.


The Future of Moore’s Law

by Daniel Payne on 09-17-2015 at 12:00 pm

I lived in Silicon Valley, then moved north to the Silicon Forest (aka Portland, Oregon) in 1995, and thankfully we have a lot of high-tech companies here, like Intel, Lam Research (Novellus), Lattice Semi, Qorvo, Synopsys, Mentor, Cadence, Northwest Logic, etc. There’s a global industry organization called SEMI that serves the manufacturing supply chain for micro-electronics and nano-electronics, including:

  • Semiconductors
  • Photovoltaics (PV)
  • High-Brightness LED
  • Flat Panel Display (FPD)
  • Micro-electromechanical systems (MEMS)
  • Printed and flexible electronics

SEMI is over 40 years old and they have a Pacific Northwest breakfast forum on the topic: The Future of Moore’s Law. This event will take place on Friday, October 30th from 7:30AM-11:00AM at Mentor Graphics in Wilsonville, Oregon. I’ll be attending the event, so look forward to my trip report in another blog.

The list of speakers and topics looks quite impressive because it covers a wide range including semiconductor fab, semiconductor equipment, packaging, testing, research and EDA software.
| Time | Speaker | Program |
| --- | --- | --- |
| 7:30am | | Breakfast & Registration |
| 8:00am | Tim Cleary, Sr. Director of Marketing, Cascade Microtech | Moderator: Welcome Remarks |
| 8:05am | Dr. Walden C. Rhines, Chairman and Chief Executive Officer, Mentor Graphics | Design Perspectives and Challenges |
| 8:25am | Dr. Chris Spence, Vice President, Advanced Technology Development, ASML | Lithography Perspective and Challenges |
| 8:45am | David Bloss, Director, Fab Equipment, Global Supply Management, Intel Corporation | |
| 9:05am | | Networking Break |
| 9:25am | Vamsi K. Paruchuri, Ph.D., Senior Manager, BEOL Technology Research, IBM Research @ Albany Nanotech | 7nm and beyond |
| 9:45am | John Hunt, Senior Director, Engineering, ASE Inc. | Innovation in Packaging for Mobile Applications |
| 10:05am | Dave Towne, Senior Technical Analyst, Yole Développement | |
| 10:25am | | Networking |
| 11:00am | | Adjourn |

Registration
Pricing for Early Bird registration before October 16th is $55.00 for SEMI members and $75.00 for non-members. Where else can you network with such an interesting and influential group of people as this?



The Internet of Sensors

by Paul McLellan on 09-17-2015 at 7:00 am

The internet of things (IoT) has a number of key attributes: low power, security, connectivity. But almost every IoT application involves sensors of one sort or another. The visual sensors, built using CCD arrays, are basically low-resolution cameras; the mechanical ones are typically built using MEMS technology. These include things like accelerometers, gyroscopes, compasses and even microphones.

For example, a multi-axis accelerometer, along with some clever signal-processing software, can count your paces and tell whether you are walking, running or cycling. Smartwatches and more limited-function Fitbit-like devices use this to track your activity. Before smartphones, one of the main drivers of MEMS accelerometers was their use in automotive sensors for airbag deployment. RF filters for smartphones are also built using MEMS techniques, despite having no moving parts.
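As an illustration of the kind of signal processing involved, the core of a step counter can be as simple as peak detection on the accelerometer magnitude. This toy sketch is purely illustrative, not any vendor’s algorithm; real pedometers add filtering, adaptive thresholds and activity classification:

```python
import math

def count_steps(accel_mag, threshold=10.5, min_gap=10):
    """Toy pedometer: count peaks in accelerometer magnitude (m/s^2).

    A peak counts as a step if it exceeds `threshold` and is at least
    `min_gap` samples after the previous step (debouncing).
    """
    steps, last_step = 0, -min_gap
    for i in range(1, len(accel_mag) - 1):
        is_peak = accel_mag[i - 1] < accel_mag[i] > accel_mag[i + 1]
        if is_peak and accel_mag[i] > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps

# Synthetic walk: 1.5 Hz strides sampled at 50 Hz, on top of gravity (9.8)
signal = [9.8 + 2.0 * math.sin(2 * math.pi * 1.5 * t / 50) for t in range(200)]
print(count_steps(signal))  # 6 strides in 4 seconds
```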

See also Acoustic Resonators for RF: MEMS with No Moving Parts

They are also used to measure the G-forces in racing crashes. For example, since 2002, the radio earpieces used in INDYCAR racing also contain a 3-axis accelerometer that is used to measure the effect on the driver’s head in an impact. The data is streamed to a 90-second buffer in the car’s black box. One big attraction of MEMS is that the devices are physically small and light. There is not a lot of spare room inside a smartphone and even less inside an INDYCAR driver’s communication earpiece. The NFL had been doing something similar on a voluntary basis (with the accelerometers in the helmet, players are banned from wearing earpieces) but the program has been suspended for now.

MEMS stands for Micro ElectroMechanical Systems. In practice what this means is building very small mechanical systems using semiconductor manufacturing techniques. Sometimes the MEMS devices are constructed standalone but sometimes the electronics is integrated onto the same substrate. Today MEMS is an $11B business with double-digit growth forecast to reach $21B in 2020.

During SEMICON West there was a MEMS panel session. The entire theatre was full, with many rows of people standing behind the chairs (and me behind them). There is a lot of interest in MEMS for IoT.

One challenge in the MEMS market is that there hasn’t really been anything new recently: if you look at the legend of the graph above, every product segment was invented years ago. If the forecast of 50B IoT devices by 2020 is even approximately right, this will drive volume in the MEMS market even without new product segments. But with many suppliers in each segment it is hard to have true product differentiation. It is important that MEMS devices are quick and easy to design, so that they can be perfectly matched to their markets.

One attractive area of future potential growth is medical devices. Implantable (or in-your-contact-lens) glucose sensors and other bio-medical sensors have the possibility of revolutionizing various aspects of medical care. Instead of measuring your blood pressure once in a blue moon when you visit a physician, it could be monitored continuously. Or your heart’s ECG. I believe that something like this will prove to be the killer app (or rather the keep-you-alive app) that finally makes us all wear a smartwatch; just getting our text messages on our wrist is not enough.

Historically MEMS devices have been built in specialized foundries and often each device required its own tweaks to the process. But the design and manufacture is becoming more standardized. This is similar to what has happened in IC process technology, where custom processes that used to be common are now vanishingly rare.

However, it is not just manufacturing that is becoming more standardized; the methodologies by which MEMS devices are designed are too. Coventor was not on the panel session, although by coincidence their booth was only about 20 feet away.

MEMS designers generally use two separate approaches to predicting the multi-physics behavior of their device designs: either highly simplified, hand-crafted models or computationally slow finite-element analysis (FEA). While each of these approaches has merit, neither can accurately predict the dynamic behavior of the entire device while taking full account of multi-physics coupling effects. As a result, MEMS designers have historically resorted to expensive and time-consuming prototyping on real silicon.

Coventor’s MEMS+ is a different kind of FEA, based on a MEMS-specific library of high-order, parametric finite elements. These elements provide the accuracy and generality of FEA with the simulation speed of hand-crafted models. Because MEMS+ models run extremely fast compared to conventional FEA, designers can simulate their entire MEMS device, including gas damping effects and control circuitry. This is leading to a more standardized approach to MEMS design. What is really required is the equivalent of a PDK in the IC design world: a kit of pre-defined and pre-characterized basic structures out of which real MEMS devices can be created.
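The "highly simplified, hand-crafted models" mentioned above are typically lumped mass-spring-damper equations. A minimal sketch of such a model follows; the parameters are illustrative inventions, not from any real device or from Coventor’s MEMS+:

```python
import math

# Lumped mass-spring-damper model of a MEMS accelerometer proof mass:
#     m*x'' + c*x' + k*x = m*a_ext
m = 1e-9                 # proof mass, kg (illustrative)
k = 1.0                  # suspension spring constant, N/m (illustrative)
Q = 2.0                  # quality factor, sets the gas damping (illustrative)
w0 = math.sqrt(k / m)    # natural frequency, rad/s
c = m * w0 / Q           # damping coefficient, N*s/m

def simulate(a_ext, dt=1e-7, steps=20_000):
    """Semi-implicit Euler integration of the proof-mass displacement
    under a constant external acceleration a_ext (m/s^2)."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = a_ext - (c * v + k * x) / m
        v += a * dt
        x += v * dt
    return x

# Static sensitivity check: at DC the displacement settles to m*a/k.
x_final = simulate(9.8)
print(x_final)           # ≈ 9.8e-9 m, i.e. m * 9.8 / k
```

A designer would check this static sensitivity first; the value of MEMS+-style tools lies in capturing the dynamic and cross-domain coupling effects that a toy lumped model like this one ignores.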

See also Coventor, Lego and IoT in Denmark

Earlier this year Coventor did a webinar (along with ARM and Cadence) Addressing Smart Sensor Design for SoCs and IoT. One of the topics covered was how to create a mechanical MEMS component in Coventor’s MEMS+ environment. You can watch a replay of the webinar here.


IoT does NOT lack tools!

by Daniel Nenni on 09-16-2015 at 4:00 pm

Rarely does a month go by without an acquisition in the fabless semiconductor ecosystem. Not surprisingly, one of the most-read pages on SemiWiki is the EDA Mergers and Acquisitions Wiki, with more than fifty-seven thousand views. It really is a nice family tree, one which we (Daniel Payne) are diligent about keeping current. One of the most accretive EDA acquisitions this year, in my opinion, is Tanner EDA, and I will tell you why.

EDA Mergers and Acquisitions Wiki – SemiWiki.com


IoT Lacks Tools, Says EDA Vet

The above is a recent headline of an EETimes article featuring Alberto Sangiovanni-Vincentelli, a Berkeley professor and co-founder of EDA giants Cadence and Synopsys.

“The Internet of Things is just an intermediate step on the way to the sensor dominated world” where the numbers of networked sensors will exceed the population by several orders of magnitude, said Alberto Sangiovanni-Vincentelli, a Berkeley professor and co-founder of EDA giants Cadence and Synopsys.

“My passion is the science of design. I hate seat-of-the-pants design; leaving engineers free to design is a recipe for disaster,” he said. “What excites me is abstracting the meaning of a design and applying it to everything,” he added.

This may certainly apply to Berkeley, Cadence, and Synopsys, but it does not apply to Caltech and Tanner EDA. There is a “Brief History of Tanner EDA” which traces their roots back to Caltech and Carver Mead’s seminal textbook on VLSI design HERE. While I understand Alberto’s points, I’m more of a seat-of-the-pants guy and believe the majority of IoT designs will come from small to medium-sized groups of entrepreneurs, who will also be seat-of-the-pants kinds of people.

An example of an IoT design challenge is MEMS (micro-electro-mechanical systems), in which very small devices such as sensors, gyroscopes, accelerometers, and resonators are integrated into an SoC for IoT applications. Tanner EDA is all about MEMS and has a nice video, webinar, and whitepapers to get you started:

Micro-electro-mechanical systems (MEMS) is the technology of very small devices such as sensors, gyroscopes, accelerometers, and resonators. Whether your design is purely MEMS or a combination of MEMS and an IC SoC, these tools can meet your most challenging needs. This video introduces the Tanner tool flow for MEMS design.

Tanner MEMS Tool Suite Overview and Demo: This webinar will show how Tanner L-Edit MEMS Design and the SoftMEMS™ 3D Solid Modeler can be used to shorten your design cycle and improve the manufacturability of your MEMS devices.

Meeting MEMS Design Challenges with Unique Layout Editing and Verification Features – Part 1: A big difference between MEMS layout and IC layout is the use of unique, irregular shapes. Unlike conventional CMOS IC design, where layout shapes are usually Manhattan style (rectangles and rectilinear polygons) or polygons with 45-degree edges for routing, MEMS design uses a much broader variety of geometries because of its wide application in mechanical, optical, magnetic, fluidic, and biological fields. This two-part paper describes how and why support for, and ease of implementing, irregular shapes, including curves and all-angle polygons, is a critical criterion differentiating MEMS-oriented CAD tools from conventional IC-oriented tools. (Part 1 focuses on layout editing; Part 2 on verification.)
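
To make the contrast with Manhattan-style layout concrete, here is a minimal, hypothetical sketch (not Tanner code; the function name and parameters are invented for illustration) that discretizes a curved annular arc, the kind of shape a MEMS spring or resonator backbone might need, into an all-angle polygon:

```python
import math

def arc_polygon(cx, cy, r_inner, r_outer, start_deg, end_deg, segments=32):
    """Approximate an annular arc as a closed all-angle polygon:
    sweep the outer edge one way, then the inner edge back."""
    step = (end_deg - start_deg) / segments
    outer = [(cx + r_outer * math.cos(math.radians(start_deg + i * step)),
              cy + r_outer * math.sin(math.radians(start_deg + i * step)))
             for i in range(segments + 1)]
    inner = [(cx + r_inner * math.cos(math.radians(end_deg - i * step)),
              cy + r_inner * math.sin(math.radians(end_deg - i * step)))
             for i in range(segments + 1)]
    return outer + inner  # vertices in drawing order

# A 90-degree arc, 90-to-100 um radii, becomes a 66-vertex all-angle polygon
verts = arc_polygon(0, 0, 90, 100, 0, 90)
print(len(verts))  # 66
```

Even this simple curve produces dozens of non-Manhattan vertices, which is exactly the kind of geometry a rectangles-only IC layout editor handles poorly.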

The bottom line is that Tanner EDA made AMS tools both affordable and easy to use, two things that are critical in the developing IoT market. If you ask me, that is why Mentor acquired Tanner EDA and is keeping the brand as a separate business unit. Absolutely.


Re-Thinking Server Design

Re-Thinking Server Design
by Alex Lidow on 09-16-2015 at 12:00 pm

The demand for information is growing at an unprecedented rate, driven by our insatiable appetite for communication, computing, and downloading. With emerging technologies such as cloud computing and the Internet of Things, not to mention the 300 hours of video uploaded to YouTube every minute, this trend toward more and faster access to information shows no signs of slowing. What makes the transfer of information at high speed possible are racks and racks of computing equipment: data centers. Running these computing engines requires electrical power delivered efficiently, at extremely precise levels, to various parts of numerous servers. Converting a distributed 48 VDC down to the 1 VDC at which individual processors run, precisely and efficiently, is the crux of the challenge for today’s power conversion system designers. This challenge must be met to satisfy the demands of the information explosion; simply put, the delivery of power needs to keep pace with the expansion of computing power.
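
As a back-of-envelope illustration of why the 48 VDC-to-1 VDC path is hard: the conversion is typically done in cascaded stages, and stage efficiencies multiply. The stage numbers below are assumptions chosen for illustration, not measured data:

```python
# Illustrative only: the per-stage efficiencies are assumed, not measured.
def end_to_end_efficiency(stage_efficiencies):
    """Cascaded converter stages multiply: losses compound stage by stage."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# A conventional two-stage path: 48 V -> 12 V intermediate bus -> 1 V point-of-load
two_stage = end_to_end_efficiency([0.96, 0.90])
print(f"{two_stage:.1%} of input power reaches the processor")  # 86.4%
```

At data-center scale, the remaining 13.6% is pure heat that must also be paid for twice, once as wasted energy and again as cooling, which is why even single-point efficiency gains in each stage matter.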

How will power conversion systems continue to improve in order to keep pace with the rapid improvements in computing power and the need for efficient data centers? Traditionally, power conversion has been accomplished using silicon-based power transistors, but it is well known that silicon components are reaching their theoretical performance limits. If the demand for more and faster communication and computation continues to grow, and it certainly will, a higher-performing base material for semiconductors is needed.

Fortunately, in the past few years alternative, higher-performing materials such as gallium nitride have emerged. This material has the potential to perform more than 1000 times better than silicon. From a performance point of view, gallium nitride (GaN) is one of the most promising technologies, and what is really exciting is that GaN has been shown to be price competitive with traditional silicon technology; it is already less expensive to produce. This is a disruptive technology.

So we wrote a book…


Given our experience with GaN technology, we created new power conversion solutions using GaN devices and made performance comparisons with the silicon power transistors traditionally used in power conversion systems.

Overall, this book is an aid to leading-edge power systems designers in understanding and adopting gallium nitride power transistors for the ever-demanding application of DC-DC conversion for computing platforms, and in examining possible new directions for delivering efficient power to computing equipment within data centers. As the first book to re-examine datacom power architecture using non-silicon semiconductors, it examines new power conversion solutions with specific hardware examples.

The book shows how the dramatic improvement in switching performance of gallium nitride transistors over silicon not only permits vast improvements in existing converters, but also prompts a fresh look at power conversion system architectures.

In very specific ways, the book examines the benefits of enhancement-mode gallium nitride (eGaN®) FETs in power conversion applications with an input voltage range centered around 48 V and load voltages as low as 1 V. Examples of conventional PWM isolated converters, unregulated isolated converters of both hard-switched and soft-switched designs, and finally non-isolated converters using eGaN FETs are considered.

By combining the discussion of power systems architecture and GaN technology performance, we propose, create, and test a new power delivery architecture that takes advantage of the superior performance attributes of GaN. eGaN FETs and integrated circuits have demonstrated their ability to enable new power delivery approaches that can significantly improve overall system efficiency, power density, and cost.

Buy your copy now at: http://epc-co.com/epc/Products/Publications/DC-DCConverterHandbook.aspx


FPGA Prototyping: From Homebrew to Integrated Solutions

FPGA Prototyping: From Homebrew to Integrated Solutions
by Paul McLellan on 09-16-2015 at 7:01 am

Years ago, when FPGA prototyping started, there were no solutions you could go out and buy; everything was created as a one-off: buy some FPGAs or an FPGA-based board and put it all together. It was a lot of effort, nobody really knew in advance how long it would take, there was very limited visibility for debug, and the whole thing was basically unsupportable. There is more discipline these days, but even so, roughly half of all FPGA prototyping is done in a proprietary way that does not scale as designs get larger and lacks more and more of the desirable features. The other half of the market uses an integrated solution that ties together FPGA-based hardware, the software for getting the design up and running, debug, and daughter boards for hardware interfaces.

Last week I talked to Johannes Stahl of Synopsys about the new solution that they are announcing today. He told me that for some time Synopsys has had a free book, the FPGA-Based Prototyping Methodology Manual, which was available for download if you answered a few questions. From those questions, the top five care-abouts turned out to be:

  • mapping to the FPGA
  • debug visibility
  • performance
  • limited capacity
  • turnaround time

Today Synopsys is announcing an integrated solution combining ProtoCompiler software and HAPS-80 hardware that addresses these issues and:

  • reduces time to high-performance prototype to under 2 weeks
  • built-in debug captures over 1000 RTL signals per FPGA at speed, integrated with Verdi
  • 100MHz system performance
  • scalable up to 1.6B ASIC gates, which is around 7B transistors using the usual rules of thumb
  • fast parallelized tool flow
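
The "around 7B transistors" figure in the capacity bullet follows from the common, though approximate, rule of thumb of roughly four transistors per NAND2-equivalent ASIC gate:

```python
# Rule-of-thumb conversion; the 4-transistors-per-gate factor is the usual
# NAND2-equivalent approximation, not an exact figure.
asic_gates = 1.6e9            # claimed maximum enterprise-system capacity
transistors_per_gate = 4      # common NAND2-equivalent rule of thumb
transistors = asic_gates * transistors_per_gate
print(f"{transistors / 1e9:.1f}B transistors")  # 6.4B, i.e. "around 7B"
```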

The fast bringup addresses three steps: first, an automated flow that includes partitioning and automatically inserting all the multiplexors necessary to route signals between the FPGAs; second, reduced hardware and debug bringup time; and finally, optimizing performance in multi-FPGA configurations (which is most of them). Bringup takes less than two weeks, so not quite the one day that emulation has achieved today, but also not the multiple months that FPGA prototyping used to entail.

The performance increase is driven by new proprietary multiplexing that delivers 2X higher performance, system timing improved by up to 60%, and better P&R guidance bringing another 10% timing improvement. Plus, under the hood, are the latest Xilinx Virtex UltraScale VU440 devices, with a capacity of 26 million ASIC gates per FPGA.

This means that a single-FPGA configuration can achieve 300MHz, a multiple-FPGA solution that does not involve signal multiplexing can achieve 100MHz, and designs requiring the new high-speed pin multiplexing run at 30MHz. These speeds mean, for example, that you can boot a system to the OS prompt in less than a minute. HAPS-80 also enables at-speed operation of real-world I/O.
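
As a rough sanity check on the boot-time claim, clock rate translates to boot time as instructions divided by effective instruction throughput. The instruction count and IPC below are invented for illustration; real numbers vary widely by OS and CPU:

```python
# Assumed figures, for illustration only: boot workload and IPC differ
# substantially across operating systems and processor designs.
instructions_to_boot = 1.5e9  # assumed work to reach an OS prompt
ipc = 1.0                     # assumed instructions per cycle on the prototype

def boot_seconds(clock_hz):
    """Boot time = instructions / (clock rate * instructions per cycle)."""
    return instructions_to_boot / (clock_hz * ipc)

for mhz in (300, 100, 30):
    print(f"{mhz} MHz -> ~{boot_seconds(mhz * 1e6):.0f} s")
# 300 MHz -> ~5 s, 100 MHz -> ~15 s, 30 MHz -> ~50 s
```

Under these assumptions even the 30MHz pin-multiplexed case stays under a minute, which is consistent with the claim; at typical emulation speeds of a few MHz the same boot would take many minutes.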

The system is scalable from a single module delivering 26M ASIC gates up to an enterprise system with 1.6B. Increasingly, in fact, enterprises are putting FPGA prototyping into the data center so that it can be shared among different engineers. This can be done either by putting generic hardware on the network or, for a critical design, by configuring one or more systems and making them available to be shared.

For people who have been using HAPS-70, the previous generation, everything is backwards compatible. Cables and connectors are the same, the daughter boards are the same, and the form factor is the same (HapsTrak 3). The software flow through ProtoCompiler is the same.

There are really two somewhat separate reasons for wanting to use FPGA prototyping solutions. The first is to get the hardware debugged by exercising it with large amounts of realistic software load. The second is to get software development done without waiting for prototypes to be manufactured. There are, of course, alternatives: emulation, virtual platforms, and hybrid emulation. Which is most appropriate depends to some extent on the stability of the design. If the RTL is changing extensively, then bringing up an FPGA prototype is less attractive, since it takes a couple of weeks, by which time the prototype is obsolete. But when the RTL is close to stable, it is far and away the fastest-running solution and so the most attractive. If you need to validate a lot of hardware against a lot of software, then this is the sweet spot.

Everything is available now. Faster bringup, higher performance, more visibility, large capacity, accelerated tool flow, backwards compatibility. What’s not to like?

The Synopsys blog on FPGA prototyping, Breaking the Three Laws, is here. The HAPS product page is here.