
FinFET vs FDSOI – Which is the Right One for Your Design?

by Daniel Nenni on 04-08-2015 at 4:00 am

As a professional conference-goer I can see definite trends in topics and attendance. So far this year I have seen a double-digit increase in attendance, which is great. The question is why. Why is the fabless semiconductor ecosystem leaving the safety of its cubicles and computer screens in droves to mingle with the masses? The answer, I believe, is that modern semiconductor design is moving faster than ever before and people are scrambling to keep up.

When it comes to choosing a topic and organizing a session I have the advantage of SemiWiki analytics. I see what thousands of semiconductor people search for, read, and share, which is why the first session I did for EDPS, in 2013, was an introduction to FinFETs. Last year it was all about IP integration issues, and this year it is the design issues between FD-SOI and FinFET. Tom Dillinger of Oracle will keynote; below are the abstract and session summary. After the keynote there will be a panel discussion with Tom, Kelvin Low of Samsung Foundry, Boris Murmann of Stanford University, Jamie Schaeffer of GlobalFoundries, and Marco Brambilla of Synapse.

EDPS is more of a workshop than a regular conference. The advantage is that a workshop is smaller and more interactive. Not only do you get to see experts speak and interact with the audience, you get to have breakfast, lunch and dinner with them as well.

FinFET vs FDSOI – Which is the Right One for Your Design?
The emergence of multiple transistor technology options at today’s deep submicron process nodes introduces a variety of power, performance, and area tradeoffs. This session will start with an overview of the FinFET and Fully-Depleted Silicon-on-Insulator devices (FD-SOI, also known as Ultra-Thin Body SOI), in comparison to traditional bulk planar transistor technology. The session will then delve into a detailed discussion of the architectural and circuit implementation tradeoffs of these new offerings, to help designers make the right choice for their target application.

This session will delve into the design tradeoffs associated with leading semiconductor manufacturing nodes, covering advanced bulk planar, Fully-Depleted SOI, and FinFET device options.

The kickoff presentation will establish a technical foundation for these processes, followed by a discussion of hands-on experiences from experts who are leading advanced chip designs and process implementations in these technologies. After the kickoff and brief presentations from the expert panel, attendees are encouraged to participate in a question-and-answer session, to explore specific process selection and implementation tradeoff decisions.

The kickoff will start with an introduction to bulk planar, FD-SOI, and FinFET devices – i.e., device cross-sections (and the associated parasitic elements); device fabrication options; and, sources of device variation. The compact models for these device types useful for circuit design and simulation will be reviewed. Advanced process technologies introduce additional device and circuit layout considerations – e.g., layout dependent effects, layout parasitic extraction (and parasitic reduction) around the device, and the importance and impact of lithographic uniformity in circuit layouts.
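As a minimal sketch of what such a compact model looks like to a circuit designer, the toy alpha-power-law model below also adds an FD-SOI-style back-bias knob that shifts threshold voltage. Every coefficient here is an illustrative assumption, not a figure from any real foundry PDK:

```python
# Toy alpha-power-law drain-current model, used only to illustrate how a
# compact model captures a device knob such as FD-SOI back-bias.
# All coefficients below are illustrative assumptions, not real PDK values.

def id_sat(vgs, vth, k=5e-4, alpha=1.3):
    """Saturation drain current (A); zero below threshold in this toy model."""
    return k * max(vgs - vth, 0.0) ** alpha

def vth_with_back_bias(vth0, vbb, gamma=0.085):
    """FD-SOI-style threshold shift: forward back-bias (vbb > 0) lowers Vth."""
    return vth0 - gamma * vbb

vdd, vth0 = 0.8, 0.35
nominal = id_sat(vdd, vth0)
boosted = id_sat(vdd, vth_with_back_bias(vth0, vbb=1.0))  # 1 V forward back-bias
print(f"nominal Id = {nominal * 1e6:.1f} uA")
print(f"boosted Id = {boosted * 1e6:.1f} uA")  # more drive, at a leakage cost
```

The point of the sketch is the tradeoff it exposes: forward back-bias lowers Vth, raising drive current for speed but also raising leakage, which is exactly the kind of knob the kickoff’s model review would cover.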

The kickoff will then move to circuit-level design considerations for library logic cells, for these different process options. Analog cells also have a key impact upon technology choice, and will be discussed as well (albeit briefly).

Finally, the kickoff will cover broader design methodology tradeoffs for these process options, specifically methods for making path-level and block-level power/performance optimizations. The design implementation methods for power, performance, and area (PPA) closure are key differentiating features of these technologies.

With this background, the expert panel will discuss some of their recent design and development experiences – choosing the optimum technology, making global and local implementation choices to meet PPA goals, and accommodating process technology variation in design closure.


US is the Ultimate Leader in Semiconductor Business

by Pawan Fangaria on 04-07-2015 at 6:00 pm

Last November, when I looked at the world’s top 20 semiconductor companies, with Samsung and TSMC at second and third rank respectively behind Intel, I computed the companies’ sales numbers by country and found that Taiwan and South Korea accounted for 34.5% of the total sales of the top 20. That provided a well-founded perception that these Asian countries were leading the semiconductor business. I blogged about it in “Look Who is Leading The World Semiconductor Business” and received comments from the community that it’s no surprise; companies in South Korea and Taiwan have led for a long time. Well, South Korea and Taiwan, in fact the whole APAC region, do lead in foundries; a blog on 300mm Fab Capacity provides actual data on that. However, look at the following bar chart from an IC Insights report.

The companies headquartered in the US have the lion’s share of fabless IC sales at 63%, and that will increase further after completion of Qualcomm’s acquisition of CSR, the second-largest fabless company in Europe, and Intel’s acquisition of Lantiq, the third largest. The US also leads in IDMs with more than a 50% market share. The total shown in the graph does not include foundry sales, so I’m not talking about those, because then TSMC could definitely change the equations. By the way, in this chart Samsung’s sales from its Austin facility are counted as sales of a South Korean company.

There are a couple of points to note here. South Korea has just half of the US market share in IDMs, and Japan, Europe and Taiwan are nowhere near it; China has a negligible IDM presence. Although Europe’s IDM figures will improve after completion of NXP’s acquisition of Freescale and Infineon’s acquisition of International Rectifier, they will still be much lower than the US market share. In the fabless business, South Korea and Japan have a negligible presence, and the others, including Taiwan, are far below the US share. Europe’s mere 3% share of the fabless market will erode further once CSR and Lantiq start being counted as US-headquartered companies.

Now let’s take a closer look at 300mm Fab Capacity blog. I am reproducing that bar chart here –

It’s apparent that, based on company headquarters location, the US is second only to South Korea in 300mm wafer capacity. There is a large difference between the headquarters figure (28%) and the fab-location figure (15%) for US fabs. The reason is simple: keeping fabs outside the US gives US companies a significant cost advantage. South Korea and Taiwan also enjoy significant cost advantages in this capital-intensive foundry business.

One more indicator of the US lead in semiconductor R&D was mentioned in my earlier blog, “Who Leads Semiconductor Innovation”. There it was clear that US semiconductor companies, led by Intel, spend the most on semiconductor R&D; they also have the largest R&D-expense-to-sales ratios. That reminds me of one of our SemiWiki forum discussions on patents and innovation, where it was evident that sales are important, and equally important are the R&D investments that grow those sales; the two must complement each other. The US is doing well in that regard. There are companies outside the US doing well in these respects too, but in my view it is the cluster of such companies in the US that creates its innovative, growth-oriented environment.


Breakfast was Fab: West Coast Wafers to Wall Street

by Paul McLellan on 04-07-2015 at 7:00 am

SEMI describes itself as “the global industry association serving the manufacturing supply chain for the micro- and nano-electronics industries.” That is a pretty broad remit. One of the things it does as a neutral party is produce the World Fab Forecast. This is a bottom-up database that tracks fabs as they are built, equipped, ramped to volume, upgraded, expanded and, eventually, closed. This is of obvious interest to people selling the required equipment and materials, but it also affects the entire semiconductor ecosystem, especially the fabless/foundry ecosystem. If TSMC builds a 16FF+ Gigafab then Apple, Qualcomm, nVidia, Xilinx and many others are affected, not to mention the competing Samsung, Global Foundries, UMC and Intel.


Christian Dieseldorff maintains the database and presented at the latest SEMI Silicon Valley Breakfast Meeting. The forecast covers 3 years of detailed information plus a forward forecast of 1½ years. To give an idea of how extensive the database is, it covers 1174 fabs, officially called “front-end facilities” to distinguish them from test, package and assembly (nothing to do with FEOL or what we call front-end in EDA), owned by 500 companies. There are 58 future facilities starting high-volume manufacturing (HVM) in 2015 or later, with 218 facilities currently either in construction or equipping.

Another aspect to be tracked is consolidation. Recent events (many of which we have covered here at Semiwiki) affect 46 fabs from 4” to 12”. Here they are:

  • Fujitsu winds down chip production
  • Global Foundries “buys” IBM’s semiconductor business
  • Grace and Hua Hong merge
  • Cypress and Spansion merge
  • Infineon buys International Rectifier
  • Triquint and RFMD merge (Qorvo)
  • Freescale and NXP (will) merge

Where are all the big fabs? In 2015 there are 199 fabs equipping, with 16 of them investing over $800M, including Samsung, SK Hynix, Sandisk/Toshiba, TSMC, Intel, GlobalFoundries and SMIC.


Fab equipment has been a rollercoaster. But actually as a rollercoaster it would be a bit boring since it is too predictable, two years up, followed by two years down. This has gone on every 4 years since 1999. But it looks like this year will be the second up year and, for the first time in living memory, 2016 should be an up year too.


One area where historically there has been overinvestment in fabs is DRAM. In the last few years this has moderated, resulting in DRAM prices firming (a large part of the recent growth in the overall semiconductor market). In fact there were 40 dedicated DRAM fabs in 2007 and now we are down to 15. But the capacity of the new fabs is huge compared to the old: the 5 new fabs this year add over 30K wpm, while the 10 that are closing drop about the same amount.

One big decline is not in the number of fabs or their capacity but in the number of companies building fabs. In the 2004-7 era nearly 50 companies were building fabs; in the 2014-16 era that is down to 13. Obviously this is largely due to the fabless (and fab-lite) model, but also to consolidation and companies going out of business.


One new phenomenon is the IoT market. As I have said before, this is primarily a market for old processes, and old processes typically run in old fabs, which are mostly 8”. In fact in 2007 there were 200 8” fabs with a capacity of 5.7M wpm, which dropped to 183 by 2013 with an 11% capacity drop to 5.1M wpm. However, capacity will increase 4% to 5.4M wpm by 2018, although the number of fabs will again drop, to 180.

Bottom line is that fab expansion rates slowed to below 2% in 2012 and 2013. They should increase around 3% year on year from 2015 to 2018. This reflects growth of the overall semiconductor market of 9% in 2014, forecasts for 3-8% in 2015 and 3-9% in 2016.


How Pebble Reinitiated the Inning for Smartwatch

by Pawan Fangaria on 04-06-2015 at 7:00 pm

The effort to add phone functions to the watch started back in 1999, when several tech companies joined the crusade to enter the big watch market, notably Samsung, IBM, Microsoft, Fossil and Sony Ericsson. The effort lasted about a decade before showing signs of fatigue. Microsoft SPOT (Smart Personal Objects Technology) closed within two years of its 2006 launch. Sony Ericsson’s little-known MBW series contained a small OLED display and worked with a Sony Ericsson phone via Bluetooth: it vibrated on an incoming call, displayed caller identity, and notified on new text messages. However, it was priced very high, to the tune of $400. Samsung tried its next version, the S9110, in 2009; it was the thinnest smartwatch of its time and had a touch screen, Bluetooth, email and MP3 support. Still, these smartwatches couldn’t take off, mainly because of their bulkiness, frequent charging, extra service subscriptions in some cases, and so on. The companies turned to the flourishing smartphone market instead of investing further in smartwatches.

Amid all this scepticism over starting a new smartwatch market was a 22-year-old engineering graduate in systems design, Eric Migicovsky, quietly developing a smartwatch. The Allerta inPulse, released in 2010, is not known to many of us. It was developed by Migicovsky and his team before he started Pebble Technology, and it worked with the BlackBerry smartphones of the time. This is when Migicovsky groomed himself in real smartwatch technology and its requirements, but he was not so lucky in his first business. Every business has its challenges: a large inventory of inPulse watches was built ahead of demand and couldn’t sell, and he couldn’t get further funding after his initial money from the venture capital firm Y Combinator and a few angel investors. But these challenges also taught him good lessons about future strategy.

Firm in his ideas about a smartwatch that people would actually want on their wrists, Migicovsky launched a Kickstarter campaign in April 2012 to crowdfund the Pebble smartwatch. It was effectively a pre-booking of the $150 smartwatch at discounted prices: $99 for the first few hundred backers and then $115. The campaign was a stellar success; within a month it raised more than $10 million, more than 100 times the initial target. It was the most successful crowdfunding of its time, with about 69 thousand people backing the Pebble project. The Pebble smartwatch started shipping in early 2013 and reached one million units by the end of 2014.

The hidden advantage of this kind of funding is that you already have your customers, backers and well-wishers lined up; the tall challenge is to convince them of your project. You talk directly to your customers, at first hand, and build the strongest possible support group. Migicovsky was able to do that; he knew exactly what a smartwatch needed and created the much-coveted, long-awaited new smartwatch market. Interestingly, he used an army of about 70 bloggers during the campaign to reach every potential customer.

So what were the specific features in Pebble that rejuvenated the fatigued smartwatch market and gave it fresh breath? One of the most important mantras Migicovsky followed was to let the watch adapt to people’s habits instead of asking people to adapt to the watch. In his view the watch’s core functions were notifications and phone calls. He deciphered this from the fact that people looked at their smartphones for messages and calls about a hundred times a day; what if they could do that just by glancing at their wrist instead of lifting the phone every now and then? And for this favour done by a watch on the wrist, one shouldn’t have to pay a hefty amount like $400. Another clue Migicovsky picked up was that these watches shouldn’t make people charge them every day or, even worse, twice a day; a smartwatch should work like any other watch, running for a long time on a single charge. The battery in Pebble lasts about seven days per charge.

The credit for the long battery life goes to the e-paper display, an operating-system feature that lets apps run in the background, and the decision not to pack in features that don’t belong on a watch. The advantage of the e-paper display is that it is perfectly viewable in direct sunlight. Pebble offers sports and fitness tracking, other activity tracking such as sleep and walk detection and calorie counting, notifications for phone calls, text messages and emails, and a remote control for the smartphone (one can dismiss a call or notification from the watch), all without burdening battery life. The sensors for activity tracking are inside the watch itself, so it works from wherever it is.

It’s a small watch (1.26”) with an ultra-low-power transflective display, vibrator, magnetometer, accelerometer, and ambient light sensor. It is versatile enough to connect to both iOS and Android devices (smartphones and tablets) over Bluetooth 2.1, and a later upgrade added BLE 4.0 support. It’s waterproof and can be worn by divers. The Pebble team used all the feedback received on the inPulse to improve this blockbuster smartwatch called Pebble.

While Migicovsky focused primarily on Pebble’s core functions, he also developed an SDK (Software Development Kit) called PebbleKit so that community app developers could use Pebble as an open platform and build innovative apps that ease and automate daily activities. This was another innovative idea: pooling innovations from a community reaches far beyond a single person’s imagination. Today Pebble has a community of more than 25,000 developers who have already built more than 6,000 apps and watchfaces for the Pebble appstore, and counting. Many companies, including Mercedes-Benz, GoPro and iControl, have also joined the initiative; the Mercedes-Benz idea includes an app that can shake your wrist when it sees an obstruction on the road while you drive. PebbleKit provides a fully customizable wearable platform for the watch, and the appstore has a watchface generator where images can be uploaded to create a specific watchface. The Pebble team is also building an operating system, a platform designed specifically for wearables.

Pebble Steel, with a thinner body, Corning Gorilla Glass, and tactile metal buttons, was released at CES 2014. Pebble Time was announced this February, again through a Kickstarter campaign, which reached $14 million by the beginning of March, setting another record while the campaign was still running. Pebble Time will have a 64-color display, a microphone, and a more ergonomic, thinner chassis. It includes a new interface designed around a timeline, similar to Google Now. Pebble Time is reported to sell at $199.

Pebble Time Steel is the latest Pebble smartwatch, tipped to have room for a larger battery lasting 10 days. It is the top Pebble model, in stainless steel, due to start shipping in the second half of this year at an expected price of $299.

Recently, along with Pebble Time Steel, the Pebble team took their open wearable platform to the next level by announcing an open hardware platform for wearables called “Smartstraps”. Using it, a developer can design a new strap that connects to a special port on the watch to add features such as a heart-rate monitor, extended battery life, or GPS, keeping the smartwatch itself small and slim.

In August 2013, Eric Migicovsky, the founder of Pebble Technology, was named one of the remarkable 35 Innovators Under 35 for his innovative work on smartwatches.

Also read: Passage of Time with Watches


What is Inside of the Samsung Galaxy S6?

by Daniel Payne on 04-06-2015 at 1:00 pm

I’ve always been curious about what is inside an electronic device; seeing the very first TI handheld calculator is what got me started on a career as an Electrical Engineer. Next to Apple, the most popular brand in smartphones these days has got to be Samsung, and they have just launched the Galaxy S6. A teardown company called Chipworks did the honors and has some beautiful photographs of what they found inside the S6.

I’ve owned Samsung phones for the past 6 years, so I am very familiar with their product line-up in general. What really sets the S6 apart from other phones is its:

  • 8 core processor
  • 64 bit Operating System

In the past many smartphone companies got to market quickly by using an off-the-shelf application processor from a company like Qualcomm; Samsung, however, is big enough to afford its own engineering team to design an application processor, and it dubs its octa-core the Exynos 7420. Apple is also well known for custom-designing application processors, its A-series.

Even though the application processor brains in the smart phone tend to get top billing, there are a slew of other support chips also required to create a complete system. Inside of the S6 you will also find specialized chips like:

[TABLE] style=”width: 500px”
|-
| Feature
| Company
| Chip
|-
| Application Processor
| Samsung
| Exynos 7420
|-
| Memory
| Samsung
| LPDDR4 SDRAM
|-
| Flash
| Samsung
| 32GB NAND Flash
|-
| Modem
| Samsung
| Shannon 333
|-
| Power Management
| Samsung
| Shannon 533
|-
| RF Transceiver
| Samsung
| Shannon 928
|-
| Envelope Tracking
| Samsung
| Shannon 710
|-
| GNSS Location Hub
| Broadcom
| BCM4773
|-
| Gyro, Accelerometer
| InvenSense
| MPU-6500
|-
| Multimode Multiband
| Skyworks
| SKY78042
|-
| Power Amplifier Module (PAM)
| Avago
| AFEM-9020
|-
| Image Processor
| Samsung
| C2N8B6
|-
| Audio Amplifier
| Maxim
| MAX98505
|-
| WiFi Module
| Samsung
| 3853B5
|-
| NFC Controller
| Samsung
| –
|-
| Audio Codec
| Wolfson
| WM1840
|-
| Power Receiver
| TI
| BQ51221
|-
| Antenna Switch
| Skyworks
| SKY13415
|-
| Touch Screen Controller
| STMicro
| FT6BH
|-

Instead of choosing a touch screen controller from well-known suppliers like Synaptics, Cypress or Atmel, or from a Chinese company, we see that Samsung used STMicroelectronics in this model.

Related – Intel Core M vs Apple A8!

Samsung’s 14 nm FinFET technology is used in the Exynos 7420 application processor, a more advanced process than the 20 nm TSMC technology in Apple’s A8 chip.


Die mark, Exynos 7420


Top-level Metal, Exynos 7420

Here’s a quick comparison of die sizes for the last two generations of S phones from Samsung:

[TABLE] style=”width: 500px”
|-
| Phone
| Chip
| Die Size
| Technology
|-
| Galaxy S6
| Exynos 7420
| 78 mm^2
| 14 nm
|-
| Galaxy S5
| Snapdragon 801
| 118.3 mm^2
| 28 nm
|-
| Galaxy S5
| Exynos 5422
| 135 mm^2
| 28 nm
|-

Cross-sectional photos show some of the 11 layers of metal and FinFET structures:


Cross-section of metal stack, Exynos 7420

At DAC last year in San Francisco we saw one of the first 14 nm wafers from Samsung, and this year we can buy a smartphone like the Samsung Galaxy S6 with 14 nm silicon, so FinFETs are already enabling progress in consumer electronics. Intel, of course, was first to market with FinFET (aka Tri-Gate) technology, so we should expect continued competition in FinFET silicon from TSMC, Intel and Samsung.


Related: Qualcomm LTE Modem Competitors? Samsung, Intel, Mediatek, Spreadtrum, Leadcore… or simply CEVA!


Security All Around in SoCs at DAC

by Pawan Fangaria on 04-06-2015 at 12:00 am

Last month I set out to write a detailed article on the important aspects to consider when designing an SoC. This matters in the new context of modern SoCs, which go well beyond the traditional power, performance and area (PPA) requirements. I had about 12-13 parameters on my list that I couldn’t cover in one go, so I put the first six into a blog, “SoCs in New Context Look beyond PPA”. Security is definitely one of the most important parameters on my list, but I couldn’t cover it in that first blog; that had nothing to do with the importance of the parameters, just their sequencing. It’s obvious how intently our community is looking at security aspects in SoCs: immediately after that blog was published, the first comment I received was that it missed the ‘security aspect’!

Earlier, security was considered a software issue that could be patched. But today, with the advent of IoT and with SoCs encompassing many aspects of the whole system, it is far more severe, extending into hardware and complicating the software side with authentication, encryption, traceability, and so on. A hardware security breach stays there; it can’t be patched. So security-proofing of hardware has to be considered up front, from the SoC design stage, and the software architecture has to be considered along with the SoC design.

I have yet to start the second part of my SoC article. But with DAC 2015 approaching, I browsed the DAC agenda and was amazed to see that it covers almost every aspect of security, going beyond what I was contemplating. As always, DAC is a great indicator of the direction our semiconductor industry is taking, and it convinces me the industry is moving decisively toward secure design. There are a host of events on security, including keynotes, tutorials, SKY talks, special sessions, research paper sessions, and panels. It would be difficult to cover all of them here, but I will highlight some of the important ones that enticed me. I will dive deep into some of them after learning more at the actual DAC presentations; meanwhile, here is a list of items worth attending.

Tutorials on June 8, 2015

Building Secure Hardware and Software Systems – 10:30am – 12:00pm, 1:30pm – 3:00pm

Todd Austin from the University of Michigan and Jin Yang from Intel will talk about the hardware and software breaches that happen today and how to prevent them through pre-emptive and reactive design techniques.

Introduction to Hardware and Embedded Security – 1:30pm – 3:00pm, 4:30pm – 6:00pm

Mark Tehranipoor from the University of Connecticut, Miodrag Potkonjak from the University of California, and Ronald Perez from Cryptography Research, Inc. will talk about the design and test of powerful security primitives such as Physical Unclonable Functions (PUFs), public PUFs (PPUFs), True Random Number Generators (TRNGs), and silicon odometers that meter device usage; the detection and prevention of hardware Trojans and counterfeits, and the open challenges there; and foundations for on-chip security, with the novel concept of a ‘Root of Trust’ applied to semiconductors and the semiconductor IP lifecycle.

There is also research paper session 39, “Arms and Armor for the FUTURE”, on June 10, 1:30pm – 3:00pm, which focuses on these concepts and provides insight into security-enhanced processors that mitigate software vulnerabilities, innovative characterization and emulation methods empowering PUFs, hardware verification methods for Trojan detection, and so on.

There is another research paper session, 53, “got security?”, on June 11, 10:30am – 12:00pm, covering optical imaging and formal verification methods for hardware Trojan detection, novel watermarking and obfuscation techniques to protect IP at chip and PCB levels respectively, on-chip voltage regulators that suppress side-channel information leakage, and a novel TRNG design for FPGAs.

Keynote on June 10, 9:00am – 10:00am

Cyber Threats to Connected Cars: Staying Safe Requires More Than Following the Rules of the Road

Speakers: Jeff Massimilla from General Motors and Craig Smith from Theia Labs / OpenGarages / IACT, which is short for ‘I Am The Cavalry’. Craig is also the author of the Car Hacker Manual.

Moderator: John McElroy from Blue Sky Productions, Inc. John is also the host of Autoline Daily.

These gentlemen, veterans of automotive cyber security, will talk about how vehicles can continue to evolve and support internet capability via WiFi and cellular data networks, connect to mobile computing platforms via Bluetooth, provide GPS navigation, and automatically link to manufacturers to help with diagnostics. Cars need to be much more secure than computers at home!

Then there are great special paper sessions:

Special session 32: The Fourth Industrial Revolution: Security and Privacy Challenges in Industrial IoT on June 10, 10:30am – 12:00pm

Special session 46: Securing Cyber-Physical Systems (CPS): from Surveillance to Transportation and Home on June 10, 4:30pm – 6:00pm. In this session, speakers from government agency, industry and research institutions will join to introduce security challenges in several critical CPS domains. They will also present promising approaches in quantitative modeling, simulation and analysis of security elements, and in automated security-aware optimization and verification.

Special session 61: Validation, Validation, and Validation: The 1-2-3 of Secure SoC on June 11, 1:30pm – 3:00pm. This session will cover both pre-silicon and post-silicon validation techniques, including SoC security architectures.

Special session 69: The Lifecycle of Secure Chip Design on June 11, 4:00pm – 5:30pm. This deals with the whole design lifecycle of a cryptographic chip.

Do not forget to attend some of the SKY talks; they are like mini keynotes and really interesting. One of them, “On the Matter of Trust”, on June 11, 3:30pm – 4:00pm, will be presented by Kerry Bernstein of DARPA (Defense Advanced Research Projects Agency). He will talk about the various kinds of electronic threats around us and the ideas DARPA is developing to mitigate them.

There is a panel discussion too, titled “Design for Hardware Security: Can You Make Cents of It?”, on June 9, 1:30pm – 3:00pm. The panellists include industry veterans, academics and researchers, and it is moderated by Saverio Fazzari from DARPA. It should be an interesting discussion: hardware is demonstrably vulnerable to security compromise, yet it still takes a back seat when security is addressed. What should be done? Is there enough incentive in hardware security? Who should pay?

This DAC is a great opportunity to explore security issues, challenges, solutions, policies, regulation, and more!


These Energy-Saving, Batteryless Chips Could Soon Power The Internet Of Things

by admin on 04-05-2015 at 4:00 pm

Power consumption is always a major concern in electronics, especially as circuits shrink in size while growing in complexity. Using a fairly new ultra-low-power technique, sub-threshold operation of the transistors in the circuit, a company named Psikick has developed what it calls a “revolutionary” wireless sensor. Sub-threshold designs run at a sharply reduced power supply voltage (VDD), and Psikick’s sensor is said to be 100 to 1000 times more power-efficient than other recently designed sub-threshold wireless sensor nodes.

For a point of comparison, the Department of Electrical Engineering at Texas A&M University published a paper at Circuits and Systems, 2005, titled “Low power current mode ADC for CMOS sensor IC,” describing an integrated sensor that used sub-threshold and current-mode techniques for low-power operation; its power consumption was under 6 μW. Taken against that baseline, the claim from the Charlottesville-based Psikick would put its wireless sensor at roughly 6 to 60 nanowatts.

The benefits of such a low-power design include battery-less operation of the sensor from low-power scavenged sources such as wind, vibration, thermal gradients, solar, piezo actuation, and RF (radio frequency) energy. The applications for such a device could be limitless, useful in everything from medicine to athletics to the military: sensors that measure patients’ heart rates or brain waves indefinitely from the comfort of their homes, avionics that measure conditions outside the aircraft with little to no power input, or trackers that monitor the movements and heart rates of athletes for optimum performance on the field.
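To see why battery-less operation is plausible at the quoted 6-60 nW, a back-of-envelope energy budget helps. The sketch below is a hypothetical calculation, not PsiKick data: the sensor power figures are taken from the range quoted above, while the harvested-power value is an assumption for illustration.

```python
# Back-of-envelope budget for a battery-less sensor node. The 6-60 nW range
# comes from the article's estimate; the 1 uW harvested figure is an assumed,
# illustrative number for an indoor energy scavenger.

def max_duty_cycle(p_harvested_w, p_active_w, p_sleep_w):
    """Largest fraction of time the node can stay active while average drawn
    power remains within what the harvester supplies."""
    if p_harvested_w <= p_sleep_w:
        return 0.0  # harvester cannot even cover the sleep floor
    return min(1.0, (p_harvested_w - p_sleep_w) / (p_active_w - p_sleep_w))

p_harvest = 1e-6   # 1 uW harvested (assumed)
p_active  = 60e-9  # 60 nW: upper end of the quoted consumption range
p_sleep   = 6e-9   # 6 nW: lower end, treated here as the sleep floor

print(f"sustainable duty cycle: {max_duty_cycle(p_harvest, p_active, p_sleep):.0%}")
```

Even a modest 1 μW harvester would comfortably exceed the 60 nW peak draw, which is what makes an indefinitely powered sensor thinkable; with a weaker source the same arithmetic yields the duty cycling the node must accept.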

Another letter, from the Dept. of Electronics & Computer Engineering at the University of Colorado at Boulder, published in Power Electronics in Dec. 2010 and titled “Custom IC for Ultra-low Power RF Energy Scavenging,” describes what makes these low-power energy sources possible. It presents a custom integrated circuit combining an ultra-low power RF rectifying-antenna power source with a microbattery for maximum power collection. The energy-scavenger circuit operated a “boost converter in pulsed fixed-frequency discontinuous conduction mode to present a positive resistance to the rectifying antenna.” Its sub-threshold current source was in the 200 nA range, with a supply voltage between 2.5 V and 4.15 V, resulting in a power consumption of 1.5 to 30 μW and higher conversion efficiency at higher voltages. This IC was built two years before Psikick’s wireless sensor, using very similar technologies (CMOS, RF scavenging, sub-threshold processes, etc.). Although the circuit was designed for RF energy scavenging, the low-power boost converter it describes is also what enables some of the other power sources mentioned, like wind, vibration, and temperature gradients. On the homepage for the Psikick sensor, the team boasts that it is “Fully integrated and silicon-proven,” which generally means the technologies involved have been shown to work as expected. However, no actual numbers are given for supply voltages, dynamic or static power consumption, or sub-threshold currents, with the exception of the 100-to-1000-times-lower-power claim. While this makes it difficult to determine, beyond speculation, which technologies the sensor actually employs, the hype Psikick has stirred up definitely makes the design sound promising.

When a CMOS transistor operates below its threshold voltage, several problems become relevant that are normally insignificant or nonexistent at nominal conditions. When VDD is reduced, dynamic energy falls, but because the circuit slows down, the transistors leak over longer time periods and leakage energy grows. It is therefore necessary to balance the two energies and find the minimum-energy operating point, normally at a VDD of around 300-500 mV. Another problem is that the pMOS and nMOS threshold voltages are imbalanced, which may force changes to the circuit design to correct the difference. Circuits with several series or parallel transistors are also problematic. Because of the stack effect, a series stack may conduct less current when ON than a group of parallel transistors leaks when OFF, making it necessary to upsize the stack. Wide parallel structures cause trouble in static logic for the same reason: their combined leakage when OFF can exceed the ON current of the opposing series stack. Dynamic circuits should be avoided altogether, because the logic operates on sub-threshold currents and leakage gradually discharges the dynamic nodes.
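
The minimum-energy point mentioned above can be illustrated with a toy model: dynamic energy falls quadratically with VDD, while leakage energy rises because sub-threshold gates slow down exponentially. All device parameters below are illustrative assumptions, not Psikick data.

```python
import math

C_EFF = 1e-12    # switched capacitance per operation (F), assumed
I_LEAK = 1e-6    # leakage current (A), assumed
T0 = 1e-8        # delay scale factor (s), assumed
VTH = 0.45       # threshold voltage (V), assumed
N_VT = 0.039     # sub-threshold slope factor n times thermal voltage (V)

def energy_per_op(vdd):
    e_dyn = C_EFF * vdd ** 2                    # switching energy, ~VDD^2
    delay = T0 * math.exp((VTH - vdd) / N_VT)   # sub-threshold delay model
    e_leak = I_LEAK * vdd * delay               # leakage energy over one operation
    return e_dyn + e_leak

# Scan VDD from 150 mV to 800 mV and find the minimum-energy point.
vdds = [v / 1000 for v in range(150, 801)]
v_min = min(vdds, key=energy_per_op)
print(f"minimum-energy VDD ~= {v_min:.2f} V")   # lands in the 300-500 mV range
```

With these particular (made-up) parameters the optimum falls near the low end of the 300-500 mV range the paragraph cites; real silicon shifts the point with temperature, threshold, and activity factor.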

Psikick is aiming to produce an application-specific standard product for each vertical market. This simplifies the production side, but it costs extra die area, since each chip carries functions that will not be needed for every job. That is not Psikick’s main focus, however, since the goal is reduced power consumption. Given the power savings, it will be interesting to see how these chips are used in the Internet of Things, and given their tiny size, the potential uses seem limitless: industrial process control, infrastructure monitoring, precision agriculture, medical biosensing, consumer wearables, smart homes/grids/cities, and many more, according to the Forbes article about the new startup.

Some disadvantages of sub-threshold operation are lower processing power and much slower speed. Because of the much tighter power constraints of a sub-threshold circuit, the chip runs at only a “few tens of megahertz at most,” according to the Forbes article, far below the gigahertz speeds most of us are used to. On the plus side, there are many uses that do not need significant processing power but rather require minimal processing over extremely extended intervals. In such cases, being able to power a device, some type of monitor or sensor, for days, weeks, or even years on end without ever stopping for a recharge or a reboot is world-changing. Sending patients with heart conditions back home and monitoring their vitals via a sub-threshold wireless sensor could not only save lives but provide comfort while doing it. The opportunities for this new company and its circuit design are potentially limitless, and it will be exciting to watch this technology spread.

Article in question for reference: These Energy-Saving, Batteryless Chips Could Soon Power The Internet Of Things

By Adam Westman, Christian Sasso and Tanner Helton

The University of Mississippi Electrical Engineering Department introduced a Digital CMOS/VLSI Design course this semester. As part of this course, students researched a contemporary issue and wrote a blog article about their findings for presentation on SemiWiki. Your feedback is greatly appreciated.


The Changing Foundry Landscape: Trends and Challenges!

The Changing Foundry Landscape: Trends and Challenges!
by Daniel Nenni on 04-05-2015 at 4:00 am

This will be a year of change for the fabless semiconductor ecosystem, absolutely. Last year we were wondering how Samsung Mobile was going to compete with the China clones and other low-end smartphones. We now know the answer to that question thanks to the Chipworks teardown of the Galaxy S6. SemiWiki IP expert Dr. Eric Esteve blogged about it first HERE, but I can assure you we will see more blogs and Forum posts in the coming days because this is REALLY big news. Samsung has not only served notice to the fabless chip companies and mobile device makers, it has also sent a shot across the bow of the mighty TSMC. Great timing too, since I will be moderating a panel at the Mentor U2U Conference here in San Jose next week discussing just that:

The Changing Foundry Landscape: Trends and Challenges
Giorgio Cesana | Director of Technology | STMicroelectronics
Jack Harding | Co-Founder, President & CEO | eSilicon
Lluis Paris | Deputy Director of Worldwide IP Alliance | TSMC
Wally Rhines | CEO & COB | Mentor Graphics

Moderated by: Daniel Nenni, CEO & Founder, SemiWiki.com

The System on Chip (SoC) business seriously challenged the semiconductor foundries back at 28nm with increased integration, higher performance requirements, novel packaging methods, and very aggressive delivery targets. SoCs still drive semiconductor manufacturing technology at an increasingly rapid pace. This panel of experts will discuss today’s trends, challenges, and new applications that may drive future generations of semiconductor design and manufacturing.

And if meeting me isn’t enough, there are two very interesting keynotes as well:

“Secure Silicon: Enabler for the Internet of Things”

Keynote presented by: Wally Rhines, Chairman & CEO, Mentor Graphics
As electronic system hackers penetrate deeper—from applications to embedded software to OS to silicon—the impact of security threats is growing exponentially. Viruses and malware in the operating system, or application layer, are major concerns, but only affect a portion of users. In contrast, even small malicious modifications or compromised performance in the underlying silicon can devastate system security for all users. Growth of the Internet of Things magnifies the impact of the security problem by orders of magnitude.

Since hardware is the root of trust in an electronic product, EDA companies will be increasingly pressured to solve the silicon security problems for their customers. This requires a new paradigm in silicon design creation and verification. The traditional EDA role is to design and then verify that the silicon does what it is supposed to do. Creating secure silicon, however, requires that verification ensure that the chip does nothing that it is NOT supposed to do.

The industry is at the first stage of Secure Silicon awareness; it’s going to become big business as future events unfold. Join Wally Rhines as he examines the growing threats to silicon security and EDA’s possible solutions.

“Mega Trends Driving Architectures of Mobile Computing and IoT devices”
Keynote presented by: Karim Arabi, VP of Engineering, Qualcomm
The mobile computing and communication industry has been characterized by constant changes and rapid expansions. Aggressive silicon integration technology scaling, advanced low power design techniques, efficient mobile wireless and connectivity solutions and advances in a plethora of sensor technology have been critical in enabling mobile computing in a ubiquitous and cost-effective manner. Mobile computing continues to drive innovation in technologies that will enable new use cases and applications in an energy and cost efficient manner. The industry is now evolving quickly to leverage these capabilities to address the emerging wearable and IoT opportunities expected to sustain growth for the next decade. Choice of device architectures and features are impacted by market requirements and mega trends. In this presentation mega trends, opportunities and challenges driving next generation mobile and IoT devices will be reviewed.

This is a FREE CONFERENCE so I hope to see you there!


Variation Alphabet Soup

Variation Alphabet Soup
by Paul McLellan on 04-04-2015 at 1:00 pm

On-chip variation (OCV) is a major issue in timing signoff, especially at low voltages or in 20/16/14nm processes. For example, the graph below shows a 20nm inverter. At 0.6V the inverter has a delay of 2 (normalized) units, but due to on-chip variation this might be as low as 1.5 units or as high as 3 units, a slow-to-fast difference of 100%. Variation is not so bad at 1V but, for power reasons, everyone wants to push the voltage as low as possible: voltage is squared in the dynamic power equation, and lowering it also reduces leakage. Voltage is like sailing: if you want to win races, you have to sail close to the wind even though it is more difficult.
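
As a quick illustration of why everyone chases low voltage: dynamic power goes as P = C · VDD² · f. The capacitance and frequency below are placeholders; only the ratio matters.

```python
C = 1e-9   # switched capacitance (F), illustrative
f = 1e9    # clock frequency (Hz), illustrative

def dyn_power(vdd):
    # Classic dynamic power model: P = C * VDD^2 * f
    return C * vdd ** 2 * f

ratio = dyn_power(0.6) / dyn_power(1.0)
print(f"0.6 V runs at {ratio:.0%} of the 1.0 V dynamic power")  # 36%
```

Dropping from 1.0 V to 0.6 V cuts dynamic power by roughly two thirds, which is exactly why designers accept the worse variation at low voltage.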

See also Voltage Limbo Dancing; How Low Can You Go?

The problem is on-die variation. We can’t assume that if one transistor is faster than typical that the transistor it is driving is also faster. There are a number of reasons for this but one big one is that optical proximity correction (OPC) means that identical transistors do not end up identical on the mask since that depends on what is around them.

In response, foundries have broken out on-die variation as a separate component in their SPICE models. They created global corners for slow, typical and fast. These global corners, called SSG (slow global), TTG (typical global) and FFG (fast global), only include global (die-to-die and wafer-to-wafer) variance. On-die variance is separated out as a set of local parameters in the SPICE model that work with Monte-Carlo (MC) SPICE around the global corners. Analog designers routinely use these global corners and local parameters to validate cells. The same global corners and local variance parameters can be used to create derates, or adjustment factors, for static timing and physical optimization of digital designs (and the digital parts of mixed-signal designs).
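
As a rough sketch of how local variance parameters become derates, the following samples a cell delay around a global corner and takes 3-sigma bounds. The numbers are synthetic and a normal distribution is assumed; real flows run MC SPICE with foundry device models, not this shortcut.

```python
import random
import statistics

random.seed(0)
nominal_delay = 100.0   # cell delay at the TTG corner (ps), assumed
sigma_local = 4.0       # local on-die variation sigma (ps), assumed

# Monte-Carlo samples of the delay with local variation applied.
samples = [random.gauss(nominal_delay, sigma_local) for _ in range(10_000)]

mu = statistics.mean(samples)
sigma = statistics.stdev(samples)
late_derate = (mu + 3 * sigma) / nominal_delay    # 3-sigma late (max) derate
early_derate = (mu - 3 * sigma) / nominal_delay   # 3-sigma early (min) derate
print(f"late derate ~{late_derate:.3f}, early derate ~{early_derate:.3f}")
```

An STA tool would then multiply launch/capture path delays by such factors; the whole alphabet soup below is about how finely those factors are resolved.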

But how do you put this into the sign-off timing flow and delay models?

Obviously something needs to be done so everyone did something. Cadence, Synopsys and TSMC have all used multiple acronyms: OCV, AOCV, SBOCV, SOCV, POCV and LVF. Too many TLAs (and FLAs).

So how is all this represented? That’s what all the alphabet soup is about. The three main ingredients to the soup are OCV (on-chip variation), AOCV (advanced on-chip variation) and LVF (Liberty variance format). The table below shows the details.

The bottom line is that OCV works well for 45nm and above but isn’t good enough for 28nm and below. SBOCV is TSMC’s name for adding more accuracy, and AOCV is what Cadence and Synopsys call it in their timing tools.

AOCV suffers from a major limitation though. There are only 8 values to cover all the different timing arcs through a cell. Even a simple two-input NAND gate may have 128 different AOCV multipliers: two inputs times 4 input slew rates times rising/falling times 8 output loads. To get it down to 8, the worst-case derates (or near worst) need to go in the file. But this means that AOCV is still unnecessarily pessimistic.
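
The arc count works out as stated, and collapsing it to a handful of worst-case values is exactly where the pessimism comes from. The derate values below are invented purely for illustration:

```python
import random

# 2 inputs x 4 input slews x rising/falling x 8 output loads = 128 conditions
# to characterize even for a simple NAND2.
inputs, slews, edges, loads = 2, 4, 2, 8
conditions = inputs * slews * edges * loads   # 128

# Collapsing to a single worst-case derate (what a small AOCV table
# effectively does) discards the per-arc detail that LVF keeps.
random.seed(1)
per_arc = [1.0 + random.uniform(0.02, 0.12) for _ in range(conditions)]
worst = max(per_arc)
average = sum(per_arc) / conditions
print(f"{conditions} arcs: worst derate {worst:.3f} vs average {average:.3f}")
```

Applying the worst derate to every arc guarantees safety but overstates delay on most paths, which is the pessimism LVF removes by keeping per-arc variance data.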

The solution is to use Liberty Variance Format (LVF). Why?

  • unlike OCV (1 value per process corner) or AOCV (8 values per cell per corner), LVF models all possible conditions for a cell. Every arc, load and slew has unique variance information
  • it is an independent standard governed by the Liberty TAB (that also controls the Liberty library format) approved in 2013 and revised in summer 2014 to support constraint uncertainty and slew sigma
  • SOCV (a TSMC format) maps back and forth with LVF since the contents are largely the same, so there is no problem using LVF for designs that will be manufactured by TSMC
  • PrimeTime (Synopsys) and Tempus (Cadence) both support LVF

The biggest advantage of LVF is that it drives static timing accuracy much closer to MC SPICE, the ultimate benchmark. As a result it also produces more accurate slack numbers, which on the whole improves overall slack compared with OCV or AOCV.

The graph below shows this. Red is MC SPICE, so the goal is to be as close to red as possible. LVF, in green, is clearly closer than either OCV or AOCV: its rich data set lets an STA tool get dramatically nearer to MC SPICE than the other approaches.

So, in conclusion, LVF is the clear long-term winner. It will be in full production usage over the course of 2015. It is the most robust solution, and addresses all of the limitations of AOCV. Semiconductor teams that are intent on delivering 16nm, 14nm or 10nm silicon would be well advised to begin investing in an LVF design flow today.

TL;DR Use LVF

Download A Brief Introduction to Liberty Variance Format from CLKda here.