
Imec Technology Forum

by Paul McLellan on 04-23-2015 at 7:00 am

I like to quiz people on which country hosts the most leading-edge semiconductor research. People reflexively answer the USA, or maybe Taiwan or Japan. Nobody who doesn’t already know the answer would pick Belgium. After all, the EU headquarters is there not because Belgium is important but because Belgium is too small to matter, whereas putting the headquarters in Paris, Rome, Berlin or London would have everyone else objecting. But imec is based in Leuven (or Louvain if you are Francophone) and it seems to be where the world’s major semiconductor companies do a lot of pre-competitive research. And they don’t just do research into semiconductors themselves; they are also leaders in various application areas.

Just reading off the menu on their website brings up:

  • semiconductor scaling
  • GaN power electronics
  • wearable health monitoring
  • life sciences
  • wireless communication
  • image sensors and vision
  • large area flexible electronics
  • solar cells and batteries
  • sensors for industrial applications

One major way that they communicate is through the imec technology forums (fora?), which are held each year around the world. There are events in Korea, Brussels and Taiwan. The US event is held on the first day of SEMICON West, so this year it will be July 13th. The biggest event of all is held in Belgium. Too big even for Leuven itself, it is actually held over two days in Brussels. This year it is June 23-24 and is titled For the Builders of Tomorrow—Towards Smart Living.

If you work in semiconductors or electronics this should be a must-attend event. Just to give you an idea of its importance, the opening keynote is by Morris Chang (or maybe he is Maurice if you are Francophone). Also speaking are Lip-Bu Tan (CEO of Cadence), Padmasree Warrior (CTO of Cisco), Simon Segars (CEO of ARM), Peter Wennink (CEO of ASML, the home of EUV) and many more. The schedule is not 100% finalized but the provisional one is:

Day 1—June 23 (8am to 7pm)

  • Luc Van den hove—president & CEO, imec
  • Morris Chang—founding chairman, TSMC
  • Lip-Bu Tan—president & CEO, Cadence Design Systems
  • Padmasree Warrior—chief technology & strategy officer, Cisco
  • Peter Wennink—president & CEO, ASML
  • Simon Segars—CEO, ARM
  • Jean-Marc Chery—COO, STMicroelectronics
  • Robin Murdoch—global managing director, internet & social, Accenture
  • Martin Anstice—president & CEO, Lam Research
  • Caroline Hillegeer—senior vice president of strategy and technology, GDF SUEZ
  • An Steegen—senior vice president process technology, imec
  • imec life science team

Day 2—June 24 (8am to 5pm)

  • Babak Parviz—vice president, Amazon.com
  • imec wearable health-care team
  • Tim Harris—director of applied physics, Janelia Farm
  • William Yang—founder & CEO, BaySpec
  • Steven Nietvelt—chief innovation officer, Cartamundi
  • Cees Links—founder & CEO, Greenpeak
  • Stephen Turner—Founder & CTO, Pacific Biosciences
  • Meg Doherty—coordinator of treatment and care in the dept. of HIV/AIDS, WHO
  • Joost Wille—R&D director, Sioen Industries
  • Rudi Pauwels—CEO, Biocartis
  • Eric Van Zele—President & CEO, Barco
  • Phillip Vandervoort—chief consumer market officer, Proximus
  • Geert Palmers—CEO, 3E
  • Steve Beckers—general manager IC-link, imec
  • Koenraad Debackere—managing director KU Leuven Research
  • Harmke De Groot—senior director of perceptive systems for IoT, imec/Holst Centre


One especially informative presentation, for me anyway, will be An Steegen’s; she is imec’s senior person on process technology. It is like drinking from a fire hose, as the cliché goes, but in a short time you will not miss anything important that may impact the future of process technologies. I don’t just mean 7nm, I mean what comes next.

It is €550 to attend, and with the current weakness of the Euro against the dollar that is under $600. Full disclosure: I will be attending and imec are paying my expenses.

For more information go here. To register go here.


CEVA DSP Cores … Inside Intel

by Majeed Ahmad on 04-22-2015 at 3:00 pm

Intel Corp. is gaining discernible market share in the LTE chips business, and Qualcomm, the 800-pound gorilla of the mobile baseband market, suddenly finds itself in Intel’s crosshairs. A closer look at Intel’s journey from mobile silicon underdog to owner of a swelling LTE footprint shows that design ingredients like CEVA Inc.’s DSP cores have played a significant role in helping Intel get its baseband act together.

Intel licensed the CEVA-XC core for LTE chips back in 2010, at around the same time it was acquiring Infineon’s wireless business unit. Infineon also used CEVA’s DSP engines in its ARM-based 3G and 4G LTE chips. However, Intel’s licensing deal with CEVA was independent of its pending acquisition of Infineon’s baseband business. After early setbacks, Intel started surrounding its Atom system-on-chips (SoCs) with outside ingredients like CEVA soft modems.


Intel first licensed CEVA-XC DSP core back in 2010

Fast forward to 2015, Intel has cobbled a complete cellular portfolio. It is now offering both discrete LTE modems pairing with ARM-based application processors from other chipmakers as well as LTE baseband sockets integrated with its own x86 Atom application processors.

CEVA—after having scored heavy-hitters like Intel and Samsung and snapping up DSP sockets in China’s mobile SoC high-flyers such as HiSilicon, Leadcore, MediaTek and Spreadtrum—now increasingly looks like the ‘ARM of mobile baseband.’ The main chipmaker that doesn’t use CEVA cores at all is Qualcomm. So the fact that Qualcomm’s stronghold on the multi-mode LTE chip market is loosening is good news for CEVA, which designs and licenses the cores used by suppliers of mobile baseband chips.

The cellular radio connectivity stack—also known as the modem or baseband chip—has been a Qualcomm forte until now. Intel, on the other hand, has long been in the shadows while fine-tuning its mobile DSP and baseband strategies. However, the Santa Clara, California–based chip giant remained committed to its long game in the mobile industry, and that endurance is finally bearing fruit: Intel has started to carve out a tangible position in the rapidly growing LTE chips business.

Intel’s Baseband Play in China

Intel has recently scored an important design win in Asustek’s Zenfone 2 handset for the China market. According to Forward Concepts’ April 2015 newsletter, Asustek’s smartphone is powered by Intel’s 64-bit Atom quad-core processor and the 5-mode Intel XMM 7262 LTE modem. The Cat 4+ modem supports LTE-A, carrier aggregation, and FDD and TDD formats for both the China and global markets.

Next up, DigiTimes has reported that Rockchip unveiled smartphones and tablets using Intel processors at the Hong Kong Electronics Fair held on April 13-16, 2015. Intel has been collaborating with the Fuzhou-based Rockchip to jointly develop an SoC labeled X3-C3230-RK; it comprises a quad-core Atom application processor, Mali 400 MP4 graphics, and Intel’s 2G/3G/HSPA+ baseband. Intel has also launched two other SoC devices for the low-to-mid-range smartphone and phablet markets in China.


Intel has used CEVA DSP engine in 2G/3G modem for the Rockchip SoC
(Image: Intel)

The X3-C3130 device is a slightly less powerful version made up of a dual-core 64-bit Atom, Mali 400 MP2 graphics, and a 2G/3G/HSPA+ baseband. Then there is Intel’s X3-C3440 device, aimed at budget LTE phones and tablets; it integrates a quad-core 64-bit Atom with Mali T720 MP2 graphics and a 2G/3G/TD-SCDMA/FDD/TDD LTE Cat 4 baseband. These Atom x3 SoC devices—part of the Smart or Feature phone with Intel Architecture (SoFIA) family—are aimed at budget smartphones and tablets. And they use CEVA’s DSP cores in the baseband connectivity stack.

Another prominent highlight of Intel’s rising influence in China’s mobile SoC landscape is its stake in Spreadtrum Communications, one of the leading baseband chip designers in China, which also licenses CEVA’s DSP cores. Intel can leverage its relationship with Spreadtrum to target entry-level smartphone and tablet vendors in China.

Intel’s LTE Breakout

In December 2014, market research firm Strategy Analytics acknowledged in its update on the mobile baseband chip market that Intel is making steady progress in the LTE chips business. It mentioned multiple design wins at Samsung’s OEM business for Intel’s XMM 7260 Category 6 LTE baseband solution. “Intel’s SoFIA 4G chips in 2015 could help further,” the report noted.

Then, on the Mobile World Congress (MWC) floor in March 2015 in Barcelona, Intel introduced the XMM 7360 LTE-CA modem with 3x carrier aggregation and 5-mode capability. The LTE baseband chip, expected to be available in the second half of 2015, features up to 450Mbps downlink speed and supports 29 LTE bands.


Intel’s Aicha Evans holds up the 5-mode XMM 7360 modem chip
(Image: Intel)

Intel’s upcoming LTE baseband chip supports LTE Advanced up to Category 10 and rivals Qualcomm in featuring envelope tracking for power efficiency. Moreover, it supports LTE Broadcast, voice-over-LTE (VoLTE) and dual-SIM capabilities.

VentureBeat has recently reported that the iPhone 7 handset, expected to launch in 2016, will use Intel’s LTE baseband chip. Apple currently uses Qualcomm’s baseband chips in the iPhone, and if Intel can win the baseband socket in Apple’s iconic smartphone, it will be a major ‘design loss’ for the world’s top baseband chipmaker. The VentureBeat report underscores Intel’s increasing clout in the mobile chip business and the fact that it is closing the technology gap with the market leader Qualcomm.

Also read:

CEVA-XC DSP Cores

CEVA Eyes DSP Scale in China’s $65 LTE Handsets

CEVA and LTE: Happy Together

Majeed Ahmad is author of the books Age of Mobile Data: The Wireless Journey To All Data 4G Networks and Essential 4G Guide: Learn 4G Wireless In One Day.


Are There Trojans in Your Silicon? You Don’t Know

by Paul McLellan on 04-22-2015 at 7:00 am

Yesterday was the Mentor users’ group U2U. As usual, Wally Rhines gave the keynote, this year entitled Secure Silicon, Enabler for the Internet of Things. Wally started off saying it was a challenge to find a new angle. The number of news articles on cloud computing has exploded from nothing to 72,000 last year. On IoT from nothing to 42,000 last year. Cybersecurity from nothing to 67,000. In fact, if you want to really fill up your calendar then you can go to a conference on one of these subjects pretty much every day between now and the end of the year.

Estimates of the size of the IoT market range from $300B to $14.4T (quite a range) and for cybersecurity from $113B to $3T. Is that really plausible? Well, to take just one famous example, the Target credit card theft:

  • 40M credit card numbers stolen
  • 70M credit cards stolen with address, email and phone
  • 30% drop in Target’s business after the security breach
  • 46% drop in Target’s profits in Q4 2013 (YoY)
  • $200M estimated cost for banks to reissue 21M credit cards (not sure why the number of cards is so much lower than the number stolen)
  • $100M estimated cost for Target to update terminals to chip-and-PIN cards


Earlier in the week I wrote about your refrigerator attacking you. It turns out that the security weakness at Target was not a refrigerator but surprisingly close: the attackers accessed the network through an air-conditioning subcontractor. This is just one well-known security breach. There are lots more: the Stuxnet worm in the Iranian centrifuges, the Syrian Electronic Army hacking Forbes, AP’s Twitter account hacked and a multi-billion-dollar drop in the stock market on a fake tweet about Obama being injured in explosions at the White House.

I wrote a joke article about a Sonics/eSilicon chip recently here. But all the security stuff in it is real: the NSA did steal the SIM card encryption keys for millions of mobile phones, people do get across air gaps using compromised thumb drives, and so on. In fact, Wally had some data on a test done by the Department of Homeland Security in which thumb drives were dropped in the parking lots of government buildings. Result: 60% were picked up and plugged into computers in the buildings, and if the thumb drive had a DHS logo on it then a staggering 90% were plugged in.

There are three main levels of security that designers need to worry about:
First there are side-channel attacks: extracting information (typically encryption keys) from chips, either passively, using approaches such as differential power analysis or electromagnetic analysis to inspect the thermal and electrical behavior of the chip, or actively, by inducing faults with lasers or electromagnetic pulses. To date, most of the attacks have focused on the chips inside credit cards, long used in most of the world and finally starting to come into mainstream use in the US too. But there are also attacks on set-top boxes, and apparently the manufacturers judge how good their security is by how long it takes before cracked boxes are available on eBay. If they make it to two years that is regarded as the gold standard.

Solutions are starting to emerge: increased randomness, fixed-time algorithms, disguised structures to prevent reverse engineering, and so on. But designers need to simulate or emulate these attacks to defend against them before committing to silicon.
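The fixed-time-algorithm point is easy to illustrate with a small sketch (my own toy example, not anything from Wally’s talk): a naive comparison that exits at the first mismatch leaks, through its running time, how many leading bytes of a secret the attacker has guessed correctly, while a fixed-time version always does the same amount of work.

```python
def naive_compare(secret: bytes, guess: bytes) -> bool:
    # Early exit: running time depends on where the first mismatch is,
    # which lets an attacker recover the secret byte by byte.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def fixed_time_compare(secret: bytes, guess: bytes) -> bool:
    # Accumulate differences with XOR/OR so every byte is always
    # inspected; the data no longer influences the running time.
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g
    return diff == 0
```

In production software you would use a vetted primitive such as Python’s `hmac.compare_digest` rather than rolling your own; the hardware countermeasures Wally describes apply the same idea at the circuit level.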

The second problem is counterfeit chips, which involves supply chain security. The problem is growing all the time and, despite people assuming it is minor, it is already significant, and it doesn’t affect only high-cost parts. The #1 way counterfeiting gets detected is when parts do not work, but it is often very difficult to spot. Designers are generally not directly responsible for supply chain management, but there are things they can do, such as building on-chip odometers to measure use over time (so that fake chips cannot be recycled) or requiring chips to be activated with keys after manufacturing.

The biggest worry for designers is malicious logic inserted inside chips. Around a quarter of the blocks on the average SoC are 3rd-party IP, verified with 3rd-party VIP. If the IP verifies against the VIP then the designer is happy: the block does what it is meant to. The new task the designer is going to have to worry about is checking that the block does not do anything it is not meant to.

The most vulnerable attack points that Wally identified were 3rd-party IP and code re-use, complex 3rd-party scripts (driving the EDA tools) and physical IP with the trojan already designed in.

Wally reckons that the story will unfold like this:

  • there is already an emerging customer demand for silicon authentication
  • there are new standards that will force better validation, such as ISO 26262 for automotive
  • but the world will not change until there is a major event resulting in financial or physical harm, which will force:

    • semiconductor customers request certification from chip suppliers
    • chip suppliers scramble to test and certify existing IC’s
    • procedures implemented to screen IP blocks used in designs
    • design methodologies modified to add countermeasures to most designs
    • trojan detection and prevention becomes a design process

Perhaps the most worrying thing Wally said is that although you don’t read on the internet about trojans being inserted into hardware, when he meets people in the right US government departments they say it happens all the time. Wally’s assumption is that they are already doing it themselves, and they also assume the other guys are doing it. Given what we have learned about the NSA in the last year it would be more surprising if they were not. So it is not just a theoretical problem to worry about years in the future, it is already happening.


Moore’s Law is dead, long live Moore’s Law – part 5

by Scotten Jones on 04-22-2015 at 4:00 am

In the first four installments of this series we have examined Moore’s law, described the drivers that have enabled Moore’s law and discussed the specific status and issues around DRAM and logic. In this final installment we will examine NAND Flash.
Continue reading “Moore’s Law is dead, long live Moore’s Law – part 5”


Moore’s Law is dead, long live Moore’s Law – part 4

by Scotten Jones on 04-21-2015 at 11:00 pm

In the third installment of this series we discussed the status of DRAM scaling and Moore’s law. In this installment we will tackle logic. The focus will be on foundry logic.

Logic technology challenges
In the second installment of this series we discussed constant electric field scaling. As we mentioned there, logic hit a scaling wall at 90nm: the gate oxide had become so physically thin that leakage current was increasing exponentially as the oxide thinned further, and gate oxide thickness scaling stopped.
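To get a feel for that exponential dependence, here is a toy model (the numbers are my illustrative assumptions, not from this series): suppose direct-tunneling gate leakage grows roughly tenfold for every 0.2 nm of oxide thinning.

```python
def leakage_ratio(t_new_nm: float, t_ref_nm: float,
                  nm_per_decade: float = 0.2) -> float:
    """Relative gate leakage of a t_new oxide vs. a t_ref oxide,
    assuming one decade of leakage increase per nm_per_decade of
    thinning (an illustrative figure, not measured data)."""
    return 10 ** ((t_ref_nm - t_new_nm) / nm_per_decade)

# Thinning from 1.2 nm to 0.8 nm under this assumption:
print(leakage_ratio(0.8, 1.2))  # 100x more leakage
```

Even this crude model shows why a few tenths of a nanometer mattered so much, and why the industry stopped thinning the oxide and reached for strain instead.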

Strain

In order to continue to drive performance, logic manufacturers turned to mobility enhancement using strain. Carrier mobility in a MOSFET channel is a major component of transistor drive current, and strain made it possible to continue to increase drive current while holding oxide thickness constant. By applying tensile strain to NMOS MOSFETs and compressive strain to PMOS MOSFETs, significant mobility enhancement was achieved. At TSMC, the world’s largest foundry, dual strain layers (DSL) were implemented at 90nm by applying and patterning compressive and tensile silicon nitride films. DSL layers were also used at 65nm. The tradeoff for DSL is that it requires two additional masks and associated additional processing.

At 40nm TSMC added two more strain techniques: stress memorization and embedded silicon germanium (eSiGe). eSiGe is a particularly powerful method for adding compressive strain. Since PMOS typically underperforms NMOS on the same process, having an extra performance knob for PMOS is a big advantage. eSiGe adds one mask and associated processing.

High-k gate oxide

Although strain provided several nodes of scaling, a solution for gate oxide leakage was badly needed. If a high dielectric constant (high-k) dielectric is substituted for a lower dielectric constant dielectric, the high-k dielectric can be physically thicker than the lower k dielectric while maintaining good electrostatic control over the gate of the MOSFET. After many years of development, Intel introduced the industry’s first high-k gate dielectric at 45nm and TSMC followed at the 28nm node. High-k dielectrics also required metal gate electrodes to maximize the capacitance so the transition was actually to high-k metal gates (HKMG). HKMG reduced gate oxide leakage by several orders of magnitude. The HKMG transition also added process complexity and cost.

Threshold voltage control

At the same time that HKMG was being implemented, additional process complexity and masks were also required for threshold voltage control. One of the techniques to prevent transistor punch-through at short gate lengths is the use of halo implants. As gate lengths shrank to 40nm, the halo implants began to influence threshold voltage so strongly that not only were NMOS and PMOS threshold implants required for each threshold voltage but tailored extension/halo implants were required as well. Some tailoring of the source/drain implants was also sometimes required. This added masks and associated complexity and cost to logic processes.

Fully depleted devices
A standard bulk planar MOSFET has highly doped source and drains of one dopant type separated by a lightly doped channel of opposite dopant type. Above the channel is a dielectric layer with the gate electrode on top. When the gate is properly biased it inverts the channel surface allowing current to flow between the source and the drain. In the off state the gate is supposed to deplete the region between the source and the drain of carriers preventing leakage. The problem is the gate only controls the surface. As gate lengths shrink, leakage currents develop below the region the gate controls. See figure 1, left side.

An alternative to bulk silicon is silicon on insulator (SOI). Early versions of SOI had silicon layers thick enough that leakage could still occur deep under the gate, see figure 1, second from the left. The “thick” silicon layer SOI is referred to as partially depleted SOI (PDSOI).

The solution to this problem is to constrain the “depth” of the channel region so the gate can fully deplete the channel. There are two main approaches to fully depleted devices, fully depleted SOI (FDSOI) and FinFETs.

FDSOI (figure 1, second from the right) makes the silicon layer so thin that the gate fully depletes the channel.

Finally, FinFETs fabricate a narrow fin with gates on both sides or on both sides and the top to fully deplete the channel, see figure 1, right side.

Figure 1. Comparison of bulk, partially depleted SOI, fully depleted SOI and FinFETs

In order to ensure a MOSFET is fully depleted in the off state, the silicon layers must be thin enough. For a standard “FinFET” with gates on both sides, the fin thickness must be less than one half the gate length, see figure 2, left side. For a “TriGate” with gates on both sides and the top, the silicon thickness requirement is relaxed to one times the gate length. Please note that TriGate and FinFET are used interchangeably in the industry today, with all “FinFET” implementations being done in the “TriGate” configuration. For FDSOI to be fully depleted, the silicon thickness must be less than one third the gate length, see figure 2, right side.

Figure 2. Silicon thickness for fully depleted MOSFET operation
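The thickness rules above are easy to capture in a small helper. This is just a sketch: the three factors come straight from the text, while the function and its names are mine.

```python
# Maximum silicon thickness (fin or film) for full depletion, as a
# fraction of gate length: FinFET (double gate) t < Lg/2, TriGate
# t < Lg, FDSOI t < Lg/3.
FACTORS = {"finfet": 0.5, "trigate": 1.0, "fdsoi": 1.0 / 3.0}

def max_si_thickness(gate_length_nm: float, device: str) -> float:
    """Maximum silicon thickness (nm) for fully depleted operation."""
    return FACTORS[device.lower()] * gate_length_nm

# For a 20 nm gate length:
for dev in ("finfet", "trigate", "fdsoi"):
    print(dev, round(max_si_thickness(20, dev), 1))
# finfet 10.0, trigate 20.0, fdsoi 6.7
```

The FDSOI number makes the manufacturing challenge concrete: a sub-7 nm silicon film must be delivered uniformly across a 300 mm starting wafer.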

The merits of FDSOI versus FinFETs have been hotly debated in the industry and on SemiWiki. I will not repeat the arguments here but rather just note that the world’s four largest foundries have all adopted FinFETs for their 16nm/14nm node solutions.

The transition to fully depleted devices actually offers some process simplification in terms of number of steps, but FDSOI starting wafers are expensive and fin formation is very difficult to control well enough for high yield.

Other factors

In addition to the factors listed above, each new node has generally increased the number of metal layers needed for interconnect, and around 20nm local interconnect and metal-insulator-metal capacitors were added to many processes. At 10nm we also expect to see air gaps introduced in some interconnect layers to reduce parasitic capacitance.

Multipatterning

In installment 2 multipatterning was introduced. At 20nm foundries implemented multipatterning for the shallow trench isolation, gate, contact, and M0 through M5 levels plus associated vias. This dramatically increased mask counts and cost.

Logic scaling
As logic scaled down to 40nm there was a gradual increase in masks and process complexity. At 40nm the addition of eSiGe and the need for tailored implants drove a jump in mask counts. At 20nm the introduction of multipatterning drove another jump in masks, and at 10nm additional multipatterning requirements and the introduction of air gaps will likely drive a big jump. At 7nm we have assumed that EUV is available and we see a big reduction in mask count. Assuming this actually happens, it will drive a lot of other things, as we will see later. Figure 3 illustrates the mask count trend for a foundry logic process.

Figure 3. Foundry logic process mask count trend. Source: IC Knowledge Strategic Cost Model

Figure 4 illustrates the resulting wafer cost increases versus node. The big jump in masks and process complexity seen at 10nm also drives a big jump in wafer cost. At 7nm the introduction of EUV has the potential to actually decrease the wafer cost versus 10nm.

Figure 4. Foundry wafer cost trend. Source: IC Knowledge Strategic Cost Model

Figure 5 illustrates the gate density for foundry logic processes. The logic gate density continually increases until 16nm, where the foundries decided to maintain the same back end of line (BEOL) linewidths while transitioning to FinFETs. 10nm is expected to be a full shrink and we are forecasting 7nm as a full shrink as well.

Figure 5. Foundry gate density trend. Source: IC Knowledge Strategic Cost Model

Figure 6 puts together the wafer cost and gate density to produce a cost per gate trend. Once again we have spaced this chart out to align the nodes with years to better show the cost trend. Some other analysts are claiming no cost reduction at 28nm or 20nm; we don’t see that. We do see a slight increase in cost per gate at 16nm due to the lack of a shrink. We also see only a small decrease in cost at 10nm due to all the required multipatterning masks. At 7nm, assuming EUV can be implemented, we see a cost reduction that is pretty close to the historical trend.

Figure 6. Foundry cost per gate trend. Source: IC Knowledge Strategic Cost Model
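The arithmetic behind a cost-per-gate chart like Figure 6 is simply wafer cost divided by gates per wafer. A sketch with entirely hypothetical numbers (the wafer costs, gate densities and usable wafer area below are my illustrative assumptions, not IC Knowledge data):

```python
def cost_per_gate(wafer_cost_usd: float,
                  gate_density_mgates_per_mm2: float,
                  usable_area_mm2: float = 70_000) -> float:
    """Cost per gate in USD. ~70,000 mm^2 of usable area on a 300 mm
    wafer is an assumed round number for illustration."""
    gates = gate_density_mgates_per_mm2 * 1e6 * usable_area_mm2
    return wafer_cost_usd / gates

# If a node doubles gate density, wafer cost can rise by up to 2x
# before cost per gate stops improving (hypothetical figures):
base = cost_per_gate(5000, 10)    # $5,000 wafer, 10 Mgates/mm^2
shrunk = cost_per_gate(8000, 20)  # $8,000 wafer, 20 Mgates/mm^2
print(shrunk < base)  # True: cost per gate still fell
```

This is why the 16nm pause shows up as a cost-per-gate increase: density barely moved while wafer cost rose, so the ratio went the wrong way.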

Conclusion
The need for multipatterning at 20nm and the pause in scaling at 16nm for the FinFET transition led to an increase in cost per gate at 16nm and therefore a pause in Moore’s law. At 10nm we see some cost reduction, but less than normal due to extensive multipatterning. At 7nm we see the prospect of a return to “normal” Moore’s law cost reductions if EUV meets its promise, keeping Moore’s law alive for at least 3 more nodes. Without EUV, 7nm is likely to be another smaller-than-normal cost reduction, assuming a full shrink is even possible.

Also Read:
Moore’s Law is dead, long live Moore’s Law – part 1
Moore’s Law is dead, long live Moore’s Law – part 2
Moore’s Law is dead, long live Moore’s Law – part 3

Moore’s Law is dead, long live Moore’s Law – part 5


How is Trillion Sensors by 2025 Panning Out?

by Pawan Fangaria on 04-21-2015 at 7:00 pm

From the literature, talks in the semiconductor industry, forecasts, and BHAGs (Big Hairy Audacious Goals), specifically in the context of IoT (Internet of Things) and IoE (Internet of Everything), we have been looking forward to a world with over a trillion sensors around us. I recall an impressive slide (reproduced below) from a presentation by Chris Wasden at 2014 MEC. Chris is the executive director of the Sorenson Center for Discovery and Innovation at the University of Utah.

This table shows the intensity of the increase in the number of sensors, to more than 1 trillion by 2025, the decrease in the average unit price of a sensor, and yet an increase in total sensor revenue. It’s impressive in the sense that it shows the best from all angles: an increase in the number of sensors means a highly automated world, an increase in industry revenue means a healthy semiconductor (sensor) business, and a drastic decrease in the unit price of a sensor means societies worldwide can afford it and support the industry. Wow!!

How is that dream panning out today? Well, we seem to be doing fine on sensor unit volume through 2015. It rose by 13% in 2014 and is expected to rise by 16% in 2015 to reach 12.9 billion units, a figure above the one projected in the table. However, according to IC Insights’ five-year forecast, sensor unit volume is expected to grow at a CAGR of 11.4% and reach 19.1 billion by the end of 2019. That is well below what the table projects, even with a strong 11.4% CAGR. Let’s look at the sales figures in terms of dollar revenue.
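These unit-volume figures can be sanity-checked with the standard CAGR formula. In the sketch below, the 2014 base of roughly 11.1 billion units is inferred by me from the 13%/16% growth figures quoted above, so treat it as an approximation rather than a reported number.

```python
def project(units_start: float, cagr: float, years: int) -> float:
    """Compound a starting volume forward: units * (1 + CAGR)^years."""
    return units_start * (1 + cagr) ** years

# Back out the 2014 volume from the 12.9 billion expected in 2015
# after 16% growth (my inference from the figures in the text):
units_2014 = 12.9 / 1.16  # ~11.1 billion units

# Five years at an 11.4% CAGR lands close to the forecast 19.1 billion:
print(round(project(units_2014, 0.114, 5), 1))  # 19.1
```

Running the same formula out to 2025 at 11.4% gives on the order of 40 billion units, which is why the forecast falls so far short of the trillion-sensor slide.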

The IC Insights report on sensor shipments indicates that falling ASPs (Average Selling Prices) are impacting sales growth. True, high-volume markets like intelligent wearables, automated control systems, and mobile electronics have boosted demand for sensors. However, those high-volume applications increased competition in the market and drove sensor prices down substantially, squeezing suppliers’ profit margins. That seems to dampen the spirits in the sensor market. While sensor sales grew at a CAGR of 17.1% between 2009 and 2014, reaching a record high of $5.7 billion last year, they are expected to rise at a CAGR of just 6% over the next five years.

Acceleration & yaw sensors (i.e. accelerometers & gyroscopes), the largest category in terms of dollar sales (26% of the total sensor/actuator market), had a 4% drop in worldwide sales to $2.4 billion; in 2013 this category had dropped by 2%. In 2013, magnetic-field sensor sales dropped by 1%, but in 2014 they sharply rebounded with 11% growth, reaching sales of $1.6 billion. Pressure sensor sales continue to show strength, with a 16% increase in 2013 and a 15% increase in 2014, reaching a new high of $1.5 billion.

MEMS-based sensors, which include acceleration & yaw sensors, magnetic-field sensors and pressure sensors, accounted for a major $7.4 billion of sales in 2014, a 5% rise from $7 billion in 2013. They are expected to rise by 7% in 2015 to $7.9 billion and reach $9.8 billion by 2019, at a CAGR of 12%.

Looking at the actual figures in the IC Insights report, they appear more or less in line with the projections in the trillion-sensor slide as far as 2015 is concerned. However, the projections in the slide for 2020 and beyond appear to be way off from what we see on the ground in the IC Insights report. We need to watch developments in the sensor business, a key ingredient of the IoT landscape in the near future!


S2C eyeing 1B gate FPGA-based prototypes

by Don Dingee on 04-21-2015 at 1:00 pm

We hear a lot about FPGA-based prototyping hardware: Aldec, Dini Group, PRO DESIGN, Synopsys, and others. So, why is today’s news on a new platform from S2C important? It’s a matter of intent, beyond the act of gluing a few large FPGAs on a board for customers to dump more and more prospective RTL into.

Size differences aside, each vendor has an emphasis. At the risk of oversimplification…

Aldec is concentrating on speed of the basic implementation, and incorporating actual target hardware on FMC daughtercards, handy in situations requiring DO-254 compliance. Dini focuses on application platforms, such as FPGA-based algorithm acceleration for financial trading. PRO DESIGN is developing a modular approach to mix and match both FPGA architecture and I/O capability. Synopsys is leveraging their FPGA synthesis knowledge to create better partitioning and smoother upward integration for IP blocks into complete systems.

Unsurprisingly, in this field of worthy contenders, S2C has chosen its own emphasis in launching the Prodigy Complete Prototyping Platform. S2C isn’t a new company; it has been building FPGA-based hardware since 2004 and has deep connections, particularly in China – Daniel Nenni has an article coming shortly with a look at the history of the firm.

In the opening of their press release, they make a rather sweeping statement: “any functional design stage, with any design size, and across multiple geographical locations.” Looking at the specifics, their strategy – at least what they’ve announced so far – looks like an amalgam of the competition with some unique additions, and a hint at what’s coming.

Where most of the vendors are concentrating on the biggest Xilinx FPGA they can find (PRO DESIGN the exception), S2C has a selection of Xilinx Virtex-7, Kintex-7, Virtex-6, and Altera Stratix IV “logic modules”, the hardware side of the solution. A Xilinx UltraScale product is coming soon.

To facilitate partitioning a design across multiple FPGAs, S2C has Prodigy Player Pro 5.1, a hardware-aware partitioning tool that also provides remote monitoring and control. One of the biggest performance boosters in partitioning is where and how to insert LVDS pin multiplexing, handled with either automatic or guided modes in the partition engine. In addition to clock and reset control, self-test, and remote management capability, the software also has virtual switches and LEDs for simple I/O functions usually found only on the physical board.

From there, Prodigy ProtoBridge takes over. It links system-level simulation to the FPGA-based prototyping platform using AXI-4 and other protocols over a 4-lane PCIe Gen2 interface. It supports up to 16 master and 16 slave instances, with configurable data widths from 32 to 1024 bits, and transmission length limited only by the host’s hard disk space. A C-based API allows simulators to connect and run verification routines, such as high-performance regression tests.
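As a sanity check on what that link can move, a quick calculation: PCIe Gen2 signals at 5 GT/s per lane with 8b/10b encoding, so only 8 of every 10 transferred bits carry payload (protocol overhead reduces the usable figure further).

```python
def pcie_raw_bandwidth_gbytes(lanes, gt_per_s=5.0, encoding=8 / 10):
    """Raw bit rate of a PCIe link in GB/s, before protocol overhead.

    PCIe Gen2 runs 5 GT/s per lane and uses 8b/10b encoding, so the
    payload fraction is 0.8; dividing by 8 converts bits to bytes.
    """
    return lanes * gt_per_s * encoding / 8

print(pcie_raw_bandwidth_gbytes(4))  # 2.0 GB/s raw for a x4 Gen2 link
```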

There is also a selection of Prodigy Prototype Ready IP, with interfaces such as USB, HDMI, and MIPI, a range of memory interfaces, a Xilinx Zynq module, and many others. S2C is offering design services to create specific modules per customer requirements.

If it ended there, S2C would have a comprehensive solution. What about the “multiple geographical locations?” S2C is readying a breakthrough approach linking these platforms across a network, exposed and managed in a private cloud, targeting designs of a billion gates and perhaps more. In conjunction, deep trace debugging across multiple FPGAs is also on the S2C roadmap.

Executed properly, a private, secure cloud solution could enable interesting capabilities between third-party IP vendors, SoC designers in distributed teams, and foundry partners. So far, I haven’t heard of other FPGA-based prototyping vendors taking on the cloud in this kind of strategy. This will be intriguing to watch.


Top 10 Reasons to Use Industry-standard Data Management

by Paul McLellan on 04-21-2015 at 7:00 am

Should a semiconductor/IP company use a proprietary data-management (DM) environment? Or even develop their own? After all, every company is unique and developing a unique DM allows a perfect match of just what is required for that particular company. And, in principle, a proprietary DM system can underpin the design management solution perfectly. On the other hand…

There are several industry-standard DM environments that are widely used; probably the most famous are Perforce, Subversion and Git. Methodics ProjectIC can use any of them, which reflects two separate decisions that Methodics made early on:

  • do not develop their own proprietary DM environment
  • make ProjectIC work with any DM environment

Here is a list of reasons why it makes sense to use an industry-standard DM rather than using something non-standard, even if it is hidden under the hood of the design management solution selected.

  1. Software/hardware compatibility: since most (hopefully all) software developers already use industry-standard configuration-management tools, the interface between the hardware designers and the software designers is more efficient and effective if both are using the same underlying system. In the software world these DM environments are often called source-code management, but for IC design, data management seems like a better general term.
  2. Industry-standard DM solutions make typical software development methodologies such as Agile available to hardware designers. These methodologies have a proven history of making large teams more efficient at collaborating on pieces of a design that must all come together seamlessly at the end, such as a chip design or a block of IP being released.
  3. Zero cost of development: companies are not indirectly (or directly, if they are insane enough to develop their own) paying for a proprietary DM environment with the cost amortized over a small user base.
  4. Lower cost of maintenance: again, everyone can focus all their resources on doing design rather than wrestling with a proprietary DM and paying a large percentage of its ongoing maintenance costs.
  5. With industry-standard DM solutions, the installed base is much larger than with proprietary tools. Most industry-standard DMs are open source, so the number of contributors to their ongoing development is large. Consequently, new features appear constantly as new requirements emerge from this large user base and its associated cohort of developers.
  6. There is no rule 6. Obligatory Monty Python reference.
  7. Integrations with a long list of existing third-party tools are already in place, including MS Word, Emacs, Eclipse, and others.
  8. There is a huge online knowledge base of other users’ experiences, so users can easily and quickly search for and find answers to a wide range of problems.
  9. Industry-standard DM solutions have better-tuned performance, since a greater variety of use models will have been seen, and problems will already have been encountered and addressed.
  10. The needs of any company change over time, especially in IC design, where designs only get larger and more second-order effects become first-order. There is only ever more data to be managed. An industry-standard DM is much more likely to already support any new requirement and to scale to future needs; a proprietary DM may require extensive development, which can put the DM itself on the critical path to tapeout.


These reasons together make a compelling case for not reinventing the wheel in the DM area. There are plenty of excellent wheel providers already out there.


SecurCore: Modern Hardware Security Approach
by admin on 04-20-2015 at 7:00 pm

The number of interconnected devices grows day by day and has slowly begun expanding into other consumer products. As a result, the need for safe, efficient, and reliable systems that meet modern user expectations has become increasingly important. SoC engineers addressing these challenges must weigh design tradeoffs, such as the silicon area or clock speed given up for security, while still meeting the desired specification. Security breaches in legacy systems were typically handled by application-layer software, but the proven susceptibility of these systems has generated a push to design hardware with security in mind. SecurCore is a processor core series from ARM that looks to meet these standards. Designed from the bottom up with security in mind, it looks to redefine how a modern, safe system should be designed.

SecurCore is designed around two main concepts: the principle of least privilege and partitioning of the system into protected compartments. First, hardware and software resources are split into two worlds, called the “Secure world” and the “Normal world.” The Secure world is a trusted execution environment that handles only sensitive data and has access to the entire system plus a subset of private resources that only it can reach. The Normal world is where casual user activity takes place. A common OS such as Windows lives here and functions as it would on any other end device; the OS kernel still manages system calls and has access to the non-secure portions of the system. The two worlds are separated by hardware logic in the bus fabric through the inclusion of a non-secure (NS) bit.

The NS bit is what the processor uses to differentiate between secure and non-secure activity, creating a mechanism that prevents activity in the Normal world from affecting the Secure world. This mechanism also limits the direct memory access of peripheral devices that might attempt to reach secure private data. It also simplifies cache management, as cache flushes between context switches are no longer necessary.
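The gating the NS bit performs can be illustrated with a toy software model (purely for intuition; in the real hardware this check is logic in the bus fabric, not code, and the names below are invented):

```python
SECURE, NON_SECURE = 0, 1  # NS bit values: 0 marks a secure transaction

class BusFabric:
    """Toy model of NS-bit filtering at the bus fabric."""

    def __init__(self):
        self.regions = {}  # each address region carries a security attribute

    def map_region(self, name, secure_only):
        self.regions[name] = secure_only

    def access(self, region, ns_bit):
        """Block Normal-world transactions (NS=1) from secure-only regions;
        Secure-world transactions (NS=0) see the whole system."""
        if self.regions[region] and ns_bit == NON_SECURE:
            return "fault"  # the fabric aborts the transaction
        return "ok"

fabric = BusFabric()
fabric.map_region("crypto_keys", secure_only=True)
fabric.map_region("dram", secure_only=False)

print(fabric.access("crypto_keys", NON_SECURE))  # fault
print(fabric.access("crypto_keys", SECURE))      # ok
print(fabric.access("dram", NON_SECURE))         # ok
```

Note that the check depends only on the NS bit travelling with the transaction, which is why DMA-capable peripherals are covered by the same mechanism as the processor itself.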

A security issue with multicore processors is that of shared resources. To mitigate this, SecurCore uses only a single core to process all data and provides two virtual cores: one for managing Normal world activity, the other for handling Secure transactions. Processor time is split between the two virtual cores in a time-sliced fashion managed by a hypervisor-style monitor, which creates the two worlds as virtual machines and provides a mechanism for safe context switching between them by monitoring the NS bit. The monitor also provides a single point of entry, eliminating the need for a separate security processor core in the design.
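The time-slicing between the two virtual cores can be sketched the same way. This is again a toy model, and the alternating cadence and monitor behavior here are invented for illustration; the real monitor switches on interrupts and explicit calls, not a fixed rotation.

```python
def monitor_switch(state, to_secure):
    """Toy monitor: save-and-restore is elided; it flips the NS bit
    so the next slice runs in the other world."""
    state["ns_bit"] = 0 if to_secure else 1
    state["switches"] += 1

def run_time_sliced(slices):
    """Alternate one physical core between two 'virtual cores'."""
    state = {"ns_bit": 1, "switches": 0}  # start in the Normal world
    log = []
    for _ in range(slices):
        log.append("secure" if state["ns_bit"] == 0 else "normal")
        # hand control to the monitor, which switches worlds
        monitor_switch(state, to_secure=(state["ns_bit"] == 1))
    return log, state["switches"]

log, n = run_time_sliced(4)
print(log, n)  # ['normal', 'secure', 'normal', 'secure'] 4
```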

The aim of SecurCore is to create an environment where casual and business activity can take place separately on a single device while providing robust security. This would be helpful in areas like the music business, where producers could listen to new material while on the move instead of having to travel to a studio, or for stockbrokers making trades while on a business trip. Even if the Normal world on the device were compromised, it still could not access the sensitive resources available in the Secure world. This makes it a good candidate for future implementation in consumer goods such as refrigerators, thermostats, and the like.

Overall, SoC designers gain flexibility when designing with a chip that has built-in security such as SecurCore. Eliminating the need for an independent security component allows greater optimization of silicon area on the chip. The ARM chip uses larger transistors to lower dynamic power consumption by reducing supply voltage; they also reduce subthreshold and gate leakage, thereby increasing reliability. However, larger transistors have a longer critical path, which decreases clock speed and performance, though this is a small tradeoff for now. Leakage at small feature sizes may limit the progress of faster security devices in the future for reliability reasons, but SecurCore is a step in the right direction with a bottom-up approach to secure system design.

Source: https://www.legacy.semiwiki.com/forum/content/3953-securecore-secure-mpu-iot.html

By Jason Ball and Terence Roby

The University of Mississippi Electrical Engineering Department introduced a Digital CMOS/VLSI Design course this semester. As part of this course, students researched a contemporary issue and wrote a blog article about their findings for presentation on SemiWiki. Your feedback is greatly appreciated.


TSV Modeling Key for Next Generation SOC Module Performance
by Tom Simon on 04-20-2015 at 1:00 pm

The use of silicon interposers is growing. Several years ago Xilinx broke new ground by employing interposers in their Virtex®-7 H580T FPGA. Last August Samsung announced what they say is the first DDR4 module to use 3D TSVs for enterprise servers; their 64GB DDR4 modules will be used for high-end computing, where capacity and performance are critical. Nvidia has announced that it is following its Maxwell GPUs with the Pascal family, which will use 3D memory and an interposer. This, along with other improvements, promises a huge jump in performance: the 3D interconnect will allow a 3X improvement in memory bandwidth, making Pascal 10X faster overall than Maxwell.

But interposers present challenges to design engineers. Unlike package substrates or wire bond connections, silicon interposers are made of silicon, a poor dielectric. Depending on the design configuration, signals may need to pass vertically through the silicon interposer or the die themselves. The only way to achieve this high interconnect density is to use through-silicon vias (TSVs). These are metal vias that are relatively large and tall, and they require insulation, provided in the form of a silicon oxide sleeve separating the TSV metal from the silicon bulk.
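That oxide sleeve makes each TSV look electrically like a small coaxial capacitor, so a first-order estimate of its capacitance follows from the coaxial formula C = 2πε₀εᵣh / ln(b/a). A quick sketch, with dimensions invented for illustration:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def tsv_oxide_cap(height_um, via_radius_um, oxide_thickness_um, eps_r=3.9):
    """Coaxial-capacitor estimate of a TSV's oxide-sleeve capacitance.

    The TSV metal (inner radius a) and the silicon bulk (outer radius b)
    are the two plates; the SiO2 liner (eps_r ~ 3.9) is the dielectric.
    """
    a = via_radius_um * 1e-6
    b = (via_radius_um + oxide_thickness_um) * 1e-6
    h = height_um * 1e-6
    return 2 * math.pi * EPS0 * eps_r * h / math.log(b / a)

# e.g. a 100 um tall TSV with a 5 um via radius and 0.5 um oxide liner:
# the result lands in the low hundreds of femtofarads
print(tsv_oxide_cap(100, 5, 0.5))
```

Hundreds of femtofarads per via is large compared with on-chip wiring parasitics, which is part of why TSVs demand their own modeling attention.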

Their structure and their behavior in high-speed designs mandate analysis and modeling accurate enough to predict potential performance issues. Advanced designs will have high densities of TSVs, which means there will be interactions between adjacent TSVs, and simple RC extraction will not be sufficient. Nearby TSVs can couple inductively. Also, because of the properties of the interposer, parasitic MOS capacitors are formed that can couple between adjacent TSVs. The models needed to accurately represent this system should have frequency-dependent elements.
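To see why frequency-dependent elements matter, compare the two coupling paths between a pair of adjacent TSVs. The inductive path impedance |jωM| grows with frequency while the capacitive path impedance |1/(jωC)| falls, so which mechanism dominates shifts across the band. The 50 pH and 20 fF element values below are assumed purely for illustration:

```python
import math

M = 50e-12  # mutual inductance between adjacent TSVs, ~50 pH (assumed)
C = 20e-15  # coupling capacitance between adjacent TSVs, ~20 fF (assumed)

def coupling(f_hz):
    """Impedance magnitudes of the two coupling paths at frequency f."""
    w = 2 * math.pi * f_hz
    z_ind = w * M        # |jwM|: rises linearly with frequency
    z_cap = 1 / (w * C)  # |1/(jwC)|: capacitive path impedance falls with f
    return z_ind, z_cap

for f in (1e9, 10e9, 100e9):
    zi, zc = coupling(f)
    print(f"{f/1e9:5.0f} GHz  |jwM| = {zi:7.3f} ohm   |1/(jwC)| = {zc:9.1f} ohm")
```

A single fixed RC (or RLC) element cannot reproduce this crossover behavior over a DC-to-100GHz band, which is the motivation for frequency-dependent model elements.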

I recently came across an excellent discussion of modeling and analysis of designs with multiple TSVs, published by Mentor. Here is the link. The paper discusses the difficulties in producing good compact models for TSVs, and the tradeoff between full-wave and quasi-static analysis methods in light of the many frequency-dependent effects that need to be considered.

Mentor acquired Nimbic a while back, and the test cases illustrating the results in the white paper were run with nApex, which came from that acquisition. nApex is a quasi-static extractor, but it is able to do some clever things to get good correlation with full-wave solvers, which run much slower and have the added difficulty of outputting S-parameters. Getting from S-parameters to compact models suitable for transient SPICE can be extremely difficult, and when the target application is modeling large numbers of TSVs, the high port count almost always makes simulation difficult without an equivalent circuit.

The Mentor paper shows results for a test case modeled from DC up to over 100GHz. It drills into the relative effects of inductive and capacitive coupling within a TSV array as a function of distance. In real-world cases there will also be RDL and potentially package structures that need to be included. All of this makes an effective case for using quasi-static methods that can be shown to do well in modeling skin effects and the like.
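Skin effect is a good illustration of why that DC-to-100GHz range is demanding: the classic skin-depth formula δ = sqrt(2/(ωμσ)) shows current crowding into an ever-thinner shell of the conductor, so resistance rises with frequency. Evaluated for copper:

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, H/m
SIGMA_CU = 5.8e7      # conductivity of copper, S/m

def skin_depth_um(f_hz):
    """Skin depth in copper at frequency f, in micrometers.

    delta = sqrt(2 / (omega * mu * sigma)); current density falls to
    1/e of its surface value at this depth, so effective resistance
    grows roughly as 1/delta, i.e. as sqrt(f).
    """
    omega = 2 * math.pi * f_hz
    return math.sqrt(2 / (omega * MU0 * SIGMA_CU)) * 1e6

for f in (1e9, 10e9, 100e9):
    print(f"{f/1e9:5.0f} GHz: skin depth {skin_depth_um(f):.2f} um")
```

At the top of the band the skin depth shrinks to a fraction of a micron, well below typical TSV dimensions, so a model that ignores the effect misstates the losses.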

The need to model 3D structures is only going to grow. It will be interesting to see how overall system performance improves as these new design approaches offer ways to reduce memory access time and expand inter-die busses. Nvidia's successor to Pascal has already been discussed, and the projected performance gains allow them to track Moore's Law-type improvements in their GPUs. This certainly would not have been possible without the benefits of 3D memory and interposers that rely on TSVs.