
Mind The Gap – Boarding the Silicon Photonics Packaging Train

by Mitch Heins on 05-01-2016 at 8:00 pm


I’ve been doing a lot of reading on silicon photonics lately, and I’ve come to realize that while much has been written on the development of individual silicon photonic components and devices (modulators, photodetectors, optical amplifiers and the like), much of the cost, and therefore the chance of economic success, of integrated photonics solutions resides not in the silicon but in the packaging of these solutions. Before the photonics platform can be used, photonic ICs (PICs) must be integrated with their electrical IC (EIC) counterparts and with the rest of the system.

One of the biggest challenges of this integration is getting light on and off the PIC from optical fiber. For integrated photonics this is typically done through either edge couplers or grating couplers, as shown in figure 1. The tricky (and costly) part comes in ‘minding the gap’ between the relatively large optical mode of the fiber and the very small optical mode of an SOI on-chip waveguide. To put this into perspective, the spot diameter of a single-mode lensed telecom fiber for 1550nm light is ~3um. This must be matched to an SOI waveguide mode with dimensions of ~220nm x 450nm. That’s an area difference of almost two orders of magnitude (~7 million nm² vs 99,000 nm²). More challenging still, SOI waveguides typically support only TE-polarized modes, while the polarization of light coming in from a fiber is usually unknown and unstable, requiring a mode converter to clean up the signal before it enters the waveguide. The 1 dB alignment tolerance for a typical edge coupler is sub-micron (~ +/- 500nm), requiring time-consuming and expensive active alignment during packaging. Additionally, these couplers usually require laser welding to secure the lensed fiber to the PIC, as epoxy bonding suffers from small alignment drifts that would not be tolerable at these dimensions. With laser welding comes the need for more expensive packages to mitigate the effect of thermal expansion on the optical alignment. Peter O’Brien and the packaging group from Tyndall National Institute in Cork, Ireland do a great job of explaining all of the nuances of this and more in chapter 7 of the book Silicon Photonics III. The end result: while edge couplers are the standard for the packaging of III-V laser devices, reduce insertion loss and give more broadband coupling, they substantially increase packaging cost for integrated PICs due to their stringent alignment requirements.
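To make the scale mismatch concrete, here is a quick back-of-the-envelope calculation using the dimensions quoted above (a sketch in Python, purely for illustration):

```python
import math

# Spot area of a ~3 um lensed single-mode fiber vs. the mode area of a
# 220 nm x 450 nm SOI waveguide (dimensions quoted in the text).
fiber_diameter_nm = 3_000
fiber_area_nm2 = math.pi * (fiber_diameter_nm / 2) ** 2  # ~7.1 million nm^2
wg_area_nm2 = 220 * 450                                  # 99,000 nm^2
ratio = fiber_area_nm2 / wg_area_nm2
print(f"fiber/waveguide area ratio: ~{ratio:.0f}x")      # ~71x, i.e. almost 2 orders of magnitude
```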

The alternative to edge couplers is the grating coupler, which uses a diffraction grating to couple a near-incident fiber mode to the waveguide mode (figure 2). Grating couplers are typically designed as a 10um x 10um periodic array of trenches partially etched into the silicon layer. The trenches are usually curved to focus the light into the SOI waveguide, reducing the need for long, space-consuming taper structures, and they can be designed to do double duty by taking care of the required polarization cleanup. The 1 dB alignment tolerance for these couplers is ~ +/- 2.5um. While still challenging, the process of “minding the gap” here is greatly simplified and much less costly compared to edge couplers. Grating couplers have the added benefit that they are fully CMOS-flow compatible and allow wafer-scale optical access at any point on the PIC surface, enabling inline testing and characterization of the PIC before dicing.

The biggest issue with grating couplers is their relatively high insertion loss. A standard grating coupler in 220nm SOI has about -3 dB of insertion loss, which equates to a 50% reduction in transmitted power. In MPW runs from imec and CEA-Leti, users have experienced insertion losses as high as -5 dB. Several research groups are working on this and have reported lab devices using bottom reflectors with insertion loss down to -1 dB, but these devices have not yet seen production. The other major concern for grating couplers is the need for near-incident light, making for bulky and delicate vertical connections from fiber to the PIC. To “mind this gap”, a quasi-planar approach (figure 7.4 of the chapter cited above) has been developed in which the fiber lies flat on the surface of the PIC with a 40° polished facet that directs the fiber mode onto the grating coupler at the correct angle. Due to their relaxed alignment constraints, these connections can be made with less expensive epoxy bonding. This is especially helpful for fiber-array connections, where multiple fiber channels in the same connector let you amortize the cost of one alignment task across many channels. As an added bonus, grating couplers can also be used with VCSELs (vertical-cavity surface-emitting lasers) that are flip-chipped and bonded directly over the PIC couplers.
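The relationship between insertion loss in dB and transmitted power is easy to sanity-check; a small helper (illustrative only) makes the -1/-3/-5 dB figures above concrete:

```python
# Convert an insertion loss in dB to the fraction of optical power transmitted.
def db_to_fraction(loss_db: float) -> float:
    return 10 ** (loss_db / 10)

for loss in (-1, -3, -5):
    print(f"{loss} dB -> {db_to_fraction(loss):.0%} of power transmitted")
# -1 dB -> 79%, -3 dB -> 50%, -5 dB -> 32%
```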

In the end, the best fiber-coupling solution for a given PIC is strongly application- and cost-dependent, but no matter what you do, to make your PIC design successful you’ll need to “mind the packaging gap” while boarding your silicon photonics train.


Why Should Companies Care about Internet of Things Services?

by Bill McCabe on 05-01-2016 at 4:00 pm

As with any new technology, businesses will need to find quantifiable benefits in the Internet of Things before the concept is embraced and implemented. It could be argued that connected devices are already being adopted on a wide scale: companies like Microsoft, Amazon, Qualcomm, IBM, and others already see IoT as a core part of their businesses. Even so, there are still some, especially small- to medium-sized businesses, that are weighing up the costs and benefits of ultra-connectivity in the world of the Internet of Things.

You do not have to dig deep to see why IoT is important. Business Insider’s research division, BI Intelligence, has predicted that IoT will become the largest device market in the world over the next five years. Most analysts predict market value will reach into the trillions, with possibly $7 trillion of total value by 2020. Any way you slice the pie, billions of dollars are on the table. These figures are promising for businesses directly involved in the manufacture and design of device services and hardware, but what about the companies that will purchase these technologies to incorporate them into their operations?

Perhaps the single largest benefit will be in how Internet of Things devices can lower costs. The manufacturing sector provides an ideal scenario. Machine-to-Machine (M2M) systems will allow machinery to become more efficient and more autonomous. Take a production line that was previously labor-intensive: IoT-connected sensors can receive orders, initiate fabrication, sign off work orders, and even package products, all with little human interaction. Even non-automated manufacturing will benefit. Orders can be taken from anywhere in the world, transferred through the cloud, and delivered to remote manufacturing facilities. These systems can collect valuable analytics that benefit accounting, inventory management, and even resource procurement.

While this type of IoT will directly benefit businesses in manufacturing, it will also create new opportunities for project managers, engineers, and IT professionals who will be necessary in designing, implementing, and supporting these systems. It even creates the role of Chief Internet of Things Officer, the CIOTO, tasked with managing a network of connected systems, and connecting their efforts back to business goals.

Because IoT provides immediate data collection, businesses in all industries will benefit from improved decision making. Being able to analyze and distribute intelligence faster means that tedious data collection will be a thing of the past. Decisions can be made faster, and in some cases can be automated. What this spells for enterprise is, in essence, better decisions based on better data.

Hong Kong International Airport, and other mega-airports around the world, already rely on RFID technology to track luggage and freight throughout their sites. This enables luggage to be delivered by machine to the correct gate, the correct passenger carousel, or the correct airliner, train, or delivery vehicle. Items are tracked via computer and managed from a central control point, reducing hands-on management and labor costs. HKIA spent $50 million to develop the initial infrastructure, but widespread adoption of this IoT-based technology could save the industry $760 million per year, according to the International Air Transport Association.

Imagine how a similar system could benefit a SMB. Goods delivery could be RFID or barcode tracked on handheld scanners. This tracking information could be uploaded to a cloud solution, from where dispatchers, couriers, and clients could track the location and progress of a delivery. These are the kind of innovations that are driving IoT, and making it a necessary technology in a market where cost and efficiency is key, and where end users and consumers demand constant, easily accessible information.

The opportunities are there for businesses that adopt IoT today. The benefits exist whether they seek to improve manufacturing efficiency, streamline logistics processes, or provide new ways for customers to interact and receive information. In the growing world of IoT, the question is not “why should we care?” but rather “can you afford not to?”

Please give us your feedback or share how the Internet of Things has touched your business below.


For more information please review our website at www.internetofthingsrecruiting.com


Semiconductor capital spending slow in 2016

by Bill Jewell on 05-01-2016 at 12:00 pm

The outlook for semiconductor capital expenditures (capex) in 2016 is weak. Gartner’s January 2016 forecast called for a decline of 4.7%. IC Insights in February projected a 0.8% decline. The table below shows the Gartner forecast along with the capex forecasts from the top three spenders (Intel, Samsung and TSMC), which account for about half of total industry capex. Intel is forecasting $9.5 billion in capex in 2016. This is up 30% from 2015, but below Intel’s $10 billion-plus in capex in 2011 through 2014.

TrendForce estimates Samsung’s capex will be $11.5 billion in 2016, which would be tied with 2013 as the lowest level since 2010. TSMC plans a 17% increase in capex in 2016 to $9.5 billion, in line with TSMC’s record $9.7 billion in capex in 2013. Based on Gartner’s forecast and the estimates for the top three, the implied capex for the rest of the industry is a 15% decline. Using IC Insights’ forecast of a 0.8% decline in 2016 capex, the rest of the industry would see a 7% decline.

Semiconductor Capital Expenditures, US$B

|             | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 Fcst. | Change | Source     |
|-------------|------|------|------|------|------|------------|--------|------------|
| Total       | 67.4 | 58.9 | 57.1 | 65.0 | 62.3 | 59.4       | -5%    | Gartner    |
| Intel       | 10.8 | 11.0 | 10.7 | 10.1 | 7.3  | 9.5        | 30%    | company    |
| Samsung     | 12.1 | 12.3 | 11.5 | 13.6 | 13.0 | 11.5       | -11%   | TrendForce |
| TSMC        | 7.3  | 8.3  | 9.7  | 9.5  | 8.1  | 9.5        | 17%    | company    |
| Big 3 Total | 30.2 | 31.6 | 31.9 | 33.2 | 28.4 | 30.5       | 7.3%   |            |
| % of Total  | 45%  | 54%  | 56%  | 51%  | 46%  | 51%        |        |            |
| Others      | 37.2 | 27.3 | 25.2 | 31.8 | 33.9 | 28.9       | -15%   |            |
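The implied capex for the rest of the industry follows directly from subtracting the big-3 totals from the Gartner totals; a quick sketch (illustrative Python, figures in US$B from the table):

```python
# Implied capex for the rest of the industry: total-industry forecast
# minus the "big 3" (Intel, Samsung, TSMC).
total_2015, total_2016 = 62.3, 59.4   # Gartner
big3_2015, big3_2016 = 28.4, 30.5     # Intel + Samsung + TSMC

others_2015 = total_2015 - big3_2015  # 33.9
others_2016 = total_2016 - big3_2016  # 28.9
change = others_2016 / others_2015 - 1
print(f"Others 2016: ${others_2016:.1f}B ({change:+.0%})")  # about -15%
```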

The weak outlook in semiconductor capex is reflected in the projections for semiconductor manufacturing equipment. Semiconductor Equipment and Materials International (SEMI) in December 2015 forecast a 1.4% increase in equipment sales in 2016. Gartner in January 2016 projected a 2.5% decline. The chart below shows the latest combined data on semiconductor manufacturing equipment bookings and billings from SEMI and SEAJ (Semiconductor Equipment Association of Japan) for 1st quarter 2016.

1Q 2016 billings were US$6.6 billion, down 7% from a year ago. Bookings were US$7.5 billion, up 3% from a year ago. The book-to-bill ratio of 1.13 points to growth in billings in the next few months. If this trend continues, year 2016 billings for semiconductor manufacturing equipment could reach the SEMI forecast of 1.4% growth.
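The book-to-bill arithmetic is simply bookings divided by billings; note that the reported 1.13 comes from SEMI/SEAJ’s unrounded monthly data, so the rounded quarterly figures here land a hair higher (illustrative Python):

```python
# Book-to-bill ratio from the rounded 1Q 2016 figures above (US$B).
bookings, billings = 7.5, 6.6
book_to_bill = bookings / billings
# A ratio above 1 means orders are running ahead of shipments,
# pointing to billings growth in the coming months.
print(f"book-to-bill: {book_to_bill:.2f}")
```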

SEMI and SEAJ data shows equipment sales hit a peak of $42.8 billion in 2007. Sales declined severely in the industry downturn, hitting a low of $15.9 billion in 2009. Sales rebounded to a new peak of $43.5 billion in 2011. Over the last four years, sales have ranged between $32 billion and $37 billion. Sales were $36.5 billion in 2015. Sales by region have varied significantly comparing the 2007 and 2011 peaks to 2015. The chart below shows SEMI/SEAJ data of semiconductor manufacturing equipment sales by region.

Taiwan is the largest regional market with sales of around $10 billion. The vast majority of sales in Taiwan are to wafer foundries such as TSMC and UMC. South Korea is a close second, with sales of $7 billion to $8 billion. Sales in South Korea were slightly higher than in Taiwan in 2011. South Korea sales are primarily to memory companies Samsung and SK Hynix. Japan sales declined from $9.3 billion in 2007 to $5.5 billion in 2015. Toshiba, Sony and Renesas are the major customers in Japan. North America (primarily the U.S.) sales went from $6.6 billion in 2007 to $9.3 billion in 2011, making North America the largest region in 2011. Sales dropped to $5.1 billion in 2015. Intel is the largest buyer of semiconductor equipment in North America, with Micron Technology second. China was the only region to see growth from 2007 to 2015 with sales increasing 68% to $4.9 billion. Europe and other regions each saw sales drop about a third from 2007 to 2015.

Despite the shifting of the semiconductor market to China and other emerging Asian countries, the market for semiconductor manufacturing equipment remains dominated by Taiwan, South Korea, Japan and North America – the sites of the largest semiconductor manufacturing companies. These companies prefer to make most of their multi-billion-dollar wafer fab investments close to home. China has seen strong growth in the equipment market, but China’s growth rate should slow over the next few years. China should pass Japan and North America in the next few years, but is not likely to pass Taiwan or Korea before the end of the decade.


Qualcomm’s New X16 LTE Modem Delivers Gigabit LTE And A Scalable Architecture

by Patrick Moorhead on 05-01-2016 at 7:00 am

Qualcomm has been the global unit and revenue market share leader for years in modem technologies used in smartphones, tablets, PCs and IoT (Internet of Things). One of the reasons they have maintained this lead for so long is that they are typically first to market with new generations of modems. Today at their investor conference, they announced their latest and greatest LTE modem capable of gigabit-class speeds, specifically 1 Gbps, which translates to “Category (Cat.) 16” LTE according to the 3GPP standards.

This is also the first modem that fits into the class of ‘LTE Advanced Pro’, which is the next step in LTE modem technologies and is a move towards the 5G future with gigabit-class connectivity. This new modem also brings a lot of industry firsts, even for Qualcomm, which include the first Cat. 16 LTE modem, the first LTE Advanced Pro modem, the first modem to support LAA (Licensed-Assisted Access) and the first discrete modem built on the 14nm FinFET process as well as a new modem architecture.

Why we need faster modems
The need for faster modem speeds is driven by “data density”. The growth of media consumption and content creation has driven the need for a gigabit-class LTE modem. Phone and tablet display resolutions have now reached beyond 1080P or 2K, compared to 480P in phones like the Galaxy S2 from five years ago. Over the past five years, these same phones have seen their cameras’ capabilities increase drastically, from 5 megapixel photos and 720P video to 16 megapixel photos and 4K video. In video alone, that is roughly a 9x increase in pixels, with approximately the same increase in data usage. These increased resolutions have driven users to desire ever more download and upload speed, which means they are using their operators’ networks more than ever before. There has also been an explosion of video sharing and streaming applications like Periscope, Vine, YouTube, WeChat, Instagram and Snapchat. This increased demand can be clearly seen as most of the carriers have significantly increased their data caps across the board or kept their unlimited plans.

I am also thinking about how this same technology could replace cables to the home and business.


Up to 1 Gbps, about 67% faster than the prior generation
The new X16 LTE modem delivers peak speeds of up to 1 Gbps, an increase of over 65% over the previous generation, the Snapdragon X12 LTE modem, which is capable of 600 Mbps in the same spectrum. Currently, only Samsung Electronics has announced any modems that can reach the speeds of the last-generation Snapdragon X12 LTE modem, which is also inside the Snapdragon 820 SoC. Samsung’s announcement means that some of their phones this year will finally catch up to Qualcomm’s modems from last year. In fact, Qualcomm originally announced the X12 in November 2014 as a Cat. 10 modem, then upgraded it to Cat. 12/13 with support for LTE-U and 4×4 MIMO in September 2015. Four months later they have outdone themselves again with the Snapdragon X16 LTE modem. Qualcomm’s cadence in modems has seen the company increase download speeds roughly 10x over the course of the last five years alone.

These kinds of speeds can also enable entirely new use cases for wireless technologies, and the higher throughput and lower latency can allow for use cases that we haven’t even thought of yet. The reality is that these kinds of speeds are what people originally imagined when LTE was introduced as a ubiquitous wireless technology: you could bring the full speed of the wired internet with you anywhere you went. Now, wireless internet speeds are outpacing wired internet speeds in many places. This could mean the ability to experience virtually lag-free remote desktop environments as well as video streaming as demanding as 360-degree VR HD video.

How consumers get 1 Gbps from carriers
The new Snapdragon X16 LTE modem achieves 1 Gbps using only three carriers at the same time, though it can also deliver that speed with four carriers if 2×2 MIMO (multiple input, multiple output) is the only antenna configuration available. The three-carrier configuration (3x carrier aggregation) uses 4×4 MIMO on the first two carriers and 2×2 MIMO on the third, which makes achieving such speeds much more realistic, since getting four 20 MHz blocks of spectrum is going to be extremely difficult in most geographies. The end result of these carrier and antenna configurations is ten streams of LTE data, each carrying 100 Mbps. These speeds translate to 3GPP Cat. 16 downlink speeds of 1 Gbps, while the uplink, via 2×20 MHz CA plus 64-QAM, is 150 Mbps, or triple most upload speeds available today. The new modem also brings support for LTE-U and LAA, making the use of unlicensed spectrum easier across more of the world with one modem. And if all of that weren’t enough, it also adds support for the new 3GPP-approved 3.5 GHz band of licensed spectrum.
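The stream arithmetic behind the 1 Gbps figure can be sketched as follows (the ~100 Mbps-per-stream rate is taken from the text; this is an illustration, not Qualcomm’s actual link-budget math):

```python
# Downlink stream arithmetic for the 3x carrier aggregation configuration
# described above: 4x4 MIMO on two carriers, 2x2 MIMO on the third.
streams_per_carrier = [4, 4, 2]
mbps_per_stream = 100          # approximate per-stream rate from the text
total_streams = sum(streams_per_carrier)
peak_mbps = total_streams * mbps_per_stream
print(f"{total_streams} streams x {mbps_per_stream} Mbps = {peak_mbps} Mbps")
# 10 streams x 100 Mbps = 1000 Mbps
```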

New, scalable modem architecture cuts development cycles by two-thirds
In addition to all of these new speeds and capabilities, Qualcomm’s new Snapdragon X16 LTE modem features an entirely new modem architecture designed to improve the company’s R&D efficiency. This new architecture allows Qualcomm’s modems to be hardware- and software-scalable from the high-end to the low-end discrete modems. The change is intended to deliver new products top to bottom in only one R&D cycle for all versions of Qualcomm’s discrete modems, along with shorter time to CS (commercial sampling). All of this means Qualcomm intends to spend less time and money developing its top-to-bottom modem stack, which should benefit both the company and the customers relying on it for faster modems at lower cost. With the X16 architecture, Qualcomm now has a modem for virtually every application all the way down to IoT, and the company says it delivered the lineup in one cycle, not three.

Qualcomm will not say what the building blocks of the new architecture are, but they are quick to say these aren’t common compute cores like those introduced with NVIDIA’s now-redeployed Icera modem division. Competitively, if Qualcomm can deliver what they say they can with the new architecture, it could put serious pressure on their discrete modem competitors, because mid-range and low-end modems could arrive one to two years earlier on a leading-edge node. Think about that. I’ll be keeping a close eye on Qualcomm’s execution, the competitive reaction, and if and how this impacts companies like Huawei, Intel, Samsung Electronics and even MediaTek.

1 Gbps speeds require new RF

The new modem will also be accompanied by a brand new RF transceiver, the world’s first gigabit-class LTE transceiver. Beyond introducing 1 Gbps downlink (DL) speeds, Qualcomm has also added support for 256-QAM and 4×4 MIMO, which are necessary to achieve them. In terms of configuration, this RF transceiver can support up to 4x DL CA (80 MHz total) as well as 2x UL (upload) CA (40 MHz total). However, the 2x UL CA is only possible with 3x DL CA enabled, not 4x DL CA. It supports all 3GPP bands, including the newest band at 3.5 GHz. It also has 5GHz LTE-U support and location support for GPS, Glonass, Galileo and BeiDou, making it among the most comprehensive global positioning chips on earth as well. And even with all of those new features, Qualcomm says they have managed to reduce the connections between the transceiver and the modem, with fewer wires and less board space, making it easier overall to implement.

Already sampling

What makes this announcement all the more impressive is that, according to Qualcomm, the Snapdragon X16 LTE modem is already sampling to customers and will be shipping in commercial devices by the second half of 2016, as in, less than a year. This also means that there will be networks that can support this modem’s capabilities around that time as well, meaning that we could see gigabit-class LTE networks as soon as this year. The expectation is that this modem will land inside of broadband devices and mobile routers first, considering the antenna complexity and how difficult that will be to implement in phones in such a short period of time. But there is still a very good chance that we will eventually see the X16 LTE modem in phones next year.

Wrapping up
There are a lot of people wondering what exactly Qualcomm has been doing since they announced the X12 LTE modem, and it looks like they are reaffirming their modem lead once again, just when we thought the competition had caught up. Something similar happened with Intel last year when they announced their XMM 7360 LTE modem, which was capable of the same 450 Mbps speeds that Qualcomm’s 9×40 series delivered at the time. However, Intel didn’t keep up when Qualcomm brought that series up to 600 Mbps, still hasn’t shipped their Cat. 10 modem, and now looks to be significantly behind both Samsung Electronics and Qualcomm. It’s very possible we could see Intel’s Cat. 10 LTE modems shipping in devices around the same time we start to see Qualcomm’s Cat. 16, once again widening the gap between the two companies. Additionally, Qualcomm’s Snapdragon 820 features an integrated Cat. 12 LTE modem while Intel’s discrete modem is still only Cat. 10. Either way, 2016 is shaping up to be a pretty exciting year in the wireless space, especially with all of these new SoCs and modems coming to market on the new 14nm FinFET process with amazing speeds.




Are Layoffs Good for the Semiconductor Industry?

by Daniel Nenni on 04-30-2016 at 7:00 am

As I have mentioned before, semiconductor professionals are very smart people, pound for pound the smartest in the workforce in my opinion. So what happens when thousands of engineers from Qualcomm, Broadcom, Altera, and Intel get shown the door? They don’t go to work for Starbucks, they don’t go to the unemployment line, they continue to innovate, absolutely.



Why I’ll Always Be an Andy Grove Fan

by Martin Lund on 04-29-2016 at 4:00 pm

Silicon Valley sadly lost a respected and revered leader with the death of Andrew Grove in March. The co-founder and former CEO of Intel was an inspiration to generations of technologists and business leaders, including me. Andy had a profound influence on me throughout my career. And while I only met him once, I feel as though I’ve lost a friend and mentor.

When faced with difficult business situations, many times I have asked myself, “What would Andy do?” And most of the time, the answer would appear from applying the disciplined and analytical thought processes that Andy advocated.

I met Andy in 1997 at the Intel worldwide sales conference. Back then, I was working for a small Danish company that was being acquired by Intel. After his keynote, Andy came directly down to our table and asked us what we thought. One of the executives sitting at the table said something flattering and, frankly, vacuous. He then turned to me, and I bluntly pointed out what I thought was an important point he’d missed in his speech. Andy fixed his clear, blue eyes on me for an uncomfortably long moment, and I thought I had ended my career at Intel before it had even begun. But, then, I saw a flicker of recognition in his eyes and knew that he appreciated being told the truth, straight, and had no time for fluff.

That short interaction with Andy has stayed with me over the years, and so has his first management book, High Output Management. In fact, I like it so much that I’ve given out tens of copies to high-potential managers in my organizations. It’s still very relevant today.

I’d like to share the most important lessons I’ve learned from Andy:

Constructive confrontation.
The essence of constructive confrontation is about getting the best results through productive dialogs, even in the face of deep disagreement. In other words, fight for what you believe in with respect, passion and intellectual honesty. And when it’s all said and done, be prepared to disagree and commit.

Outside-in thinking.
Business leaders make bad decisions when they get caught up in the inertia of the status quo and the internal company reality. This lesson comes from the classic Intel story from the mid-1980s, when Andy Grove and then-CEO Gordon Moore were struggling to decide how to save the company. Andy famously asked, “If we both got fired and new management came in, what would they do?” The answer was, they would get out of memory chips and focus on microprocessors. That’s exactly what they did, and the decision led to one of the greatest corporate turnarounds ever. Being able to mentally step outside your company and view it dispassionately is vital for making good decisions, doing the right things and staying true to your objectives.

Good KPIs vs. Bad KPIs.
How your performance indicators are designed and implemented can change culture, processes and outcomes — for better and for worse. I have used the concept of leading indicators for almost 20 years — and not just for making breakfast!

Intent-based organizational design.
Businesses oscillate between two main corporate structures: centralized and distributed. Each structure has merits. The important lesson here is that the correct organizational design for your company or business unit depends on the market environment and your business objectives.

Core drilling.
Just like with a soil test, Andy was able to sample all the levels of his company and knew the finest details of his organization, all without micromanaging. This technique allows you to stay in touch with what’s really going on in your organization, and I’ve been able to use it effectively in the businesses I’ve managed.

Only the paranoid survive.
These are words to live by in business, and they became the title of Andy’s second management book in 1996. You have to respect your competitors, big or small, and should always be looking over your shoulder at what they’re doing. You also need to look constantly for disruptions and nonlinearity in your market. For me, it also means not succumbing to the not-invented-here syndrome and always keeping a healthy dose of humility, especially when everything seems fine.

A lifelong learner and teacher, Andy was an inspiration to people not just in Silicon Valley but everywhere, and from any walk of life. He will be missed. But his legacy lives on in all the people like me whom he has taught about technology, business and life.

Also read:

My morning with Andy Grove

The time Andy Grove came to Fortune and refused to meet with the editors

Andy Grove’s Less Remembered Intel

Good bye and thank you, Andy Grove!


Ecosystem Partnership for Effective Network Hardware Design

by Bernard Murphy on 04-29-2016 at 12:00 pm

When you’re designing a hardware solution to plug into what is arguably the most complex system of all – the Internet – you can’t get away with a little fake traffic to test whether your box is going to do all the right things at the right performance. You have to model realistic voice, video, data and wireless traffic in multiple protocols (and software-defined networking) at variable bandwidths.

First, you’re obviously not going to do this with a simulator, which, best-case, might model 100 packets per day. You need to get to tens of millions of packets a day over 128 ports to have a reasonable chance of tracing bugs. And that’s only possible in emulation.
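Those numbers put the gap in perspective. Here is a quick back-of-the-envelope sketch using the rough figures quoted above (100 packets per day in simulation, tens of millions over 128 ports in emulation); these are ballpark numbers, not measurements:

```python
# Rough comparison of verification throughput: RTL simulation vs. emulation.
# Figures are the ballpark numbers quoted in the text, not measured data.

SIM_PACKETS_PER_DAY = 100            # best-case RTL simulation
EMU_PACKETS_PER_DAY = 20_000_000     # "tens of millions" on an emulator
PORTS = 128

speedup = EMU_PACKETS_PER_DAY / SIM_PACKETS_PER_DAY
per_port_per_day = EMU_PACKETS_PER_DAY / PORTS

print(f"Emulation speedup over simulation: {speedup:,.0f}x")
print(f"Packets per port per day in emulation: {per_port_per_day:,.0f}")
```

Even with generous assumptions for the simulator, the gap is five orders of magnitude, which is why realistic traffic testing is only practical in emulation.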

But then how do you model realistic traffic? Networking design teams already know the answer to this one. They work with companies like Ixia, which is well established in providing solutions for validating and optimizing physical and virtual networks and, particularly in this instance, for network modeling and traffic testing.

So you have great solutions on either end of the modeling problem – design modeling and network modeling – but that usually means the design team has to hack together some kind of connection between the two: usually inelegant, inefficient and incomplete, and often in as much need of debug as the design itself.

Or the solution providers on each end could partner, which is exactly what Mentor and Ixia have done. Mentor has integrated the Mentor® Veloce® emulation platform – through the Virtual Network (VN) App – with Ixia’s virtual edition test product family, IxNetwork® Virtual Edition (VE), to accelerate the verification of complex networking chips.

The key value of the integration is Ixia script re-use with the emulator. This means that Ethernet traffic generation is consistent between simulation, emulation and lab testing. Being able to run realistic traffic on the emulator is half of what design teams need – knowing that it will correlate with real lab testing is the other half and is often where custom-crafted integrations break down.

Mentor built the VN App in close collaboration between the R&D teams at Mentor and Ixia. The VN App has been designed to create a highly optimized flow from simulation to the lab for greater efficiency and improved debug. Mentor is currently demonstrating a working prototype to mutual customers in its emulation lab in Fremont, California.

You can learn more about other Veloce emulator applications HERE.

More articles by Bernard…


Process Development, CAD and Circuit Design

by Daniel Payne on 04-29-2016 at 7:00 am

Working at Intel as a circuit designer, I clearly remember how there were three distinct groups: Process Development, CAD and Circuit Design. Each group sat in a different part of the building in Aloha, Oregon; we had different job titles, different degrees and spoke with different acronyms, and yet we all had to work together somehow to ensure success for our company. One company has spanned all three of these groups over the past 20 years, and that’s ProPlus. I just watched their latest archived webinar on a new tool for process and device evaluation, so I wanted to blog about what I learned.

The CTO of ProPlus is Dr. Bruce McGaughy, and he was the presenter in this webinar; we’ve met at DAC over the years to get updates on what’s new. SoC designers today have many foundries to choose from; each foundry has multiple process nodes, and each node may have many variants, so sifting through all of this complexity is a challenge as the Process Design Kits (PDKs) are always changing.

In the days of 180nm process nodes we were using BSIM3 models for transistor behavior, and they used dozens of parameters; now, at the 16nm node, we’re using BSIM-CMG models, which can have thousands of parameters in each model. Transistors can also use macro-models and custom foundry models, which add to the complexity.

CAD engineers can manage the relationship with the foundry and cope with the PDK files and versions; then the design teams can use the models to run SPICE circuit simulations, Monte Carlo analysis and other variation-analysis tools. How do you manage all of the revisions inherent in this cycle?
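To give a flavor of what Monte Carlo variation analysis involves, here is a minimal conceptual sketch using a textbook square-law MOSFET model. The parameter values, sigma and model are purely illustrative (not ProPlus methodology); a real flow would sample the foundry’s BSIM-CMG model cards and run a SPICE engine:

```python
import random
import statistics

# Hypothetical nominal parameters for a square-law NMOS saturation model:
#   Idsat = 0.5 * k * (W/L) * (Vgs - Vt)^2
K = 200e-6       # transconductance parameter (A/V^2), illustrative only
W_OVER_L = 10.0  # device geometry ratio
VGS = 0.9        # gate drive (V)
VT_NOM = 0.4     # nominal threshold voltage (V)
VT_SIGMA = 0.02  # assumed process sigma on Vt (V)

def idsat(vt):
    """Saturation drain current for the simple square-law model."""
    return 0.5 * K * W_OVER_L * (VGS - vt) ** 2

# Sample the threshold voltage and collect the resulting current spread.
random.seed(0)
samples = [idsat(random.gauss(VT_NOM, VT_SIGMA)) for _ in range(10_000)]

mean_ua = statistics.mean(samples) * 1e6
sigma_ua = statistics.stdev(samples) * 1e6
print(f"Idsat mean ~ {mean_ua:.1f} uA, sigma ~ {sigma_ua:.1f} uA")
```

The point of a tool like MEPro is to run this kind of statistical sweep against the real, thousand-parameter models across every PDK revision, rather than against a toy equation.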

A New Tool
ProPlus has created a new tool dubbed MEPro to help cope with these issues, so here’s what goes into that tool and how it helps in five areas.

Using the PDK model library as the key input you can now explore, compare and even verify the soundness of the models. Designers are assisted because they can understand and explore the process design space. The five ways to look at any process with this tool are:

  • Browse your library files using familiar icons for folders and files
  • Review the process specification sheet to see how it matches your requirements
  • Look at device-level behavior curves
  • See the statistical variation of process parameters
  • Look at circuit simulation results using each process variant

A first-time user of MEPro can invoke the tool, load a PDK library, then click OK to run 1,212 plots showing device performance based on pre-defined templates, all in under one minute by using the built-in fast SPICE simulator NanoSpice. Trying to do the same thing with your own scripts, assembling the plots and presenting the data, would likely take days to weeks of effort. The templates may be updated through a GUI, and then you can save or share any of your projects.

Traditional design books are static documents; with MEPro you can create all the data used in a design book quickly and interactively, providing more insight.

One application for MEPro is model exploration by using custom templates that have been configured to your specific needs. One example of customized templates showed how the tool could produce five different types of data: Process Spec Sheet, analog device curves, memory device curves, digital device curves and even custom analysis curves.

Analog designers can explore device characteristics, matching and statistical behavior of a process. Layout Dependent Effects (LDE) can often be a bit mysterious to the designer, so MEPro helps you by visualizing the value of SA graphically. Process variations like multiple Vt choices can be shown graphically for measurements like Idsat as a scatter or linear plot. You can plug any transistor-level circuit into your template, browse your netlist, then get quick SPICE simulation results shown graphically in one environment.

Users can benchmark and compare two different processes like 28HP and 28HPM, as the tool graphically shows PMOS and NMOS transistor curves. There’s no need for manual data collection, or using Excel for comparisons.

Let’s say that your process model from the foundry has been updated, how do you know the impact on your designs after this revision? Using MEPro you can quickly find out how your most sensitive circuits are affected by a new PDK revision.

Models can actually be verified with MEPro in terms of:

  • Accuracy checking – model versus silicon measurements
  • Quality checking – are there any kinks, glitches, crossover or continuity issues?
  • Behavior checking – curve monotonicity, peaks, symmetry and range checks
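Conceptually, these behavior checks are simple numerical tests on sampled device curves. A minimal sketch of a monotonicity/kink check (the sweep data and tolerance below are made up for illustration; a real flow would test simulated model curves across corners):

```python
def is_monotonic_nondecreasing(currents, tol=0.0):
    """Return True if no current sample drops more than `tol` below its
    predecessor -- a simple kink/glitch test for a swept I-V curve."""
    return all(b >= a - tol for a, b in zip(currents, currents[1:]))

# Illustrative Id-vs-Vgs sweeps (amps); values are made up.
good_curve = [1e-9, 5e-8, 2e-6, 4e-5, 1.5e-4, 2.5e-4]
kinked_curve = [1e-9, 5e-8, 2e-6, 1e-6, 1.5e-4, 2.5e-4]  # dip at one point

print(is_monotonic_nondecreasing(good_curve))    # expect True
print(is_monotonic_nondecreasing(kinked_curve))  # expect False
```

Automating hundreds of such checks over every device and PDK revision is where a dedicated tool earns its keep over hand-rolled scripts.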

In summary, you can approach the task of comparing, validating or evaluating PDK models manually or with an automated flow like in MEPro. The time savings with the automated flow approach look impressive. Bridging the gap between foundries and fabless design users is now made easier. Designers can now more readily understand any process, get better margin out of their designs, or even ask the foundry to tweak the process for their specific design.

Dr. Lianfeng Yang then gave a live product demonstration, running on his laptop, to show how intuitive MEPro is to use.

Watch the entire 59 minute webinar online here.


Webinar alert – VHDL guru says it’s time to move up

by Don Dingee on 04-28-2016 at 4:00 pm

Many years ago when I worked for Ed Staiano at Motorola, I learned never to use the word “comfortable” in a career context. I’m comfortable being with family and friends. This new high-back chair I sit in at my new faux-cocobolo desk (slightly distressed chalk-painted wood and industrial piping, awesome) is comfortable, a lot more so than that camp chair I was on for a few months. It’s comfortable inside this air-conditioned house on a humid day. But I’m never, ever comfortable with my knowledge, competitiveness, or position at work. Andy Grove may have written the book, but Ed led the choir on comfort leading to complacency.

Engineers tend not to operate that way (unless they encounter one of these managers or their disciples). Expertise has a high value. Years of investment in learning every detail of a skill lead to more opportunities to use that skill. That works for a while, perhaps even decades, until it becomes time to upgrade skills. COBOL programmers are still out there, but they don’t get a lot of new opportunities unless they learn something more on the cutting edge.

I can see engineers being extremely comfortable with VHDL. It’s tried and proven, and still usable. But it’s the 21[SUP]st[/SUP] Century and all, and there are new standards, new tools and people getting better results in chip design. Resisting a move to SystemVerilog and UVM, now both in wide adoption and growing, can be a career limiter.

So when the author of “VHDL: Programming by Example” speaks on a webinar providing a look at SystemVerilog and contrasting its features and use with VHDL, it’s time to move up. Doug Perry is working at Doulos these days as a Senior MTS, and he may be the biggest of the VHDL gurus left – many VHDL designers have his book on their desk.

Perry will walk through the challenges in making the transition from comfortable VHDL to the more modern SystemVerilog from his unique perspective, using examples running in Aldec Riviera-PRO. Registration for this May 4th webinar is free on the Doulos site:

Getting into SystemVerilog from VHDL: Guidance from a VHDL Guru
Europe and Asia time zones
North America time zones

If you haven’t already made this move, take an hour and learn why you should from a guru.


Software-Driven Verification Drives Tight Links between Emulation and Prototyping

by Bernard Murphy on 04-28-2016 at 12:00 pm

I’ve mentioned many times what has become a very common theme in SoC and system verification – it has to be driven by the software because any concept of exhaustively verifying “everything” is neither feasible nor meaningful. Emulation has become a critical component of this flow in validating and regressing software “close to the metal”. On an emulator you can even boot Linux and Android and run Android test suites – but not at a performance acceptable to the fast turns and regression needs of software development teams.


Accelerating performance often takes advantage of mixed platforms, for example combining virtual platforms with emulation to accelerate OS boot. An increasingly common way to accelerate, especially as the design starts to converge, is prototyping on an FPGA platform. This can run an order of magnitude (or better) faster than emulation, which makes it more practical for regression flows where you want to run hundreds, thousands or more tests in each regression pass. Prototyping is great on speed but doesn’t offer as much internal visibility for debugging problems on the hardware side as emulators do, so you want to be able to jump back to emulation for debug. There you isolate, diagnose and correct problems as they are discovered, while continuing regression testing on the prototype platform.

This means you need to be able to jump back and forth between prototyping and emulation to get to the coverage you need, and to shake out problems as they arise. But there’s a problem with this appealing concept – typically it can take up to 3 months to build a manually optimized FPGA prototype, and that’s not exactly conducive to quick turn-around debug between prototyping and emulation.


Cadence has been working on reducing the turnaround time by optimizing the flow between their Palladium™ (emulation) and Protium™ (prototyping) platforms as a natural extension to their continuum of verification solutions. Optimizations start logically with a unified compile to both platforms, enabling reuse of scripts, constraints, clock definitions, memory definitions and more.

This compatibility isn’t just for input formats. Clocking semantics are compatible between the Palladium and Protium environments—a netlist for the Protium tool can be moved back to the Palladium platform and debugged there. And the Protium tool is compatible with the SpeedBridge adapters that work with the Palladium environment.

In addition, Protium bring-up time (for handling memories and clocks, partitioning, FPGA back-end design and functional unit debug) has been reduced from months to weeks. And with the Perspec System Verifier (the Cadence implementation of the emerging Portable Stimulus standard for stimulus portability between platforms), you can easily transfer stimulus between engines. A further optimization is support for black-boxing. Black-boxes for Protium are treated as “don’t-touch” – they don’t need to be rebuilt in the prototyper in subsequent revisions, which means you can further accelerate turn-around times in RTL transfer to Protium.

Between these capabilities and the ability of the Protium platform to backdoor download memory contents, you can quickly switch from regression in prototyping to a more detailed debug in emulation. And when you have accumulated enough fixes, you have a shorter path to rebuild a new prototype for late-stage regressions.

So design/verification teams have three options for hardware-enabled verification:

  • They can use pure emulation, all RTL in hardware, with test benches either synthesized into the emulator or connected via acceleration. This option provides great, simulation-like debug on the hardware side, though the speed may not satisfy notoriously impatient software developers.
  • A second, and faster, approach uses virtual platforms for the compute subsystem, intelligently connected to an emulator hosting the items that require full accuracy – like GPUs (reports show a speed improvement of between 50X and 200X). This approach also allows you to execute software-driven tests faster (users report up to a 10X speed increase).
  • Finally, a third approach couples emulation with FPGA-based prototyping, which is ideal when the hardware has matured and you need the speed that will satisfy software developers.

Also, since the Palladium emulation database runs out of the box on the Protium platform, re-using the same front-end compile, users can make a tradeoff between fast automated bring-up with reasonable prototyping speed and more time-consuming manual optimization for even more speed. Cadence has seen reports of 5MHz to 10MHz out of the box for FPGA-based prototyping using the fully automated flow, with the potential to reach tens of MHz, up to 100MHz, by manually optimizing with partitioning guidance and black-boxing.
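To see why those clock rates matter so much to software teams, consider a rough wall-clock estimate for booting an OS. The cycle budget and platform speeds below are round illustrative numbers (within the ranges discussed above), not Cadence figures:

```python
# Rough wall-clock time to execute a fixed cycle budget at different
# platform speeds. The 50-billion-cycle boot is a hypothetical figure.
BOOT_CYCLES = 50e9

def boot_time_seconds(freq_hz):
    """Wall-clock seconds to run BOOT_CYCLES at the given clock rate."""
    return BOOT_CYCLES / freq_hz

for label, mhz in [("emulation", 1),
                   ("automated prototype", 5),
                   ("optimized prototype", 100)]:
    secs = boot_time_seconds(mhz * 1e6)
    print(f"{label:>20}: {secs / 60:.1f} minutes at {mhz} MHz")
```

Under these assumptions, a boot that takes most of a working day on an emulator shrinks to minutes on a hand-optimized prototype, which is the difference between one regression turn per day and many.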

You can learn more about Protium capabilities in an excellent webinar HERE.

More articles by Bernard…