Car Leasing, Car Sharing Don’t Mix
by Roger C. Lanctot on 05-08-2016 at 4:00 pm

Not to be outdone by General Motors with its investment in Lyft, its acquisition of Cruise and its launch of Maven, BMW is in the process of relaunching and expanding the DriveNow car share service in the U.S. and may soon provide aftermarket hardware to enable Mini lessees to rent their cars, according to a Bloomberg report.

“BMW to Let Car Owners Rent Out Vehicles Like ‘Airbnb on Wheels’”

Bloomberg says Mini plans soon “to make its new cars available with devices that enable owners to rent out their vehicles … the system includes features that accept payment and track the vehicle to make sure the renter doesn’t go for a one-way joyride.” Further details regarding the add-on system were not available, although the functional description is reminiscent of Berlin startup Carzapp’s device for car sharing applications.

The news arrives in connection with the Beijing Auto Show, though it’s not clear what geography will first see the new device. My heart sank, though, when I read the words of a senior Mini executive: “…there’ll be others who’ll love the idea of halving their leasing rate.”

There are some excellent value propositions in car sharing – especially if you own your car. There are some excellent value propositions in car leasing – especially if you like driving a new car every 2-3 years. But leasing and sharing do not go together.

Leasing is a very popular option for drivers of German luxury cars. According to U.S. car-shopping service Cartelligent, the Mini Hardtop is the fifth most frequently leased vehicle, exceeded in leasing popularity by the BMW 3 Series, which ranks third and is leased 70% of the time.

Leasing is on the rise in the U.S. according to figures shared last year by Experian. Some leasing analysts have noted that the typical 2-3-year leases are starting to extend to four and five years.

Leasing a car is one of those things that your friends and family members tell you to NEVER DO! I usually listen to that advice, but since I have a BMW in a long-term lease right now, I can tell you I did not listen and I am not happy and I watch my car’s mileage carefully. I do not want to get dinged for per-mile charges over and above my annual limit.
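
For the curious, the overage math is simple enough to script. Here's a quick back-of-the-envelope sketch; the 10,000-mile allowance and $0.25-per-mile fee are illustrative assumptions, not my lease's actual terms:

```python
# Hypothetical lease mileage overage calculator (illustrative numbers only).
def overage_charge(miles_driven, annual_limit=10_000, years=3, per_mile_fee=0.25):
    """Return the end-of-lease charge for miles beyond the contract allowance."""
    allowed = annual_limit * years
    excess = max(0, miles_driven - allowed)
    return excess * per_mile_fee

# Driving 36,000 miles on a 3-year, 10,000-mile/year lease at $0.25/mile:
print(overage_charge(36_000))  # 6,000 excess miles -> $1,500.00
```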

By the way, I am also notorious (to my wife and sons) for parking in remote areas of parking lots – not wanting to acquire any door dents or rim scrapes that might crop up during the inspection when I return the vehicle at the end of the lease. Can a lessee count on a car sharer to be so careful with the vehicle? I don’t think so.

Share it if you own it, not if you lease it, I say.

Adding an aftermarket device to a leased Mini or BMW is the equivalent of attaching an explosive device to the customer’s credit report. It is not a good idea. It is not a good business model. It is not a good marketing practice.

If the test of Mini car sharing goes well, Bloomberg reports, BMW will expand the offer to the parent brand. It’s all part of BMW’s larger plan to become a mobility company. It may take time for a determination to be reached regarding the program’s success, but I can say without equivocation: car leasing and car sharing do not mix. It’s only hip up until the point that someone bends a rim.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


Why is NXP Moving to FD-SOI?
by Ron Martino on 05-08-2016 at 11:00 am

The latest generations of power efficient and full-featured applications processors in NXP’s very successful and broadly deployed i.MX platform are being manufactured on 28nm FD-SOI. The new i.MX 7 series leverages the 32-bit ARM v7-A core, targeting the general embedded, e-reader, medical, wearable and IoT markets, where power efficiency is paramount. The i.MX 8 series leverages the 64-bit ARM v8-A series, targeting automotive applications, especially driver information systems, as well as high-performance general embedded and advanced graphics applications.

Over 200 million i.MX SOCs have been shipped over six product generations since the i.MX line was first launched (by Freescale) in 2001. They’re in over 35 million vehicles today, are leaders in e-readers and are pervasive in the general embedded space. But the landscape for the markets targeted by the i.MX 7 and i.MX 8 product lines is changing radically. While performance needs to be high, the real name of the game is power efficiency.

Why are we moving to FD-SOI?

The bottom line in chip manufacturing is always cost. A move from 28nm HKMG to 14nm FinFET would entail up to a 50% cost increase. Would it be worth it? While FinFETs do boast impressive power-performance figures, for applications processors targeting IoT, embedded and automotive, we need to look beyond those figures, taking into account:

  • when and how performance is needed and how it is used;
  • when power savings are most pertinent;
  • how RF and analog characteristics are integrated;
  • the environmental conditions under which the chip will be operating;
  • and of course the overall manufacturing risks.

In fact, both NXP and the former Freescale have extremely deep SOI expertise. Freescale developed over 20 processors based on partially-depleted SOI over the last decade; and NXP, having pioneered SOI technology for high-voltage applications, has dozens of SOI-based product lines. So we all understand how SOI can help us strategically leverage power and performance. For us, FD-SOI is just the latest SOI technology, this time with a design flow almost identical to bulk, but on ultra-thin SOI wafers and with some important additional perks like back-biasing.

When all the factors we care about for the new i.MX processor families are tallied up, FD-SOI comes out a clear winner for i.MX SOCs.

FD-SOI: Designing for Power, Performance and more!

For our designers, here’s why FD-SOI is the right solution to the engineering challenges they faced in meeting evolving market needs.

In terms of power, you can lower the supply voltage (Vdd) – so you’re pulling less power from your energy source – and still get excellent performance. Add to that the dynamic back-biasing techniques (forward back-bias improves performance, while reverse back-bias reduces leakage) available with FD-SOI (but not with FinFETs), and you get a very large dynamic operating range.


By dramatically reducing leakage, reverse back-biasing (RBB) gives you good power-performance at very low voltages and a wide range of temperatures. This is particularly important for IoT products, which will spend most of their time in very low-power standby mode followed by short bursts of performance-intense activity. We can meet the requirements for those high-performance instances with forward back-biasing (FBB) techniques. And because we can apply back-biasing dynamically, we can specify it to meet changing workload requirements on the fly.
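
To make the idea concrete, here is a minimal sketch of how firmware might schedule back-bias modes from workload. The thresholds, bias voltages and function name are invented for illustration; this is not NXP's actual power-management interface:

```python
# Minimal sketch of workload-driven back-bias selection on an FD-SOI SoC.
# All thresholds and bias voltages below are illustrative assumptions.

def select_back_bias(utilization):
    """Pick a body-bias mode from CPU utilization (0.0 to 1.0).

    Reverse back-bias (RBB) raises Vt to cut leakage in standby;
    forward back-bias (FBB) lowers Vt for short performance bursts.
    """
    if utilization < 0.05:
        return ("RBB", -1.0)   # deep standby: minimize leakage
    elif utilization < 0.70:
        return ("none", 0.0)   # nominal operation: zero bias
    else:
        return ("FBB", +1.0)   # performance burst: boost speed

for load in (0.01, 0.40, 0.95):
    mode, vbb = select_back_bias(load)
    print(f"load={load:.2f} -> {mode} at {vbb:+.1f} V")
```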

Devices for IoT also have major analog and RF elements, which do not scale nearly as well as the digital parts of the chip. Furthermore, analog and RF elements are very sensitive to voltage variations. It is important that the RF and analog blocks are not affected by the digital parts of the chip, which undergo strong, sudden signal switching. The major concerns for our analog/RF designers include gain, matching, variability, noise, power dissipation, and resistance. Traditionally they’ve used specialized techniques, but FD-SOI makes their job much easier and results in superior analog performance.

In terms of RF, FD-SOI greatly simplifies the integration of RF blocks for WiFi, Bluetooth or Zigbee, for example, into an SOC.

Soft error rates (SER) are another important consideration, especially as the size and density of SOC memory arrays keep increasing. Bulk technology gets worse SER results with each technology node, while FD-SOI provides ever better SER reliability with each geometry shrink. In fact, 28nm FD-SOI provides 10 to 100 times better immunity to soft-errors than its bulk counterpart.
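
A rough sketch of why that multiplier matters at the system level; the FIT rates and memory size below are illustrative assumptions, not measured data:

```python
# Illustrative soft-error math: system FIT scales with memory size,
# so a 10-100x per-bit improvement compounds quickly. Numbers are assumed.
bulk_fit_per_mbit = 1000.0                    # hypothetical bulk 28nm FIT/Mbit
fdsoi_fit_per_mbit = bulk_fit_per_mbit / 100  # assuming the 100x case
memory_mbits = 64.0                           # assumed on-chip SRAM in an SoC

for name, fit in (("bulk", bulk_fit_per_mbit), ("FD-SOI", fdsoi_fit_per_mbit)):
    system_fit = fit * memory_mbits  # failures per 1e9 device-hours
    print(f"{name}: {system_fit:.0f} FIT for {memory_mbits:.0f} Mbit of SRAM")
```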

Our process development strategy has always been to leverage foundry standard technology and adapt it for our targeted applications, with a focus on differentiating technologies for performance and features. We typically reuse about 80% of our technology platform, and own our intellectual property (IP). Looking at the ease of porting existing platform technology and IP, and analyzing die size vs. die cost, again, FD-SOI came out the clear choice.
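
Here is a simplified die-cost sketch of that trade-off. The wafer costs, die areas and yields are illustrative assumptions, not NXP or foundry figures, with only the roughly 50% wafer-cost premium taken from the comparison above:

```python
# Simplified die-cost model: cost = wafer cost / (gross dies * yield).
# Wafer costs, die areas and yields below are illustrative assumptions.
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Crude gross-die estimate ignoring edge losses and scribe lines."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2)

def die_cost(wafer_cost, die_area_mm2, yield_fraction):
    return wafer_cost / (dies_per_wafer(die_area_mm2) * yield_fraction)

# 28nm FD-SOI: assumed $3,000 wafer, 60 mm2 die, 90% yield.
# 14nm FinFET: ~50% pricier wafer; analog, RF and I/O scale poorly,
# so assume only ~25% die-area reduction, plus a lower assumed yield.
cost_28 = die_cost(3000, 60, 0.90)
cost_14 = die_cost(3000 * 1.5, 60 * 0.75, 0.85)
print(f"28nm FD-SOI die: ${cost_28:.2f}, 14nm FinFET die: ${cost_14:.2f}")
```

Under these assumptions the FinFET die comes out more expensive; a mostly-digital die that shrank closer to the ideal 0.5x could tip the balance the other way, which is why the analog/RF content matters so much here.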


In terms of manufacturing, FD-SOI is a lower-risk solution. Integration is simpler, and turnaround time (TAT) is much faster. 28nm FD-SOI is a planar technology, so it’s lower complexity and extends our 28nm installed expertise base. Throughout the design cycle, we’ve worked closely with our foundry partner, Samsung. They provided outstanding support, and very quickly reached excellent yield levels, which is of course paramount for the rapid ramp we anticipate on these products.

In the second part of this article, we’ll take a look at the new i.MX product lines, and why FD-SOI is helping us make those game-changing plays for specific markets.

By Ronald M. Martino, Vice President, i.MX Applications Processor and Advanced Technology Adoption, NXP Semiconductors

Also read: Why NXP is Moving to FD-SOI (Part II)


IoT Devices Making Inroads into Semicon Revenue
by Pawan Fangaria on 05-08-2016 at 7:00 am

Last year IC Insights forecast IoT semiconductor growth at around 19% CAGR for the next five years. Within that space, the O-S-D (Optoelectronics, Sensors, and Discrete) semiconductors were expected to grow at a CAGR of 26%, among the fastest. In 2015, O-S-D revenue was $66.6 billion, i.e. ~19% of total semiconductor industry revenue of $353.7 billion. The share of O-S-D revenue in total semiconductor revenue will continue to be around 20% until 2020. CMOS image sensors, and sensors in general, command a significant share of revenue within the O-S-D space, and they are essential components of IoT devices. We have started seeing an impressive growth rate in revenue from sensors.

Although overall semiconductor growth is moving at a snail’s pace, worldwide CMOS image sensor revenue grew 12% in 2015 to $9.9 billion, and is expected to grow at a CAGR of 9% for at least another 5 years to reach $15.2 billion in 2020. The image sensor CAGR was higher, at 17%, over the last five years (2010-15), driven by embedded cameras in phones and other image recognition systems. However, future growth in image sensors will come from elsewhere, not from camera phones. On an absolute scale, camera phones keep the lion’s share of total image sensor revenue, but that share will decrease gradually.


It’s interesting to analyze how CMOS image sensors in IoT devices are cutting into camera phones’ slice of the revenue pie. The share of revenue from camera phone image sensors will decrease from 70% of total image sensor revenue in 2015 to 48% in 2020. In absolute terms, image sensor revenue from camera phones will grow at a mere 1% CAGR to $7.3 billion in 2020. It is the beauty of the semiconductor industry that as the smartphone market matures, other market segments grow to keep overall semiconductor revenue afloat.
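
A quick sanity check of those figures, using only the numbers quoted above:

```python
# Sanity-checking the image sensor projections quoted above.
total_2015 = 9.9                      # $B, total CMOS image sensor revenue
total_2020 = total_2015 * 1.09 ** 5   # 9% CAGR over 5 years
print(f"Total 2020: ${total_2020:.1f}B")     # ~$15.2B, as cited

phones_2015 = 0.70 * total_2015       # camera phones held 70% in 2015
phones_2020 = phones_2015 * 1.01 ** 5 # growing at only ~1% CAGR
print(f"Phones 2020: ${phones_2020:.1f}B, "
      f"share {phones_2020 / total_2020:.0%}")  # ~$7.3B, ~48%
```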

The largest growth in CMOS image sensors is expected to come from the automotive industry, at a CAGR of 55%, reaching $2.2 billion in 2020; that amounts to ~14% of total image sensor revenue. The automotive industry is poised for a big push towards collision safety, autonomous driving, video recording of crashes, rear-view cameras, and so on. All of these require image sensors.

The next major growth in image sensors is expected to come from security and surveillance applications at a CAGR of 36%, and then from medical and scientific applications at a CAGR of 34%. The other areas include industrial systems, video games, and so on; all with double digit CAGR.

Clearly, IoT connectivity between different devices will increase the usage of image sensors across the ecosystem, and that will lead to higher growth in image sensor revenue. Although each starts from a low base, the absolute revenue from image sensors in each of these areas will eventually increase considerably.

What are the other devices witnessing double-digit growth rates? They are again related to IoT. Another IC Insights report on O-S-D product categories puts ‘Lamp Devices’ growth at 14% in 2015, reaching record revenue of $14.3 billion. Other devices, such as ‘Laser Transmitters’ and ‘Infrared Devices’, are also exhibiting double-digit growth rates.

More Articles from Pawan


Apple should buy Tesla and appoint Elon Musk as CEO!
by Vivek Wadhwa on 05-07-2016 at 7:00 am

Apple’s dismal earnings announcement shows why it badly needs to rethink its innovation model and leadership. Its last breakthrough innovation was the iPhone — which was released in 2007. Since then, Apple has simply been tweaking its componentry, adding faster processors and more advanced sensors, and playing with its size — making it bigger in the iPad and smaller in the Apple Watch. Chief executive Tim Cook is probably one of the most competent operations executives in the industry but is clearly not a technology visionary. Apple needs another Steve Jobs to reinvent itself; otherwise it will join the ranks of HP and Compaq.

That Steve Jobs may be Elon Musk—who has proven to be the greatest visionary of our times.

In the same period that Apple released the iPhone and successors, Musk developed two generations of world-changing electric vehicles; perfected a new generation of battery technologies; and released first-generation autonomous driving capabilities. And that was in Tesla Motors. In his other company, SpaceX, Musk developed a spacecraft, docked it with the International Space Station, and returned with cargo. He’s launched two rockets to space that have made vertical landings back on Earth — one on a helicopter-like pad and another on a ship in the ocean. Musk is also developing the Hyperloop, a high-speed transportation system in which pressurized capsules ride on an air cushion driven by linear induction motors and air compressors. In discussions that I had with him in 2012, Musk told me that his ambition was to build a space station and retire on Mars. He wasn’t joking; I expect he will do this.

Apple has reportedly been developing an electric vehicle because it sees a car as an iPhone on wheels. It is conceivable that it will demonstrate something like this in the next five to 10 years. But Tesla already has this technology — and it is amazing. I have likened my Tesla Model S to a spaceship that travels on land. I consider it to be better than any Apple product — because it is more complex, elegant, and better designed than anything that Apple offers.

Apple should buy Tesla and appoint Elon Musk as CEO…

Would Musk be interested in being part of Apple when Tesla is on top of the world? Tesla just received nearly $20 billion in orders for its Model 3 — a record for any product in history. Musk reportedly turned down an acquisition offer from Google in 2013, when Tesla was on the verge of bankruptcy. Why would he consider such an offer now, from Apple?

My guess is that he would do this — if he were offered the chief executive role. A combination of an operations executive such as Cook and a visionary such as Musk would be formidable. Apple’s vast resources would allow Tesla to scale up its operations to deliver the nearly 400,000 orders it has received for the Model 3. Tesla would be able to leverage Apple’s global distribution network and incorporate many new technologies. Musk would be able to pursue his dream projects while Cook worried about delivery and detail.

And Cook would get the visionary that Apple badly needs, someone who is even a cut above Steve Jobs. The markets would rejoice and take Apple stock to a level higher than anything it has seen before. Consider that Tesla’s market cap of $33 billion is eminently affordable by Apple, which has reserves of more than $200 billion. And Apple lost $47 billion in valuation with its earnings announcement Tuesday, which is more than it would likely cost to acquire Tesla.

This could be a marriage made in heaven. We would get world changing innovations as well as our space colonies.

Below is an interview I did with Tyler Mathisen of CNBC Power Lunch about this. More on my website: www.wadhwa.com or follow me on Twitter: @wadhwa.


3D NAND – Moore’s Law in the third dimension
by Scotten Jones on 05-07-2016 at 4:00 am

For more than a decade 2D NAND was the leading driver of lithography shrinks; Samsung, for example, went from 120nm in 2003 to 16nm in 2014 with shrinks on an almost yearly basis. But the shrinks came at a price. At 16nm, Self-Aligned Quadruple Patterning (SAQP) was required for the most critical layers, and patterning-related costs, including the depositions and etches for multi-patterning, grew to represent nearly two thirds of the cost of the wafer fabrication process. At the same time, device-related issues were a growing problem: adjacent cell interference, maintaining control gate to floating gate coupling, and the shrinking number of electrons per cell are just a few of the many issues.

In 2014 Samsung introduced the first 3D NAND part. Instead of horizontal strings of memory cells, Samsung turned the strings on end, orienting them vertically. The basic process flow can be broken up into three major segments:

  • CMOS – this is the peripheral circuitry that drives and controls the memory array.
  • Memory Array – the area where the values are stored.
  • Interconnect – connects the memory array and CMOS together.

The CMOS and Interconnect are similar to the 2D NAND process, but the memory array formation is completely different. The memory array fabrication is as follows (Samsung TCAT process):

  • Alternating layers of silicon dioxide and silicon nitride are deposited.
  • Channel hole etch – the channel opening is etched down through all of the oxide/nitride layers.
  • Channel fill – an epitaxial layer is grown in the bottom of the channels and then the channel is filled with polysilicon and oxide to create a “macaroni channel” (a tube of polysilicon filled with oxide).
  • Stair step formation – a thick photoresist layer is applied and patterned, one set of oxide/nitride pairs is etched, and then the photoresist pattern is shrunk and the next pair of oxide/nitride layers is etched. This sequence is repeated to create a stair step structure at the edge of the array. Ideally this is done with a single mask but in practice multiple masks are required.
  • Planarize – a thick oxide layer is now deposited and planarized.
  • WL slot – a word line slot mask is applied and a slot is etched down through all of the oxide/nitride layer pairs.
  • Gate formation – the nitride layers are now etched out through the word line slot. A gate stack of silicon dioxide, silicon nitride, aluminum oxide, tungsten and tantalum nitride is then deposited and etched back, and finally the slot is filled with oxide and tungsten. This is a gate last process; other companies use a gate first process.

There are a number of advantages to this process:

  • The lithography requirements are relaxed because the cell “length” is set by the depositions. All of the memory array patterns are done with single patterning.
  • The number of cells in a vertical string can be scaled up by depositing more layers. In theory you can add layers without needing any additional masks although the stair step formation may require some additional masks. In theory the whole memory array is fabricated with only three masks although in practice more are required.
  • The memory cells are bigger and hold more electrons.
  • Speed, endurance and other critical performance characteristics are all improved versus 2D NAND.

With 2D NAND we saw memory density improve from 0.006 Gb/mm² at 120nm to 1.1 Gb/mm² at 16nm for a 3 bit per cell memory cell. In 2014 Samsung introduced a 24 layer 3D NAND part with 0.97 Gb/mm² for a 2 bit per cell part, in 2015 Samsung introduced a 32 layer 3 bit per cell part with a density of 1.86 Gb/mm², and in 2016 a 48 layer 3 bit per cell part with 2.62 Gb/mm². 3D NAND has already far surpassed the memory density of 2D NAND, and it is expected that additional layers will continue to be added until parts with over 100 layers and more than 1Tb per part are introduced. In fact, we forecast that a 128 layer, 4 bit per cell part will be produced around 2020 with 8.67 Gb/mm².
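
Those data points fit a simple model in which density scales with layers times bits per cell, with a small overhead factor that degrades slightly each generation. A quick check, using only the figures quoted above:

```python
# Fit the quoted 3D NAND densities to density ~ k * layers * bits_per_cell.
points = [            # (layers, bits/cell, Gb/mm^2) from the text
    (24, 2, 0.97),
    (32, 3, 1.86),
    (48, 3, 2.62),
    (128, 4, 8.67),   # forecast for ~2020
]
for layers, bits, density in points:
    k = density / (layers * bits)
    print(f"{layers:3d} layers x {bits} bit/cell: k = {k:.4f} Gb/mm^2 per layer-bit")
# k drifts from ~0.020 down to ~0.017: stacking layers scales density
# almost linearly, with modest stair-step/slot overhead growth per generation.
```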

3D NAND is not without its challenges. As the number of layers increases it may not be possible to etch and fill through the entire stack, and the process may need to become a two-step process where half the stack is deposited and patterned and then the other half is deposited and patterned. The relatively low mobility of the polysilicon channel may also become limiting; IMEC has already demonstrated InGaAs as a channel material.

See my article on IMEC’s work here.

Another interesting innovation in 3D NAND was disclosed by Intel and Micron at IEDM last year, where they fabricate part of the peripheral CMOS under the memory array. The combination of CMOS under the memory array and a denser array enabled Intel-Micron to achieve a 22% density advantage over Samsung for a 32-layer device.

See my article on the Intel-Micron disclosure here.

Of course no technology succeeds in the semiconductor industry unless it is economical. The switch to 3D NAND has changed the cost paradigm away from being patterning dominated to being deposition and etch dominated. In fact, I estimate that patterning costs make up less than one third of the total fabrication process for Samsung’s 32-layer device (one double patterned layer for interconnect). Some analysts claim that 48 layers is the breakeven technology versus 16nm 2D for bit cost; I disagree. 3D versus 2D wafer fabrication costs are similar, although with different cost drivers. 3D NAND has much higher bit density but to date poor yield, due to the challenges of patterning the memory stack. My modeling is that Micron’s 32-layer part is 25% less expensive per bit than their 16nm 2D NAND after factoring in yield. This is consistent with statements from Micron. Furthermore, Micron has shown a generation 2 part that they say will provide an additional 30% cost reduction over generation 1, also consistent with our modeling.
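
As a hedged sketch of that cost-per-bit argument, here is a toy model. The wafer costs, bit capacities and yields are illustrative assumptions chosen only to reproduce the roughly 25% per-bit saving cited above:

```python
# Illustrative cost-per-bit comparison of 2D vs 3D NAND.
# Wafer costs, per-wafer bit capacities and yields are assumed numbers;
# the point is that 3D's bit density outruns its yield penalty.
def cost_per_gbit(wafer_cost, gbits_per_wafer, yield_fraction):
    return wafer_cost / (gbits_per_wafer * yield_fraction)

nand_2d = cost_per_gbit(wafer_cost=2500, gbits_per_wafer=70_000, yield_fraction=0.90)
nand_3d = cost_per_gbit(wafer_cost=2600, gbits_per_wafer=125_000, yield_fraction=0.70)
print(f"2D 16nm: ${nand_2d*100:.2f} per 100 Gbit")
print(f"3D 32L : ${nand_3d*100:.2f} per 100 Gbit "
      f"({1 - nand_3d/nand_2d:.0%} cheaper per bit)")
```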

In conclusion, 3D NAND has overcome the limitations of 2D NAND, providing lower cost and better performance with a scaling path into the next decade.


One FPGA synthesis flow for different IP types
by Don Dingee on 05-06-2016 at 4:00 pm

Both Altera and Xilinx are innovative companies with robust ecosystems, right? It would be a terrible shame if you located the perfect FPGA IP block for a design, but couldn’t use it because it was in the “wrong” format for your preferred FPGA. What if there were a way around that?

There is a compelling argument to use each FPGA vendor’s tool that delivers synthesis results optimized for their particular FPGA. However, that can be a limiting factor in a design with numerous IP blocks. Constraining the IP search to only the FPGA’s vendor ecosystem may artificially rule out what might be the best option for differentiating a design. Imagine what would happen if your firm is looking at acquiring another firm, and you discover they work in the other FPGA environment. “Oh, dang it, we can’t buy them ….” Probably not a good response.


It makes a lot more sense to be ready for any FPGA IP that comes your way. I’d also like to challenge the assumption that a third-party FPGA synthesis tool can’t deliver the same or better quality of results – QoR is a function of the entire design, once all the IP is comprehended. Synopsys Synplify Premier is designed to handle both Altera and Xilinx IP, working with the various formats and constraints, and to deliver better synthesis results.

Paul Owens, Sr. Corporate Applications Engineer for Synopsys, points out in a recent webinar that there are two broad categories of FPGA IP: interface and datapath. Interface IP often has vendor-specific physical constraints and non-timing constraints, while datapath IP has associated timing constraints. There is also the possibility that vendor IP is in one of several formats. Life is good if everything is in readable RTL source, but Altera IP is often encrypted RTL, and Xilinx IP also comes in DCP (Design Check Point) and BD (Block Design) formats.

Readable RTL can be added in what Owens calls absorb mode. The entire IP is read in, the synthesis engine gets to optimize timing paths and logic in and around the IP, and the final netlist contains the IP netlist in its entirety.

What if IP is encrypted? Synplify Premier also handles a white or grey box method. Timing models for the IP are read in, and the synthesis engine optimizes around it but doesn’t modify the IP itself. These IP blocks typically have more complex constraints, which Synplify Premier imports for synthesis of surrounding logic while preserving the original constraints for place & route of the encrypted block.

Constraints can make or break a synthesis cycle. Writing and importing constraints is a hugely important step in achieving QoR. About half of Owens’ presentation is devoted to dealing with constraints and achieving QoR, even while working with the disparate IP types. A noteworthy observation is that Synplify Premier works with the Xilinx Vivado place & route engine for congestion improvement, and similar capability is in development for Altera.


This webinar provides one of the most packed yet to-the-point descriptions of Synplify Premier capability I’ve seen. Owens discusses the benefits of parallel place & route on a server farm, which can significantly speed up the overall synthesis process. He also touches on the debug process with the Identify debugger, offering a unified environment and one look and feel regardless of which side the FPGA IP came from.

To view the entire webinar (registration at TechOnline):
Accelerate your FPGA Design Schedules with Synplify Premier

I’ve called the concept of free the “f-word” of technology marketing: operating systems, EDA tools, it’s all the same argument. There are times in more advanced scenarios where paying for a tool delivers better results. FPGA synthesis on big projects with disparate IP and complex constraints is one of those scenarios where the productivity and QoR gains are worth the investment in a tool like Synopsys Synplify Premier.


Neural nets for Qualcomm Snapdragon
by Bernard Murphy on 05-06-2016 at 12:00 pm

Neural nets are hot these days. In this forum certainly you can’t swing a cat without hitting multiple articles on the topic – I’ve written some myself. For me there are two reasons for this interest. First, neural nets are amazingly successful at what they do, for example in image recognition where they can beat human observers in accuracy and response time. More subtly, they have changed the way we look at some aspects of artificial intelligence, from mathematical models to biological models.

With the benefit of hindsight this shouldn’t be surprising. If we want to mimic the behavior of say the visual cortex, starting with a low-level model of how the brain actually works (interconnected neurons with connectivity weights trained through learning) seems like a better bet than a high-level algorithmic abstraction of how we think vision works. You lose the benefit of understanding the process, but the effectiveness of the result is more important in this case than scientific insight.

The way neural nets work is maybe easiest to understand in image recognition. First an image is broken up into small regions. Pixels within each region are tested against a function to detect a particular feature such as a diagonal edge. The function is simple – a weighted sum of the inputs, checked against a threshold function to determine if the output should trigger. Other feature tests (e.g. for color) can then be performed, but I’ll skip that complication.

Outputs are fed into a second layer. The same process repeats, this time with a different set of functions which extract slightly higher-level details from the first level. This then continues through multiple layers until the final outputs provide a high-level characterization of the recognized object. The weighted sums at the core of this method can be modeled very nicely in a DSP or GPU, which is convenient because the Snapdragon 820 offers both and can perform this modeling with low power consumption.
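
Here is a minimal NumPy sketch of that weighted-sum-and-threshold layering. The layer sizes and random weights are toy values for illustration; a real network's weights come from training, and this is not Qualcomm's SDK:

```python
import numpy as np

# Toy feed-forward net in the spirit described above: each layer computes
# weighted sums of its inputs and passes them through a hard threshold.
rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """Weighted sum followed by a threshold (each unit fires 1.0 or 0.0)."""
    return (weights @ x + bias > 0).astype(float)

# An 8x8 image flattened to a 64-vector (region splitting simplified away);
# layer 1 detects 16 low-level features, layer 2 combines them into 4 outputs.
image = rng.random(64)
w1, b1 = rng.standard_normal((16, 64)), rng.standard_normal(16)
w2, b2 = rng.standard_normal((4, 16)), rng.standard_normal(4)

features = layer(image, w1, b1)   # low-level features (e.g. edges)
outputs = layer(features, w2, b2) # higher-level characterization
print("feature activations:", features)
print("final outputs:", outputs)
```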

Setting the weights requires a training phase. Once a net has been trained it can be used to distinguish between the classes of objects on which it has been trained – road signs for example. Within their training domain, such neural nets have been shown to achieve 99% or better recognition accuracy in real time.

A great place to deploy this capability is in mobile systems, because that removes the need to go to the cloud for complex processing. Qualcomm recognized this and has just announced a software development kit to be used with the Snapdragon 820 (the processor at the heart of the Samsung Galaxy S7 and other phones) to enable neural net processing. This Snapdragon Neural Processing Engine SDK is powered by the Qualcomm® Zeroth™ Machine Intelligence Platform and is optimized for Snapdragon 820.

Just some of the places this capability can be used are smartphones, security cameras, cars and other platforms, for scene detection, text recognition, object tracking and avoidance, gesturing and natural language processing. Think about upcoming electric vehicles, game-stations and “remoteless” home entertainment centers – all of these will be enabled by this kind of technology.

In many applications, untethering from the cloud is not a nice-to-have. You don’t want collision-avoidance dependent on whether you have line of sight to a cell tower (despite Verizon claims to the contrary, this is not universal). Or be at the mercy of heavy loads on cloud servers. And you don’t want security checks like facial or iris recognition on your phone farmed out to the cloud for similar reasons. Not to mention that man-in-the-middle attacks are an obvious weakness in cloud-based security.

Thanks to programs like this, we can look forward to much more safety, security and other intelligent usefulness in mobile devices in the near future. You can learn more about the Qualcomm offering HERE.

More articles by Bernard…


Seven Reasons to Attend DAC in Austin
by Daniel Payne on 05-06-2016 at 7:00 am

I’m attending the 53rd Design Automation Conference (DAC) in Austin, Texas starting June 5th, and there are at least seven reasons that you should consider attending as well. For decades now DAC has been the premier place for all the players in our semiconductor ecosystem to get together: academics, commercial EDA vendors, foundries, semiconductor IP providers, and media.

1. Keynotes
I like to hear about the big picture from industry luminaries, and this year we get to hear from people at NXP Semiconductors, NVIDIA Corporation, Advanced Micro Devices and the University of Texas at Austin. Recall that NVIDIA just announced a 15-billion-transistor GPU, the Tesla P100. NXP Semi is well-known in the automotive, IoT and security markets.


The keynotes and Sky talks are here.

2. Exhibitors
As a blogger most of my time will be spent visiting some of the 175 exhibits to find out what’s new with EDA software, how it compares to previous releases, and who is winning against competitors.

Here’s the online exhibitor list.

3. Training
Did you know that at DAC there is a special day just for training? Yes, on Thursday there are interesting training topics that you can sign up for like:

  • How to Build Class-Based Verification Environments in SystemVerilog
  • Learn UVM using the Easier UVM Coding Guidelines and Code Generator
  • SystemVerilog Synthesis Tuned for ASIC and FPGA Design
  • The Definitive Guide to SystemC TLM-2.0: Learn the Technology Standard that Underpins Virtual Platforms
  • Introduction to Embedded Security: Making Security Hard: Hardware Security and How to Use it
  • Introduction to Embedded Linux Security: Keys to Understanding Vulnerabilities in Embedded Systems and How to Secure Them
  • Taking Your C++ to the Next Level
  • Finding Creative Solutions to Complex Problems
  • Maximizing Mental Agility

Sign up for training classes here.

4. Networking
Let’s admit it, reunions are grand fun and can also benefit your career path by keeping you connected to those coworkers who have landed at various places over the years. Each evening of DAC, look to network with other semiconductor professionals over cocktails from 6:00-7:00 PM. There’s even a Technology Art Show to open up the creative side of silicon chips and other techno-art.

5. DAC Tracks
We live in an era of specialization, and so DAC has organized itself into multiple tracks based upon your specific interests.

6. Video Casts
Since DAC has so much activity going on simultaneously, you cannot be in all places at once, so throughout DAC some portions will be recorded for viewing later. They call this DACtv; you can bookmark this page and return to get caught up a bit.

7. Customers
One important part of business is getting to know what your customers really want, so with all of the top EDA and IP executives in one place at one time, attending DAC to meet with key customers is essential. I’m not talking about closing business at DAC, but I am certain that relationships are started and enhanced by this special face time that cannot be gained through email, phone calls or even video conferencing.


Is the Future Finally Here? What a GaAs!
by Mitch Heins on 05-05-2016 at 4:00 pm

Back in 1983 I was working for Texas Instruments during the beginning of the push to let common electrical engineers develop their own CMOS application specific ICs (ASICs). This would eventually be the fuel that fed the semiconductor engine to reach over $335 billion in 2015. At that time, I was a young guy and I had a rascally old boss who used to say, “Gallium Arsenide – has been and always will be the technology of the future!” Fast forward to 2016 and we witness the announcement by POET Technologies Inc. that it has signed a definitive agreement to acquire all the shares of DenseLight Semiconductors Pte. Ltd. DenseLight is a Singapore-based privately held photonics company that designs, manufactures, and markets photonic optical light source products to the communications, medical, instrumentation, industrial, defense, and security industries. These products are based on DenseLight processes using Indium Phosphide (InP) and, you guessed it, Gallium Arsenide (GaAs). Have we met the future, and is it “now”?

POET Technologies’ name is in fact an acronym for Planar Opto-Electronic Technology, and the company is working on moving monolithically integrated opto-electronics, what POET calls smart optical components, from the lab into the fab. Much of their value proposition is based on an invention they call DOES (Digital Opto-electronic Switch). This switch is used in their proposed products, which would replace, with one POET IC, an existing transceiver made up of a hybrid assembly of a VCSEL (vertical cavity surface emitting laser), laser driving electronics, a GaAs photo detector, and a receiver IC consisting of a TIA, limiting amplifiers and an output amplifier. The existing transceivers have been effective operating at 10Gbps to 25Gbps over multi-mode fiber (MMF) up to about 100 meters. However, POET claims that these solutions fail for 500 meter links, especially at the 25Gbps link rates. This has prompted the market to look to multi-die solutions that use a combination of a silicon photonics IC (PIC), an electronics IC (EIC) and a laser along with single mode fiber (SMF) for reaches past 100 meters.

POET believes that they can use their III-V (GaAs) VCSEL epitaxy process and their new DOES technology to integrate both VCSELs and electronic FETs (field effect transistors) and HBTs (heterojunction bipolar transistors) in a one-chip solution that will provide a 10X improvement over what silicon photonics can provide in this space (see POET white paper here).

POET started life as OPEL Technologies out of Toronto, Canada, selling III-V semiconductor devices through a U.S. company named ODIS Inc. to the military, industrial and commercial market spaces, specializing in infrared sensor arrays and ultra-low power random access memory. They changed their name to POET Technologies in June of 2013 and have been working ever since to use their expertise in III-V processes to become a premier supplier of opto-electronic processes and smart optical solutions. Over the last year POET has made some major strides to bring their technology out of the lab and into the fab. A short timeline follows:

  • August 2015: POET announced a manufacturing services agreement with ANADIGICS, a New Jersey company, to prove out their new hybrid VCSELs in ANADIGICS’ 6-inch fabrication line.
  • September 2015: POET reported they were expecting prototypes of their VCSEL products in Q2 of 2016, with hopes of providing a 10X improvement in energy consumption, component cost and form factor for data center short reach and very short reach communications.
  • January 2016: POET made separate announcements of a supplier agreement with EpiWorks, a wafer manufacturer specializing in epitaxial growth, and a manufacturing services agreement with Wavetek Microelectronics Corporation, which specializes as a GaAs foundry.
  • March 2016: POET announced an R&D initiative with IMRE/A*STAR in Singapore to develop smart pixel technology for the Augmented Reality market using GaAs and Gallium Nitride (GaN).
  • April 2016: POET announced the acquisition of all shares of DenseLight, whose fabs in Singapore specialize in Indium Phosphide (InP) and GaAs. Also in April, POET announced that they had multiple wafer lots of their hybrid VCSEL technology produced, as promised, in Q2 of 2016 and were in the process of characterizing them. They also announced they are working on a new GaAs-based resonance cavity photonic sensor, targeting prototypes by the end of 2016.

All of that said, I’m not quite sure the future is here yet. Prototypes are interesting but the proof is in real production volume products. It appears however that POET is getting closer and showing good progress. For the sake of integrated photonics, whether it be Si or GaAs based, I hope they are successful. It would mark a major milestone towards the commercialization of integrated photonics out of the labs.


Qualcomm’s New Smartphone Chips Go Straight At MediaTek
by Patrick Moorhead on 05-05-2016 at 12:00 pm

Last Thursday at Qualcomm’s Financial Analyst Day the company made a slew of chip announcements, ranging from the industry’s first 1 Gbps LTE modem to a custom designed smartwatch SoC and platform, the Snapdragon Wear 2100. In between those, Qualcomm also announced a few very overlooked chips that help strengthen Qualcomm’s position in the mid-tier of the market, which is still the fastest growing portion of the smartphone market. Contrary to some beliefs, Qualcomm is not reducing its focus on the smartphone market, but rather refocusing its efforts to better utilize its vast portfolio of smartphone IP in SoCs. These new chip announcements are all perfect examples of this new focus, adding value where Qualcomm can with its own IP without undercutting the high-end.

The Snapdragon X16 LTE Advanced Pro Gigabit modem is the most complex piece of this puzzle and also the crown jewel of Qualcomm’s Financial Analyst Day announcements. As a result, I’ve written a separate piece detailing all of the technological improvements and implications. The X16 LTE modem harnesses Qualcomm’s latest and greatest R&D and modem technologies, which could allow Qualcomm to continue to push the wireless envelope (literally) and help maintain its current leadership. Since modems are arguably one of Qualcomm’s strongest technological areas, it makes sense that they would announce this alongside all these other chips. That leads me to the next chip announcements: the Snapdragon 625, Snapdragon 435 and Snapdragon 425.

The Snapdragon 625, 435 and 425 are by no means Qualcomm’s highest-end processors; in fact, they are squarely intended for the middle range of the market. All of these chips are updates of older chips like the Snapdragon 617, Snapdragon 430 and Snapdragon 410/415. The Snapdragon 625 is the fastest and most technologically advanced of the bunch. It is Qualcomm’s first mid-range smartphone chip to feature the latest 14nm FinFET process technology, which Qualcomm says helps reduce power by up to 35% over the previous generation Snapdragon 617. These savings are huge for smartphone OEMs, thanks to the eight lower performance/power A53 CPU cores and the new X9 LTE modem with Cat 7 down (300 Mbps) and Cat 13 up (150 Mbps) support. It also has 802.11ac MU-MIMO wireless capabilities, something that has traditionally been reserved for high-end smartphone chips.

Qualcomm also added their 500 series Adreno GPU with the Adreno 506 in the Snapdragon 625, giving it extremely strong graphical capabilities. They also added Qualcomm’s own high-end ISPs and DSPs to enable dual high-resolution cameras, with up to a 24-megapixel main camera and a 13-megapixel front-facing camera. For good measure, it also has Qualcomm’s latest QuickCharge 3.0 charging technology, which is one of the fastest charging technologies I have seen to date.

This chip from top to bottom screams that it is intended for the Chinese smartphone market, and I believe the likes of Xiaomi and ZTE will be very eager to get their hands on it. Even though I am by no measure a fan of SoC designs with eight A53 CPU cores, there is no denying that the Chinese market and Qualcomm’s Chinese customers are demanding them, and these chips fill that need. This chip will very likely be popular in China, especially with its 4K hardware encode and decode capabilities, which should prove useful as 4K displays continue to get cheaper and gain more market penetration.

The Snapdragon 435 and 425 chips are also ARM Cortex A53-based SoCs; however, they are 28nm processors, making them more affordable for OEMs since they are not on a leading edge process. The 435 is a slight upgrade of the Snapdragon 430: it has an eight-core processor and adds a Cat 6 LTE modem, which means download speeds of up to 300 Mbps and 2x CA, something that is becoming standard across the world. The Snapdragon 435 also adds differentiated features like an integrated Hexagon DSP and QuickCharge 3.0, which make it an extremely attractive entry-level smartphone chip. The Snapdragon 425, on the other hand, is only a quad-core SoC, designed to replace the Snapdragon 410 and 415. However, it was designed to up the specs of Qualcomm’s entry-level SoCs and integrates a lot of technologies that smartphone vendors playing in that area might want to see in a single chip. Namely, those features are HD display support at 60 Hz, dual 13-megapixel camera support, 1080p video and a Cat 4 LTE modem with 2×10 CA for maximum download speeds of up to 150 Mbps. Both the 435 and 425 also feature 802.11ac MU-MIMO wireless capability, which shows Qualcomm bringing its wireless leadership with MU-MIMO all the way down to the 400-tier of SoCs. Qualcomm also included their QuickCharge 2.0 charging technology as well as their DSP and sensor hub to lower development costs for OEMs in this price sensitive space.

What makes the Snapdragon 625, 435 and 425 so interesting is that they are all software and pin compatible, so OEMs in China and other developing markets can save money on PCB designs and software development. Qualcomm is clearly making these chips as a way to open up their potential customer base and take away customers from the likes of AllWinner, MediaTek, Rockchip and Spreadtrum. This is also where a few of Huawei’s HiSilicon chips live.

Qualcomm’s last chip announcement, the Snapdragon Wear 2100, was probably the one that got the most attention outside of the LTE Advanced Pro X16 modem. The Snapdragon Wear 2100 SoC is the first chip in a family of SoCs squarely aimed at the wearable market. Previously, Qualcomm simply repurposed some of its most capable low-cost chips, like the Snapdragon 400, to satisfy short-term demand for decent SoCs. In fact, most current generation smartwatches from Huawei, Motorola, LG and others feature Qualcomm’s Snapdragon 400. To me, this always seemed like a stopgap until they actually built a purpose-built chip for wearables, which is exactly what the Snapdragon Wear 2100 SoC is designed to be.

The Snapdragon Wear 2100 is first and foremost 30% smaller than the Snapdragon 400, which should leave more room for other components and free up space for more battery or an overall thinner device. Qualcomm says the Snapdragon Wear 2100 also draws 25% less power than the Snapdragon 400 series processors, which means that wearables with this processor should have much longer battery life than ever before. The Snapdragon Wear 2100 has a quad-core CPU with ARM Cortex-A7 cores clocked around 1 GHz, paired with an Adreno 304 GPU and 400 MHz LPDDR3. This design makes a lot of sense when you think about the power sensitivity of most watches and how small their batteries are. Because of the move from A53 to A7 CPU cores, there should be a significant reduction in power consumption and very little difference in terms of performance, as most wearables really don’t do much heavy on-board computing. Most compute is offloaded to the smartphone or cloud to save power and improve performance. This tiny chip also features Qualcomm’s X5 LTE modem, which allows for untethered voice and data, like what AT&T offers with NumberSync, which lets your smartwatch and smartphone share the same number and both take calls and use data. The Snapdragon Wear 2100 also has a built-in sensor hub like the Snapdragon 425. This once again shows Qualcomm harnessing its various IP advantages in modems, sensors and graphics to make its SoC the superior solution. I fully expect to see the Snapdragon Wear 2100 in a multitude of wearables this year, although Qualcomm hasn’t given any guidance on when devices with this chip will be available.

Qualcomm’s announcements at its Financial Analyst Day marked a major shift by the company to go after more of the mid-range market and to take the fight to its competitors. They are doing this with a renewed focus on the IP that gives them an advantage over competitors: their modems, GPUs, ISPs, DSPs and other processors. It will be interesting to see how this renewed focus and these improved new chips will attract Asian OEMs towards Qualcomm’s products, and how many wearable manufacturers adopt the Snapdragon Wear platform.


More from Moor Insights and Strategy