
NVIDIA on a Tear at CES

by Bernard Murphy on 01-09-2017 at 7:00 am

Jen-Hsun Huang, CEO of NVIDIA, gave the opening keynote at CES this year. That’s hardly surprising. Once a company operating on the fringes of mainstream awareness (those guys that do gamer graphics), NVIDIA finished 2016 as the top-performing company in the S&P 500, with forecast revenue growth of 35%. That’s startup growth, and the same rate at which Amazon Web Services (the mighty Amazon cloud) is growing. Pretty impressive for a semiconductor company. And they are earning it. From the keynote alone, it’s obvious they are putting the same blistering level of innovation into their products that you’ll see at any of the FANG (Facebook, Amazon, Netflix, Google/Alphabet) companies.


Jen-Hsun kicked off with the PC gaming sector, which remains very important to NVIDIA. This business has doubled in the last 5 years to $31B, and NVIDIA provides the dominant game platform today, as represented by GeForce. They’re obviously very proud of this, but they’re looking at how they can grow it further. There are a few hundred million serious gamers today, and most PC/Mac users (around a billion) play games at some level, but they can’t access the more advanced games and multi-player options because they don’t have the hardware. So NVIDIA has put GeForce gaming in the cloud, called GeForce NOW, making it accessible to all users with an Internet connection. This apparently took some serious work to preserve the performance and low latency you expect in desktop gaming. Access will be available in March for early users and will be offered on-demand at $25 for 20 hours of play. Now you see part of why these guys are doing well – they’re expanding their market to casual users, from whom they’ll make money, and at least some of those casual users will like it so much they invest in their own GeForce-enabled desktop systems. 🙂


NVIDIA has also partnered with Google on Shield, their Android-based streaming device (same concept as Roku, AppleTV, etc). It serves up all the usual options – Netflix, Hulu and (unlike AppleTV) Amazon video and, of course, gaming – games can stream from their GeForce systems to the TV or from GeForce NOW in the cloud. More interestingly (for me, I’m not much of a gamer), Shield is tying in Google Assistant, providing natural speech control of the TV but also home automation, so you have a central hub for voice-activated control of any smart home device (including the TV). To make this a whole-house ambient capability they are also introducing the NVIDIA Spot, a small AI microphone (with lots of cool tech) which plugs into a wall socket and communicates with the hub, so from anywhere in the house you can say “OK Google …” and have the Google Assistant respond. (I have to believe NVIDIA is talking with Amazon about Echo integration, though that didn’t come up in the keynote.) Shield starts at $199 and Spots are separately priced ($50 each I hear).

Then of course there’s NVIDIA’s role in the automotive industry, which is already significant. This isn’t just about graphics; it’s also in a very big way about AI. Jen-Hsun makes the point that GPUs were a big part of what transformed AI from an academic backwater into a major industry, especially in deep learning. He calls GPUs the “big bang” of AI. Maybe I’m more of a geek; I’d call it the “Cambrian Explosion” (there was AI around before GPUs, it was just evolving slowly). Either way, NVIDIA saw this opportunity and ran with it – their solutions are a dominant platform in this field.


At the show, Jen-Hsun introduced their Xavier AI Car Supercomputer: an 8-core ARM64 CPU plus a 512-core Volta GPU. The board fuses sensor information, connects to CANs and to HD maps, is designed to ASIL-D and delivers 30 TOPS in 30W. NVIDIA created a car they call BB8 (for Star Wars fans) which can drive autonomously given voice directions. The example they showed was “Take me to Starbucks in San Mateo”, from which it figured out the best route and headed out. Interestingly, they see this more as a co-pilot (they call it AI CoPilot) than a fully autonomous intelligence – BB8 hands over control to the driver whenever it gets to situations it feels it can’t handle.
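The 30 TOPS in 30W claim is worth a quick back-of-the-envelope check. Only the two headline numbers come from the keynote; the derived efficiency figures below are simple arithmetic, not NVIDIA specs:

```python
# Rough efficiency math implied by the Xavier claim:
# 30 trillion operations per second in a 30 W power envelope.
tops = 30    # claimed compute, in TOPS
watts = 30   # claimed power envelope, in W

tops_per_watt = tops / watts
print(f"{tops_per_watt:.1f} TOPS/W")  # 1.0 TOPS/W

# Equivalently, energy per operation in picojoules:
joules_per_op = watts / (tops * 1e12)
print(f"{joules_per_op * 1e12:.1f} pJ/op")  # 1.0 pJ/op
```

One TOPS per watt, or about a picojoule per operation, is the kind of efficiency that makes an always-on in-car inference computer thermally practical.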

It also pays attention to the driver, looking for tiredness, inattention, perhaps having had a few too many drinks, and can warn the driver (or possibly take corrective action?). Even more interesting, it does this through facial recognition on the driver and gaze tracking. It also does lip-reading with 95% accuracy (they claim), much better than human experts. Why? Because cars can be noisy environments (music, traffic, passengers), so you want to pay special attention to driver commands, even when voice can’t get through.

Finally, Jen-Hsun announced new automotive partnerships. They have added ZF as a partner (the 5th largest automotive electronics supplier), and Bosch (#1 tech supplier to the car industry) has announced a production drive computer partnered with NVIDIA. And they have announced a new partnership with Audi (my favorite car) to build a next-generation AI car by 2020. In fact, Audi was demoing a Q7 driving itself in a parking lot at CES after just 4 days of training. All of which reinforced that cars are still in many ways our favorite consumer devices, which is why CES is becoming as much of a car show as an electronics show.

There’s a lot of detail I skipped here, such as Shield supporting 4K and HDR. You’ll have to watch the video HERE to get the full keynote. I was really impressed. This is a semiconductor company that has reinvented itself to play right alongside the consumer technology leaders of today, not just as an “NVIDIA inside” but in many cases as a very visible part of our consumer experience. Other semis should take note. NVIDIA has shown that there is still a path to greatness in hardware.



Intelligent Vision in (almost) Every Application

by Eric Esteve on 01-06-2017 at 12:00 pm

Let’s take a look at the tremendous penetration of intelligent vision into many and varied applications. A few years ago, computer vision algorithms were implemented in applications directly linked with imaging, like computational photography for smartphones and cameras. Today we can point to a range of segments, like automotive, human-machine interface and machine vision, where computer vision is now the backbone of applications that were created thanks to the capabilities of imaging technology.

CEVA has launched the 5th-generation architecture for imaging and computer vision, the CEVA-XM6 DSP, and offers a comprehensive vision platform built around the DSP. In a previous blog, we explained how to build a machine-learning device implementing the Convolutional Deep Neural Network (CDNN) from CEVA. But let’s take a look at the CEVA-XM6 platform, which is much more than a DSP core, as it includes the CDNN toolkit comprising hardware accelerators, a neural-network software framework, software libraries, and a set of algorithms.

Advanced driver assistance systems (ADAS) are the most prominent example illustrating the penetration of computer vision, completely shaking up an automotive segment in which electronic innovation had been in a quiet mode. Now you will find DSP-based imaging in applications like traffic sign detection, free-space detection, pedestrian detection, lane departure warning, forward collision warning and probably more. Why did it take so long for these types of applications to be adopted in automotive? The answer, as usual, is linked with cost, performance (per dollar) and power consumption.

If we take a look at the CEVA-XM6 DSP architecture we can list the four scalar processors SPU0 to SPU3 and the three 512-bit vector processing units VPU-0 to VPU-2, each providing 128 single-cycle 16×16-bit MACs, bringing the total MAC count to 640. In fact, the most important enhancement may be the neural-network hardware accelerator (HWA) that offers 512 additional single-cycle 16×16-bit MACs, connecting to the DSP core’s processing cluster through an AXI4 interface. This HWA is one of the user-defined coprocessors located in the bottom-right box labelled TCE. Taking the example of CDNN-based machine learning, the convolutional layers that consume most neural-processor cycles are implemented in the HWA, freeing the DSP core and providing a boost to the machine-learning function. When the CEVA-XM6 DSP solution is implemented in a 16-nm chip, it offers unbeatable performance with very decent power consumption and a low footprint.
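To see what those MAC counts buy in raw throughput, here is a small sketch using only the figures quoted above (640 MACs in the DSP core, 512 in the HWA); the 1 GHz clock is an illustrative assumption, not a CEVA specification:

```python
# Peak multiply-accumulate throughput implied by the article's numbers.
dsp_macs = 640      # 16x16-bit MACs in the DSP core (from the article)
hwa_macs = 512      # additional MACs in the neural-network HWA
clock_hz = 1.0e9    # ASSUMED clock frequency, for illustration only

total_macs_per_cycle = dsp_macs + hwa_macs       # 1152
peak_gmacs = total_macs_per_cycle * clock_hz / 1e9
print(total_macs_per_cycle, "MACs/cycle")        # 1152 MACs/cycle
print(peak_gmacs, "GMAC/s at the assumed clock") # 1152.0 GMAC/s
```

The point of the HWA is visible in the split: nearly half the peak MAC capacity sits in the accelerator, so offloading the convolutional layers there roughly doubles throughput while leaving the DSP core free for the rest of the vision pipeline.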

This performance/power efficiency, coupled with a reasonable chip price, makes CDNN-based machine learning an affordable technology that can be implemented in mass-market applications today. A few years ago, such technology could only be demonstrated in a lab, not implemented in a piece of silicon available at a mass-market price.

The development of computer vision in human-machine interface applications is also opening new possibilities and new markets. The CEVA-XM6 DSP can be integrated to support gesture recognition, emotion sensing, eye tracking, face recognition or face detection. These applications are often linked with the need to provide more security in a world becoming more interconnected, not only thanks to faster communication but also due to a higher flow of people moving across the planet. No doubt these new markets will need more efficient algorithms and higher performance to develop the computer-vision-based applications that increase safety and security.

Deep learning and augmented reality are two segments, directly linked with machine vision, that are literally exploding and are expected to generate innovation in the industry as well as in our future day-to-day life. Both are very demanding in terms of raw performance and algorithm efficiency. Because the CEVA-XM6 platform comes with imaging & vision software libraries and a CDNN network generator, it will help developers shorten their system time-to-market. CDNN supports a variety of popular CNN technologies, including AlexNet and GoogLeNet, and CEVA offers software libraries for OpenCV, OpenVX and OpenCL support.

If you are interested in CDNN and deep learning solutions for ADAS applications, you should view this on-demand webinar and download this product note:

On demand webinar “Challenges of Vision Based Autonomous Driving & Facilitation of an Embedded Neural Network Platform”:

http://go.ceva-dsp.com/Nov16-XM6-WebinarOnDemand.html

Learn how to use deep learning solutions for ADAS applications, and how to run the AdasWorks free-space detection neural network while utilizing CEVA’s low-power vision DSP combined with the CEVA Deep Neural Network software toolkit.

CDNN product note

You will find a complete description of CEVA-XM6 here:
CEVA-XM6 product note

By Eric Esteve


CES 2017, Semiconductors and Cycling

by Daniel Payne on 01-06-2017 at 7:00 am

It’s back, that giant consumer electronics trade show CES 2017, held every January in Las Vegas with too many new product introductions to mention in one blog, so I’ll take a more focused look at what’s new for cycling.

Smart Bike
We all know what a smart phone is, but what could a smart bike be? The Chinese company LeEco has managed to integrate several pieces of separate technology into a single bicycle dubbed the LeEco Smart Road Bike:

  • Carbon frame road bike
  • 4″ touchscreen (Android 6.0 BikeOS, Snapdragon 410 processor, 6,000mAh battery)
  • Turn-by-turn navigation
  • Music playback
  • Walkie-talkie communication
  • ANT+ support of heart rate sensor and power meter
  • On-board lighting
  • Security alarm
  • 11 speeds
  • 18.5 pounds


LeEco also has a Smart Mountain Bike with similar integrated features as their road bike.

This bike would appeal to someone who falls in love with the price, looks and features of an integrated bike, and is not concerned with big-name bike brands (Trek, Specialized, Fuji, Cannondale).

Another integrated smart bike is the SpeedX Unicorn, priced at $3,199 and being marketed on Kickstarter. When you straddle the bike and look at the stem you see an integrated, Android-powered bike computer with a 2.2″ display. It looks pretty snazzy, however my Garmin 820 has even more programmable display fields.

Electric Bikes
You can upgrade any existing bike to electric with the Rool’in, an electric front wheel for your bike that comes in three sizes.

Swagtron has the SwagCycle Urban E-bike with a top speed of 40mph and range of 55 miles per charge, that’s fast and far for sure.

Exercise Bike
Toddlers need to work out too, right? So Fisher-Price has a Think & Learn Smart Cycle for your toddler that keeps them fit while they play an app on your Apple TV or Android TV.

Monitoring
I’ve never heard of measuring body temperature while cycling, however Bodytrak claims that their new device that fits in your ear can monitor:

  • Body temperature
  • Heart rate
  • Speed
  • Distance
  • Cadence

That’s a big claim, and right now I have separate sensors for heart rate, speed and cadence.

Power Meter
I ride with a power meter integrated into my left crank arm, however Leti from France has come up with a power meter in a pedal, called PUSH. The big news is that they plan to charge only $100 or so for this, while their competitors are priced much higher, in the $500 to $1,500 range. Their product photo shows a conventional pedal, not a road bike pedal, so I’m not sure how big their market is going to be. The consumers with the most cash to spend on a power meter are road cyclists like me, or triathletes that compete.

Smart Lock
Folks who commute and need to lock up their bike while running errands should be interested in the Ellipse Smart Bike Lock, which has a GPS device for location and an accelerometer to detect and report a crash by text to your contacts. The battery is charged through a solar panel, so no need for USB cords and wall charging. There’s an app included, and the price is $199.

Smart Helmet
Cars have turn signals and brake lights, so why not bikes? Well, now your smart bike helmet can have turn signals and brake lights thanks to Livall and their Smart Riding Helmet. You just connect a Bluetooth-enabled button set on your handlebars to control the lighting on the helmet, and then cars approaching from behind know when you are turning or braking.


Another smart helmet called the CLASSON is marketed on Kickstarter for just $99 and has turn signals, brake lights and even alerts you to cars in your blind spot.

Heads Up Display
Solos Cycling has a product that reminds me of Google Glass, except that it’s for cyclists, so they don’t have to glance down at the bike computer mounted on the handlebar or stem. It uses the ANT+ and Bluetooth Smart sensor protocols, so it should connect to your sensors for speed, power, cadence and heart rate.

10 Battery Road Bike
I wanted to update you on what I’m riding these days: a Specialized SL4 frame with SRAM eTap wireless shifting. Very cool, with no derailleur cables and no electrical cables for shifting. Before a ride I have to ask myself, “Are all 10 batteries ready to go?” Here’s where the 10 batteries come in:

  • Left shifter – CR2032 battery, lasts a few years
  • Right shifter – CR2032 battery
  • Garmin 820 – bike computer, rechargeable battery, lasts about 200 miles per charge
  • Cygolite – front headlight, rechargeable battery, about 10 hours per charge
  • Garmin speed sensor – CR2032 battery, wireless, on the front hub
  • Stages Cycling Power meter – CR2032 battery, on the left crank arm, about 150 hours of riding
  • Front derailleur – SRAM eTap, rechargeable battery, about 1,000 miles per charge
  • Rear derailleur – SRAM eTap, rechargeable battery, interchangeable with front derailleur battery
  • Rear light – rechargeable battery, about 10 hours per charge
  • Heart rate monitor – CR2032 battery, about 1.5 years per battery

    I did reach my mileage goal for 2016 of 16,000 miles, which included some 789,000 feet of climbing. Come and follow me on Strava, or better yet, come join me for a bike ride in the Portland, Oregon area.


    Tesla’s (and Uber’s) Teflon to be Tested in 2017

    by Roger C. Lanctot on 01-06-2017 at 7:00 am

    For the past two years the impression has been spreading that Tesla Motors can do no wrong. (I can’t really say the same for Uber after the recent San Francisco licensing debacle.) There is no question that Tesla’s legal department is growing by the month as fights persist over opening stores and forestalling liability judgments, but, so far, even fatal crashes of Tesla vehicles have failed to tarnish the Tesla brand.

    This will change in 2017. Tesla is quietly and not so quietly shifting its strategy for opening stores from bobbing, weaving and legal action toward strong-arm, Trump-like muscle flexing.

    Sources in the industry indicate that Tesla has begun threatening to make sourcing decisions based on the level of local support for its marketing and sales activities. This kind of influence peddling is not new and is reflected in states modifying their autonomous vehicle laws to attract the likes of Google and Uber and their development dollars and investments. It is also reflected in Nevada’s scoring of two electric vehicle plants in 2016.

    Competing car companies can only look on in awe, envy, disgust and anger at the result and the complete lack of a consumer backlash. Were a driver to be killed in a Toyota or a General Motors or an Audi with an autopilot-like system, congressional hearings would be called, executives would be humiliated on C-SPAN and fines would be levied.

    Tesla has the non-stick Teflon coating of the Silicon Valley startup widely regarded by consumers and legislators alike as the source of national pride and economic growth. Existing car companies are seen as passe impediments to progress, locked in the past and dragging their heels on safety advances – only contributing to the rising toll of highway fatalities.

    Tesla has the halo of the innovator, not unlike Silicon Valley neighbors Apple and Google/Alphabet and Uber. We tend to treat these companies with kid gloves because we fear our economic future hinges on their success even if we find ourselves surrendering our privacy and … freedom?

    European regulators are far more concerned with privacy and harbor no illusions about halos or innovation. This is why the rough treatment that Apple and Google have received from Brussels seems so odd from a distance. It’s worth noting that concerns for privacy have led to the barring of dashcams in Germany and Austria. The obsession with privacy does have its limits.

    In the U.S. the steepest resistance to Tesla has come from state regulators standing in the path of Tesla opening retail stores. State legislators and regulators are vulnerable to the immense lobbying influence of automobile dealers who are tightly woven into their local communities and drive a substantial amount of economic activity including both tax revenues and employment – to say nothing of charitable and political donations.

    But taking the economic argument to the next level, leveraging production decisions to the advantage of product development, has so far eluded incumbent car makers. Rather than leveraging their sourcing decisions for economic advantage, they are scurrying from the glare of the incoming Trump administration, which is casting threats far and wide against car makers seeking to build plants in Mexico. The resulting negative impact on Ford, GM and Toyota stocks is manifest.

    Perhaps in recognition of the might of the dealer lobby, Tesla has taken the gloves off. Work with us, say CEO Elon Musk’s minions, or we will take our business, our tax dollars and our employment contribution elsewhere. Uber, too, has taken this approach, with mixed results: Austin, Texas, said no to Uber’s preference not to fingerprint its drivers, while the state of Maryland said okay to Uber’s demand.

    Nowhere is Tesla’s threat more potent than in Michigan, where Tesla is likely to achieve victory in its drive to open stores in the state in 2017. But the overt Tesla (and Uber) threats being made behind closed doors, increasingly paralleling Trump’s more public Twitter-based efforts, will test the public’s patience.

    Both Tesla and Uber are placing multi-billion dollar bets on transformative transportation technology and business models. In the process jobs are both being created and destroyed. Tesla’s rise puts the entire internal combustion dealer network under threat. Uber is putting the jobs of millions of professional drivers of cars and trucks at risk.

    Consumers have so far remained on the sidelines in the struggle – happy to benefit from subsidized cab rides (Uber) and subsidized EVs (Tesla). But these subsidized experiences have a cost (Uber – mistreated passengers, Tesla – fatal crashes) capable of bringing a reckoning in 2017.

    Uber and Tesla appeal to our emotions and our pocketbooks. Let’s hope that in the end it isn’t all just a shakedown where we are surrendering both our freedom and our privacy – which represent core consumer value propositions that are carefully curated by the incumbent car makers. Both Tesla and Uber are out to narrow rather than expand our transportation choices. This is a battle where things will get very sticky indeed.


    2017 Semiconductor Dead Pool

    by Daniel Nenni on 01-05-2017 at 12:00 pm

    In 2015 we saw $85B in semiconductor acquisition activity and in 2016 there was more than $110B. Given that 2015 and 2016 were relatively flat years for the $335B semiconductor industry, and 2017 looks like more of the same, we should expect consolidation to continue, absolutely.

    So, let’s come up with a list of companies that may fall in 2017 and circle around at the end of the year to see how we did. I will be more than happy to defend my choices in detail in the comments section.

    My first three picks will focus on companies in the growth markets of data center, IoT, and automotive chips. I also look at company leadership (strong or weak, new or old) and if an activist investor is involved that’s a bonus. But first let’s look at who was acquired in 2015, 2016, and who is up for grabs in 2017 (let me know who I missed).

    Acquired in 2015:


  • Altera
  • Atmel
  • Broadcom
  • Emulex
  • ISSI
  • Fairchild
  • Freescale
  • Micrel
  • Omnivision
  • PMC
  • Pericom
  • Richtek
  • SanDisk
  • Silicon Image
  • Vitesse

    Acquired in 2016:


  • Applied Micro
  • ARM
  • Brocade
  • Linear Technology
  • Mentor Graphics
  • NXP
  • Intersil
  • EZ Chip
  • Lattice
  • Qlogic
  • Invensense

    Who’s left? (not exhaustive, just the ones I know)

    Ambarella, AMD, Analog Devices, Broadcom, Cavium, Cirrus Logic, Cypress Semi, Dialog, IDT, Inphi, Infineon, Intel, MACOM, Marvell, Maxim, MediaTek, Melexis, Mellanox, Microchip, Micron, Micronas, Microsemi, Novatek, NVIDIA, On Semi, Qorvo, Qualcomm, Realtek, Renesas, Samsung, Semtech, Silicon Labs, Silicon Motion, SK Hynix, Skyworks, Sony Semi, STMicro, Synaptics, Texas Instruments, Toshiba, Xilinx.

    The company names in bold are on the SOX semiconductor index, which is up 36% this year, a big number considering the semiconductor industry as a whole was flat in 2016. Given that we are looking at another growth-challenged year, here are the first three companies that I feel are best positioned for acquisition in 2017:


  • Marvell
  • Microsemi
  • Cypress Semiconductor

    According to Marvell:

    Marvell first revolutionized the digital storage industry by moving information at speeds never thought possible. Today, that same breakthrough innovation remains at the heart of the company’s storage, network infrastructure, and wireless connectivity solutions. With leading intellectual property and deep system-level knowledge, Marvell’s semiconductor solutions continue to transform the enterprise, cloud, automotive, industrial, and consumer markets. To learn more, visit: www.marvell.com.

    While Marvell has always been viewed as a technology centric company with very controlling management resulting in a historically high executive turnover rate, that all changed in April of 2016 when founder/CEO Sehat Sutardja and his wife Weili Dai were ousted as a result of questionable management practices. Marvell now has a new CEO, executive staff, and board members.

    According to Microsemi:

    Microsemi Corporation (Nasdaq: MSCC) offers a comprehensive portfolio of semiconductor and system solutions for aerospace & defense, communications, data center and industrial markets. Products include high-performance and radiation-hardened analog mixed-signal integrated circuits, FPGAs, SoCs and ASICs; power management products; timing and synchronization devices and precise time solutions, setting the world’s standard for time; voice processing devices; RF solutions; discrete components; enterprise storage and communication solutions, security technologies and scalable anti-tamper products; Ethernet solutions; Power-over-Ethernet ICs and midspans; as well as custom design capabilities and services. Microsemi is headquartered in Aliso Viejo, California and has approximately 4,800 employees globally. Learn more at www.microsemi.com.

    Microsemi has been reported to be looking at sale options after takeover interest from Skyworks. Microsemi is a well-known AMS expert in the aerospace/defense, industrial, and communications markets, including connectivity chips for data centers. The FPGA business is the big swing here for M&A after the $16.7B Intel acquisition of Altera and the recent $1.3B acquisition of Lattice Semiconductor.

    According to Cypress:
    Founded in 1982, Cypress is the leader in advanced embedded system solutions for the world’s most innovative automotive, industrial, home automation and appliances, consumer electronics and medical products. Cypress’s programmable systems-on-chip, general-purpose microcontrollers, analog ICs, wireless and USB-based connectivity solutions and reliable, high-performance memories help engineers design differentiated products and get them to market first. Cypress is committed to providing customers with the best support and engineering resources on the planet enabling innovators and out-of-the-box thinkers to disrupt markets and create new product categories in record time. To learn more, go to www.cypress.com.

    Last year Cypress founder and CEO TJ Rodgers stepped down, naming Cypress insider Hassane El-Khoury President, CEO, and a member of the Board of Directors (TJ is no longer on the BoD). In 2015 Cypress cemented itself as a world-class memory provider with the $5B Spansion merger, and in April 2016 Cypress acquired Broadcom’s IoT business for $550M.


    Webinar: Hassle-Free Bluetooth 5 SoC Design

    by Bernard Murphy on 01-05-2017 at 7:00 am


    Bluetooth has always been a popular communication protocol for short-range applications, but now, anticipating BT5, it’s really moving into the big leagues as a significant option for IoT applications. The new standard combines ultra-low power with significantly higher range and higher performance. Ultra-low power is always important for IoT; the higher range makes BT5 practical for smart-home applications, for example; and the higher performance allows for more data communication, along with improved run-fast-then-stop power reduction.

    Naturally you’ll get maximum value out of BT5 if you can integrate with the rest of your SoC functionality. Join CEVA and CSEM to get designer views on where BT5 fits in the IoT landscape and how you can design and integrate a low-power radio front-end for your design with a minimum of fuss.

    REGISTER NOW

    According to ABI Research, the number of connected devices will reach 48 billion by 2021, a third of which will be Bluetooth wireless technology enabled. The Bluetooth Special Interest Group (SIG) has recently released the highly-anticipated Bluetooth 5, which extends the performance and the scope of Bluetooth low energy. New features include a doubling of speed (from 1Mbps to 2Mbps), as well as a 4X range increase, thus enabling smart home applications. What does it take to design BLE products that are low power and low cost, but also reliable?
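    A quick sanity check on the 4X range figure, under a free-space propagation assumption (real indoor range depends heavily on walls and interference): path loss grows with 20·log10(distance), so quadrupling range costs about 12 dB of link budget, which the standard recovers by trading bit rate for robustness in its long-range mode.

```python
import math

# Free-space path loss scales as 20*log10(d), so a 4x range increase
# requires roughly this much additional link budget:
range_factor = 4
extra_link_budget_db = 20 * math.log10(range_factor)
print(f"{extra_link_budget_db:.1f} dB")  # 12.0 dB
```

    This is why the speed doubling and the range quadrupling are alternatives rather than a combined mode: one spends the radio improvements on throughput, the other on reach.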

    This webinar presents how to easily and quickly design a low power Bluetooth 5 SoC for IoT, wearable or smart home applications, thanks to the CEVA RivieraWaves Bluetooth low energy system IP combined with the CSEM RF solution.

    Join CEVA and CSEM experts to learn about:
    • Overview and market trends in connectivity for IoT, wearable and smart home.
    • How Bluetooth low energy fits into the landscape, and what Bluetooth 5 will bring.
    • Bluetooth 5: typical system architecture and key components.
    • The low cost and power optimized CEVA Bluetooth IPs.
    • Designing a low power radio front-end using the CSEM IcyTRX RF IP.

    Target Audience
    Design, system and product engineers targeting SoC for IoT, wearable and smart home applications requiring Bluetooth 5 connectivity


    About CEVA, Inc.
    CEVA is the leading licensor of signal processing IP for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and machine-to-machine devices, advanced imaging, computer vision and deep learning for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry’s most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi (802.11 a/b/g/n/ac up to 4×4) and serial storage (SATA and SAS). Visit us at www.ceva-dsp.com and follow us on Twitter, YouTube and LinkedIn.


    Dassault Systemes Hosts New Microsite Focused on IP Reuse Challenges

    by Mitch Heins on 01-04-2017 at 12:00 pm

    I recently wrote an article about networks-on-chip (NoCs) and how systems-on-chip (SoCs) are becoming increasingly complex and heterogeneous in nature. While researching that article I came upon a new micro-site by Dassault Systemes that goes into great detail about the operational challenges faced by the semiconductor industry in its reuse of intellectual property (IP).

    The site gives a good description of how the Internet-of-Things (IoT) is simultaneously pushing the market towards shorter design cycles while increasing the demand for SoC customization, effectively putting solutions providers between a rock and a hard place. The industry has responded to these diametrically opposing requirements by leveraging IP reuse with an estimated 200+ IPs now used per SoC.

    As mentioned in the previous article, these SoCs are driving toward more heterogeneous content, with the implication that semiconductor companies are now dealing with a tremendous IP-management burden. Dassault Systemes’ new micro-site addresses the challenges of enterprise IP management in three phases. Posted on the microsite are three white papers, one for each of these phases, and I would encourage SoC designers to read them as they do a good job of outlining what is required to be successful in this new world of hyper IP-reuse.

    The first white paper is entitled, ‘Creating a Solid Semiconductor IP Foundation’ and covers the challenges of scaling and sharing IP at the enterprise level. This paper includes such topics as IP cataloging, IP governance and IP defect/issue tracking. The paper points out that companies have done a pretty good job ‘below-the-line’ in the details of their technical design activities. The problems however start appearing as tasks move ‘above-the-line’ to the enterprise level. This is especially true for larger companies that have multiple globally diverse teams that are expected to develop and share IPs in addition to doing their regular design work.


    Many years ago I was a CAD engineer in Texas Instruments’ ASIC division. We were just at the beginning of real design reuse and I can distinctly remember the first time one of our customers took a TI DSP core and used it as a cell in their ASIC design. The jump in our customer’s productivity was huge as they took advantage of all of the man years TI had invested in their custom DSP development along with the automation we in the ASIC division had wrapped around the design and test flows.

    It was technically challenging for us at TI as it required us to work across multiple enterprise boundaries to make the customer successful. The DSP and ASIC groups were two separate business units with different ways of dealing with practically everything, including design, packaging, test, business models and even legal requirements.

    In the end our efforts for the customer were successful but we did not yet have a scalable process that could be easily repeated. We proved it could be done but at the time we lacked the basic infrastructure and tools required to do this type of IP reuse efficiently and with predictable results. It was, however, a wake-up call that our standard product businesses were at risk as we suddenly saw how competition could quickly enter into our markets with highly differentiated and customized products.

    As I read Dassault Systemes' first white paper I related to all of the issues they discussed around the scaling and sharing of IP at the enterprise level. As a CAD engineer, I especially remember the simple things that got in our way. Examples included the different vocabularies used by the ASIC and DSP design groups, as well as the dramatically different design flows used by each.

    After reading the first white paper I also realized just how far the industry has come since those early days at TI. The white paper does an excellent job of detailing the issues and requirements for IP cataloging, IP governance and IP defect & issue tracking as laid out by the ‘early-phase’ of their IP management model. More importantly, the paper introduces the reader to solutions for those issues including a brief introduction to their ENOVIA product life cycle management solutions.

    As the semiconductor industry evolves into a more multi-discipline, collaborative ecosystem that requires the efficient reuse of design IPs from multiple sources (both internal and external to the organization), it’s clear that a concerted effort will be needed by solutions providers at both the technical and business levels to work with sophisticated tools for managing IPs across the entire enterprise. Dassault is leading the way. Take a look at the white papers and keep an eye out for additional blogs on this topic in the upcoming weeks.

    The Dassault micro-site can be found here: http://www.3ds.com/industries/high-tech/ip-management/
    The link to the white paper is here, Creating a Solid Semiconductor IP Foundation.
    See also: ENOVIA Solutions
    Factors Affecting the Future of the Semiconductor IP Management Business


    This Apple Fell a Little Further from the Tree

    This Apple Fell a Little Further from the Tree
    by Bernard Murphy on 01-04-2017 at 7:00 am

    Some companies are famously, even obsessively secretive about internal development. We never get to see discussion of areas they are working on (other than through patent filings) – we only see the polished and released product/service. Amazon is one such company but Apple must rank for many of us as the pre-eminent company in this class. If you’ve ever had a meeting with an Apple technical team, you’ll understand. They can ask you any technical questions they want but your scope for asking questions is very limited and their answers, if any, will be given only in general terms.

    When you’re in the lead or you think you have the special sauce that will push you into the lead, secrecy is an understandable tactic. But when you’re not in the lead, or at least not perceived to be in the lead, a little in-process signaling can help, as in “Hey look, we’re working on this stuff too!” This not only lets the wider world know that you’re not losing your technology edge, but it also helps you recruit. Both of which can be pretty important when other 800lb gorillas have already staked out a domain and you seem to be on the outside looking in. As is the case with Apple and AI, at least as far as the rest of us are concerned. (Sure they have Siri, but that’s old news compared to the continuous PR drumbeat from Google and Facebook.)

    Apple announced very recently and somewhat informally that they would change this policy in the AI domain, at least as far as academic publications are concerned. I'm not surprised. If you're a hot AI researcher with a newly-minted PhD from one of the top schools, where would you rather go – a company with a leading-edge AI program where you can continue to polish your credentials by publishing yet more papers, or a company with undiscoverable AI credentials where you can disappear from view? Kudos to Apple for recognizing that one of the cardinal articles of their faith needed a little loosening up.

    The paper itself is interesting and I'm sure a valuable contribution to the field, if perhaps not ground-breaking. The domain is image recognition and the goal is to improve the effectiveness of training using synthetic images (which are already labeled), complemented by similar but unlabeled real data to provide, in effect, unsupervised training that improves the synthetic images. The intent is to be able to provide much larger sets of labeled training images (since the synthetic images can be generated) without the need for arduous labeling across those sets.
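    The flavor of the idea can be illustrated with a toy sketch (hypothetical Python, vastly simpler than the paper's actual adversarial setup): nudge labeled synthetic samples toward the statistics of unlabeled real data, while a self-regularization term keeps each sample close to its original value so its label stays valid. The learning rate, iteration count and weight `lam` are illustrative assumptions.

```python
import random

# Toy 1-D caricature of refining synthetic data with unlabeled real data.
# This is NOT the method in the paper - just an illustration of the intent.
random.seed(0)
synthetic = [random.gauss(0.0, 1.0) for _ in range(1000)]  # labeled, wrong distribution
real = [random.gauss(2.0, 1.0) for _ in range(1000)]       # unlabeled, true distribution

def mean(xs):
    return sum(xs) / len(xs)

refined = synthetic[:]
lam = 0.5  # weight of the self-regularization term (assumed value)
for _ in range(200):
    # Stand-in for the adversarial signal: match the real data's mean.
    grad_match = mean(refined) - mean(real)
    # Self-regularization: each refined sample stays near its original,
    # so the label attached to the synthetic sample remains meaningful.
    refined = [r - 0.05 * (grad_match + lam * (r - s))
               for r, s in zip(refined, synthetic)]

print(round(mean(synthetic), 2), round(mean(refined), 2), round(mean(real), 2))
```

    The refined set ends up between the synthetic and real distributions: closer to the real statistics than the raw synthetic data, but anchored to the originals so the labels survive.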

    Perhaps the paper isn’t ground-breaking but the shift in Apple’s policy on academic publication certainly is. You can read a news article on this momentous event HERE and the Apple paper HERE.

    More articles by Bernard…


    Will Lawsuits Stall Automotive AI?

    Will Lawsuits Stall Automotive AI?
    by Roger C. Lanctot on 01-03-2017 at 12:00 pm

    The roster of automotive artificial intelligence (AI) initiatives is growing rapidly, with Softbank working with Honda on the Emotion Engine for the NeuV self-driving commuter vehicle, IBM's collaboration with General Motors and BMW, and now reports of Microsoft bringing AI to Volvo in the form of Cortana. It was Google and Apple that originally opened this door with voice search enabled on mobile devices increasingly connected in cars – but it boils down to what will appear on in-dash displays and how drivers will interact with it.

    The purpose of artificial intelligence is to put relevant contextual, historical, and behavioral data together to anticipate driver needs thereby mitigating driver distraction. The challenge to achieving this goal is great, however, given the cognitive and glance-time burdens associated with voice recognition and touch screen interfaces, respectively.

    Adding to this challenge is the current preference among auto makers to use integrated smartphones for voice recognition functions – while limiting smartphone access to vehicle sensor data. On the eve of CES 2017, Volvo announced its plan to bring Microsoft’s Skype to in-vehicle infotainment systems to manage business communications (sans Skype video).

    But with the arrival of Skype in the centerstack display, can Apple's FaceTime or Google Hangouts (or Viber, Tango or Oovoo) be far behind? A family in Texas hopes so. A week before CES 2017, a lawsuit was filed over a fatal crash on Christmas Eve 2014 that took the life of a five-year-old child and injured multiple family members.

    – Parents of Child Killed by Distracted Driver Sue Apple for Not Blocking FaceTime While Driving

    The family contends that Apple had a responsibility to block the use of FaceTime in a moving vehicle – especially since Apple filed a patent six years earlier for blocking texting while driving. There are broader issues here, including the potential responsibility of the relevant wireless carrier. Since an in-vehicle smartphone interface does not appear to be involved, given the age of the vehicles in the crash, the relevant auto makers appear to be off the liability hook.

    But now that Apple CarPlay and Alphabet Android Auto smartphone integration solutions are nearly universally available, one has to wonder how long it will be before FaceTime and similar apps are enabled beyond the control of car makers. Apple and Alphabet have already taken responsibility for certifying car maker infotainment systems for smartphone integration.

    Ironically, the two vehicles involved in the 2014 crash were Toyotas. Toyota is one of the last remaining auto makers resisting Apple/Alphabet smartphone integration.

    Even widespread deployment of smartphone integration has not been enough to pry smartphones out of the hands of drivers. Car makers, Apple, Alphabet and wireless carriers all want consumers to connect their phones in cars in order to ensure a safer user experience, but connecting a smartphone in a car remains an often annoying and unnatural act.

    But even connecting a smartphone and adding AI to enhance speech recognition and anticipate driver needs, introduces potential new sources of distraction, cognitive load and confusion. (One of my favorite Patton Oswalt comedy routines is his description of his first experience with Tivo’s AI – after he watched “The Man from Laramie,” Tivo automatically recorded all the content it could find with horses including children’s programs. “No, Tivo!” says Oswalt to Tivo.)

    The bottom line is that a smartphone properly integrated in a car is a potentially life-saving proposition. Yet the focus of too many auto makers is on delivering content or, in the worst cases of visionary automotive integration, marketing messages and advertisements carefully honed to the driver's known preferences and current and anticipated location.

    In the Volvo case, the Skype application appears to be built into the car, not accessed via the driver's connected smartphone. This type of integration means the system will be better able to mitigate driver distraction by taking into account the driving context. Volvo pioneered the concept of an Intelligent Driver Information System (IDIS) more than 10 years ago, which locks out distracting vehicle features and functions during stressful driving scenarios.

    – Volvo Cars Adds Microsoft’s Skype for Business to its 90 Series Cars, Heralding a New Era for In-car Productivity – Volvo Cars Media

    The Texas lawsuit should give pause to auto makers to consider the objectives behind their fledgling AI systems. Are these systems intended to enhance safe driving or are they intended to distract with promotional offers and advertisements? Does the AI system make use of vehicle sensor data and driving context to manage the driver’s cognitive load? Or is the system more likely to distract with confusing interfaces, too-small icons and inadequately refined speech recognition?

    If the Texas lawsuit proceeds, it will be up to a court to decide the level of responsibility of Apple, which is no doubt protected by its own voluminous and rarely read end user license agreement. But car makers and wireless carriers may ultimately be held accountable for the nature and performance of in-vehicle systems – particularly in the context of rising annual highway fatalities. We want drivers to connect their phones in cars. But what happens when Apple and Alphabet are in charge of that integration? We will find out this week at CES 2017, for sure.

    Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here:
    https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


    Who Left the Lights On?

    Who Left the Lights On?
    by Bernard Murphy on 01-03-2017 at 7:00 am

    I attended a Mentor verification seminar earlier in the year at which Russ Klein presented a fascinating story about a real customer challenge in debugging a power problem in a design around an ARM cluster. Here’s the story in Russ’ own words. If you’re allergic to marketing stories, read it anyway. You might have run into this too and the path to debug is quite enlightening.

    When I was a kid, my father used to get very angry when he found a light on in an empty room. “Turn off the lights when you leave a room!” he would yell. I vowed when I got my own home I would not let such trivia bother me. And I don’t. The last time my dad came to visit he asked me, “What’s your electric bill like?” as he observed a brightly lit room with no one in it. I changed the subject.

    There is probably no worse waste of energy than lighting and heating a room that is empty. The obvious optimization: notice that no one is there and turn off the lights. It works the same on an SoC or embedded system. To save energy, system developers are adding the ability to turn off the parts of the system that are not being used. Big energy savings, but with no compromise to functionality.

    I was working with a customer who had put this type of system in place, but they were observing a problem. While most of the time the system did really well with battery life, occasionally – about 10% of the time – the battery would die long before it should. The developers were stumped. After a lot of debugging what they discovered was that one of the energy hungry peripherals would be turned on and left on continuously, while there were no processes using it.

    To debug the problem, they stopped trying to use the prototype and went back to emulation on Veloce to try to figure out what was going on. Veloce has a feature that allows developers to create an “activity plot” of the design being run on the emulator. The activity plot shows a sparse sampling of the switching activity of the design. While switching activity does not give you an absolute and exact measurement of power consumed, it does allow you to find where likely power hogs are hiding (see figure #1).

    Figure #1

    They ran their design on Veloce and captured the activity plot; it looked like this (see figure #2).

    Figure #2

    The design was configured to run two processes, one using peripheral A (the developer of this system is quite shy and does not want me putting anything here which could be used to identify them – so the names have been changed to protect the innocent). The other process was using both peripheral A and peripheral B. As you can see from the graph, one peripheral is accessed at one frequency, creating one set of spikes in switching activity. The second process accesses both peripherals, but less frequently, producing the taller set of spikes. For testing purposes, the frequency of the processes being activated was increased. Also, the periods of the two processes were set to minimize the synchronicity between them.

    Figure #2 shows that at some point, the spikes on peripheral A disappear – that is, peripheral A gets left on, when peripheral B gets turned on. Someone “left the lights on” as it were. Examination of the system showed that, indeed, the power domain for peripheral A was left on.

    Figure #3 shows a close up of the activity plot when power domains are being turned on and off correctly. Figure #4 shows a close up of the point where peripheral A is unintentionally left powered on continuously.

    Figure #3
    Figure #4

    With Codelink®, a hardware/software debug environment that works with Veloce, the designers were able to correlate where the cores were, in terms of software execution, with the changes in switching activity shown in the activity plot. Figure #5 shows a correlation cursor in the activity plot near a point where peripheral A gets turned on, and the debugger window in Codelink, which shows one of the processor cores in the function “power_up_xxx()”.

    Figure #5

    Since the problem was related to turning off the power to one of the power domains, they set the Codelink correlation cursor to where the system should have powered down peripheral A (see figure #6).

    Figure #6

    At this point there were two processes active on two different cores that were both turning off peripheral A at the same time (see figure #7).

    Figure #7

    Since this system comprises multiple processes running on multiple processors, all needing a different mix of power domains enabled at different times, a system of reference counts is used. The way it works is that when each process starts, it reads a reference count register for each of the power domains it needs. If it reads a 0, then there are no current users of the domain and the process turns on the power domain. It then increments the reference count and writes it back to the reference count register.

    When the process exits, and no longer needs the power domains powered up, it basically reverses the process. It reads the reference register. If it is a 1, then the process can conclude that there are no other processes using the power domain and turns it off. If the reference count is higher than 1, then there is another process using the domain and it is left on. The process decrements the reference count and writes it back to the reference count register.

    At any point in time, the reference count shows the number of processes currently running that need the domain powered on.
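    The protocol as described can be sketched in a simple model (hypothetical Python; the class and method names are illustrative, not from the actual design, and the register is modeled as a plain variable rather than a memory-mapped location):

```python
# Toy model of the reference-counted power-domain protocol described above.
class PowerDomain:
    def __init__(self):
        self.ref_count = 0    # the reference count register
        self.powered = False  # state of the power switch

    def acquire(self):
        # Process start: read the count, power up on 0, increment, write back.
        count = self.ref_count       # read
        if count == 0:
            self.powered = True      # first user turns the domain on
        self.ref_count = count + 1   # modify + write back

    def release(self):
        # Process exit: read the count, power down on 1, decrement, write back.
        count = self.ref_count       # read
        if count == 1:
            self.powered = False     # last user turns the domain off
        self.ref_count = count - 1   # modify + write back

domain = PowerDomain()
domain.acquire()   # process 1 starts: count 0 -> 1, domain powers on
domain.acquire()   # process 2 starts: count 1 -> 2, domain stays on
domain.release()   # process 2 exits:  count 2 -> 1, domain stays on
domain.release()   # process 1 exits:  count 1 -> 0, domain powers off
print(domain.powered, domain.ref_count)  # False 0
```

    Run sequentially, the scheme is sound: the count always equals the number of active users, and the domain is powered exactly when the count is non-zero. The trouble, as the next section shows, starts when two cores run the release sequence concurrently.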

    Using Codelink, the developers were able to single step through the section of code where the power domain got stuck in the on position. What they saw were two processes, each on a different core, both turning off the same power domain.

    First, core 0 read the reference register, and it read a 2. Then core 1 read the same reference register, and it too read a 2, since the process on core 0 had not yet decremented the count and written it back. Next both cores decided not to turn off the power for the power domain, as they each saw that another thread was using the peripheral. Finally, both cores decremented their reference count from 2 to 1. And they both wrote back a 1. This left the system in a state where there was no process using the power domain, but it was turned on. Since the reference register held a one, any subsequent processes that used the domain would not clear this count. And the power would be on to this domain until the system was rebooted, or ran out of power.
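    The lost-update interleaving is easy to replay step by step in a toy script (hypothetical Python; the real accesses are bus transactions, not variable assignments):

```python
# Reproduce the lost-update race: two cores each perform the release
# sequence, but their read/modify/write cycles interleave.
ref_count = 2   # two processes currently hold the domain
powered = True

# Step 1: core 0 reads the reference register.
core0_read = ref_count        # reads 2
# Step 2: core 1 reads the same register before core 0 writes back.
core1_read = ref_count        # also reads 2

# Step 3: each core sees a count > 1, concludes another user exists,
# and leaves the domain powered on.
if core0_read == 1:
    powered = False           # not taken
if core1_read == 1:
    powered = False           # not taken

# Step 4: both cores decrement their stale copy and write back 1.
ref_count = core0_read - 1    # writes 1
ref_count = core1_read - 1    # writes 1 again (the lost update)

# Result: no process is using the domain, yet it is still powered, and
# the residual count of 1 means no later release will ever turn it off.
print(powered, ref_count)  # True 1
```

    This is exactly the state Codelink revealed: a powered-on domain with no users, stuck until reboot.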

    Now this looks like a standard race condition. Two processes from two different cores, both doing a read/modify/write cycle. In this case, these bus cycles need to be atomic. The developers went to the software team and told them about their mistake and asked them to perform locked accesses to the reference count register.

    It turns out that they were already using locked accesses to the reference count register. They pointed the finger back at the hardware team.

    The hardware team had implemented support for the AXI “Exclusive Access” mechanism. The way exclusive access works is that an exclusive read is performed, and the slave is required to note which master performed the read. If the next cycle is an exclusive access from that same master, the write is applied. If any other cycle occurs, either a read or a write, then the exclusive access is canceled: any subsequent exclusive write is not written, and an error is returned. This logic should have prevented the race condition seen.

    On closer examination, it turned out that the AXI fabric was implementing the notion of “master” as the AXI master ID from the fabric. Since the ARM processor had four cores, the traffic on the AXI bus for all four cores was coming from the same master port – so, from the fabric's and the slave's perspectives, the reads and writes were all originating from the same master and the accesses were allowed. There was no differentiation between accesses from core 0 and core 1. An exclusive access from one core could be followed by an exclusive access from another core in the same cluster, and it would be allowed (see figure #8). This was the crux of the bug.
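    A toy model of the monitor makes the bug concrete (hypothetical Python, and a deliberate simplification of real AXI exclusive semantics): when the monitor is keyed only by the shared master-port ID, the two cores cannot invalidate each other's reservations.

```python
# Minimal model of the exclusive-access monitor described above.
# The monitor notes which master performed an exclusive read; an
# exclusive write succeeds only while that reservation is held.
class ExclusiveMonitor:
    def __init__(self, value):
        self.value = value     # the reference count register
        self.reserved = None   # master ID noted on the exclusive read

    def exclusive_read(self, master_id):
        self.reserved = master_id  # note (or re-note) this master
        return self.value

    def exclusive_write(self, master_id, value):
        if self.reserved != master_id:
            return False           # reservation lost: write rejected
        self.value = value         # reservation intact: write applied
        return True

# Buggy fabric: both cores sit behind one cluster master port (ID 0).
reg = ExclusiveMonitor(2)
a = reg.exclusive_read(0)            # core 0 reads 2
b = reg.exclusive_read(0)            # core 1 reads 2 - same master ID,
                                     # so the reservation survives
ok0 = reg.exclusive_write(0, a - 1)  # core 0 writes 1: allowed
ok1 = reg.exclusive_write(0, b - 1)  # core 1 writes 1: ALSO allowed
print(ok0, ok1, reg.value)           # race goes undetected

# Fixed fabric: the core ID is folded into the master ID.
reg2 = ExclusiveMonitor(2)
a = reg2.exclusive_read(0)           # core 0 reads 2, reserves as master 0
b = reg2.exclusive_read(1)           # core 1 reads 2, steals the reservation
ok2 = reg2.exclusive_write(0, a - 1) # core 0's write is rejected
print(ok2)                           # False: core 0 must retry its sequence
```

    With distinct IDs, the second exclusive read cancels the first core's reservation, the stale write fails, and the losing core retries its read/modify/write sequence, which is exactly how the fix described below restores correctness.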

    Figure #8

    The ID of the core which originates an AXI transaction is coded into part of the transaction ID. By adding this core ID to the master ID used to determine the exclusivity of accesses to the reference count register, the design was able to correctly process the exclusive accesses.

    Veloce emulation gave the developers the needed performance to run the algorithm to the point where the problem could be reproduced. Codelink delivered the debug visibility needed to discover the cause of the problem. The activity plot is a great feature that lets developers understand the relative power consumption of their design.


    Russell Klein is a Technical Director in Mentor Graphics’ Emulation Division. He holds a number of patents for EDA tools in the area of SoC design and verification. Mr. Klein has over 20 years of experience developing design and debug solutions which span the boundary between hardware and software. He has held various engineering and management positions at several EDA companies.