
The Smartphone is Dead! Long Live the Smartphone!

by Roger C. Lanctot on 01-24-2016 at 10:00 am

According to a study released on the eve of CES by Accenture, “heightened data security concerns, falling demand for smartphones and tablet PCs, and stagnant growth in the Internet of Things (IoT) market” are combining to stymie consumer electronics industry growth. While slow uptake of new products is normal, data security concerns are new.


Honda Driver-aware Connected Car Insights from Patents

by Alex G. Lee on 01-24-2016 at 7:00 am

Honda patent applications US20130245886, US20140276112, US20140309881, US20140303899, US20140371984, US20150126818, US20160001781 and patent US8698639 describe systems that monitor the state of a driver and automatically adjust the operation of a vehicle in response to that state (e.g., the driver’s health, slower reaction time, attention lapse and/or alertness). For example, in situations where a driver may be drowsy and/or distracted, the motor vehicle can include provisions for detecting and assessing that state and modifying vehicle systems automatically to mitigate hazardous driving situations.

The driver state monitoring system includes different types of sensors for obtaining information regarding the physiological driver state, behavioral driver state, and vehicular-sensed driver state. The physiological sensors measure heart rate, blood pressure, oxygen content, blood alcohol content, respiratory rate, perspiration rate, skin conductance, brain wave activity, digestion, salivation, and so on. For example, the driver state monitoring system includes optical and/or thermal sensing devices that sense and provide a heart rate signal indicative of a driver state. Heart information can be detected from head movements, eye movements, facial movements, skin color, skin transparency, chest movement, and upper body movement using those optical and/or thermal sensing devices.

The behavioral information can include eye movements, mouth movements, facial movements, facial recognition, head movements, body movements, hand postures, hand placement, body posture, and gesture recognition. Vehicle information related to the vehicular-sensed driver state includes vehicle conditions, states, statuses, behaviors, and information about the external environment of the vehicle (e.g., other vehicles, pedestrians, objects, road conditions, weather conditions).

The driver state monitoring system includes a response system that receives information about the states of the driver and automatically adjusts the operation of the vehicle. The response system determines the driver state based on the received information. For example, the driver state can be normal/drowsy or normal/distracted. The response system automatically modifies the control of the vehicle using various vehicle control systems, which can include the automatic brake prefill system, engine control system, speed follow system, automatic cruise control system, collision warning system, lane departure warning system, blind spot indicator system, lane keep assist system, navigation system, HVAC control system, lighting control system, and vehicle mode selector system.
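To make the concept concrete, here is a minimal sketch of such a state-to-response mapping. This is purely illustrative: the patents describe behavior, not code, and every state name and adjustment value below is hypothetical.

```python
# Illustrative only: state names and adjustment values are hypothetical,
# not taken from the Honda patents.

def select_responses(driver_state):
    """Map a detected driver state to vehicle-system adjustments."""
    responses = {
        "drowsy": {
            "collision_warning_lead_time_s": 3.5,  # warn earlier than a 2.0 s default
            "lane_keep_assist": "aggressive",
            "brake_prefill": True,
        },
        "distracted": {
            "collision_warning_lead_time_s": 3.0,
            "blind_spot_indicator": "early",
            "brake_prefill": True,
        },
        "normal": {},  # leave default behavior untouched
    }
    return responses.get(driver_state, {})

print(select_responses("drowsy")["collision_warning_lead_time_s"])  # 3.5
```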

For example, if the response system determines that the driver is drowsy, the response system can modify the operation of the collision warning system so that the driver is warned earlier about potential collisions. The collision warning system can retrieve the heading, position, and speed of an approaching vehicle. In some cases, this information could be received from the approaching vehicle through a vehicle to vehicle (V2V) communication network, such as a DSRC network. The collision warning system can estimate a vehicle collision point using information about the position, heading, and speed of the motor vehicle.
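As a simplified stand-in for the patented estimation (constant-velocity motion, flat 2D coordinates, and function names are my own), the closest-approach time between two vehicles can be computed from exactly the quantities listed above: position, heading, and speed.

```python
import math

def heading_speed_to_velocity(heading_deg, speed_mps):
    """Compass heading (degrees clockwise from north) and speed to an (east, north) velocity."""
    rad = math.radians(heading_deg)
    return (speed_mps * math.sin(rad), speed_mps * math.cos(rad))

def time_to_closest_approach(p1, v1, p2, v2):
    """Seconds until two constant-velocity vehicles are closest (0 if already separating)."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]  # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]  # relative velocity
    denom = vx * vx + vy * vy
    if denom == 0.0:
        return 0.0  # identical velocities: separation never changes
    return max(-(rx * vx + ry * vy) / denom, 0.0)

# Head-on: 100 m apart, both at 10 m/s -> closest approach in 5 s
v1 = heading_speed_to_velocity(0.0, 10.0)    # northbound
v2 = heading_speed_to_velocity(180.0, 10.0)  # southbound
print(round(time_to_closest_approach((0.0, 0.0), v1, (0.0, 100.0), v2), 1))  # 5.0
```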




The Best Analyst Presentation at SEMI ISS 2016!

by Daniel Nenni on 01-22-2016 at 5:00 pm

The problem I have with semiconductor analysts and media today is that they rarely have depth in what they are talking about, some because they have never actually worked in the industry and others because they have not worked in the industry since the 1970s. One famed analyst even repeated the mythical “fabs cost $10B” generalization to spread his self-serving doom and gloom about the future of the semiconductor industry. What kind of fab is that, logic or memory? Where is it built and how many wafers per month does it produce? Not all fabs are created equal, and for the record: TSMC builds logic fabs in Taiwan for $5B and the new one in China is budgeted at $3B, but I digress…

The thing I like about Weston Twigg, Director of Capital Markets Research at Pacific Crest Securities, is that he is an actual semiconductor professional, having worked at both Samsung and Intel at the process level (he has an MS in Chemical Engineering to go with his MBA). Weston thoroughly understands and respects the foundries, which very few analysts do, and his presentation reflects that:

Bulls, Bears, Bits: Investor Views on Changing Semiconductor Industry Dynamics
Investors are struggling to keep up with changes in the semiconductor industry. Semiconductor economics are becoming challenged as process complexity increases; compute functions are shifting from local compute to the cloud; PC, tablet, and smartphone demand is decelerating or declining; and signs of maturation are emerging as M&A becomes a dominant theme. With all of the uncertainty, however, investors generally remain upbeat with two prevailing investment themes: story-driven stocks and M&A beneficiaries.

In our never-ending quest to find “The Next BIG Semiconductor Thing,” Weston and I share a similar view: on the demand side, the Internet of Things (IoT) and cloud are ramping up, but if demand for smartphones decelerates (which it has), what else will drive big-die-size chips? Weston highlights three potentially big trends, with which I absolutely agree:




Weston’s conclusion:
Investors are contemplating disruptive shifts:

  • Semiconductor economics are becoming challenged as process complexity increases
  • Compute functions are shifting from local compute to the cloud
  • PC, tablet and smartphone demand is decelerating or declining
  • Signs of maturation are emerging as M&A becomes a dominant theme

Questions that investors are working on:

  • Can technology-driven (leading-edge) companies be successful if node transitions slow?
  • Which trends could drive more leading-edge demand?
  • Are there companies positioned at n-x nodes that might perform well?
  • Which segments should remain relatively high growth?

Despite the uncertainty, investors generally remain upbeat over the long-term on semiconductors

My personal view of the next big thing is along the same lines. The products we have today will need to be increasingly smarter which will result in much larger die sizes and more wafers at the leading edge. It will also lead to increased design complexity and stricter power requirements which will benefit the fabless semiconductor ecosystem as well.

In my humble opinion, IoT will go through a similar transformation. Today’s IoT chips are pretty dumb if you think about it, which I have. Before you know it the “IoT SoC Revolution” will begin and it will be the lather, rinse, repeat shampoo cycle yet again.



The Death of Moore’s Law

by Michael Barger on 01-22-2016 at 12:00 pm

For the last several years, people have predicted the end of Moore’s Law. The reasoning is that there is a limit beyond which one can’t shrink transistors any further. A recurring comment has been “You can’t divide an atom.” I had assumed that its demise would come at the hands of a new paradigm like quantum computing. Now, with Intel’s announcement that the next doubling of transistors will take 2½ years, it looks like it may die of old age.

I, personally, do not believe that Moore’s Law needs to die of old age. Having worked within and in support of the semiconductor industry, I believe that the scaling argument is based on a faulty assumption: that one must use only two dimensions. I also believe that the industry is finally waking up to this fact with the surge in interest in 3D integration. But it has come too late to keep the industry on the Moore’s Law curve.

I have watched as the increased cost of scaling has forced the formation of collaborative research organizations, e.g., Sematech. Chip companies have shifted market and business strategies, as with the fabless ecosystem. And continued M&A has resulted in massive organizations with deep pockets whose barriers make market entry by new players almost impossible. As a result, I believe that the semiconductor industry is ripe for disruption.

When I worked at the Hughes Technology Center in the early 90’s, we were working on enabling technologies for 3D integrated circuits (3DIC). Our strategy was to freeze scaling at 0.25 micron (that’s 250 nm folks!) and build another active layer on top, doubling the circuit density. There were several technologies that we were developing to do this. For example, HRL had developed a TSV on which I was able to grow high quality silicon epitaxy. This was used to build a 3D version of a Pentium-based PC in a “cube” as demonstrator. We filed for a patent disclosure, but corporate declined to pursue.

Another development was wafer bonding and thinning. We developed a scanning plasma process that flattened the device wafer while thinning it. We had a 200mm demonstrator wafer bonded to a handle wafer that was 10nm thick with +/- 1nm variation. Obviously, 10nm is not very useful, but it meant that FDSOI was comparatively easy. Our bonding technique allowed conductors and dielectrics to be bonded simultaneously. Our university collaborator used this process to demonstrate the fabrication of a CMOS circuit by bonding NMOS and PMOS circuits. There were other technologies developed that I won’t go into for lack of reader attention. But these were only steps toward the ultimate goal, which was monolithic 3D integration.

Monolithic 3D integration was not to be the stacking of processed layers, but depositing and processing layers in a continuous process. Think in terms of transistors, along with other components, embedded in a matrix of dielectric with interconnects routed for optimal distances. This would require different equipment and different chemistries. One enabler we were working on was atomic layer deposition (ALD). Its sub-category, atomic layer epitaxy (ALE), was the process we believed would provide the embedded transistor structures. I submitted a proposal to develop ALE silicon, which was declined just prior to GM Hughes Electronics’ demise. With Hughes’ breakup, all of these technologies have fallen into disuse. I believe that it is time to resurrect some of these concepts and develop the necessary equipment and processes to revitalize Moore’s Law.

I have an initial product concept that I would like to develop that would be an enabler to control the new processes. I am interested in finding investors who would fund the startup. If you are one or know of one, please contact me.

https://www.linkedin.com/in/mjbarger


Qualcomm Shows Their First 5G Demo At Industry Analyst Day

by Patrick Moorhead on 01-22-2016 at 7:00 am

Complete 5G solutions aren’t something that you’ll be seeing in phones or networks any time soon, regardless of what you may see in the headlines or what companies are claiming. In fact, the first official release of the 5G standard isn’t likely to be finalized until 2018, at which point true 5G networks will very likely not roll out until 2020. However, due to the increased demand for added capacity and throughput, certain parties are getting impatient and want to pull the implementation of specific 5G technologies forward to as soon as 2018.

What we are more likely to see is that specific 5G technologies will get adopted sooner than others as the spectrum and technology allow. Part of the introduction of 5G includes the use of higher-frequency signals that can range anywhere from 3.5 GHz to 60+ GHz, much higher than current 4G LTE networks. As a result, companies in the wireless industry are moving up their timetables and preparing for 5G sooner due to the demand for increased capacity and faster throughput. Qualcomm recently held an industry analyst day to explain and demonstrate the company’s vision for 5G. Representing Moor Insights & Strategy were Anshel Sag, who covers mobile and wireless, and Mike Krell, who covers industrial IoT.

Part of Qualcomm’s 5G vision included how the company sees 5G evolving beyond “just another wireless technology” for smartphones and tablets, expanding into every facet of life. This included presentations on how 5G incorporates a more flexible network, making devices on the network more than just endpoints, as well as on their 5G unified air interface (UAI). Qualcomm’s 5G UAI combines a multitude of features including massive MIMO, reliable high capacity, high-frequency spectrum, and much more. To make 5G more real for the analysts in attendance, Qualcomm took them deep inside its research and development building, also known as the Qualcomm Research Center (QRC). Deep in the building’s basement, analysts were given a peek at some of Qualcomm’s own 5G technologies in a working demonstration.

To deliver the multi-gigabit speeds that users should experience with 5G, the use of higher frequencies is needed, as mentioned earlier. For Qualcomm’s demonstration, they chose to use 28 GHz, which sits between Qualcomm’s currently supported 4G LTE bands (below 3 GHz) and their 802.11ad Wi-Fi operating at 60 GHz. The industry has coined these multi-gigabit, high-frequency wireless technologies “mmWave” because their high frequencies give them wavelengths measured in millimeters.
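The “mmWave” label follows directly from wavelength = c/f; a quick back-of-the-envelope check of the frequencies mentioned above:

```python
# Free-space wavelength from carrier frequency: lambda = c / f.
C_M_PER_S = 299_792_458  # speed of light

def wavelength_mm(freq_ghz):
    """Free-space wavelength in millimeters for a carrier frequency in GHz."""
    return C_M_PER_S / (freq_ghz * 1e9) * 1000.0

print(round(wavelength_mm(28.0), 1))  # 10.7 -- Qualcomm's demo band
print(round(wavelength_mm(60.0), 1))  # 5.0  -- 802.11ad Wi-Fi
```

Both fall squarely in the millimeter range, which is also why the antenna elements can be so small.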

Because of the nature of these waves, Qualcomm utilizes extremely small antennas in a broad array in conjunction with directional beamforming to deliver a robust wireless signal regardless of the objects in the way. This is important because higher-frequency wireless signals tend to be easily obstructed or blocked when something dense gets between them and their target device. For example, Wi-Fi operating at 60 GHz, also known as 802.11ad, is best applied to in-room applications and is generally used with 32 tiny antennas. The technology is designed to constantly adapt and adjust to the best possible beam based on the current conditions, combining all of the antennas on the base station transmitting to the antennas on the client device. The purpose of this demo was not to show us how fast 5G can be, but rather how well Qualcomm’s 5G implementation can handle less-than-perfect conditions, which is what most users experience on a daily basis. Many of those issues come from the actual deployment of the technology and the challenges that 5G brings to the table, like non-line-of-sight connectivity and the consequences of how users use their devices on a 5G signal.

For this demo, Qualcomm had set up a base station and a client device in a hallway and had people walk between the two, as well as move the client device around, to show how their 5G technology adapted. To accomplish this, Qualcomm used a base station with 128 antenna elements and 16 controllable RF arrays. Commercial base stations could have significantly more antenna elements in the real world, depending on the area they are trying to cover and the size they need to be. The client device receiving the signal had four selectable antenna sub-arrays, each with its own four controllable RF channels. Having multiple antennas in multiple arrays allows the client device to dynamically switch to whichever antenna array delivers the best signal, while also beamforming to improve the signal and catch the best channel.

In their tests, they were able to achieve speeds of 400 Mbps fairly consistently using one antenna out of the possible 16, which is pretty good when you consider that 4G LTE right now can do 100 Mbps. The system is designed to find the right beam and deliver it to the right antenna in order to provide the best possible service using one beam at a time. The demonstration used only 16-QAM modulation, which means there is still room for improvement in throughput once higher-order modulation like 64-QAM or above can be achieved. Qualcomm’s engineers stated that they have simulated this technology in urban environments, with a line-of-sight range of 350 meters and non-line-of-sight coverage of 150 meters in Manhattan.
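The modulation headroom is easy to quantify: each QAM symbol carries log2(M) bits, so moving from 16-QAM to 64-QAM at an unchanged symbol and coding rate would scale throughput by 6/4. Treat this as an upper bound, since real links rarely hold everything else fixed.

```python
import math

def bits_per_symbol(qam_order):
    """A QAM constellation of M points carries log2(M) bits per symbol."""
    return math.log2(qam_order)

def scaled_throughput_mbps(measured_mbps, from_qam, to_qam):
    """Best-case throughput if only the modulation order changes."""
    return measured_mbps * bits_per_symbol(to_qam) / bits_per_symbol(from_qam)

# 400 Mbps measured at 16-QAM -> ceiling at 64-QAM
print(scaled_throughput_mbps(400.0, 16, 64))  # 600.0
```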

Qualcomm’s 5G demo at their industry analyst day shows that the company is well into development in 5G technologies, and that they are already working on solving many of the problems that come with using wireless frequencies above 3 GHz. Qualcomm’s acquisition of Wilocity mid last year, the first creators of 60 GHz Wi-Fi technology, may have put the company in a unique position ahead of the competition as they have dealt with many of the issues with mmWave technologies and are already in their second generation of 802.11ad Wi-Fi. Expertise doesn’t come overnight, and there is still going to be a lot more work to be done in the 5G space in order for it to become a commercialized technology, standardization included.

Qualcomm 3D mmWave Signal Visualization of 4 Antenna Array (Credit: Anshel Sag, Moor Insights & Strategy)





Microsoft Cloud-based Connected Car Service Insights from Patents

by Alex G. Lee on 01-21-2016 at 4:00 pm

Microsoft patent application US20150262486 and patents US9092984 and US9218740 illustrate a cloud computing service that assists drivers and improves driver safety. The cloud-based driver assistive system can warn drivers of impending collisions.

The cloud-based driver assistive system includes many grid cloud servers. Each grid cloud server is associated with a set of grids, in which each grid corresponds to a geographic area. For example, each grid cloud server divides space into square grids that carry approximately even load. To identify geographic regions of varying sizes efficiently and quickly determine which server is responsible for any location, the cloud service uses the standard Military Grid Reference System (MGRS). MGRS enables the cloud-based driver assistive system to uniquely identify varying-sized regions in a hierarchical manner.

Each grid cloud server receives information corresponding to the trajectories of the vehicles that are known to it, via wireless communications sent from mobile devices associated with the vehicles. The mobile devices can be drivers’ smartphones or devices built into vehicles (e.g., the vehicle navigation or entertainment system). Each mobile device periodically collects data from its GPS device and other sensors, including location, speed, course, acceleration, and yaw. For example, in a normal-to-heavy traffic situation, the mobile device uploads its location information every 100 ms; in lighter traffic, it uploads less frequently, e.g., every 200 ms.

Each grid cloud server determines from the trajectory-related information whether vehicles known to be in or approaching its associated grid are at risk of collision. If so, the grid cloud server warns drivers by transmitting an alert to the vehicles that are at risk. Collision risk can be assessed by whether a vehicle is within a threshold distance of another vehicle and/or whether the vehicle is in a lane departure state.
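As a toy sketch of the two ideas in this paragraph, grid-to-server assignment and a threshold-distance risk check, consider the following. The flat square grid, server count, and 30 m threshold are all hypothetical; the patents use hierarchical MGRS regions and richer trajectory data.

```python
from math import hypot

GRID_SIZE_M = 1000.0  # hypothetical square-grid size (the patents use MGRS regions)
NUM_SERVERS = 8       # hypothetical server pool

def grid_server(x_m, y_m):
    """Deterministically map a position to the server responsible for its grid square."""
    gx, gy = int(x_m // GRID_SIZE_M), int(y_m // GRID_SIZE_M)
    return (gx * 31 + gy) % NUM_SERVERS

def at_risk(pos_a, pos_b, threshold_m=30.0):
    """Flag two vehicles whose separation falls below the threshold distance."""
    return hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1]) < threshold_m

print(grid_server(1500.0, 2500.0))        # 1
print(at_risk((0.0, 0.0), (10.0, 10.0)))  # True: ~14 m apart
```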

Using mobile devices, relatively inexpensive sensors, and wireless connections to the cloud service, the cloud-based driver assistive system can be implemented inexpensively, enriching the driving experience without new roadside infrastructure for vehicle-to-infrastructure (V2I) communications or a Dedicated Short-Range Communications (DSRC) device embedded in every vehicle for inter-vehicle (V2V) communications.




How Makers are changing the world—and why I’m so excited about it

by Sander Arts on 01-21-2016 at 12:00 pm

I’ve spent my entire career in the tech space, being exposed to some of the world’s biggest and most innovative companies. But these days, the thing that excites me the most is how Makers are using technology to make the world a better place.

Consider this: The recent Hackaday Prize challenged Makers to build something that matters in the world. All of the prize-winning projects, and in fact 80 percent of the finalist designs, were powered by Atmel-based Arduino boards. Examples include ALS patient Patrick Joyce and his 2015 winning Hackaday team, who created an eye-controlled wheelchair system that offers life-changing mobility and independence for people without the use of their hands. Then there’s the team of graduate students who expanded the open source concept to bionics, giving amputees access to affordable, customizable, 3D-printed prosthetic hands. And the vineyard owner who took on the California drought with a sensor-driven water conservation system that saved 430,000 gallons of water in its first year.

It’s clear that now anyone can change the world using technology, and that presents tremendously exciting opportunities.

We help our customers make meaningful contributions with technologies that have literally turned product design into child’s play. This is part of an industry evolution that Atmel helped drive by making investments into integrated hardware, reference applications and software libraries, and high-quality, production-ready development tools. An example that’s familiar to many Makers is Arduino, which is powered by an Atmel microcontroller and is a launchpad for many Maker projects. Just search for ‘Arduino’ on Kickstarter or Indiegogo and you’ll find hundreds of projects. Some of the most-funded campaigns—from 3D printers and drones to household humanoid robots and smart home solutions—feature Atmel technology.

But as important as easy-to-use development platforms are, we believe that silicon vendors need to do more than just help Makers prototype. At Atmel, we recognized the need not only to make design easier, but also to make the transition from prototype to production easier. The Arduino environment is intuitive and easy to use for prototyping, but it has limitations that make it unsuitable for taking a project all the way to production. To solve that, Atmel provides free software development tools that let Makers import an Arduino project directly into our Studio debugging environment, which natively supports Arduino libraries. And we offer a full suite of microcontrollers at varying cost and performance levels, as well as components for connectivity, security, and touch interfaces, to take prototypes to final products. That kind of ecosystem compatibility just isn’t available anywhere else.

The next challenge for Makers is to bridge the chasm from makerspace to marketplace, and we’re there to help as well.

While Arduino simplifies design, crowdfunding sites such as Kickstarter and Indiegogo and outlets such as Hackaday and Instructables make it easier to bring those concepts directly to the investors who will fund their development and to the consumers who will pay for the finished product. But as Makerspace becomes more crowded, it’s becoming more challenging for individual Makers to get attention and differentiate their products from the slew of innovation on these sites.

That’s why we support Maker Faires around the world, and why we bring tech tours for training and supply chain assistance to local Makers. We also support Makers by using our extensive social networking influence to highlight their projects in the marketplace. That kind of support is available nowhere else.

Atmel is ranked as the number one social semiconductor company by Publitek, with a social media influence that is the highest in the industry. The Atmel blog has millions of views and shares—more than all other 39 semiconductor companies combined (Publitek research, 2015)—and our Facebook, LinkedIn, Google+, and YouTube pages add millions more impressions. We have 55,000+ Twitter followers today, and that number is growing by ~25 percent per quarter. These impressive numbers make a significant difference to our customers and their ability to reach their prospective markets—as you can read in their own words here.

It’s this sort of validation that makes me so excited. The ultimate power is with the Makers who are changing the world. And really, this is what technology has always been about. While Makers see a way they can make the world a better place, we have the amazing opportunity to provide technologies, connections, and a full range of support that lets us become a champion for people who are solving world problems. We say that we are ‘enabling unlimited possibilities,’ and we truly do that. Next week, at Emtech Asia, I will speak on this topic. I am very excited to represent this company and the Makers that truly change the world.

This week, this story also hit the media. Have a look if you’re interested: http://www.newelectronics.co.uk/electronics-blogs/from-makerspace-to-marketplace-unlimited-possibilities-to-change-the-world/113289/


Japan: silent but strong player in the semiconductor industry

by kunalpghosh on 01-21-2016 at 7:00 am

Japanese firms were the world leaders in semiconductor manufacturing during the 1980-1990 period [2]. During my research, I found that Japanese semiconductor firms were very strong in process technology, which gave them a competitive advantage over others. The situation is a bit different today, with fabless firms taking the lead, but it’s always good to know how things started.

Let me begin with a small example (DRAM) to illustrate the strong base and deep roots of Japan in the semiconductor industry of the 1980-90 period.

Concept:

Dynamic RAM (DRAM) is a memory architecture in which a transistor and a capacitor together store 1 bit of data. Due to its simple structure, high density can be easily achieved, which makes it cheaper, though slower, than Static RAM (which uses six transistors to store a bit).

Though it looks simple, DRAM is highly process-driven (and hence a perfect vehicle for learning about next-generation process technology). The learning was also applied to other types of chips, like DSPs, microcontrollers, etc. The reason is the capacitor itself, which, if it leaks, loses the stored information. This necessitates a periodic refresh (hence the name ‘Dynamic RAM’).
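To see why the refresh is needed, here is a toy leakage model. All numbers are illustrative, not from any real DRAM datasheet: a cell written to v_full decays exponentially through leakage, and refresh must happen before the voltage crosses the sense amplifier’s threshold.

```python
import math

def refresh_deadline_s(v_full, v_sense, leak_tau_s):
    """Time before a leaking cell voltage v_full * exp(-t/tau) falls to v_sense."""
    return leak_tau_s * math.log(v_full / v_sense)

# Illustrative: 1.2 V cell, 0.6 V sense threshold, 100 ms leakage time constant
print(round(refresh_deadline_s(1.2, 0.6, 0.1) * 1000.0, 1))  # 69.3 (ms)
```

Under these made-up numbers the cell must be refreshed roughly every 69 ms; real refresh intervals, such as the common 64 ms, are set the same way from worst-case leakage.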

To protect the capacitors from leaking, the transistors used for DRAM cells should be extremely low-leakage devices, which can be achieved using substrate bias.

Next, the transistor’s threshold voltage should be high in order to lower leakage (at the cost of slower switching). Attaining a high threshold in turn requires an increase in dopant concentration or a bulk bias, as is evident from the threshold voltage equation below:

Threshold Voltage Equation:

Vt = Vt0 + gamma * (sqrt(-2*phiF + Vsb) - sqrt(-2*phiF))

where
Vt0 = threshold voltage at Vsb = 0, a function of the manufacturing process
gamma = body effect coefficient, expressing the impact of changes in body bias Vsb (unit: V^0.5)
phiF = Fermi potential (dependent on doping concentration)
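Plugging illustrative numbers into the equation above shows the body effect at work. The values of Vt0, gamma, and phiF below are typical textbook magnitudes, not from any particular process:

```python
from math import sqrt

def threshold_voltage(vt0, gamma, phi_f, vsb):
    """Vt = Vt0 + gamma * (sqrt(-2*phiF + Vsb) - sqrt(-2*phiF))."""
    return vt0 + gamma * (sqrt(-2.0 * phi_f + vsb) - sqrt(-2.0 * phi_f))

# Illustrative: Vt0 = 0.45 V, gamma = 0.4 V^0.5, phiF = -0.3 V
print(threshold_voltage(0.45, 0.4, -0.3, 0.0))            # 0.45 (no body bias)
print(round(threshold_voltage(0.45, 0.4, -0.3, 1.0), 3))  # 0.646 (Vsb = 1 V raises Vt)
```

Raising the source-to-body bias Vsb raises Vt, which is exactly the substrate-bias knob described above for cutting DRAM cell leakage.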

Japanese DRAM market:
Japanese firms invested heavily in engineers to design their 1K and 16K DRAMs (reportedly 50 engineers for the 1K [1] and 100 engineers for the 16K [1]), which led to careful attention to fabrication process issues. For example, the sharp vertical walls created while etching were a concern at smaller technology nodes, as they resulted in inefficient etching of corners for material deposited in later stages. Leading Japanese engineers observed this shortcoming and adopted a more robust process with sloped walls rather than sharp vertical ones.

This resulted in higher yields for Japanese producers (up to 68% by 1986 [3]).

This shift in the DRAM market was reflected in the overall semiconductor industry. It’s evident from the chart below [4] that, by 1990, six Japanese firms among the top 10 accounted for 38% of the semiconductor market, taking $20.7B of a total semiconductor market of $54.3B.


From the table above, we can observe that the number of Japanese players later fell from six to one. This was the era when other top firms, fearing the market shift, focused on other segments (like microprocessors, mobile semiconductors, etc.) and became market leaders there.

However, Japan still ranks #1 in installed fab capacity and #2 as the largest semiconductor materials market. Japan supplies more than 50% of the semiconductor materials and over 30% of the semiconductor fabrication equipment purchased globally [5].

Notes:
1) “The Decline of the U.S. DRAM Industry: Manufacturing,” accessed at https://www.princeton.edu/~ota/disk2/1990/9007/900711.PDF
2) “Semiconductor sales leaders by year,” accessed at https://en.wikipedia.org/wiki/Semiconductor_sales_leaders_by_year#Ranking_for_year_1987
3) Peter D. Nunan, Sematech, personal communications, May 11, June 23, Aug. 4, and Oct. 10, 1989.
4) “Why Did Japan’s Semiconductor Industry Fall?,” accessed at http://marketrealist.com/2015/09/semiconductor-industry-japan-fall/
5) “Seven Facts about Japan Semiconductor Manufacturing Supply Chain,” accessed at http://www.semi.org/en/node/52146


IoT Markets: let’s get real about the numbers

by John Moor on 01-20-2016 at 4:00 pm

I am not sure about you but whenever I see those big market forecasts for IoT (and I’ve seen a lot)… my brain takes a short cut to “ok, I get it, it’s big”. And I believe it will be. But is “it’s big” that helpful? Or, to put it another way, does it hurt?

Robin Duke-Woolley of Beecham Research thinks it does and has recently been on a mission to try to bring some sanity back to the numbers. He says, “let’s get real about the numbers… $trillions of new dollars is ridiculous – it’s a habit to exaggerate the numbers to get the headlines… it is not realistic… so let’s be rational here”.

“It’s very large, but not monstrous”
Why is he saying this? Is he jockeying for position? Maybe, but more importantly he feels the big numbers distract us from doing the things we need to do to develop the market(s). Yes, he is interested in the growth and proliferation of IoT solutions, the platforms, the standards, the devices, yet he’s also worried that we’ll remain distracted and overlook those elements necessary for growth, especially appropriate security; he believes neglecting it will undermine uptake of the applications.

Having spent the last 15 years or so monitoring the M2M space, Robin is concerned that some security solution vendors have simply replaced “M2M” with “IoT”. And IoT is a whole lot more complex than M2M when it comes to security. In the M2M world, systems tend to operate within more clearly defined silos and therefore tend to be better understood. In the IoT world, data can jump silos, traverse interfaces, move across networks of variable provenance, and get mashed with other data sources coming from a completely different part of the system.

Add to that threat landscape the variations in economic model, threat level and security requirements, and it’s easy to see why the new world of IoT (in spaces such as cities, healthcare, energy and transport) is tangibly different from anything we’ve encountered before. I’ll add something else here: these systems will likely be expected to operate for a decade or more, so we have an extra time dimension – and in environments which vary greatly.

All these elements create a business risk profile, and with risk comes the potential that markets will slow unless the benefits are significantly greater. So Robin sees the need to develop the markets from a more realistic view of the opportunity: he feels that security should be seen as an enabler of markets rather than a cost of deployment. It appears we need to change the way security is accounted for – perhaps over the lifetime of a service rather than as a unit cost of a physical solution.

Robin spoke at the IoT Security Foundation conference at the Royal Society recently and made a fuller case for his conclusions than I have here. Fortunately you can see his talk on YouTube – if you’re interested in IoT markets I would recommend you prep a coffee and invest 30 minutes to hear what he has to say, especially as it goes against the herd.


When Good Standards Get Lost – the UVM Register Model

by Bernard Murphy on 01-20-2016 at 12:00 pm

Some time ago I wrote a DeepChip viewpoint on DVCON 2014 in which I praised a Mentor paper “Of Camels and Committees”. The authors argued that while the UVM standards committee had done a great job in the early releases, the 1.2 release was overloaded with nice-to-have features of questionable value for a standard, particularly since this came at a cost of 12,000 new lines of presumably less than battle-hardened code. They argued that standards should stick to the minimum refinements required to enable users and competitors to progress efficiently, and no more. It’s difficult to argue with that position. Even if you are an open-source fan, a standards committee is the wrong place to evolve a code-base of this size. Either changes move too slowly to provide real-time fixes for live use, or too quickly to support a stable standard.

Rich Edelman and Bhushan Safi of Mentor are pushing the case further, this time arguing for simplifications in the UVM register model; this subset of the standard consumes 22,000 lines of code and a quarter of the User Guide. A natural rebuttal would be that the code size merely reflects the importance of register verification. Software developers have to trust that the hardware can be controlled and observed as described in the register documentation. While it’s true that this area of verification is very important, it’s less obvious that its importance necessarily implies a need for such a large body of code and features in a standard.

Registers can be complex beasts. There are the simple cases – just address, read, write and reset – but all sorts of complications can be added: read-only, write once, clear-on-read, different reset values, different fields within a register which differ in these characteristics, shadow registers, and many more variants. Trying to abstract all of this in a complete but still easy-to-use way is a Herculean (maybe impossible) task. Another rebuttal might be to ask how else it could be done. An object-oriented (OO) approach is surely the natural way to abstract and hide complexity? But as I remember, standards should prescribe interfaces, not implementation, which is normally left to market innovation / competition. Something seems amiss.
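To make that complexity concrete, here is a minimal SystemVerilog sketch of a single register mixing read-only, write-once and clear-on-read fields. The register and field names are invented for illustration; this is not drawn from any particular design:

```systemverilog
// Hypothetical 32-bit register mixing access behaviors; all names invented.
typedef struct packed {
  logic [15:0] lock_key;  // write-once: ignores all writes after the first
  logic [7:0]  irq;       // clear-on-read: returns its value, then clears
  logic [7:0]  status;    // read-only: software writes have no effect
} ctrl_reg_t;

module ctrl_reg (
  input  logic        clk, rst_n, wr_en, rd_en,
  input  logic [31:0] wdata,
  input  logic [7:0]  hw_status,   // hardware drives the read-only field
  output logic [31:0] rdata
);
  ctrl_reg_t r;
  logic locked;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      r      <= '0;
      locked <= 1'b0;
    end else begin
      if (wr_en && !locked) begin
        r.lock_key <= wdata[31:16];   // first write wins, then lock
        locked     <= 1'b1;
      end
      if (rd_en) r.irq <= '0;         // read has a side effect
      r.status <= hw_status;          // software cannot write this field
    end
  end
  assign rdata = r;                   // (interrupt-setting logic omitted)
endmodule
```

A verification model has to reproduce every one of these behaviors faithfully, which is why a fully general abstraction balloons so quickly.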

Standards philosophy aside, what’s really important is how this is working out in deployment. Mentor’s experience with a significant percentage of users is that the latest rev of the Register package has missed the easy-to-use objective by a fairly wide margin, at least among its target user-base. Rich tells me a majority of their users don’t understand the class structure or behavior; they do OK with the basic registers, but with more complex cases they struggle (and generally fail) to find a way to model them correctly. The outcome is that they use the package to do the basic address and read/write tests, but write their own tests for more complex behavior, circumventing the intended use of UVM register and field classes, which makes their testbenches run big and slow. And that makes them unhappy, especially if an emulator is idling for long periods, waiting for the testbench to catch up. At the other end of the scale, some verification teams enthusiastically adopt the OO style but lack the required skills, and end up spending more of their verification cycles debugging the testbench than the design.

Rich and Bhushan make a case for a simpler approach (the genie may be out of the bottle now, but they can at least make their case). They argue for a simpler register model based on structs rather than (in their view) overkilling the need with classes. Sure, classes are more flexible, extensible, inheritable and all that good stuff, but unless you are comfortable in OO approaches and skilled in UVM register classes (which it seems most verification engineers are not), all that capability becomes a steep learning curve rather than a simplification. The Mentor suggestion is a simpler (and less intimidating) programming model, easily extensible to handle those complex quirky registers. And it’s more time- and space-efficient, since a packed struct is almost always going to be smaller than an equivalent class (especially for bitfields).
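As a rough illustration of the flavor – this is a sketch of the struct-based style, not the Mentor package’s actual API, and the field names are invented – field access on a packed struct is direct, and the struct itself is the bus-level value:

```systemverilog
// Sketch of a struct-based register model; names are hypothetical.
typedef struct packed {
  logic [15:0] threshold;  // bits [31:16]
  logic [7:0]  mode;       // bits [15:8]
  logic [7:0]  enable;     // bits [7:0]
} cfg_reg_t;

module tb;
  cfg_reg_t cfg;
  initial begin
    cfg = 32'h00AB_1201;               // whole-register write: one assignment
    $display("mode = %h", cfg.mode);   // named field access, no get()/set()
    cfg.threshold = 16'hFFFF;          // field write, no update()/mirror step
    $display("packed = %h", cfg);      // the packed struct IS the bus value
  end
endmodule
```

Compare that with the class-based route, where the same operations go through set()/update() calls and a mirrored-value model before anything touches the bus.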


Rich admits their approach is not ideal either. It puts more of the programming burden on the verification team, but that seems to be happening anyway with workarounds in place of existing UVM Register capabilities. Perhaps the ideal approach would be C-based, or perhaps it requires a move to some more standardized approach to register management. I’m sure users will know it when they see it; it’s not apparent that they see it in the UVM Register model as it currently stands. Perhaps that’s their problem – they have to adapt or find a different job. But I’m not sure that’s what they or their managers expected from the standard. Or perhaps, in good standards as in good design, less is usually more.

The Mentor paper on “Getting Beyond UVM Registers” is HERE.

More articles by Bernard…