
Safety = Security?

by John Moor on 07-17-2016 at 4:00 pm

Have you ever wondered what the difference is between safety and security? I would be surprised if you had unless you work in these areas as we tend to use the words interchangeably. Languages such as German, Norwegian and Spanish have one word to mean both – or so I am told by my associates who claim those languages as their mother tongue. Ever since the similarity caught my attention a couple of years ago, I have come across many native English speakers who regard the two as equivalent as well.

Yet they’re not quite the same, are they?

So what’s the difference?

If you take a moment to think about it, and assuming you have not done so before, it’s likely you’ll struggle to find a solid separation point you’re happy with. I confess I had to enlist the help of the interweb to catalyse my thoughts, and having done so, I thought I’d share them with you as I still frequently encounter the muddling of the two terms.

I’ll start gently; ask yourself a couple of questions that relate to a situation you are likely to be familiar with already:

When you leave your house in the morning, would you regard forgetting to lock your front door and leaving the iron on as presenting identical risks? Or, put a different way: do the Police and the Fire Brigade do the same job?

I hope you thought “no” to both. In the case of the Police and leaving the front door unlocked, we’d normally relate those to security and protection – i.e. our possessions being stolen or personal harm inflicted from another person. As for the Fire Brigade and the iron, we’re typically thinking of our safety – in this case from accidental fire. You could also think of each example being a case of “incident” (people related – security) or an “accident” (environment – safety).

Hopefully by now you’ll start to see the difference and resist the urge to argue they’re both the same.

It goes much deeper, of course. Security matters tend to deal with malicious intent – threats derive from people and their motivations, with some form of benefit coming from mischief, crime, terrorism, geo-politics or hacktivism. Security threats tend to be planned and can evolve over time, meaning we have to react, adapt and defend – sometimes described as an arms race. Safety hazards usually derive from environmental situations where accidents can happen, and are less likely to be caused deliberately. Accidents from lightning strikes or flooding do not tend to be malicious – i.e. they seldom have a planned outcome that directly advantages someone (they are less targeted and discriminating by nature).

If you’re anything like me you’ll probably be looking for examples which break the generality of the argument. I have no doubt you will succeed as I can think of a few right now. However, that is not the point. The point is that safety and security, whilst similar, are not exactly the same thing. They differ in important ways – most significantly source and intent. They are closely related as both present us with risk (danger or harm), yet we protect ourselves in different ways from the security “threats” and those safety “hazards”.

And this distinction matters because?
Almost every industry I can think of is being revolutionised by new technology – our world is changing. A good example, from an industry I have been watching these past several years, is automotive. The automotive industry has focused intensely on making driving safer and has introduced many of the safety features we take for granted today – airbags, ABS, side-impact bars and so on. And we have to give them credit – they have been very successful at making cars safer. Today the traditional OEMs are pressing their interests in security matters as new technology (especially connectivity) and new entrants interested in disruption and innovation drive competition. The industry is heading towards electrification of the drivetrain and grappling with the transition to autonomous vehicles. Security issues have caught some of the traditional OEMs by surprise – last year’s darling story was the Jeep hack (other scare stories are available). Safety issues – e.g. battery fires – will be on the minds of traditional and new auto suppliers alike.

Automotive is just one example of how our future world is transforming because of technology. Safety and security will feature prominently as we build and connect more and more things using Internet technologies. It is vital that we build these things correctly from the start – and, even more, that we operate them so problems are resolved once discovered, keeping them both safe and secure, ideally before we encounter a known threat or a possible hazard. Product and service companies must assure their customers that safety and security are designed in from the outset so we can trust our new environment.

I am almost done. My aim has been to explain some of the differences between safety and security, to help us think about how we manage both types of risk in a technology-transforming world. Both are important.

To conclude, allow me to rewrite the original equation and add a strap line:

Safety ≈ Security :: Both = important.


Why Is The Modem Still The Unsung Hero Of Mobility?

by Patrick Moorhead on 07-17-2016 at 12:00 pm

The unsung hero of the mobile world is not the CPU, it isn’t the GPU, it isn’t even the memory. All of those components have grown extremely quickly in recent years in terms of processing capability, capacity and the ability to shrink thanks to improvements in process technology. The CPU and the GPU seem to get all the accolades, too. The real unsung hero of the mobile world is the wireless modem, for a multitude of reasons, but primarily because it has enabled the explosive growth of all kinds of data and services. The modem is the wireless chip that connects your phone to cell networks and the carriers that operate them. Without rapid innovation in wireless modems, we wouldn’t be having nearly the fun we do with Snapchat or the immediacy of business communications. It’s time to give the wireless modem the credit it deserves.


(Photo credit: Patrick Moorhead)

Content is driving explosive mobile data traffic
I remember what the experience was like 5 years ago trying to video chat with anyone on mobile, yet today I find myself and others reliably streaming video in HD almost anywhere. The explosion of data usage has been absolutely immense, for both upload and download speeds. In fact, applications like Facebook’s Instagram, Snapchat and Google’s YouTube are driving the demand for more data with lots of user generated content in high resolutions. Instagram has 80 million photos uploaded per day and more than 400 hours of YouTube video are added every minute.

In fact, YouTube’s latest statistics point to the average mobile viewing session being more than 40 minutes. Keeping in mind that more than half of YouTube’s views come from mobile devices and there are 4 billion views per day on YouTube, that comes out to more than 2 billion views per day on mobile devices. All of this is amplified by the fact that nearly every flagship smartphone and tablet today supports 4K video capture, which further increases the demand for data both up and downstream.

The growth of internet traffic driven by mobile devices and the content they generate shows no signs of stopping. Even though the smartphone market’s growth may have slowed, the pace at which smartphones and other connected devices increase data usage is expected to keep climbing. Cisco Systems published a whitepaper in February 2016 that looks at global mobile data traffic out to 2020, and the company projects monthly global mobile data traffic to exceed 30 exabytes. This passes my “smell test” when I dig into their research methodology and assumptions, and their prior forecasts have been relatively accurate. In 2015 alone, average smartphone data usage grew 43% to 929 MB a month, compared with 648 MB a month in 2014.
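As a rough sanity check on that trend, here is a back-of-the-envelope compound-growth sketch using only the per-smartphone figures quoted above (the 30-exabyte projection also depends on subscriber growth and other factors, which this deliberately ignores):

```python
# Compound-growth sketch: if the 43% year-over-year growth in average
# smartphone data usage (648 MB in 2014 -> 929 MB in 2015) were to hold,
# per-device usage would reach roughly 5.5 GB/month by 2020.
usage_mb = 929.0        # average monthly usage per smartphone, 2015
growth = 929 / 648      # observed 2014 -> 2015 growth, ~1.43
for year in range(2016, 2021):
    usage_mb *= growth
print(round(usage_mb))  # roughly 5,600 MB/month if the trend held
```

That per-device figure, multiplied out across billions of smartphones, is the kind of arithmetic that makes the exabyte-scale forecasts plausible.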

This growth trend was recently corroborated by the CTIA, which stated that Americans’ data usage more than doubled in 2015. Smartphones represented only 43% of total global handsets in use in 2015, but accounted for 97% of global handset traffic. And with the rollout of 4G in places like India this year and next, those numbers are only going to grow. We believe that smartphones will cross 80% of mobile data traffic by 2020. The outlook is also promising for services like Google’s YouTube, which already sees billions of views per day, as we expect 80% of the world’s data traffic to be video by 2020.

Modems have played a crucial part in enabling the growth of mobile content consumption as well as its creation. Without the advances in modem technology, people wouldn’t be able to upload or download the large files usually associated with HD video. Nor would they be able to stream video as reliably as they can today – something many of the early video and streaming services struggled with when modem technology and infrastructure weren’t yet good enough.

Modems are almost everywhere and expanding to everything
Today, most devices that use modems are smartphones, smartwatches, tablets, some computers, mobile hotspots, point-of-sale machines and cars. Going forward, we can expect more devices to become ‘connected’ with modems and existing categories to grow in modem usage. There is a revolution of sorts happening in the automotive industry right now, with more cars every year coming with Wi-Fi hotspot capabilities and 4G LTE connectivity. Smartwatches are also slowly becoming more connected, and with the recently announced updates to Android Wear we can expect to see even more LTE-connected watches coming down the pipe. There are also future IoT uses for modems in a multitude of infrastructure, appliance and medical applications that are only beginning to take off. Many of these devices are going to require modems if they aren’t fixed to one location, or are fixed but cannot rely on a Wi-Fi connection alone for reliability’s sake.

The standard metrics of modem speed
Right now, the primary metric of modem performance is speed, but a modem’s performance is not always a function of its maximum potential speed alone. Most modems are classified under a 3GPP UE (user equipment) Category, commonly abbreviated ‘Cat’, with LTE categories ranging from Cat 0, with download and upload speeds of 1 Mbps, to Cat 16, with a maximum download speed of 1 Gbps (1,000 Mbps) and a maximum upload speed of 150 Mbps. These speeds are possible thanks to the increased ability of modems to combine different wireless signals into a single connection. Using the latest LTE standard, you need at least three aggregated carriers and a four-antenna configuration, along with other technologies, to achieve speeds of up to 1 Gbps. At such speeds, waiting for downloads will be a thing of the past, pushing things like 4K and 8K video and ‘instant’ services further still.
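A small sketch of how those category figures translate into what a device can do. The Cat 0 and Cat 16 numbers come from the article; the Cat 6 and Cat 12 uplink values are common marketed figures rather than taken from the 3GPP tables, so treat them as illustrative:

```python
# Peak rates (downlink, uplink) in Mbps for a few LTE UE categories.
# Cat 0 and Cat 16 match the figures quoted above; Cat 6/Cat 12 uplink
# numbers are typical marketed values and may vary by release.
LTE_CATEGORIES = {
    "Cat 0":  (1, 1),
    "Cat 6":  (300, 50),
    "Cat 12": (600, 100),
    "Cat 16": (1000, 150),
}

def supports(category, down_mbps, up_mbps=0):
    # True if the category's peak rates cover the workload -- remember
    # these are theoretical maxima, rarely seen on a live network.
    peak_down, peak_up = LTE_CATEGORIES[category]
    return down_mbps <= peak_down and up_mbps <= peak_up

print(supports("Cat 0", 25))   # False: a ~25 Mbps 4K stream exceeds Cat 0
print(supports("Cat 6", 25))   # True: comfortably within Cat 6's peak
```

The gap between a category’s peak and what a user actually measures is exactly why real-world speed tests, discussed next, matter so much.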

Modems today are generally rated by their category, be it Cat 6, Cat 12 or even Cat 16. However, these category ratings do not guarantee that a device will ever reach such speeds, because modem features like carrier aggregation (CA) and MIMO, the antenna configuration, network coverage, network bandwidth and a multitude of other factors do not behave the same across all modems. As such, most users find themselves going to places like Speedtest.net to find out their true internet speeds. Ookla, the parent company that runs Speedtest.net, publishes annual rankings of carriers and the speeds they deliver to their customers; in 2015, T-Mobile’s customers saw the fastest speeds in the US, according to Ookla. Places like Australia, however, had much faster speeds thanks to faster LTE networks and support for higher UE Category devices. Note that these tests measure not only the speed of the LTE connection but also the capacity of the carrier’s connection to the tower, which is another factor in a user’s experience.

Carrier aggregation is key
One of the most common features in an LTE modem today is carrier aggregation, yet many modems cannot aggregate more than two carriers, which caps them at around 300 Mbps (2x CA) – now the standard across much of the world. The future will require 3x CA or even 4x CA to reach speeds like 450 Mbps, 600 Mbps and even 1 Gbps. Carrier aggregation also gives network operators much-needed capacity and the ability to handle more connections and heavier loads, improving the overall experience. It is not, however, the only feature that increases performance: things like modulation and other features also affect how a modem performs on a real network, but that can be saved for another time.
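The arithmetic behind those tiers is simple: each aggregated carrier adds another slice of peak bandwidth, so peak rate scales with the carrier count. A minimal sketch (ignoring MIMO order and modulation, which the paragraph above defers):

```python
# Carrier-aggregation sketch: peak downlink scales with the number of
# aggregated component carriers. 150 Mbps per 20 MHz carrier assumes
# 2x2 MIMO and 64QAM; higher MIMO orders and 256QAM push further,
# which is how 4-carrier designs reach toward 1 Gbps.
PER_CARRIER_MBPS = 150

def peak_downlink(num_carriers):
    return num_carriers * PER_CARRIER_MBPS

for n in (2, 3, 4):
    print(f"{n}x CA -> up to {peak_downlink(n)} Mbps")
```

This reproduces the 300/450/600 Mbps tiers quoted in the text; the jump from 600 Mbps to 1 Gbps comes from those extra MIMO and modulation gains rather than from carrier count alone.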

Wrapping up
All of these capabilities inside the modem ultimately enable other components inside the smartphone to shine. Modems now enable the use of higher-resolution displays and cameras through connected apps, along with larger file sizes and the services that utilize them. We take applications like Snapchat, Facebook’s Instagram, Twitter’s Periscope, Facebook, Apple FaceTime and Microsoft Skype for granted; without modems, none of these applications would have had the opportunity to thrive. The applications that rely on the most constant mobile connectivity, like Snapchat, are also very popular with millennials, and I expect that trend to continue as AR and VR start to take off and push the boundaries of video bandwidth and latency. The modem is critical both in keeping up with the rapid growth in internet traffic driven by new applications and in enabling new applications that take advantage of its improved capabilities.

More from Moor Insights and Strategy


Safety Verification for Software

by Bernard Murphy on 07-17-2016 at 7:00 am

When automakers think about the safety of an embedded system in a car, it is good to know the hardware has been comprehensively tested against safety-specific requirements – but that isn’t much help if the software component of the system does not come with similarly robust guarantees.

The challenge is that the software state space is invariably massively bigger than the hardware state space and much more dependent on external components – standard libraries and open source software for example. So traditional dynamic testing, while necessary, is even further from being sufficient than it has become for hardware. This problem highlights the importance of finding solutions which can provide more complete use-case-independent coverage, especially for safety-critical applications.

One partial solution is static verification. Linting has been the main workhorse here, and tools in this area have evolved considerably beyond the simple checks familiar from compiler warnings. But they still have significant limitations. Analysis is primarily structural (on a control/data-flow graph), complemented by peephole views of behavior. Where correct understanding requires a broader behavioral view, these tools tend to be noisy, producing lots of false violations. Equally, they are likely to miss hazards apparent only in a more global view of the code.
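To make that limitation concrete, here is a hypothetical two-function example (the names are invented for illustration): each function looks clean in isolation, and only a whole-program view connects the empty-input case to the division.

```python
# Each function passes a local, structural check; the hazard only
# appears when you follow values across the call graph.

def scale(value, divisor):
    # Locally fine: nothing here tells a peephole checker that
    # divisor can ever be zero.
    return value / divisor

def mean(samples):
    # The hazard: an empty list makes len(samples) == 0, so scale()
    # divides by zero. Spotting this needs interprocedural analysis.
    return scale(sum(samples), len(samples))

print(mean([2, 4, 6]))  # 4.0
# mean([]) would raise ZeroDivisionError
```

A linter warning on every division would drown the real problem in false violations; staying silent misses it entirely. That trade-off is exactly the noise-versus-coverage tension described above.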

Formal analysis for software has long been an active topic of research but has had a hard time making it into the production world. The challenges are similar to those found in formal for hardware, amplified by a massively increased state space. Successful demonstrations tend to be built around highly constrained tests, raising concerns about the correctness of the constraints and generally limiting confidence in coverage for real-world problems. As one example, the Synopsys Coverity product initially touted formal methods but has since de-emphasized that capability.

Another direction is symbolic testing, where you test with symbolic values for variables rather than actual values. This comes in two forms: dynamic symbolic execution (DSE) and static symbolic execution (SSE). In either case you can test safety assertions, much as you might in formal analysis. DSE in practice combines symbolic values for some variables with actual values for others, and propagates formulae which can be tested against assertions. A challenge with DSE is the explosion in the number of possible paths through everyday code. SSE, the alternative, merges formulae through multiple paths simultaneously; its problem is that the formulae become very complex and difficult to solve.
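A toy sketch may help fix the idea. This is not MergePoint or any real engine: it brute-forces concrete inputs where a genuine DSE engine would hand symbolic path conditions to an SMT solver, but the shape of the search is the same.

```python
# Toy "symbolic-ish" execution: enumerate the paths of a tiny program
# and look for inputs that violate a safety assertion (divisor != 0).

def program(x, y):
    if x > 10:
        d = y - 5      # path A
    else:
        d = y + 1      # path B
    return d

def explore(domain):
    # Stand-in for a solver: concretely try inputs, tagging each with
    # the path it exercised, and record assertion violations.
    paths, violations = set(), []
    for x in domain:
        for y in domain:
            path = "A" if x > 10 else "B"
            paths.add(path)
            if program(x, y) == 0:
                violations.append((x, y, path))
    return paths, violations

paths, violations = explore(range(20))
print(sorted(paths))   # both paths covered
print(violations[0])   # (11, 5, 'A'): y == 5 on path A makes d == 0
```

With two branches this is trivial; the point is that each additional branch multiplies the number of paths, which is the explosion DSE runs into on everyday code.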

DSE and SSE have been effective in finding some important bugs and security holes – in earlier versions of Windows, for example (Microsoft is particularly active in this area). But neither alone scales well to most realistic programs. However, a 2014 publication introduced a method called Veritesting, which combines DSE and SSE in a (non-commercial) tool called MergePoint. MergePoint aims to minimize the difficulty of solving formulae by alternating between DSE and SSE, and by allowing explicit values through DSE where appropriate to limit the state-space search. Alternation makes analysis of realistically sized code much more feasible.
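The trade-off Veritesting navigates can be seen in miniature: n independent branches give pure DSE 2^n paths to walk, while an SSE-style merge folds them into a single, ever deeper formula. A sketch, with the “formula” written as nested if-then-else strings purely for illustration:

```python
from itertools import product

def dse_paths(n):
    # Pure DSE: every taken/not-taken combination is a separate path.
    return list(product([True, False], repeat=n))

def sse_formula(n):
    # SSE-style merge: one expression covers all paths at once, at the
    # cost of a deeply nested term a solver must then digest.
    expr = "x"
    for i in range(n):
        expr = f"ite(c{i}, {expr} + 1, {expr})"
    return expr

print(len(dse_paths(10)))   # 1024 explicit paths for just 10 branches
print(sse_formula(2))       # one merged, nested formula
```

Neither extreme scales on its own, which is why alternating between the two styles, as MergePoint does, pays off.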

An important advantage of this tool is that it can be applied directly to binaries without needing access to source (even for pre-processing). The authors have demonstrated MergePoint’s capabilities by applying it to Linux and Debian distributions with thousands of binaries. Their original work found over 10k bugs, of which ~230 have already been patched in those distributions (in other words, the bugs were agreed to be real and in need of a fix).

While I’m sure this approach will not be the last word in safety/security-checking software, it does look like an important step forward. I have no idea if or when this might be commercialized but presumably you can contact the authors to learn more about how you can access their code. You can read more about Veritesting HERE.

More articles by Bernard…


IC and System Design for Mobile and Wearable Devices!

by Daniel Nenni on 07-16-2016 at 7:00 am

The Linley Mobile and Wearable Conference is coming up so let’s take a look at what is in store for us. Bernard Murphy, Tom Simon, and I will be covering the event live for SemiWiki and we will also be doing a book giveaway/signing for our new “Prototypical” book (compliments of S2C Inc.) during the networking event on Tuesday evening. If you are not able to attend you can get a free PDF version of the book HERE.

If you have not attended a Linley event before, you should. It is FREE to qualified attendees, there is FREE food and drink throughout the event, and you will meet a very HIGH caliber of semiconductor professionals (me, for example).

The event starts out with a keynote from Linley Gwennap himself followed by:

  • Peter Carson, Senior Director, Marketing, Qualcomm
  • Emmanuel Gresset, Director of Business Development, CEVA
  • Ali Khayrallah, Engineering Director, Ericsson
  • Barry Seidner, VP Americas, INSIDE Secure
  • Asaf Ashkenazi, Cryptography Research, Rambus
  • Ron Lowman, Strategic Marketing Manager, Synopsys
  • Fawad Khan, Senior Manager, MediaTek

Day two starts with a keynote from Jim Morrison, VP Competitive Technical Intelligence, Chipworks, TechInsights followed by:

  • David Heine, Senior Design Engineering Architect, Cadence
  • Joe Rowlands, Chief Architect, NetSpeed Systems
  • JP Loison, Corporate Application Engineer, Arteris
  • Pankaj Kedia, Sr. Director, Qualcomm
  • Cliff Lin, Senior Director, MediaTek
  • Scott Runner, VP, IoT & Automotive, Aricent
  • Hezi Saar, Staff Product Marketing Manager, Synopsys
  • Peter Hartwell, Senior Director of Advanced Technology, InvenSense

You can see the full Linley agenda HERE. Stop on by and get a book. It would be a pleasure to meet you.

Linley Mobile & Wearables Conference 2016
Focusing on IC and system design for mobile and wearable devices

Innovation in mobile chip and system design is accelerating. Heterogeneous processors and advances in LTE are driving changes in smartphone design. Wearable devices create new, even smaller form factors and add new functions for communications, entertainment, fitness, and health. To deliver these new capabilities, mobile processors must integrate a plethora of IP cores, including big and little CPUs, DSPs, GPUs, video, security, and NoCs to connect them all. Mobile systems also require complex RF design, advanced power management, and an array of sensors.

The Linley Mobile & Wearables Conference will be held on July 26 – 27, 2016 at the Hyatt Regency Hotel, Santa Clara, CA. This two-day, single-track conference features technical presentations addressing design issues for smartphones, tablets, smartwatches and other wearable devices. The Linley Group will also present an overview of the market, technologies, equipment-design, and silicon trends for designers of mobile devices.

This event is the only one of its kind focused on next-generation mobile platform design.
This conference is intended for designers of mobile chips, mobile devices, and mobile software as well as service providers, press, and the financial community. Attendance is free to qualified registrants.

The conference includes:
– Sponsor exhibits and demos
– Evening networking reception on the first day
– Raffles by The Linley Group
– Hosted speaker tables at lunch (meet the presenters)
– Free parking
– Continental breakfasts and gourmet lunches
– Free Wi-Fi

To keep informed about this and other upcoming events, subscribe to Linley on Mobile, our free newsletter.

Location: Hyatt Regency Hotel, Santa Clara, CA

Main Meeting Room: Santa Clara Ballroom, 2nd level
Lunch: Lafayette & San Tomas, 2nd level
Reception: Lafayette & San Tomas, 2nd level, July 26 only


Registration for qualified attendees is FREE if on-line registration forms are received by 5 PM PT on July 21, 2016. Registration for non-qualified attendees is $795 if received by 5 PM PT on July 21, 2016.


On-site registration opens at 8 AM, July 26 & 27.

On-site, the cost is $195 for qualified attendees and $995 for non-qualified attendees.


Webinar alert – Hybrid prototyping for ARMv8

by Don Dingee on 07-15-2016 at 4:00 pm

All the talk about ARM server SoCs has been focused on who will come up with the breakthrough chip design. Watching trends like OPNFV develop suggests the big breakthrough is more likely to come on the ARMv8 software side. How do you quickly validate ARMv8 software when you don’t have the exact ARMv8 SoC target? Continue reading “Webinar alert – Hybrid prototyping for ARMv8”


Autonomous Driving @ the Crossroads

by Roger C. Lanctot on 07-15-2016 at 12:00 pm

One of the most terrifying moments one can experience as a driver or passenger in a Tesla Model S driving with Autopilot turned on is the realization that the system cannot recognize intersections or traffic lights. It seems like such a basic and obvious requirement for automated driving, but the Model S can’t hack it – which is one of many reasons the system is not considered an autonomous driving solution; perhaps ‘advanced cruise control’ is more accurate.

But the reality is even more stunning. The U.S. Department of Transportation does not have a complete inventory of U.S. intersections and traffic lights. In fact, the U.S. DOT won’t have such a complete inventory for years to come.

Talk to anyone in the traffic business and you will be told that the system for estimating how many intersections there are boils down to a rule of thumb: one traffic light for every 1,000 in population. This calculation yields somewhere between 300,000 and 330,000 intersections in the entire U.S. But it’s only an estimate. No one knows the actual total.
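A quick sanity check of that rule of thumb, assuming a 2015 US population of roughly 320 million (and noting that the ratio which fits the quoted range is one signal per thousand people, not per million):

```python
# Back-of-the-envelope: one traffic signal per 1,000 residents.
us_population = 320_000_000       # approximate 2015 US population
signals = us_population // 1000   # rule of thumb: 1 per 1,000 people
print(signals)                    # 320,000 -- inside the quoted range
```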

It would be logical, then, to consider how many of these intersections are connected to the Internet and can thereby be managed or changed dynamically to respond to evolving traffic conditions. The best guesstimate I have seen is 115,000 in the U.S. – but this, too, is only a guess.

Companies like Global Mobile Alert and ConnectedSignals among others are working on solving this riddle. Global Mobile Alert is building a system to alert drivers, distracted by their smartphones, to the proximity of traffic lights. ConnectedSignals’ Enlighten app is designed to inform the driver of the signal phase timing of upcoming traffic lights.

For now, the lack of a comprehensive intersection inventory represents a massive barrier to the advance of autonomous driving system development. If your car can’t grok the presence of an intersection, it’s going to have a tough time determining the phase of the light.

The Google car’s purely sensor-based system and its slow speed of operation are designed to recognize and cope with intersections. But the Google car is only tuned to operate in a limited and thoroughly mapped area.

The story is even worse. After all, we know that one third of all traffic fatalities occur at intersections. It would seem that intersections are a logical place to start working on whittling down the recently increasing rate of driving fatalities in the U.S. – but the relevant regulatory authority doesn’t even possess the relevant data.

So we can rest assured that autonomous driving is years away from becoming reality while we lose sleep knowing that our best minds don’t even know where all the intersections are located – let alone which ones are or are not signalized or connected. Is this any way to run a transportation system? We clearly have a lot of work ahead of us. Look out!

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


Microsoft Raises The Mixed Reality Bar Yet Again With New Holographic Platform

by Patrick Moorhead on 07-15-2016 at 7:00 am

There have been many hardware and software developments over the past 18 months in the worlds of AR and VR. However, most of them have existed solely within the separate realms of AR or VR. Many within the industry, including me, expect VR and AR eventually to merge into some hybrid reality that fluidly moves between the two. Some companies call this concept “mixed” or “blended” reality, and Microsoft appears to be leading the charge. Microsoft created a new mixed reality platform called “Windows Holographic”, which it announced in Taipei last week at Computex. I view this as yet another leadership move which puts “mixed reality” distance between Microsoft and its primary competition in this area, Apple and Google.

AR and VR are coming together
Mixed reality just makes sense when you think about it. The best VR experience with AR capabilities will enable you to have a mixed experience at home or work without bumping into things. Hypothetically, in gaming you could map your entire house as a game level and play against real or imagined people and monsters. The best AR experience with VR capabilities will have extremely complex 3D objects to work with, not just the transparent, lightweight ones in AR today. VR and AR are coming together; it’s just a matter of time. Google realizes this too. While Google hasn’t officially connected AR and VR, it does show off Daydream scenarios with Tango to create mixed reality experiences.

Windows Holographic enables VR and AR users to collaborate together
Windows Holographic is designed to bring together the different hardware and software aspects of AR and VR under one platform. By allowing AR and VR devices to communicate well with one another, Microsoft wants to accelerate the pace of growth in mixed reality and the solutions that utilize it. The simplest way to think of it is as connecting previously separate VR and AR worlds. Microsoft is partnering with OEMs, ODMs and various hardware partners to build mixed reality devices and other hardware on the Windows Holographic platform. This move is Microsoft’s first major play to control the expanding AR and VR spaces before they outgrow Microsoft and its current PC platforms.

Microsoft published a new video showing how AR and VR systems would work together, and I suggest you watch it.

Microsoft needed to do more than just participate in VR

Until now, Microsoft has been primarily a facilitator of things like PC VR through its Windows operating system and DirectX 12 low-level graphics API. However, many would agree that this simply isn’t enough to genuinely be considered a player in the VR market. Even though nearly all of the new devices ship on Windows, Microsoft has had very little hand in how those devices are made or used – it has behaved as more of a passive facilitator than an active contributor.

Mostly PC, one mobile SoC partner

With the announcement of Windows Holographic and its new partners, Microsoft is finally addressing multiple problems at the same time. By bringing in new hardware partners like Acer, Advanced Micro Devices, ASUS, Cyberpower, Dell, Falcon Northwest, HP, HTC, iBuypower, Lenovo, MSI and Qualcomm, Microsoft diversifies the potential hardware offerings for Windows Holographic beyond just HoloLens and truly opens the platform to VR and AR users. By having a single platform for all of these hardware partners, Microsoft is looking to do something that nobody else has done before: create a hardware-agnostic mixed reality platform. Microsoft’s vision will enable VR and AR users to communicate, collaborate, entertain and learn with one another using Windows Holographic.

By offering Windows Holographic, Microsoft is also exposing a definite set of APIs and hardware requirements for both AR and VR that haven’t existed until now. By working closely with its hardware partners, Microsoft can ensure that developers and users never have to worry about a Windows Holographic device lagging because it is underpowered for either AR or VR. The industry has desperately needed something like Windows Holographic, and for someone like Microsoft to deploy it with a series of major hardware partners.

Wrapping up

Microsoft’s announcement of Windows Holographic expands the platform in multiple ways, making it both an AR and VR platform and extending Windows Holographic beyond Microsoft’s own hardware. As you may remember, when Microsoft tries to go it alone on a platform, doing both hardware and software, it generally doesn’t end well – Nokia being a case in point. Microsoft’s decision to expand Windows Holographic beyond HoloLens and AR in general means big things for the company and the platform, and gives Microsoft a real fighting chance in the impending platform wars we are bound to see from the big companies in AR and VR. Until the lines of AR and VR blur into something that is truly mixed reality, platforms like Windows Holographic are going to be critical in enabling new applications and use cases for both.

As with their initial HoloLens announcement, Microsoft has raised the competitive bar. Google isn’t publicly merging AR and VR yet, and Apple hasn’t shown its hand at all.

More from Moor Insights and Strategy


Learn How to Debug UVM Test Benches Faster – Upcoming Synopsys Webinar

Learn How to Debug UVM Test Benches Faster – Upcoming Synopsys Webinar
by Bernard Murphy on 07-14-2016 at 4:00 pm

UVM for developing testbenches is a wonderful thing, as most verification engineers will attest. It provides abstraction capabilities, it encapsulates powerful operations, it simplifies and unifies constrained-random testing – it has really revolutionized the way we verify at the block and subsystem level.

However, great power usually introduces new challenges, and UVM testbenches are no exception. If we’re honest, many of us will admit that we now spend more time debugging the testbench than we spend debugging the design. Hopefully this is because design debug has become significantly more effective thanks to more powerful testbenches. But still: when something new becomes the long pole, you have to work on knocking that pole down.

The old monitoring and logging approaches don’t work in the complex world of UVM; they are rather like trying to debug C++ at the C level. You need debugging methods that understand classes, constrained-random stimulus and other high-level features. Synopsys has been working hard to bring interactive debugging up to the level demanded by UVM testbenches, including a very cool ability to roll back time to discover the root cause of problems (a critical feature when dealing with randomized stimulus).
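To see why class-based, constrained-random testbenches outgrow waveform- and log-based debug, consider a minimal sketch of a typical UVM sequence item (a hypothetical example, not taken from the webinar):

```systemverilog
// Hypothetical sketch of a constrained-random UVM sequence item.
// The values of addr and burst_len are chosen at randomize() time,
// deep inside class-based testbench code; a failure observed many
// cycles later in the waveform cannot easily be traced back to the
// randomization that produced it, which is where interactive (and
// reverse) source-level debug earns its keep.
class bus_txn extends uvm_sequence_item;
  rand bit [31:0]   addr;
  rand int unsigned burst_len;

  constraint legal_c {
    addr[1:0] == 2'b00;          // word-aligned addresses only
    burst_len inside {[1:16]};   // bounded burst length
  }

  `uvm_object_utils(bus_txn)

  function new(string name = "bus_txn");
    super.new(name);
  endfunction
endclass
```

Objects like this exist only as dynamic class instances, not as signals, so there is nothing for a traditional waveform viewer to show; reverse-stepping to the offending randomize() call is a far more direct route to the root cause.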

The Webinar is on July 20th at 10am PDT. You can REGISTER HERE.

Web event: Time-travel in a SystemVerilog/UVM world – Interactive Testbench Debug Unleashed!
Date: July 20, 2016
Time: 10:00 AM PDT
Duration: 60 minutes

The volume of testbench code and complexity of testbench environments have far surpassed those of the designs they verify. As teams migrate to SystemVerilog and UVM class-based testbenches for higher verification efficiency and increased verification reuse across projects, testbench debug remains the long pole. Traditional methods of debugging testbenches with waveforms and carefully crafted logging are unable to scale with modern testbenches. From testbench bring-up to constrained random simulations to regressions, inefficient testbench debug is what stands between verification engineers and their ultimate aim of finding a design bug.

In this Synopsys webinar, we will show how interactive debug is ushering in a new era in testbench debug. Specifically, you will learn:

  • How interactive and reverse interactive debug capabilities allow you to quickly root-cause and debug simulation failures
  • How what-if analysis improves testbench debug efficiency by combining diagnosis and cure into a single step
  • How to navigate and effectively debug a UVM-based testbench

    Speakers:

    Vaishnav Gorur
    Product Marketing Manager, Verification Group

    Vaishnav Gorur is currently Staff Product Marketing Manager for debug products in the Verification Group at Synopsys. He has more than a decade of experience in the semiconductor and EDA industry, with roles spanning IC design, field applications, technical sales and marketing. Prior to joining Synopsys, Vaishnav worked at Silicon Graphics, MIPS Technologies and Real Intent. He has a Master’s degree in Computer Engineering from the University of Wisconsin-Madison and is currently pursuing an MBA at the University of California, Berkeley.

    Mansour Amirfathi
    Sr. CAE Manager, Verification Group

    Mansour Amirfathi is currently Sr. CAE Manager for debug products in the Verification Group at Synopsys. He has more than 25 years of experience in the semiconductor and EDA industry, with roles in design and verification for wireless, graphics and signal processing. Prior to joining Synopsys, Mansour worked at Mentor Graphics as a high-level synthesis specialist. He has also held roles at Siemens, Cadence, Infineon Technologies and Cadis. He has a Master’s degree in Communications from RWTH Aachen University in Germany.


5 Reasons Why Platform Based Design Can Help Your Next SoC

5 Reasons Why Platform Based Design Can Help Your Next SoC
by Daniel Payne on 07-14-2016 at 12:00 pm

Semiconductor design IP and verification IP have been around for decades, but just because your company has lots of IP doesn’t mean that you’re getting all of the benefits of a design reuse methodology. Maybe your business has encountered some of the following issues:


Android Auto-Rooting Malware – You Can Run But You Can’t Hide

Android Auto-Rooting Malware – You Can Run But You Can’t Hide
by Bernard Murphy on 07-14-2016 at 7:00 am

There has been a startling rise in a class of Android auto-rooting malware that is believed to affect over a quarter of a million phones in the US and well over a million in each of India and China. The attack has (so far) primarily infected older versions of Android: KitKat, Jelly Bean and Lollipop.

The malware, known as Shedun or HummingBad, is believed to be produced by the Chinese mobile ad-server company Yingmob, and primarily installs fraudulent apps and serves malicious ads. Yingmob today generates healthy revenue purely from these services, but having root access to millions of Android devices obviously allows it to expand into even more malicious services in support of cyber-criminals, state actors and others.

The malware seems to start, at least in some cases, with a drive-by download: you visit a website (porn websites are apparently notorious for this) from which the software installs without you having to accept any download. Once downloaded, the exploit gains root access to the host phone and installs itself as system software.

The exploit is quite sophisticated in its installation and is nearly impossible to remove. Among other things, it updates recovery information so that even if you do a recovery on the phone, the malware is restored along with the other software. It seems that the only cures are to reflash the ROM or buy a new phone. Users are advised to live a virtuous life (stay away from porn sites) and to bar all downloads from outside Google Play; that step alone apparently reduces success rates for Android malware in general.

You can read more HERE.

More articles by Bernard…