
Intel Stratix 10 MX FPGA Highlights

by Claudio Avi Chami on 09-15-2016 at 7:00 am

These days, FPGAs are fairly complex pieces of silicon. That being the case, it would take several articles even to summarize the features embedded in high-end FPGA devices. Hence, in this article I will concentrate on just one feature: the new embedded memory blocks of the recently released Intel-Altera Stratix 10 (1).

Even medium-sized FPGAs include quite a large number of memory blocks. For example, Altera's Cyclone V family includes memory blocks in the range of 1.4 to 12.2 Mbits (2). These memory blocks are not concentrated in a single spot but distributed across the FPGA silicon, to reduce routing complexity when connecting the memories to the FPGA logic blocks. They find plenty of uses: buffers, FIFOs, filters, fast memory/cache for embedded processors, register banks, etc.

As useful as these blocks are, their capacity is light-years away from that of today's DDR memory banks. This has changed completely with the release of the Stratix 10 MX, since these devices embed high-bandwidth DRAM banks in the package. Intel's acquisition of Altera has had many consequences, one of them being the merging of technologies from both firms. The Stratix 10 MX uses Intel's Embedded Multi-die Interconnect Bridge (EMIB) technology to interconnect the FPGA fabric and the memory blocks.

The memory blocks used in these FPGAs are 3D-stacked DRAM integrating high-speed data channels, dubbed HBM2 (High Bandwidth Memory, second generation). The HBM2 3D memory is connected to the FPGA core through parallel channels. Each channel can provide a bandwidth of 16 Gbps; multiplied by 16 channels, this gives a total bandwidth of 256 Gbps.

Moreover, the memory is separated into up to 4 "tiles", each one connected through its own 16 data channels. The total bandwidth for four tiles is 1 Tbps. Compare this number with the current bandwidth of a DDR1600 memory bank, which is on the order of 100 Gbps, or even DDR2133, which provides around 140 Gbps.
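The arithmetic behind those figures is easy to verify (a quick sketch; the 1,024 Gbps total is rounded to 1 Tbps in the text):

```python
# Sanity-check of the HBM2 bandwidth figures quoted above.
gbps_per_channel = 16    # per-channel bandwidth
channels_per_tile = 16   # parallel channels per memory tile
tiles = 4                # maximum number of tiles per device

per_tile_gbps = gbps_per_channel * channels_per_tile
total_gbps = per_tile_gbps * tiles

print(per_tile_gbps)  # 256
print(total_gbps)     # 1024, i.e. roughly 1 Tbps
```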

Currently available Stratix 10 MX devices have embedded memory banks ranging from 4 to 16 GBytes. In case you were wondering, these new memories do not replace the aforementioned static memory blocks: Stratix 10 MX devices have between 86 and 127 Mbits of static memory blocks.

Other advantages of the integrated memory blocks, compared to current distributed solutions (3), include lower power consumption, a reduction in board real estate, and reduced PCB interconnection complexity.

The availability of these new devices promises to change the architecture of solutions that are currently dominated by CPUs and/or GPUs, like database management, cyber security, genetic algorithms and deep machine learning. For an example regarding this last category, please refer to my article: FPGAs and Deep Machine Learning.

My blog: FPGA Site

References:
Stratix 10 MX Devices Solve the Memory Bandwidth Challenge
Altera’s 3D System-in-Package Technology
Stratix 10 MX Product Overview Table

Image Source:
Stratix 10 MX blocks – Intel/Altera

Notes:
(1) – Altera and Xilinx are the major players in the FPGA arena. Last year (2015) Altera was acquired by Intel.
(2) – These numbers can be increased a bit, by around 15%, by converting some of the ALM logic blocks into memory blocks.
(3) – A typical distributed solution is based on separate CPU, FPGA and memory SODIMM cards, compared to the Stratix 10 MX solution, which includes CPU (ARM Cortex), FPGA, and memory in a single package.


Organizing Data is First Step in Managing AMS Designs

by Don Dingee on 09-14-2016 at 4:00 pm

Efficient collaboration is essential to meeting tight chip design schedules. In analog and mixed signal (AMS) design, collaboration has many facets. Design tools are usually specific to roles, and handoffs are numerous, especially when moving a design to a foundry. Continue reading “Organizing Data is First Step in Managing AMS Designs”


IOT Security – Ongoing Challenge

by Bill McCabe on 09-14-2016 at 12:00 pm

It is nearly impossible to read or have a conversation about IoT without security becoming a major topic. For IT professionals involved with IoT projects, security needs to be a major consideration, starting with planning and design and continuing all the way through deployment, implementation, and maintenance. Security is not just essential to protect the integrity of systems, but also to safeguard corporate and personal data and privacy. In many cases, the viability and acceptance of emerging IoT technologies will largely depend on how well innovators are able to gain the confidence of potential users, which means that security and privacy must be as much of a priority as innovation and functionality.

The Complex Nature of IoT
The Internet of Things is often described in simple terms, with simple networks and logical infrastructure. In the real world, this is not always the case. Consider a smart city where traffic management uses aggregate data from a number of sensors and devices to help track and manage traffic flow. This example could include parking sensors, vehicle counters on roads and highways, traffic cameras, and even license plate scanners. Potentially, all of these devices would use different technologies, they would collect different types of data, and most importantly, they would all connect through varying types of network infrastructure. Some systems could be wireless, while others might piggyback off of existing networks. Some might run on state-of-the-art networks implemented at the time the sensors were installed, while others could run on legacy solutions. The point is that for one main function – collecting aggregate data – there would be numerous networks, devices, sensors, and even network protocols, all in use at the same time. Implementing security on such a system means that not only must the individual devices be secure, but the networks, interconnecting devices, and servers all need to be secured as well.

Without developing networks that are secure by design, the vulnerabilities in such a system would be too numerous and too complex to identify and patch, without significant investment in time and expertise.

Secure by Design – The Most Efficient Philosophy
Every individual device and sensor is vulnerable to attack. This is something that must be assumed from the beginning of any IoT project. Security analysts must look at each individual device, every piece of running software, every network element, all network protocols, and the servers that collect and manage data, and only then is it possible to determine the potential vulnerabilities, and the best strategy for securing the entire network. Some devices will have embedded security, whereas others might have none at all. Network architects and security professionals will need to make important decisions, such as whether they will use public or private infrastructure, which type of encryption they will apply to data streams, and they will even have to determine who has access to data, where that data is stored, and where it can be accessed from. Most systems will eventually interface with public networks, especially if those systems are consumer facing, and this means that design needs to take into account the increased vulnerability that exists when third party networks and hardware are introduced into the equation.

To put it quite simply, ensuring security by design will be a significant challenge, even on smaller projects. However, when security is done right, the rewards will be significant.

Security Benefits IoT Growth and Adoption Rates
A recent Gigya survey revealed that consumers want more robust authentication options when using sensitive software and services. When it comes to IoT, or any large interconnected network, users want the confidence that their data is secure, and without that confidence, there will be no buy-in for new technologies.

Technology companies already recognize the challenge and importance of security in the digital age, but with IoT, connected networks are being built outside of traditional industries. Healthcare, production, retail, and even entertainment are all industries that are incorporating elements of the Internet of Things into their operations and offerings. Companies in these industries will face technological hurdles that they've never had to deal with before, which is why architects, developers, and security professionals are all going to be in high demand in the coming years.

The winning companies will be those that are able to recognize the challenges of incorporating IoT into their business models, and then proactively seek the leading talent that will help them to achieve their goals.

For More Information and Posts Check out our new website at www.internetofthingsrecruiting.com


Mentor Functional Verification Study 2016

by Bernard Murphy on 09-14-2016 at 7:00 am

Periodically, Mentor commissions a user/usage survey on functional verification, conducted by the Wilson Research Group, and then publishes the results to all of us, an act of industry good-citizenship for which I think we owe them a round of thanks. Harry Foster at Mentor is breaking down the report into a series of 15 blogs. He's also going to host a webinar on September 20th at 8am Pacific to talk through the results.

REGISTER FOR THE WEBINAR

One especially interesting conclusion to come out of this survey is the rapid growth in verification investment and sophistication in methodology for FPGA-based designs. For those of us who still struggle to understand why anyone even bothers with functional verification for FPGA, this may come as a surprise. One view might be that you design, do a bit of sanity-checking, program the device and test at speed in-system, fix any problems you find then repeat, an approach generally known as “burn and churn”.

For simple designs this may still be common practice, but it no longer represents a practical methodology for the bulk of FPGA design. To understand why, look first at who is using FPGAs today. About half the dollar volume (per Gartner) is in communications. Industrial and mil/aero together take about half of what’s left and the rest is divided up between consumer, automotive and data-processing. In many of these markets, FPGA-based design dominates either because device volumes can’t justify ASIC NREs or because designs must be built to be adaptable to rapidly evolving standards.

When FPGAs carry the bulk of functionality, those designs become significantly more complex. 59% have at least one embedded processor and 32% contain two or more embedded processors. These are programmable SoCs, not general-purpose programmable logic. A Zynq-7000 SoC (not the most complex SoC offered by Xilinx) provides a dual-core ARM Cortex-A9 MPCore, multiple DDR interfaces, USB, Gigabit Ethernet and SD/SDIO interfaces, and a full range of security features. Verifying a design built around all of this, together with the software running on those cores, is every bit as complex as verifying a full-ASIC SoC.

At this complexity, it’s really irrelevant that burn and churn doesn’t cost you $$ in fab costs and fab cycle-time. There is no possible way you could ever converge on a working design through trial and error – you have to follow the same disciplined verification methodologies used for ASIC designs. This is partly a function of the intrinsic complexity of the verification task – interoperating CPUs, memory, peripherals and security, plus your own programmed logic – and partly a result of extremely limited controllability and observability in the programmed device. The debug options you have are limited to external pins and debugger access to memory and state registers, which may be OK for software debug but is definitely not OK for hardware debug.

This means that FPGA verification teams find they must debug designs as comprehensively as possible before committing to burn. The survey shows there is growing use of coverage metrics and assertions, and constrained-random simulation to help get to higher levels of coverage. And, hold onto your hats, 15-20% of projects are using formal methods, spread among property-checking and automated formal checks. Take a moment to let that sink in – a non-trivial percentage of FPGA design teams find it essential to do some level of formal-proving before they burn a design. How times have changed – for FPGA design and for formal verification.
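For readers less familiar with the terminology, the idea behind constrained-random stimulus with functional coverage can be sketched in a few lines of Python. This is a toy model, not a real SystemVerilog testbench; the packet-length constraint and coverage bins are invented purely for illustration:

```python
import random

# Toy model of constrained-random stimulus with functional coverage:
# randomize a packet length under a constraint (legal sizes only) and
# track which coverage "bins" the stimulus has exercised.
random.seed(0)  # reproducible run

BINS = {
    "short": range(1, 64),
    "medium": range(64, 512),
    "long": range(512, 1500),
}
covered = set()

def random_packet_length():
    # Constraint: only legal payload sizes, 1..1499 bytes.
    return random.randint(1, 1499)

for _ in range(1000):
    length = random_packet_length()
    for name, bin_range in BINS.items():
        if length in bin_range:
            covered.add(name)

coverage = 100.0 * len(covered) / len(BINS)
print(f"functional coverage: {coverage:.0f}%")
```

With enough random stimulus every bin is eventually hit; real verification environments track thousands of such bins and use the coverage report to decide when to stop simulating.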

In fact, the evidence shows that FPGA verification investment (in engineers and in advanced verification methodologies) is maturing quite rapidly, much more so than in ASIC/IC design, where maturity seems to have flattened out. This isn't to say that ASIC/IC teams are laggards; per Harry, they just got to the (current) peak faster and are now having to throw more bodies at the problem as the verification task grows.

Looking at design overall, the survey measured, among other factors, demand for design engineers and for verification engineers. The Wilson study shows compound annual growth-rate (CAGR) for design engineers more or less steady at 3.6%, which Harry attributes to improvements in automation and IP reuse. But the CAGR for verification engineers is 10.4%, a much, much faster rate of growth reflecting (my conclusion) less rapid progress in automation and reuse in verification.
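To see what those growth rates imply over time, here is a quick compound-growth projection. Only the CAGR figures come from the study; the starting headcount of 100 engineers is hypothetical:

```python
# Project headcount from the survey's CAGR figures, starting from a
# hypothetical team of 100 engineers in each role.
def project(start, cagr, years):
    # Standard compound growth: start * (1 + rate)^years
    return start * (1 + cagr) ** years

YEARS = 10
for label, cagr in [("design", 0.036), ("verification", 0.104)]:
    print(f"{label}: {project(100, cagr, YEARS):.0f} engineers after {YEARS} years")
```

At those rates, a decade turns roughly equal teams into a verification team nearly twice the size of the design team.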

In the webinar, Harry will provide detailed stats for FPGA and ASIC/IC design. This should be a very useful benchmark for verification teams who want to understand how they stack up against industry norms. It’s certainly an eye-opener on how much verification methods have evolved for FPGA-based design.

REGISTER FOR THE WEBINAR

You can access Harry’s blogs HERE. As of this post he has published 7 of the series of 15. Links to additional blogs should appear under this link as they are posted.

More articles by Bernard…


TSMC 16nm, 10nm, 7nm, and 5nm Update!

by Daniel Nenni on 09-13-2016 at 4:00 pm

Word on the street is that TSMC is on schedule with 16FFC, 10nm and 7nm, which is a very big deal for the fabless semiconductor ecosystem. As Scotten Jones has illustrated in the graphic below, for the first time in the history of the semiconductor industry a pure-play foundry (TSMC) will have the process lead over Intel. And this is not just about TSMC, this is about the fabless semiconductor ecosystem delivering 10nm chips in the first quarter of 2017 and 7nm in the first quarter of 2018, absolutely.



Also read: The 2016 Leading Edge Semiconductor Landscape

To be clear, our smartphones will be powered by the fastest silicon the semiconductor industry has to offer and that my friends is simply incredible! Super computing power at the tips of our fingers, literally.

A complete 16FFC, 10nm, and 7nm process update will be made available at the TSMC OIP Ecosystem Forum at the San Jose Convention Center on September 22nd from 8am to 6:30pm, and I can tell you from my conversations inside the fabless semiconductor ecosystem that it will definitely be worth your time.

SemiWiki bloggers Tom Simon, Tom Dillinger, Bernard Murphy, and I will be there, as well as more than 1,000 semiconductor professionals from around the world. If you are attending, let us know as it would be a pleasure to meet you.

Just in case you missed it here is the TSMC OIP overview and agenda:

The TSMC OIP Ecosystem Forum brings together TSMC’s design ecosystem companies and our customers to share practical, tested solutions to today’s design challenges. Success stories that illustrate TSMC’s design ecosystem best practices highlight the event.

More than 90% of last year’s attendees said that, “the forum helped me better understand TSMC’s Open Innovation Platform” and that “I found it effective to hear directly from TSMC OIP member companies.”

This year’s event will prove equally valuable as you hear directly from TSMC OIP companies about how to apply their technologies to address your design challenges!


This year, the forum is a day-long conference kicking-off with trend-setting addresses and announcements from TSMC and leading IC design company executives.

The technical sessions are dedicated to 30 selected technical papers from TSMC's EDA, IP, Design Center Alliance and Value Chain Aggregator member companies. And the Ecosystem Pavilion features up to 60 member companies showcasing their products and services.

Attendees will discover:

  • Emerging advanced node design challenges including 7nm, 10nm, 16FFC, 16nm FinFET+, 28nm, and ultra-low power process technologies
  • Updated design solutions for specialty technologies supporting Internet-of-Things (IoT) applications
  • Successful, real-life applications of design technologies and IP from ecosystem members and TSMC customers
  • Ecosystem-specific TSMC reference flow implementations
  • New innovations for next generation product designs

Hear directly from ecosystem companies about their TSMC-specific design solutions.

Network with your peers and more than 1,000 industry experts and end users.

The TSMC Open Innovation Platform Ecosystem Forum is an “invitation-only” event. Please register to attend. The views expressed in the presentations made at this event are those of the speaker and are not necessarily those of TSMC.


Apple, Google Go Home

by Roger C. Lanctot on 09-13-2016 at 12:00 pm

For some marketers the operative mantra is go big or go home. It looks like Apple and Google are both taking a harder look at the automotive industry and have decided to go home.

The media is rife with reports of Apple hemorrhaging automotive engineers, while senior executives on Google's automated driving team have been skipping off to more intriguing, more lucrative or less problematic ventures. The problem: the inability to locate the high-volume, high-revenue pony in the pile of high-cost development, regulatory red tape and loathsome liability that constitutes the automotive industry.

The first hint of trouble was Google’s decision to jump out of the car insurance business nearly as soon as it jumped in. Google Compare Auto Insurance entered the market in 2012 in the United Kingdom, an attractive market for insurers because of the existence of a single regulator for a large homogeneous marketplace.

Google followed the UK launch with a U.S. launch late last year in California, but the morass of state-by-state regulatory hurdles and slow-footed insurance partners sent Google to the exits in both the U.S. and U.K. It was clear to those involved that the slow path to a profitable and eventually dominant enterprise in the car insurance market was intolerable in the context of internal expectations of high growth and a rapid ramp.

Shift gears to self-driving cars and both Google and Apple are confronting extreme technical challenges, prying eyes, Federal and state regulatory oversight and increasing competition from incumbents. Swizzle into this cocktail of conflict an ill-defined marketplace where mobility as a service is already being adequately served by cheap ride-hailing services and increasingly driverless public transportation – and the market prospects dim rapidly.

Ford is in the midst of convincing its own investors of the volume market prospects for driverless cars as its stock swoons amid its own aggressive self-driving car announcements and investments. So if you are big and taking on the self-driving car opportunity, you have everything to lose and it's pure risk. If you are a tiny start-up, like Otto (to which Anthony Levandowski decamped from the Google self-driving car team) or Cruise Automation, it's all opportunity and upside.

Google and Apple are not prepared to suffer the blowback in the manner of Tesla Motors Chairman and CEO Elon Musk should anyone lose their life or be severely injured in a Google or Apple self-driving car. Those crusty old car companies are actually better equipped to establish the standards and safety protocols and withstand the liability exposure of self-driving technology.

But the more fundamental challenge is the reality that a self-driving car is not likely to be owned, which means getting into the transportation business – the public transportation business. That’s a very different market from mobile devices and downloading content and selling cloud services. Apple and Google likely both perceive opportunities from enabling the systems, services and software that bring these applications to life – but would rather not take responsibility for creating and selling the hardware.

So, if you can’t go big, you go home. Or, if you currently work for Apple’s or Google’s self-driving car programs, you polish up that resume. It’s time to turn all that hard work into a real opportunity outside of those organizations. Your path to a profitable exit will be far shorter on the outside than by remaining inside these two large, newly-timid organizations. Investors are waiting and whatever you create will be your own.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


Zero Tolerance = Vision Zero

by Roger C. Lanctot on 09-13-2016 at 7:00 am

Just returning from Sweden where the highway fatality rate is a marvel of modern transportation policy. Long before Sweden adopted a Vision Zero approach to reducing highway fatalities the country set itself apart from most others with a 0.02 blood alcohol limit for drivers. There is no question that this has contributed significantly to Sweden’s annual highway fatality rate per 100,000 population: 2.8.

The only country with a lower fatality rate than Sweden, according to the World Health Organization, is Iceland. I swear I detected a slight lip curl and eyebrow twitch on the part of my Swedish colleagues at the reminder that some other country was surpassing their admirable highway fatality reduction performance.

By comparison, the highway fatality rate in the U.S. is more than three times Sweden’s rate – 10.6. How has Sweden achieved this? Is there something in the water?

The allowable blood alcohol level for drivers in Sweden is 0.02. Sweden has, in effect, adopted a zero tolerance policy for alcohol consumption prior to driving. An “allowable” legal level of 0.02 leaves no wiggle room whatsoever.

Just a few weeks ago, Sweden’s minister for Higher Education, Aida Hadzialic, was breathalyzed while returning from a concert in Copenhagen at 0.02. She was charged with driving while under the influence of alcohol and subsequently resigned. (It’s worth noting that the “allowable” blood alcohol level in Denmark is 0.05 – so as the minister crossed the bridge from Denmark to Sweden she became an outlaw.)

Swedes will tell you that this driving restriction is deeply embedded in the culture and no one tests its bounds. This contrasts mightily with the U.S., as an example, where the 0.08 allowable blood alcohol level has drinkers in bars checking their watches to see if they can sneak in one more drink. In fact, state-level Department of Motor Vehicle communications warn that 0.08 means only one drink per hour.

By comparison to Sweden, the US DMV mentality practically recommends one drink per hour. A 0.08 “allowable” blood alcohol level creates an unbearable temptation for many, while vaguely treating all drinks as equally deleterious – which hardly holds if you’re getting the 20+-ouncer at the football game!

The Swedes are different. They enter navigation destination information by hand while driving because it would be more dangerous to pull over to do so – right? But the zero tolerance for alcohol in the blood while driving is serious business that has produced an admirable outcome.

Sweden, of course, hasn’t stopped there and the announcement this week that Autoliv was partnering with Volvo Cars to create an automated driving joint venture was just the latest indication that the country will remain a nexus for safe driving technology for the foreseeable future. A notable postscript is the burgeoning collaboration with Chinese car makers in the form of Geely’s ownership of Volvo Cars, the emergence of NEVS (National Electric Vehicle Sweden AB) from the ashes of Saab and the arrival of China Euro Vehicle Technology AB (2000 employees and growing).

The collaboration of the country with the safest roads and the country with the largest and fastest growing auto market with the most dangerous roads in the world is intriguing. It’s hardly a coincidence that Volvo’s market prospects are on the rise.


Can it ever be game over in tech?

by Don Dingee on 09-12-2016 at 4:00 pm

The opening line of a recent Benedict Evans piece makes a bold statement: “The smartphone platform wars are pretty much over, and Apple and Google won.” Reading that line reminded me of the William Shatner scene in Airplane 2; let’s just shut it down and go home. That’s not the point Evans is making, however. Continue reading “Can it ever be game over in tech?”


Requirements Management and IP Management Working Together

by Daniel Payne on 09-12-2016 at 12:00 pm

I first heard about requirements management back in 1995 while marketing a graphic HDL entry tool for an EDA vendor. It sounded like a very useful automation approach; however, our team quickly discovered that there were too many different vendors for requirements management, so there was no simple way to integrate with all of them. Living in Oregon, I’ve heard about Jama Software, and visiting their web site I saw that their requirements management tool is named Jama and that they serve industries like:

  • Aerospace & Defense
  • Automotive
  • Medical
  • Semiconductor

Jama even quoted tier one company Infineon Technologies:

“Currently, IoT is still highly fragmented with a lot of single solutions. A key factor for success will be the competence to integrate and apply these single solutions. For example, it is crucial to combine the functionalities of sensors, actuators and computing power.”
Dr. Reinhard Ploss, CEO of Infineon Technologies


Getting back to the question of integration, it turns out that IP Management vendor Methodics has created an elegant way to integrate with Jama, so now semiconductor IP users can use requirements management and IP management tools together. The basic idea is to allow users doing IP management to see their requirements in the context of their actual SoC design.

Related blog –Go Native With Methodics at DAC in Austin

Does this integration require data translations? Thankfully, no: instead, you continue to keep all of your requirements management files in their native formats, leaving them as source documents. The IP management tool from Methodics is called ProjectIC. One example of leaving native formats alone is bug integration: bugs stay within the native bug tracking tool, and while using ProjectIC you can view any of the bugs associated with IP blocks in the context of the SoC.

The methodology is to use Jama for all requirements management setup and updates; then, inside of ProjectIC, you view the requirements in a summary format in the IP context. The ProjectIC system is extendable with widgets, and this is where you define an executable script that fills in the widget with results as shown below:


This integration script can be written in either Perl or Python; it will extract the necessary information and then return it to ProjectIC using the popular JSON (JavaScript Object Notation) format. From the ProjectIC tool, your integration uses Custom Fields to extend the definition of an IP block with additional meta-data like requirements management. Custom Fields in ProjectIC can also send data to your script.

Related blog – 5 Reasons Why Platform Based Design Can Help Your Next SoC

Jama has defined their own API which enables integrations to an IP Management tool like ProjectIC, and here’s a Jama screenshot using a sample project that lists the requirements on the left-hand side.

The integration script will extract this Jama data and make it visible inside of ProjectIC using an ID for each IP block. In ProjectIC, our first step is to create a custom field named ‘JAMA_ID’; its value will be passed to our integration script. Users will look in their Jama user story hierarchy for this particular IP block and get this number.

The following Python script reads the custom field value for ‘JAMA_ID’ and returns up to 2 levels of the requirements hierarchy back to ProjectIC.
#!/usr/bin/env python
"""
Get a Jama tree up to 2 levels by ID.
The ID is passed as an argument to the script.
"""
from jama import API
import json
import sys

jama_id = sys.argv[1]
api = API()
jfunc = 'getChildrenOfItem'
item = api(jfunc, int(jama_id))

res = {}
for i in item:
    res['title'] = i.name
    if i.hasChildren:
        res['isFolder'] = True
        # Fetch one more level of the hierarchy
        kids = api(jfunc, i.id)
        rkids = []
        for k in kids:
            t = {}
            t['title'] = k.name
            t['isFolder'] = False
            rkids.append(t)
        res['children'] = rkids
    else:
        res['isFolder'] = False

# Return the result to ProjectIC as JSON
print(json.dumps(res))
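For reference, here is a self-contained mock-up of the JSON shape the script hands back to ProjectIC. The requirement titles are invented for illustration; in practice the values come from the Jama API:

```python
import json

# Mock data illustrating the JSON structure ProjectIC receives:
# a folder node with a 'children' list of leaf requirements.
# All names below are hypothetical examples.
res = {
    "title": "Video Subsystem Requirements",
    "isFolder": True,
    "children": [
        {"title": "REQ-101: Decode 4K video", "isFolder": False},
        {"title": "REQ-102: Power under 250 mW", "isFolder": False},
    ],
}
print(json.dumps(res, indent=2))
```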


Our third step is to create a widget in ProjectIC that will invoke our Python script:

With our widget defined and script debugged, we can start to view Jama requirement results while working inside of the ProjectIC IP management tool:

Summary
It’s now possible to quickly integrate results from your favorite requirements management tool, like Jama, within the ProjectIC IP project management tool. This type of integration gives you fully traceable, hierarchical SoC development throughout the IP Lifecycle Management process, connecting system design requirements and IP implementation.

Read the complete White Paper here.


A Powerful Case for the ARC SEM Processor

by Bernard Murphy on 09-12-2016 at 8:06 am

Building devices for the IoT has become especially challenging thanks to two conflicting requirements. In most applications the device has to be small and ultra-low power, but in many of those applications it also has to provide a high level of security, especially to defend high-value targets like smart metering, payment terminals, embedded SIM cards and mobile and wearable payment systems. There was a fantasy for a while that the security heavy lifting could be handed off to the cloud, but that idea died a quick death when we realized that remote security comes with significant latency problems, man-in-the-middle exposure and potentially worse power implications than local security.

But while local security consumes less power than a remote option, it consumes more power than no security. So when you’re trying to prove you have the least power-hungry yet still secure solution, differences in PPA profiles between different security solutions really matter.

We should also understand that many IoT devices are deployed with an expectation of long lifetimes and, at most, infrequent physical monitoring. Therefore attackers can, with little personal risk, install equipment around a device to inject faults, jiggle the power supply or use light (on a decapped device) to flip state elements, and they can steal keys by monitoring bus activity or even extract high-value keys through side-channel analysis of power rail variations, instruction timing or EMI emissions.

Traditional software vectors for attack will also be popular since these low-power devices cannot afford traditional software defenses, so malware exploiting well known weaknesses like buffer overflows can potentially inject itself into privileged operation modes or other opportunities for tampering.

But attacking a single device is generally not going to be the end goal. The big payback for an attacker is to find an exploit which can be reused on many targets. This is where security through diversity is an important part of a system-wide defense and where, I believe, there may be an unexpected weakness in over-reliance on a dominant CPU architecture. Interesting targets for hackers have to promise significant financial return or at least significant bragging rights in hacker circles. A successful exploit which can only compromise a limited number of targets offers neither. Which doesn’t rule out the possibility of an attack, but it does make you a much less interesting target.

The Synopsys ARC SEM architecture offers solutions to address each of these needs. First, the architecture leads the industry in PPA, so you start with an ultra-low-power solution which also has built-in security.

The architecture provides multiple defenses against attack, some well-known, others quite intriguing in their support for security through diversity, even between devices. In this latter class, while ARC is already well-established in support functions like audio and video in mobile apps, in home automation and in automotive and disk controllers, Synopsys acknowledges that it’s not the market leader in embedded CPUs. But that position puts them lower on the priority list for attacks – see above.

Second, the ARC processor extension technology (APEX) helps a chip-maker further increase diversity. Custom instructions added to the base set further complicate attacks like differential analysis because the instruction-set reference is no longer completely accessible. And third, the pipeline is very tamper-resistant because instructions and data are read encrypted from memory, using scrambled addresses; these are unscrambled/decrypted in-flight for computation and are never stored in plaintext. The development team has control over this process and can even make it differ from device to device.
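The actual ARC SEM scrambling and encryption scheme is proprietary, but the idea can be sketched with a toy model: a per-device secret permutes the address presented to memory and whitens the stored data word, so an attacker probing the bus sees neither real addresses nor plaintext. The keys and XOR mixing below are purely illustrative stand-ins.

```c
#include <stdint.h>

/* Toy model only -- the real ARC SEM scheme is proprietary and
 * far stronger than a single XOR. Hypothetical per-device keys: */
static const uint32_t ADDR_KEY = 0xA5C3F00Du;
static const uint32_t DATA_KEY = 0x5EC0DE42u;

/* Address scrambling: the address seen on the memory bus does
 * not reveal the logical address the program used. */
uint32_t scramble_addr(uint32_t addr)   { return addr ^ ADDR_KEY; }
uint32_t unscramble_addr(uint32_t addr) { return addr ^ ADDR_KEY; }

/* Data whitening: words at rest in memory are never plaintext;
 * decryption happens in-flight, inside the pipeline. */
uint32_t encrypt_word(uint32_t w)  { return w ^ DATA_KEY; }
uint32_t decrypt_word(uint32_t w)  { return w ^ DATA_KEY; }
```

Because the keys can differ per device, a memory image or bus trace captured from one unit tells an attacker nothing useful about the next unit off the line – diversity at the level of individual devices.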

Other defenses include support for uniform instruction timing, plus timing and power randomization, as defenses against side-channel attacks. (Synopsys also offers a CryptoPack solution for cryptography algorithms using these features.) If you need JTAG access, they offer a challenge/response mechanism to support a secure JTAG option, though I expect most would advise fusing off access through that port before shipping. The SEM core provides a secure memory protection unit supporting 16 regions, with per-region scrambling and encryption.
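Uniform timing matters because data-dependent execution time is itself a side channel. The canonical software example, independent of any particular CPU or the CryptoPack library, is secret comparison: a naive `memcmp` exits at the first mismatch, so response time reveals how many leading bytes of a key or PIN were correct. A constant-time comparison examines every byte regardless:

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time comparison: touches every byte whether or not a
 * mismatch has already been found, so execution time leaks
 * nothing about where (or whether) the inputs diverge. */
int secure_equal(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];     /* accumulate differences, no early exit */
    return diff == 0;
}
```

Hardware support for uniform instruction timing extends this same principle below the software level, so even the individual instructions cannot be distinguished by their timing or power signatures.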


The ARC SEM processor offers these and other security functions and interfaces for a comprehensive security solution on which you can build a full-featured Trusted Execution Environment. Software control is managed through SecureShield, which provides control of privilege levels, memory region access, scrambling/encryption in the pipeline, and secure peripheral and IP access. These are managed together at the OS level through a microvisor to support the creation and management of containers.
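The container model can be sketched in a few lines of C. This is purely illustrative and not the SecureShield API: it models only the core idea that each container owns a memory region and that any access outside a container's own region is denied by the protection hardware. All region addresses and sizes below are invented for the example.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative sketch only -- not the SecureShield API.
 * Each container owns one memory region; the microvisor/MPU
 * denies any access outside the active container's region. */
typedef struct {
    uint32_t base;
    uint32_t limit;   /* exclusive upper bound */
} region_t;

#define NUM_CONTAINERS 4
static const region_t regions[NUM_CONTAINERS] = {
    { 0x00000000u, 0x00010000u },  /* container 0: normal application */
    { 0x00010000u, 0x00018000u },  /* container 1: crypto keys        */
    { 0x00018000u, 0x00020000u },  /* container 2: secure storage     */
    { 0x00020000u, 0x00030000u },  /* container 3: peripheral driver  */
};

/* A container may touch only its own region; anything else faults. */
bool access_allowed(int container, uint32_t addr) {
    const region_t *r = &regions[container];
    return addr >= r->base && addr < r->limit;
}
```

The payoff is that a compromised application container (container 0 here) cannot read the key store in container 1 even though both run on the same core, which is precisely what a Trusted Execution Environment is for.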

Still, you may think, “Why not just use a better-known solution?” The first part of the answer has to be power. If you need the lowest-power solution, you have to go with the CPU that meets that objective while also offering strong security, independent of the supplier. Market adoption isn’t a problem – there are plenty of ARC-based systems in production. The SEM core builds on that proven platform, and if you know anything about Synopsys, you know they are not fans of research projects. They told me they saw customer demand to address gaps in the market: legacy microcontroller-based solutions needing an upgrade with an emphasis on ultra-low power and security in a small form-factor, as well as emerging applications with similar needs. In both cases, the ARC SEM processor is targeted to address tradeoffs between technical and market needs where a default processor choice doesn’t necessarily fit well.

Finally, give a thought to that diversity topic. If a clever hacker figures out a way into a smart meter, are you sure that payment systems, machine controls and grid management will never be at immediate risk from the same attack? There are some interesting differentiation possibilities in being able to say your security systems don’t share DNA with competitive solutions, so are intrinsically firewalled from attacks on mainstream CPU platforms (and also from attacks on systems based on similar but tweaked DNA). You can learn more about the complete Synopsys ARC Processor family by clicking HERE.