
What is Ambient Security?

by Bill Boldt on 12-17-2014 at 7:00 pm


New technology and business buzzwords pop up constantly. Hardly a day goes by that you don’t see or hear words such as “cloud”, “IoT,” or “big data.” Let’s add one more to the list: “Ambient security.”


You’ll notice that big data, the cloud, and the IoT are all connected, literally and figuratively, and that is the point. Billions of things will communicate with each other without human intervention, mainly through the cloud, and will be used to collect phenomenal and unprecedented amounts of data that will ultimately change the universe.
As everything gets connected, each and every thing will also need to be secure. Without security, there is no way to trust that the things are who they say they are (i.e. authentic), and that the data has not been altered (i.e. data integrity). Due to the drive for bigger data, the cloud and smart communicating things are becoming ambient, and because those things all require security, security itself is becoming ambient as well. Fortunately, there is a method to easily spread strong security to all the nodes. (Hint: Atmel CryptoAuthentication.)

Big Data

At the moment, big data can be described as the use of inductive statistics and nonlinear system analysis on large amounts of low density (or quickly changing) data to determine correlations, regressions, and causal effects that were not previously possible. Increases in network size, bandwidth, and computing power are among the things enabling this data to get bigger — and this is happening at an exponential rate.
Big data became possible when the PC browser-based Internet first appeared, which paved the way for data being transferred around the globe. The sharp rise in data traffic was driven to a large extent by social media and companies’ desire to track purchasing and browsing habits to find ways to micro-target purchasers. This is the digitally-profiled world that Google, Amazon, Facebook, and other super-disruptors foisted upon us. Like it or not, we are all being profiled, all the time, and are each complicit in that process. The march to bigger data continues despite the loss of privacy and is, in fact, driving a downfall in privacy. (Yet that’s a topic for another article.)

Biggering

The smart mobile revolution created the next stage of “biggering” (in the parlance of Dr. Seuss). Cell phones metamorphosed from a hybrid of old-fashioned wired telephones and walkie-talkies into full blown hand-held computers, thus releasing herds of new data into the wild. Big data hunters can thank Apple and the Android army for fueling that, with help from the artists formerly known as Nokia, Blackberry, and Motorola. Mobile data has been exploding due to its incredible convenience, utility, and of course, enjoyment factors. Now, the drive for bigger data is continuing beyond humans and into the autonomous realm with the advent of the Internet of Things (IoT).

Bigger Data, Littler Things

IoT is clearly looking like the next big thing, which means the next big thing will be literally little things. Those things will be billions of communicating sensors spread across the world like smart dust — dust that talks to the “cloud.”

More Data

The availability of endless data and the capability to effectively process it is creating a snowball effect where big data companies want to collect more data about more things, ad infinitum. You can almost hear chanting in the background: “More data… more data… more data…”

More data means many more potential correlations, and thus more insight to help make profits and propel the missions of non-profit organizations, governments, and other institutions. Big data creates its own appetite, and the data to satisfy that growing appetite will derive from literally everywhere via sensors tied to the Internet. This has already started.

Sensors manufacture data. That is their sole purpose. But, they need a life support system including smarts (i.e. controllers) and communications (such as Wi-Fi, Bluetooth and others). There is one more critical part of that: Security.

No Trust? No IoT!

There’s no way to create a useful communicating sensor network without node security. To put it a different way, the value of the IoT depends directly on whether those nodes can be trusted. No trust. No IoT. Without security, the Internet of Things is just a toy.
What exactly is security? It can best be defined by using the three-pillar model, which (ironically) can be referred to as “C.I.A.”: Confidentiality, Integrity and Authenticity.

CIA

Confidentiality is ensuring that no one can read the message except its intended receiver. This is typically accomplished through encryption and decryption, which hides the message from all parties but the sender and receiver.

Integrity, which is also known as data integrity, is assuring that the received message was not altered. This is done using cryptographic functions. For symmetric cryptography, this is typically done by hashing the data with a secret key and sending the resulting MAC along with the data; the other side runs the same functions to create its own MAC and compares the two. Sign-verify is the way that asymmetric mechanisms ensure integrity.
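To make the symmetric MAC scheme concrete, here is a minimal sketch in Python using the standard-library hmac module. The key, message, and function names are illustrative only; a real CryptoAuthentication device keeps the key in hardware secure key storage rather than in software.

```python
import hashlib
import hmac

# Illustrative shared secret -- a real secure element keeps this in hardware.
SHARED_KEY = b"demo-secret-key"

def make_mac(key: bytes, data: bytes) -> bytes:
    """Sender side: hash the data with the secret key to produce a MAC."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, mac: bytes) -> bool:
    """Receiver side: recompute the MAC and compare in constant time."""
    return hmac.compare_digest(make_mac(key, data), mac)

message = b"sensor reading: 23.5C"
tag = make_mac(SHARED_KEY, message)

print(verify(SHARED_KEY, message, tag))                    # True: data intact
print(verify(SHARED_KEY, b"sensor reading: 99.9C", tag))   # False: data altered
```

Note the constant-time comparison: comparing MACs byte-by-byte with `==` can leak timing information to an attacker.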

Authenticity refers to verification that the sender of a message is who they say they are, in other words, ensuring that the sender is real. Symmetric authentication is usually done with a challenge (often a random number) sent to the other side, which hashes it with a secret key to create a MAC response and sends that back; the verifier runs the same calculation and compares its own MAC to the response.
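The challenge-response flow can be sketched the same way. Again, this is an illustrative software model (the key and helper names are invented for the example); in a hardware crypto element the response is computed inside the device so the secret never leaves secure storage.

```python
import hashlib
import hmac
import secrets

# Illustrative secret, provisioned into both host and device beforehand.
SHARED_KEY = b"device-secret"

def host_challenge() -> bytes:
    """Host: generate a fresh random challenge (prevents replay attacks)."""
    return secrets.token_bytes(32)

def device_respond(key: bytes, chal: bytes) -> bytes:
    """Device: hash the challenge with its stored secret key into a MAC."""
    return hmac.new(key, chal, hashlib.sha256).digest()

def host_authenticate(key: bytes, chal: bytes, response: bytes) -> bool:
    """Host: run the same calculation and compare the two MACs."""
    expected = hmac.new(key, chal, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

chal = host_challenge()
resp = device_respond(SHARED_KEY, chal)
print(host_authenticate(SHARED_KEY, chal, resp))    # True: keys match
print(host_authenticate(b"wrong-key", chal, resp))  # False: keys differ
```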
(Sometimes people add non-repudiation to the list of pillars, which is preventing the sender from later denying that they sent the message in the first place.)

The pillars of security can be implemented with devices such as Atmel CryptoAuthentication crypto engines with secure key storage. These tiny devices are designed to make it easy to add robust security to lots of little things, and big things, too.

So, don’t ever lose sight of the fact that big data, little things and cloud-based IoT are not even possible without ambient security. Creating ambient security is what CryptoAuthentication is all about.

Bill Boldt, Sr. Marketing Manager, Crypto Products Atmel Corporation


Lead, follow, or catch the next Silicon Valley wave

by Don Dingee on 12-17-2014 at 2:00 pm

What does the IoT mean for the next wave of Silicon Valley innovators? The previous waves of semiconductor economic development and the doctrine of “creative destruction” hold clues as to how this one develops and who emerges as the new leaders.

Given seven decades of progress, it may seem semiconductor firms on top in mobile would naturally lead the transition to the IoT. Continue reading “Lead, follow, or catch the next Silicon Valley wave”


Cuba No? Cuba Si!

by Eric Esteve on 12-17-2014 at 7:30 am

We write technical blogs all year long; sometimes it's good to write about something completely different. Don't worry, I don't plan to write any geo-political analysis, nor to propose tourism advertising for Cuba. I just wish to share with you the feeling you get when spending a few weeks in Cuba, as I did in the mid-1990s. Feeling is the proper word, as the Cuban people are really nice people to meet, but not only that: they can teach you lessons you will never forget.

The first impression on arriving in Cuba is that these people are happy and enjoy life. Not because of communism, but rather despite the political system they have to cope with, they are friendly, smiling and laughing. This is just the way they behave. As I was renting a room in a flat or a house instead of staying in hotels, and moving around Havana in clandestine taxis instead of tourist-dedicated buses or taxis, I had the opportunity to meet many Cubans and talk with them. These people were desperately trying to survive, and a single day in Havana was a great lesson.

Trying to buy a toothbrush or soap is an issue; if you want to have lunch, it is better to search for an underground restaurant (often installed in a private flat or house, sometimes with only two tables!) than to go to a hotel where the food will cost ten times the price and will be awful. You realize that the things you usually do easily in Europe or in the US are much more complicated here, but the Cuban people are crafty. It probably takes a lot more time to find the right place, but if you are well supported, you will succeed (even if the guidebooks don't mention the tricks). Obviously the dollar gives you some power, but not only that: the people are happy to help you, and proud to be able to overcome the stupid rules. So, just imagine how powerful they will become as soon as they have the freedom to be official entrepreneurs!


I remember the car above, as I used it as a taxi (nothing special so far, except that it's a 1956 or '58 Chevrolet). What was special about this car, and the driver was very proud of it, was the motor: it was a freight elevator (monta-carga) motor reworked to fit in this splendid Chevrolet! Just one of dozens of examples of the people's dynamism. But they had no choice: either they did their job (from doctor to professor to worker) for the equivalent of $30 per month and couldn't buy anything they really wanted, or they kept the job and spent the rest of the day making plans, driving underground taxis, and so on, so they could better survive... It was a good lesson for me, born in Europe.

As I went to this Caribbean island several times, and could rent a car to travel from Santa Clara to Vinales, and from Maria La Gorda to Pinar del Rio, I have seen many landscapes, sceneries and people. Like the above picture, taken in Vinales, where the best tobacco plants are used to make the best cigars in the world, but with a tractor from the Middle Ages. Amazingly, the first time I went to Cuba was from Miami in the summer of 1995, and I could buy my flight ticket in a shop in the USA; I just had to pass through the Bahamas. Once in Nassau, my flight was supposed to take off a couple of hours after my landing from Miami. Instead, I had to wait 36 hours before I could board an Antonov (a cargo plane refitted for passengers) carrying 4 people on top of the crew, instead of the 90 or 100 possible passengers...

According to what I heard yesterday on the news, I guess it will be much easier now to go to Cuba directly from the USA. The sea and the sun will still be there, but so will the people, probably even happier and enjoying life more than they were in 1995.

Eric Esteve, IPNEST


Expert Tool to Easily Debug RTL and Reuse in SoCs

by Pawan Fangaria on 12-16-2014 at 7:00 pm

SoC design these days has become a complex and tricky phenomenon involving integration of multiple IPs and legacy RTL code which could be in different languages, sourced from various third parties across the globe. Understanding and reusing RTL code is imperative in SoC integration which needs capable tools that can accommodate large designs and provide any level of detail about any portion of the code (including its corresponding schematic, connections into the main design and so on) on-the-fly at any instant.

I was impressed after seeing a couple of demos of RTLvision PRO from Concept Engineering, which show how the tool can help in understanding, debugging, modifying and integrating RTL code into an SoC.

After reading any RTL code, whether Verilog, VHDL or SystemVerilog, the schematic can be seen as a whole or in part in the cone window along with the source code. The complete source view or schematic view can be seen separately as well. The visualization and switching between the views are fast enough to traverse the whole design and understand it completely. Cross-probing between any views can be done easily and elegantly.

Any object can be selected and dropped from source to schematic, hierarchy tree to schematic or between schematic and cone window as desired. Any object in the tree can be double clicked to see multiple icons that are instantiated in the schematic. The signals can be easily traced in the schematic by double clicking at the ports in the cone and expanding the connected components. The cone windows can be used to visualize critical portions of the design and investigate into the finest levels of details.

Automatic cone extraction engine can be invoked (through dialog box by clicking on ‘Cone’ and ‘Extract Dialog’) to extract paths from any source to targets such as clocked cells or i/o ports and view them with best clarity by using available options for selective viewing. Any component in the extracted result can be double clicked to see the same in the schematic.

Identifying different clock domains, analyzing clock trees and crossings between clock domains is a key challenge for SoCs that have multiple clocks and work in various modes of operation. These must be analyzed at the RTL level, and any issues with clock synchronization must be fixed there to prevent larger issues from cropping up later in the design flow.

RTLvision has a versatile Clock Tree Analyzer that automatically extracts all clocks from an RTL description and provides clock tree analysis including CDC (Clock Domain Crossing) identification.

After reading a design containing multiple clock domains, the tool displays all clock trees in iconic form. By double clicking any clock tree icon, its complete clock tree structure can be loaded and displayed in a cone window. Different clock trees can be highlighted in different colors to improve visibility and analysis. In the schematic, the flop count of a module (at any level) is displayed along with the clock connection feeding that module, which lets designers verify the clock and flops of each module without the cumbersome task of looking at each individual flop and clock.

The Clock Tree Analyzer displays CDC between each clock domain with thick and thin lines depicting the connection strength (determined by the number of connections) between them. The clock domains and their respective flops in schematic can be displayed in different colors. There are multiple options to arrange flops with different contexts for better viewing and easy analysis.
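As an illustration of what CDC identification boils down to, here is a toy sketch (this is not Concept Engineering's algorithm, and the netlist representation is invented for the example) that flags any flop-to-flop connection whose endpoints are clocked from different domains:

```python
# Toy netlist: each flop records its clock domain and the flops it drives.
flops = {
    "f1": {"clock": "clkA", "drives": ["f2", "f3"]},
    "f2": {"clock": "clkA", "drives": ["f4"]},
    "f3": {"clock": "clkB", "drives": []},
    "f4": {"clock": "clkB", "drives": []},
}

def find_crossings(netlist):
    """Flag every flop-to-flop connection whose source and destination
    are clocked from different domains (a clock domain crossing)."""
    crossings = []
    for src, info in netlist.items():
        for dst in info["drives"]:
            if info["clock"] != netlist[dst]["clock"]:
                crossings.append((src, dst, info["clock"], netlist[dst]["clock"]))
    return crossings

for src, dst, c1, c2 in find_crossings(flops):
    print(f"CDC: {src} ({c1}) -> {dst} ({c2})")
```

A real analyzer works on the elaborated RTL, traces clocks through gating and muxing logic, and checks whether each crossing has a proper synchronizer, but the underlying question is the same as in this sketch: do the two ends of a path belong to different clock trees?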

An integrated Waveform Viewer compiles VCD simulation data into its own high-speed format for accelerated waveform viewing and analysis. The signals can be interactively traced between source code, schematic and waveform window.

RTLvision also provides automated documentation of new, changed or reused RTL code which can be Verilog/VHDL schematic, PDF file, postscript output or bitmap image. Also, TCL based UserWare APIs are provided to extend RTLvision functionality according to the specific needs of any organization and interfacing with other tools.

Look at the demos of RTLvision basic features and the Clock Tree Analyzer on the Concept Engineering website for more details. Contact info@concept.de or sales@edadirect.com for any more specific information you may need.

More Articles by Pawan Fangaria...


How are the IoT and ESL Related?

by Daniel Payne on 12-16-2014 at 2:00 pm

A recent comment by a DAC attendee mentioned that the IoT acronym was so over-used as to make him upset at EDA vendors that all purport to be enabling the growing IoT revolution. One of the most common requirements that I hear about IoT electronics is that the power needs to be well understood and controlled during the design exploration phase. Gene Matter at Docea Power is an expert on system level power analysis and modeling, so I've followed up with him on this topic.

Q&A

Q: Why is system-level power analysis relevant?

At the recent Cadence Low Power Summit on Nov. 19th in San Jose, Dr. Alon Elad of UC Berkeley gave a talk on “Realizing Energy Efficient SoCs Demands Vertically Integrated Design(ers)”. I find myself in agreement with the premise that a Gestalt view of system design can yield a better solution. Specifically, while much of our SoC design has focused on performance, cost, compatibility and features/functions, and low power operation is being addressed, energy efficiency may be hindered by the legacy of digital ASIC, CPU-centric design. The re-emergence of sensor-aware networks (motes, smart dust) and “smart” connected devices (wearables), roughly lumped together as the Internet of Things, may provide more ideal conditions to look at energy efficient design in a new way.

Dr David Flynn from ARM and Sunrise Micro Devices presented the “Design Challenges in developing Sub-Volt IP Designs for IoT Applications” which gave a useful taxonomy of IoT devices:

  • Typically on mature process technology (180, 90, 65/40 nm) for cost, fabrication, and wafer starts/capacity on fully depreciated fabs
  • Native power: ICs may operate directly from a battery or an unregulated supply, which entails a wide range of voltage operation, low current, and duty cycling.
  • Mixed signal with multiple sensors: sensor devices could be fabricated with MEMS plus low power analog; altimeters, gyros, accelerometers, and magnetometers are quite common for position and location. This may also entail multi-die and stacked/heterogeneous components vs. monolithic silicon.
  • Near- or sub-volt ultra low power connectivity: mesh and grid-like topologies with near field, short range and chirp/small packet payloads, and low power wireless such as UWB, BT4, and other low power interfaces.

CORDIO BT4 – Bluetooth Smart IP Component

The devices can be:

  • Ultra small, thin, but ruggedized, operating in wide environmental conditions
  • Disposable, possibly degradable or recoverable
  • Unattended: which requires autonomic/adaptive HW and SW; FW/SW might be field upgradeable over lossy networks

Q: So what does this have to do with an ESL (electronic system level) approach to power and thermal modeling and simulation?

Docea Power’s approach has been used successfully to model application use cases at the fine grain, instruction, bus and transaction level as well as over long durations such as a “day in the life” scenario.

A good example is the pedestrian detection application with CEA-Leti and STMicroelectronics.

The system designer can create and analyze scenarios which include a complete network of things (sensor nodes, a mesh network between nodes, an aggregation point, and up to a web-based application) with fine grain detail, and behavioral or transaction level detail where appropriate.

We have built complex system models with heterogeneous devices including mixed signal, analog (PLLs, SDDACs, VRs, sensors), memories and low power digital logic, as well as multi-chip, 3D ICs. Here's the DAC session in 2014 with CEA-LETI.

Q: How is the modeling done in your approach?

By modeling both the electrical behavior (voltage, current, switching activity) and task load/task consumption, you get a realistic evaluation of peak, average and transient power data. Using Aceplorer and PTM you can model the adaptive power management efficiency for the types of applications of interest. By building a physical model of the geometry, material properties and environment, it becomes possible to model power as a function of temperature for different ambient conditions and operational ranges using Thermal Profiler.
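As a rough illustration of why power must be modeled as a function of temperature, here is a sketch with made-up numbers (these are not Docea's models): dynamic power is roughly temperature-independent to first order, while leakage current roughly doubles every ~10 degC, so leakage can come to dominate at high ambient temperatures.

```python
def dynamic_power(alpha, c_eff, vdd, freq):
    """Switching power: P_dyn = alpha * C_eff * Vdd^2 * f
    (roughly temperature-independent to first order)."""
    return alpha * c_eff * vdd ** 2 * freq

def leakage_power(i_leak_25c, vdd, temp_c, doubling_deg=10.0):
    """Crude leakage model: leakage current roughly doubles every ~10 degC."""
    return vdd * i_leak_25c * 2 ** ((temp_c - 25.0) / doubling_deg)

# Made-up numbers for a small always-on IoT SoC
ALPHA, C_EFF, VDD, FREQ = 0.2, 50e-12, 1.0, 100e6   # activity, F, V, Hz
I_LEAK = 1e-3                                       # 1 mA leakage at 25 degC

for t in (25, 45, 85):
    total = dynamic_power(ALPHA, C_EFF, VDD, FREQ) + leakage_power(I_LEAK, VDD, t)
    print(f"{t:3d} degC: {total * 1e3:6.2f} mW")    # leakage dominates when hot
```

With these illustrative numbers, total power grows from about 2 mW at 25 degC to 65 mW at 85 degC, almost entirely due to leakage, which is why coupled power-thermal simulation matters for battery-powered nodes.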

Q: The IoT appears to be comprised of many different market segments, is that your take too?

While I cannot offer a definitive answer to what the IoT is, it is clear that system-level optimizations for ultra low power, sub- or near-threshold logic will create some interesting challenges in the design flow. The smart devices in the network of things have some different critical design parameters in terms of operational life (1, 2, or even 5+ years on a single battery). Connectivity is RF, but not necessarily TCP/IP or Internet protocol: chirpy, short-burst activity rather than huge packet payloads; always on, but "on" may just mean sensing activity, accessible and connected. The system architect may need to step away from the legacy installed base of network connectivity and Internet protocols to develop a more compact and efficient networking stack (HW and SW). Power management needs to go beyond race-to-halt or conventional power management to deal with the wake-on-event, respond, and then nap/slumber nature of IoT nodes.


IEDM: TSMC, Intel and IBM 14/16nm Processes

by Paul McLellan on 12-16-2014 at 7:10 am

This week is IEDM. Three of the presentations today were by TSMC, Intel and IBM going over some of the details of their 14/16nm processes. They don’t provide the slides at IEDM, just the single page papers so this may end up being a somewhat random collection of facts.

TSMC were up first. They talked about the improvements that they had made going from their 16FF to the second generation 16FF+ under the title An Enhanced 16nm CMOS Technology Featuring 2nd Generation FinFET Transistors and Advanced Cu/low-k Interconnect for Low Power and High Performance Applications. They already reported on the basic 16FF process last year so this is an update.

The new process core devices are re-optimized to provide an additional 15% speed boost or 30% power reduction. Device overdrive capability is also extended by 70mV through reliability enhancement. Superior 128Mb High Density (HD) SRAM Vccmin capability of 450mV is achieved with variability reduction for the first time. Metal capacitance reduction of ~9% is realized with an advanced interconnect scheme to enable dynamic power saving. It seems they are using SADP when forming the fins: fin patterning and formation on bulk silicon with a 48nm fin pitch is realized using a pitch-splitting technique where the fin width is determined by the sidewall thickness of a mandrel. Fin profile and gate profile are carefully co-optimized to balance among the needs to maintain excellent short channel control, to enhance drive current, and to reduce parasitic capacitance of the devices. Poly-silicon deposition and gate patterning with a gate pitch of 90nm on the 3-dimensional fin structure is followed by a high-K metal gate (HK/MG) RPG process.

Metal1 pitch is 64nm obtained using an “advanced” patterning scheme (I’m assuming LELE double patterning). Higher levels of metal at 80/90nm pitch are single patterned. There is a 15% speed gain or a 30% power reduction compared to 16FF.

Intel presented their 14nm Logic Technology Featuring 2nd-Generation FinFET, Air-Gapped Interconnects, Self-Aligned Double Patterning and a 0.0588μm2 SRAM cell size. They said that their area per transistor shrink was slightly better than the normal shrink (at 49%), and the cost per transistor continues to fall exactly on Moore's law. The minimum metal pitch is 52nm (only on metal2; metal1 pitch is 70nm and metal0 is 56nm). The fin pitch is 42nm, and the fins are also taller (42nm), thinner, and more square. The contact to gate pitch is 70nm. They have airgaps on just two metal layers, M4 and M6, which produces a 14-16% performance increase. SADP is used on critical patterning layers. Variation in Vt, which was getting worse with each planar node, improved at 22nm and improves again at 14nm.

They admitted that they have had yield problems, which is public knowledge. 22nm is the highest yielding process in Intel history and 14nm is now almost at the same level. It is shipping in volume.


Using gate pitch multiplied by metal pitch as a proxy for density, Intel have been slightly behind (since TSMC did 28nm when Intel did 32nm, then 20nm when Intel did 22nm, although the timing was such that Intel had earlier production). At 14/16nm this reverses (see diagram).
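Using the pitch numbers quoted in this article, the proxy works out as follows. This is a back-of-envelope calculation, not a real density comparison, since cell height, fin count and track count all matter too:

```python
# Contacted gate pitch x minimum metal pitch (nm^2), using the pitches
# reported in this article, as a crude area-per-gate proxy.
processes = {
    "Intel 14nm": (70, 52),   # gate pitch 70nm, min metal pitch 52nm
    "TSMC 16FF+": (90, 64),   # gate pitch 90nm, metal1 pitch 64nm
}

for name, (gate_pitch, metal_pitch) in processes.items():
    print(f"{name}: {gate_pitch * metal_pitch} nm^2")

ratio = (90 * 64) / (70 * 52)
print(f"TSMC proxy area / Intel proxy area = {ratio:.2f}")
```

By this crude measure, Intel's 14nm proxy cell is roughly 1.6x denser, which is the reversal the diagram illustrates.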


IBM talked about their High Performance 14nm SOI FinFET CMOS Technology with 0.0174μm2 embedded DRAM and 15 Levels of Cu Metallization. Of course this is a process that GlobalFoundries will take over when the acquisition of IBM's semiconductor division is complete.

They have a 42nm fin pitch and 80nm contact/poly pitch (so single pattern and cut mask). Metal1 is 64nm pitch. One interesting feature of the process is that they can create decoupling capacitors on-chip without any additional mask; they can make a 31.5uF decap. With the addition of two masks they can make multiple work functions. There is a 5X leakage reduction. The 14nm eDRAM unit cell has been scaled down to 0.0174um2, which provides a unique memory solution for cache-starved processors.

In the Q&A they were asked if they had SiGe in the fins and refused to comment, which may or may not be significant.

Bottom line: Intel is ahead (by their own reckoning). IBM has the perfect process for server processors. But I don't expect to see competitive SoCs out of Intel before TSMC. Competitive microprocessors from IBM, sure, although they are not in the merchant market. Competitive microprocessors ahead of TSMC, obviously. But SoCs, let's see how it pans out.

More articles by Paul McLellan…


Chinese Apple In Trouble – What to look forward?

by Pawan Fangaria on 12-15-2014 at 7:00 pm

Xiaomi, an aspirant to top the smartphone market through its disruptive entry into the maturing mobile phone market with high quality smartphones at rock bottom prices, is hitting roadblocks in its second largest market, India. It surreptitiously entered today's top mobile phone market on its home ground, China, and in just ~3 years it's already #3 in the worldwide mobile market. See Paul's blog – Xiaomi Already #3 in Smartphones Behind Samsung and Apple
Continue reading “Chinese Apple In Trouble – What to look forward?”


Winning the IoT protocol battle with DSP

by Don Dingee on 12-15-2014 at 2:00 pm

There are too many IoT protocols. Way too many. Anyone who says one single protocol will be the winner from end-to-end in all IoT applications and markets is smoking something. Software defined, multi-protocol gateways are the only hope on the IoT – and DSP cores enable this strategy. Continue reading “Winning the IoT protocol battle with DSP”


Jean-Louis Gassée on Intel and Mobile

by Paul McLellan on 12-15-2014 at 7:10 am

I came across a very interesting article/blog written over the weekend by Jean-Louis Gassée on Intel and mobile. It covers some similar ground to several of my blog posts on the topic but also has some new facts. And it has additional credibility since Jean-Louis was head of product development and worldwide marketing at Apple (pre-iPhone).

He also refers to an interview from last year that I hadn't read, with Paul Otellini, Intel's then-CEO, on his last day. Apparently Jobs gave Intel the chance to build the processor for the first (and so presumably subsequent) iPhones. Intel turned them down. Otellini said: “We ended up not winning it or passing on it, depending on how you want to view it. And the world would have been a lot different if we'd done it. The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do. At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.”

That was the day Intel failed in mobile. This would have been an ARM-based chip. But at the time Intel had an ARM-based division called Xscale (the old Digital StrongARM) who were desperate for business. Eventually they sold off the whole division to Marvell in 2006.

When I was at VLSI I worked for a guy called Cliff Roe who said “I wouldn’t want to be Intel’s next CEO [this was in the Andy Grove era]. He has to find a way to educate the company and Wall Street that they can’t go on making those kind of margins.” Well they still have those kind of margins (Intel’s gross margin is an astounding 65%). But it is also a genuine problem. If Intel had won Apple’s business they would now be building about 250M application processors per year at a cost of, say, $20-40 which is a $5-10B business, significant given Intel’s total business is around $60B. But it would be at a much lower margin since the ARM market is competitive and the x86 market is not (AMD is the only competitor and not a strong one). Whether Wall Street would treat this as good or bad is unclear. They love Intel’s margins.
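The back-of-envelope behind those numbers, using the article's own estimates (which are themselves rough):

```python
# Back-of-envelope using the article's own estimates.
units_per_year = 250e6        # application processors Apple ships annually
asp_low, asp_high = 20, 40    # assumed price per chip, USD
intel_total_revenue = 60e9    # Intel's annual business, per the article

rev_low = units_per_year * asp_low
rev_high = units_per_year * asp_high
print(f"Hypothetical Apple AP business: ${rev_low / 1e9:.0f}B to ${rev_high / 1e9:.0f}B")
print(f"Share of Intel's revenue: {rev_low / intel_total_revenue:.0%} to "
      f"{rev_high / intel_total_revenue:.0%}")
```

So the hypothetical Apple business would be roughly 8-17% of Intel's top line, significant in revenue but dilutive to those 65% gross margins.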


Intel's entire strategy has been that they are 3 years ahead of everyone else in manufacturing and process. Indeed they highlighted this at their recent analyst day. The chart is also a bit self-serving. Intel introduced FinFET at 22nm, compared to TSMC at 16nm. But TSMC introduced double patterning at 20nm compared to Intel at 14nm. I have no insight into whether Intel's 14nm yield problems have anything to do with the double-patterning learning ramp, but it is certainly the biggest difference from 22nm.

But they need to ship $50 of negative revenue with each tablet processor because they are so competitive! And even in their main business a lot of Broadwell is being pushed out to the middle of next year. I think what actually happens is that Intel has a process lead but only for microprocessors where their price is so high that they don’t need the process to be truly competitive. Everything else has to run in old processes making them non-competitive too.

Here they were in 2012: Intel dismisses ‘x86 tax’, sees no future for ARM or any of its competitors

How’s that going a couple of years later? ARM is still solidly entrenched in mobile and the rumors that Apple will switch their MacBook Air and maybe Pro to ARM are getting stronger.

Now Intel will roll mobile in with the non-server processors, so the fact that they are currently losing $1B per quarter on it (perhaps $7B over the last two years) will be hidden from view. Or maybe they are quietly giving up, who knows?

But the next big thing is the Internet of Things (IoT), and that is a perfect market for Intel with lots of differentiation, because…nobody has a clue. And margins in IoT are likely to be even lower than mobile, and ASPs lower still (these will not be mammoth chips in 10nm). I believe most IoT devices will be built on older nodes, so whether Intel is 3 years ahead or not is irrelevant.

See also Intel Quits Mobile
See also Intel is Not Quitting Mobile
See also Intel Quarterly Results


More articles by Paul McLellan…


NoC IP boosts SoC reliability, fault tolerance

by Majeed Ahmad on 12-14-2014 at 7:00 pm

System-on-chip (SoC) devices are becoming increasingly complex as they add functionality, yet they need to be more reliable and fault tolerant for automotive, aerospace and industrial electronics.

Arteris Inc.—which invented the network-on-chip (NoC) interconnect technology back in 2006—is now offering the FlexNoC Resilience Package to allow SoC designers to augment network-on-chip interconnect IP to boost safety and reliability in mission-critical electronics.

“It is part of the computational consolidation of the multicore SoC designs as CPU-only safety mechanisms like ECC and dual-core lockstep (DCLS) are not sufficient for automotive, aerospace, defense, industrial equipment and other electronics markets requiring fault tolerance,” said Kurt Shuler, VP of Marketing at Arteris. “The network-on-chip interconnect IP solutions like FlexNoC can serve as the central nervous system of an SoC with 150-200 blocks of communication.”

The SoC devices for safety-related applications in the automotive, industrial and medical markets usually use ARM Cortex-R5 and Cortex-R7 processors. These CPU core IPs implement techniques like ECC, parity data protection, DCLS redundancy, duplicate internal memories, safety checkers, and built-in self-test (BIST).

The CPU-only approach, however, falls short in providing end-to-end protection of any or all IP-to-IP communication within the SoC. That's where resilience features for on-chip interconnect—like FlexNoC—come into play, providing support for ARM Cortex-R5 and Cortex-R7 processor port checking.

The FlexNoC Resilience Package enables the implementation of data protection and control features, and it doesn’t require the replacement of existing fabric IP or tools. The easy partitioning of SoCs means that IC designers can take an existing chip and specify which parts of an SoC require resilience and which do not.


FlexNoC Resilience Package block diagram

What is resilience? It’s the ability to maintain an acceptable level of service in the face of faults and challenges to normal operation. Digital ICs can fail in many ways. There could be glitches in power supply or clock supply leading to transient electrical problems, or there could be soft errors or physical damage.

How network-on-chip works

Tom Hackenberg, Principal Analyst, Automotive Embedded Processors, IHS Technology Inc., said: “A growing number of chipmakers are turning to safety and security optimized network-on-chip subsystems for SoCs, such as FlexNoC Resilience Package, to lower the development costs and time it takes to achieve the ISO 26262 certification, enabling both media-intense processing and certifiable mission-critical solutions in an integrated SoC.”

Take the automotive industry as a case study. Automotive OEMs are using high-performance SoC solutions common to the wireless market amid the growing demand for media-rich and telematics applications. However, they have to merge these SoC solutions with ISO 26262- and ASIL-certified MCUs and CPUs, a time-consuming and intensive effort, so that these SoCs can provide safe and reliable control systems. According to Hackenberg, while it's common to design separate electronic control units, that approach can increase cost and potentially add to driver distraction.

That’s where network-on-chip subsystems for SoCs come into the picture as a valuable piece of technology. The network-on-chip IP solutions like FlexNoC Resilience Package protect safety-critical portions of the entire CPU, SoC interconnect and memory path, and thus help OEMs create more reliable and fault tolerant systems in shorter time and with lower cost. Semiconductor vendors like Altera and Renesas have been using FlexNoC Resilience Package to make their SoC designs faster and more dependable.

More product details are available in the technical paper, “SoC Reliability Features in the FlexNoC Resilience Package.”

Image credit: Arteris Inc.

Majeed Ahmad is author of Age of Mobile Data: The Wireless Journey To All Data 4G Networks that chronicles the evolution of mobile data technology and how that eventually led to pure data LTE network architecture.