A Powerful Case for the ARC SEM Processor
by Bernard Murphy on 09-12-2016 at 8:06 am

Building devices for the IoT has become especially challenging thanks to two conflicting requirements. The device has to be small and ultra-low power in most applications, but in many of those applications it also has to provide a high level of security, especially to defend high-value targets like smart metering, payment terminals, embedded SIM cards and mobile and wearable payment systems. There was a fantasy for a while that security heavy-lifting could be handed off to the cloud, but that idea died a quick death when we realized that remote security comes with significant latency problems, man-in-the-middle exposure and potentially worse power implications than you would find in local security.

But while local security consumes less power than a remote option, it consumes more power than no security. So when you’re trying to prove you have the least power-hungry yet still secure solution, differences in PPA profiles between different security solutions really matter.

We should also understand that many IoT devices are deployed with an expectation of long lifetimes and needing at most infrequent physical monitoring. Therefore attackers can, with little personal risk, install equipment around a device to inject faults, jiggle the power supply or use light (on a decapped device) to jiggle state elements, and they can steal keys by monitoring bus activity or even extract high-value keys through side-channel analysis on power rail variations, instruction timing or EMI emissions.

Traditional software attack vectors will also be popular, since these low-power devices cannot afford traditional software defenses; malware exploiting well-known weaknesses like buffer overflows can potentially inject itself into privileged operation modes or find other opportunities for tampering.

But attacking a single device is generally not going to be the end goal. The big payback for an attacker is to find an exploit which can be reused on many targets. This is where security through diversity is an important part of a system-wide defense and where, I believe, there may be an unexpected weakness in over-reliance on a dominant CPU architecture. Interesting targets for hackers have to promise significant financial return or at least significant bragging rights in hacker circles. A successful exploit which can only compromise a limited number of targets offers neither. Which doesn’t rule out the possibility of an attack, but it does make you a much less interesting target.

The Synopsys ARC SEM architecture offers solutions to address each of these needs. First, the architecture itself leads the industry in PPA, so you start with an ultra-low-power solution which also has built-in security.

The architecture provides multiple defenses against attack, some well-known, others quite intriguing for support of security through diversity, even between devices. In this latter class, while ARC is already well-established in support functions like audio and video in mobile apps, in home automation and in automotive and disk controllers, Synopsys acknowledges that it’s not the market leader in embedded CPUs. But that position puts them lower on the priority list for attacks – see above.

Second, the ARC processor extension technology (APEX) helps a chip-maker further increase diversity. Custom instructions added to the base set further complicate attacks like differential analysis because the instruction-set reference is no longer completely accessible. And third, the pipeline is very tamper-resistant because instructions and data are read from memory encrypted, using scrambled addresses; these are unscrambled/decrypted in flight for computation and are never stored in plaintext. The development team has control over this process and can even make it differ from device to device.

Other defenses against side-channel attacks include support for uniform instruction timing and for timing and power randomization. (Synopsys also offers a CryptoPack solution for cryptography algorithms using these features.) If you need JTAG access, they offer a challenge/response mechanism to support a secure JTAG option, though I expect most would advise fusing off access through that port before shipping. The SEM core also provides a secure memory protection unit supporting 16 regions, with per-region scrambling and encryption.
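
To see why uniform timing matters, here is a minimal software-level sketch in Python; this is a generic illustration only, not Synopsys CryptoPack or ARC code. A naive byte-by-byte comparison leaks, through its run time, how many leading bytes of a guess were correct, while a constant-time comparison gives an attacker no timing signal.

```python
import hmac

# Naive comparison: returns as soon as a byte differs, so the time taken
# leaks how many leading bytes of the guess were correct.
def naive_equal(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

# Constant-time comparison: hmac.compare_digest examines every byte
# regardless of where the first mismatch occurs.
def safe_equal(secret: bytes, guess: bytes) -> bool:
    return hmac.compare_digest(secret, guess)

if __name__ == "__main__":
    key = b"example-key-0001"                     # hypothetical key, illustration only
    print(naive_equal(key, b"example-key-0000"))  # False, but timing leaks information
    print(safe_equal(key, b"example-key-0000"))   # False, with uniform timing
```

Hardware features like uniform instruction timing and randomization extend the same idea below the software level, where a compiler or library cannot reach.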


The ARC SEM processor offers these and other security functions and interfaces for a comprehensive security solution on which you can build a full-featured Trusted Execution Environment. Software control is managed through SecureShield, which provides control of privilege levels, memory region access and scrambling/encryption in the pipeline, and secure peripheral and IP access, these together being managed at the OS level through a microvisor to support the creation and management of containers.

Still, you may think, “why not just use a better-known solution?” The first part of the answer has to be power. If you need the lowest-power solution, you have to go with the CPU that meets that objective while also offering strong security, independent of the supplier. Market adoption isn’t a problem – there are plenty of ARC-based systems in production. The SEM core builds on that proven platform, and if you know anything about Synopsys, you know they are not fans of research projects. They told me they saw customer demand to address gaps in the market: legacy microcontroller-based solutions needing an upgrade, with an emphasis on ultra-low power and security in a small form factor, and emerging applications with similar needs. In both cases, the ARC SEM processor is targeted at tradeoffs between technical and market needs where a default processor choice doesn’t necessarily fit well.

Finally, give a thought to that diversity topic. If a clever hacker figures out a way into a smart meter, are you sure that payments systems, machine controls and grid management will never be at immediate risk from the same attack? There are some interesting differentiation possibilities in being able to say your security systems don’t share DNA with competitive solutions so are intrinsically firewalled from attacks on mainstream CPU platforms (and also from attacks on systems based on similar but tweaked DNA). You can learn more about the complete Synopsys ARC Processor family by clicking HERE.



SoC FPGAs for IoT Edge Computing
by Claudio Avi Chami on 09-11-2016 at 4:00 pm

One of the reasons for the explosive growth of IoT is that embedded devices with networking capabilities and sensor interfaces are cheap enough to deploy at a plethora of locations.

However, network bandwidth is limited. Not only that, but network latency can be seconds or minutes. By the time the sensor data reaches the centralized computers, its value for decision making could be lost. In other words, for an IoT solution to be effective, it should not only deliver meaningful data securely (and filter it as much as possible to avoid network congestion), it should also analyze it and act upon it at the origination point of the data. At the very edge of the network.
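
As a toy illustration of "filter at the edge, send only what matters", here is a minimal Python sketch; the threshold, window size and send callback are hypothetical, chosen purely to show the idea.

```python
from statistics import mean

# Toy edge filter: average raw samples locally and forward a reading to the
# cloud only when it drifts meaningfully from the last value sent, so the
# network carries events rather than every raw sample.
THRESHOLD = 0.5          # hypothetical change worth reporting, in sensor units
WINDOW = 10              # samples averaged per reading

def edge_filter(samples, send):
    last_sent = None
    for i in range(0, len(samples), WINDOW):
        reading = mean(samples[i:i + WINDOW])
        if last_sent is None or abs(reading - last_sent) > THRESHOLD:
            send(reading)          # only now does data leave the edge device
            last_sent = reading

if __name__ == "__main__":
    # 100 mostly-flat samples with one step change; only a couple of readings go out
    data = [20.0] * 50 + [22.0] * 50
    edge_filter(data, send=lambda r: print(f"sent {r:.2f}"))
```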

Applications for edge computing that call for intelligent sensors include:

  • Smart buildings
  • Autonomous transportation
  • Machine control
  • Healthcare
  • Augmented reality
  • Voice, image and video recognition

IoT edge applications can be implemented with ASICs, FPGAs or CPUs. SoC FPGAs combine the advantages of the last two, namely CPU and FPGA.

A typical SoC FPGA these days includes a powerful processor plus FPGA fabric. Both Xilinx and Altera offer SoCs based on multicore ARM Cortex processors alongside hundreds of thousands of logic elements, embedded memory and DSP blocks.

The combination of CPU and FPGA brings the following advantages:

  • Increased flexibility and reconfigurability
  • CPU offloading of data intensive processing where FPGAs excel:

    • Parallel implementation of DSP algorithms (see the sketch after this list)

      • Data filtering
      • FFT analysis
      • Software Defined Radio – SDR
      • GPS
    • Data acquisition, response, filtering and analysis for slow and fast sensors

      • SPI
      • I2C
      • RS-232, RS-485
      • Bluetooth
    • Physical-layer and lower-layer implementation of diverse networking protocols

      • Ethernet
      • WiFi
    • Security implementation (data encryption and data access validation)
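
To make the DSP-offload point concrete, here is a small NumPy sketch of the kind of FFT-based filtering an SoC FPGA would perform in its DSP blocks. The sample rate, cutoff and signal are made up for illustration; a real design would implement this in fabric rather than in NumPy.

```python
import numpy as np

# Illustration of the DSP work an FPGA might offload from the CPU:
# take a noisy sensor signal, run an FFT, zero out everything above a
# cutoff frequency, and transform back.
fs = 1000.0                                  # hypothetical sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
spectrum[freqs > 20.0] = 0                   # crude low-pass: keep below 20 Hz
filtered = np.fft.irfft(spectrum, n=t.size)

print("raw std: %.3f, filtered std: %.3f" % (signal.std(), filtered.std()))
```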

At the last ISDF (Intel SoC Developer Forum), in 2016, Fujisoft presented its current solution for edge computing based on an Intel/Altera SoC FPGA, as well as its vision for the future of SoC FPGAs for IoT edge applications. In the future, its plan is to use the growing capabilities of FPGAs with powerful embedded CPUs and high-bandwidth memory access to implement, manage and reconfigure edge solutions based on genetic and deep learning algorithms.

Image source:
Fujisoft – Intel SoC Development Forum 2016

My blog:
FPGA Site

For additional reading:
Edge computing (Wikipedia)
Edge computing, the door to IoT data kingdom (GE)
Why edge computing is crucial for the IoT (RTI)

Another article from me: VHDL pseudo-random number generator tutorial


How Rapidly the Robots Will Rise
by Roger C. Lanctot on 09-11-2016 at 12:00 pm

“For car buyers, an end to the days of dickering?” reads the headline across the center of the front page of the Washington Post this morning. No, it’s not an article about new tools to make car buying easier. It’s a story about electric vehicle maker Tesla Motors’ impact on car retailing.

The article details the state by state battles Tesla continues to fight for the right to open showrooms and actually show, tell and sell Tesla vehicles. The article notes that more than 20 U.S. states allow sales of Tesla vehicles from company showrooms, but highlights the growing interest across the country in promoting the sales of electric vehicles generally from any car maker including Tesla.

The article arrives as I am reading Martin Ford’s “Rise of the Robots” which details the forces disrupting service (among other) industries such as retail – as industry after industry faces the kiosk-ification of product sales (think Redbox). Decades ago I was an editor at Computer Retail Week which was part of CMP (now United Media) which owned Computer Reseller News.

A once-mighty and weighty publication, Computer Reseller News effectively chronicled the rise and fall of computer (value-added) reselling. I still remember the special reports and editorials pointing out the flawed model that Dell, the Tesla Motors of the computer industry, was hopelessly pursuing vis-a-vis Compaq. (Compaq?)

Computer Retail Week faded along with the likes of Computer City, CompUSA, Incredible Universe, Circuit City and many smaller regional computer superstore chains (notably Micro Center and Fry’s continue to survive and thrive). Blockbuster is the most frequently cited poster child of retail consolidation and decline today, but regional and national chains serving a range of consumer needs continue to decline in the face of direct sales. Been to Borders lately for a book?

But reselling cars is a different proposition, you might say. Consumers want to touch and feel the product. That’s why dealers have massive lots with loads of inventory.

If you want a flavor of the future, visit Europe where most dealers carry little inventory and as much as 50% of car purchases are built to order. In the U.S. consumers still like to kick the tires… for now.

The National Automobile Dealers Association touts the broad social, political and economic impact of dealers. According to 2015 statistics from NADA:

  • There are 16,545 franchised new car dealers in the U.S.
  • Dealers account for 2,305,159 jobs – including 1,088,001 direct and 1,217,158 induced and indirect jobs
  • An average of 67 employees per dealership
  • $63B payroll – $52,144 average annual earnings – $20.3B state and Federal income taxes paid
  • $862.7B total retail sales – 17.9% share of state sales

The impact of dealers is more than economic. Dealers influence local politics, interact with most banks to conduct their business, and are active in their communities. Is Tesla sponsoring any local charity events or softball teams?

The case for dealers is strong, but when faced with the rise of the robots in the form of direct sales it’s important to remember that sympathy for dealers has its limits. Everyone has a story reflecting unpleasant interactions with dealers. My Toyota dealer recently tried to get me to replace a left control arm ($2,000+) that my independent repair shop told me was not necessary. My son’s Ford dealer swapped two cars out from under him AFTER all the paperwork was signed before he finally drove away a “happy” customer in his new Fiesta.

There’s a reason they’re called “stealers.”

If Tesla’s popping of the franchise dealer bubble were to open the floodgates for direct vehicle sales, few consumers would be shedding tears. But the legal line in the sand stands, and Michigan Tesla buyers, like those in other states that do not allow Tesla stores, will have to turn to Ohio or Illinois if they want to buy a Tesla.

The key issue is that electric vehicles like the Tesla Model S require little or no service, undermining a core consumer value proposition for dealers. Also, the structure of the industry has hardened such that new-car makers lack the DNA to deal directly with consumers. For car makers, selling cars is a B2B proposition. Car makers sell to dealers.

But the B2B bubble is quivering on an entirely separate front. While the Washington Post article is focused on the new sales model enabled by the introduction of EVs, the growing role of car sharing and driverless vehicles is also emerging as a dealerless vehicle-oriented value proposition. Why buy a car, if I can borrow one?

Dealers have ample reason to be concerned. NADA lobbyists are surely mobilizing not only to fight the opening of Tesla stores, but to preserve a role for dealers in the emerging world of driverless cars. Lobbying alone cannot overcome a lack of creativity.

It’s time for dealers to turn in the direction of the skid. Dealers need to confront the structural weaknesses in their existing sales and service model, own up to their unfriendly consumer-facing processes and rework their operations to tackle the changing transportation landscape that is emerging faster than they realize.

Visit any dealer on any given day and watch the sales process unfold. Watch the delaying tactics, the wearing down of the customer(s), the dissemination of misinformation and disinformation. If new car dealers want consumer support and sympathy for their good works in the community they need to change their ways in the showroom and the shop. Otherwise, the robots will rise earlier than we think and we’ll all be buying or borrowing our next car with the press of a button on an app.


IOT and Your Utilities Services – Big Savings Coming
by Bill McCabe on 09-11-2016 at 7:00 am

The Internet of Things has progressed rapidly in the last decade, providing numerous benefits for consumers, industries, and even government organizations. For consumers, it can be difficult to break through the noise to see the most important benefits of IoT, especially when the spotlight is often focused on entertainment and convenience services. One benefit of IoT that is sometimes underrepresented is the ability of new technologies to increase the efficiency and reduce the costs of utility services.

Data from the Open & Agile Smart Cities initiative in Europe estimates that gross savings in a moderately sized smart city could be as much as 15% for water, 25% for waste management, and 50% for electrical lighting. Although these estimates might seem generous, they do reflect the optimism of other developed markets. As an example, data from the New Jersey Institute of Technology suggests that smart energy sensors could save the United States up to $1.2 billion per year in the largest cities.

A Proven Case Study
The figures are exciting, but how exactly do they directly impact consumers? To answer this, we can look at how smart water sensors have benefitted residents in the city of Dubuque in Iowa, U.S.

In 2009, the city developed programs to introduce IoT connected sensors to consumer utility lines. Rather than traditional metering systems, residents and businesses were connected to smart meters that could automatically report data back to utility providers, allowing for real time usage monitoring and reporting. With the new meters, residents were better able to monitor their real time water usage and costs, which allowed for a 7% reduction in total water usage. The same system allowed for speedy detection of water leaks and flow problems, which were proactively monitored by the utility company.

Because consumers had immediate access to their usage statistics, they could also identify leaks, faucets, or appliances in their homes that could be contributing to water waste. Considered a huge success, a similar system was adopted in the Australian city of Townsville, with similar positive results.

Considering this example of how IoT sensors have benefitted water utilities, it becomes easy to see how comparable systems could benefit electric and gas utilities. The savings aren’t just found in reducing usage and detecting leaks or faults, but also in reducing the cost of actually monitoring utility usage. Machine-generated data can be interpreted by computers, eliminating the need for manual data interpretation. Meter reading at the service termination point also becomes unnecessary.

Wider Benefits that Integrate with Smart City Concepts
Using smart meters connected to the Internet of Things is clearly the future of utility metering, but there are still benefits beyond what has been discussed. With a smart city that proactively collects and interprets data, there are possibilities to improve utility infrastructure, identify trends, and plan utilities for new developments based on existing data.

Overall, the potential cost savings and benefits will far outweigh any investment that is made to modernize existing utility networks. Any city of significant size should be able to clearly measure the benefits of IoT, and the adoption of new technologies will serve the interests of both service providers and end consumers.

For more info about IOT check out our new website www.internetofthingsrecruiting.com


TSMC and Solido to Share Experiences with Managing Variation in Webinar
by Tom Simon on 09-10-2016 at 7:00 am

TSMC knows better than anyone the effect that variation can have at advanced process nodes. Particularly in memory and standard cell designs, variation has become critical because of its effects on yield and because of the high cost of compensating for it. Smaller feature sizes combined with lower voltage thresholds have pushed designers to look harder for effective solutions. A number of years ago TSMC chose to work with Solido Design Automation to solve variation problems.

On September 28th TSMC and Solido are teaming up to share what they have learned about dealing with variation at advanced process nodes. They are hosting a webinar where they will talk about variation in memory and standard cell designs. The focus of the webinar will be on how TSMC uses Solido’s new Variation Designer 4.

Jacob Ou from TSMC will be speaking. He is a technical manager at TSMC with extensive experience in simulators, PDKs, routing, and supporting customer designs. Solido’s Kristopher Breen will also be speaking. He is vice president of customer applications at Solido Design Automation and also has extensive experience in the development, deployment and support of variation-aware design and verification solutions.

The components of a memory design need to be verified at high sigma. Without adequate verification methods, designers often resort to adding redundancy, increasing supply voltages or running at lower clock rates. All of these potential fixes have high costs and can affect a product’s success in the marketplace. Solido’s Variation Designer 4 includes several powerful, proven technologies for solving these problems: one is their High Sigma Monte Carlo and the other is their Hierarchical Monte Carlo. Better verification leads to more competitive memory products.

Conventional methods of standard cell verification for cell delays and transition times are simply impractical from a compute resource and tool license perspective. Yet standard cells need to be carefully analyzed, because the effects caused by variation do not follow a classical Gaussian curve. In fact, the distributions have extremely long tails, which makes adding arbitrary amounts of margin ineffective as a definitive way to guarantee chip performance. Once again, high-sigma verification is required to ensure a high success rate. Solido has two technologies for helping out with standard cell verification: Fast Monte Carlo can be used on large batches of standard cells out to three sigma quickly and reliably, and High Sigma Monte Carlo accelerates high-sigma verification so that it is feasible for standard cell libraries.
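
Some back-of-envelope arithmetic shows why brute-force sampling breaks down at high sigma. This is ordinary Gaussian statistics, not Solido's High Sigma Monte Carlo algorithm, which exists precisely to avoid this sample count.

```python
from math import erfc, sqrt

# Back-of-envelope: the one-sided probability of exceeding k sigma for a
# Gaussian, and roughly how many plain Monte Carlo samples are needed to
# observe ~10 failures at that level.
def tail_prob(k):
    return 0.5 * erfc(k / sqrt(2.0))

for k in (3, 4, 5, 6):
    p = tail_prob(k)
    print(f"{k} sigma: tail prob ~ {p:.2e}, ~{10 / p:.1e} samples for 10 fails")
```

At 6 sigma the tail probability is around 1e-9, so observing even a handful of failures by brute force would take on the order of ten billion simulations, which is exactly the cost that specialized high-sigma methods are built to avoid.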

One of Solido’s advantages is its extensive experience with variation in semiconductor designs. It should be very interesting to see what Jacob and Kristopher have to say about their experiences in these two areas. The webinar will be held on September 28 at two different times: 10 AM Pacific Time and 10 AM China Standard Time. It should be interesting for IC designers, design managers, CAD managers and design directors.

Here is the link for registering for this webinar on the Solido website.


Somebody actually REDUCED their IoT forecast?
by Don Dingee on 09-09-2016 at 4:00 pm

Some analysts are starting to get the idea that their credibility is worth something. Research firm IC Insights has actually dialed back its latest IoT semiconductor projection through 2019, although still calling for what would be quite robust overall growth.


Power-Aware Debug to Find Low-Power Simulation Bugs
by Daniel Payne on 09-09-2016 at 12:00 pm

When I worked at Intel designing custom chips my management would often ask me, “Will first silicon work?” My typical response was, “Yes, but only for the functions that we could afford to simulate before tape-out.” This snarky response would always cause a look of alarm, quickly followed by a second look of disbelief, and so it goes on in a similar vein today. You can design an IC and get first silicon working well enough to begin selling it, however the big challenge is how to find all of those functional bugs and fix them before tape-out happens.

On Tuesday night I went out for my weekly group ride with other road bike enthusiasts here in Tualatin, however my Garmin 520 bike computer showed only a 2% charge, so I quickly opted for plan B which was to use my Android phone and the Strava app instead. I started up the app, clicked the big Green Go button. I then clicked the power button lightly and the screen shut down to save power, then I did my bike ride. At the end of the ride, I clicked the power button and pushed the Red Stop button. To my horror the app didn’t save the GPS route of my ride at all, something glitched in the app while the screen was powered down. I cannot say that it was the fault of the Android phone, or just the app, but it illustrates what can happen when a complex device like a smart phone tries to conserve power by having multiple power-saving modes.


Garmin 520 – please make the battery last longer


Strava app – please make it work reliably when the phone goes into power savings mode

To help you avoid a field failure like mine, you would want to view an archived webinar from Synopsys on power-aware debug.

Related blog – Catching low-power simulation bugs earlier and faster

The three main ideas in this second webinar of the series are:

  • How visualization of the power architecture can help identify power strategy and connectivity issues upfront
  • How to use annotated power intent on source code, schematics and waveforms to rapidly root-cause power-related errors back to UPF/RTL
  • How to debug unexpected design behavior such as Xs caused by incorrect power-up/down sequences etc

Synopsys has two people in this webinar, Vaishnav Gorur and Archie Feng, and they go over the methodology of using a power architecture defined in UPF while the design is specified with an RTL description. Verifying that your power architecture is implemented properly is a big challenge because of how complex power reduction schemes have become, which enable longer-lasting battery-powered devices like the Garmin cycling computers and popular smart phones.


Using Verdi to debug an SoC


Checkout the Upcoming Synopsys Power Webinar
by Bernard Murphy on 09-09-2016 at 7:00 am

This is part 3 of a series of 4 on low power design, scheduled for September 21st at 10am. Kiran Vittal and Ken Mason will be discussing using the SpyGlass Power solutions (analysis and verification) to optimize power at RTL. Atrenta always had a leading position in this area, and a year after their acquisition by Synopsys, based on how well I hear the acquisition is working out, I expect they have further solidified that position. I’m very familiar with what Synopsys has to offer here, since I was at Atrenta for many years – we were rapidly displacing competitive solutions, and I expect that story continues.

REGISTER HERE

Design for low power is a complex problem spanning system architecture and application software all the way down to design process and library selection. This series looks at multiple aspects of the RTL design problem:

  • (Part 1): Static checking for the RTL (for correct and consistent UPF specification, for example) and simulation to detect potential dynamic conflicts

  • (Part 2): Using Verdi power-aware debug to catch power bugs

  • (Part 3 – this upcoming webinar): Optimizing your power design through power estimation and finding opportunities to further reduce power

    • Atrenta led the field in accuracy in power estimation at RTL, thanks to obsessive work on correlating the accuracy of our fast synthesis with physical synthesis, in matching macro choices, clock and signal buffering, and in building correlation databases against implemented designs. Still, this is always a bit of a challenge when you don’t also own the physical synthesis tools. Acquisition by Synopsys solved that problem, so correlation can only have improved further. We also had very strong solutions in micro-architecture optimization for low power (formally detecting and proving ways to add or strengthen gating based on initial user choices) and what-if analysis to guide improved user selection of Vt mixes, power gating, macro-level clock-gating and all the rest of the spectrum of power-control options.

  • (Part 4): Using Verdi technologies (Siloti correlation and replay simulation) to bridge between RTL simulation data and gate-level accuracy

If you still think low power design is just about specifying clock gating pragmas for synthesis, take a deep breath. Low power design, as supported by these kinds of analysis, has become a major area for differentiation with many dimensions and multiple opportunities to make very good or very bad choices. The broad palette of low-power design solutions, connected to implementation technologies, is a big part of why Synopsys has such strength and depth in this area. You need to know what methods and tools are being used in the industry today to help you stay at the forefront of competitive low-power design.
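
As a reminder of what all these RTL techniques are attacking, here is the textbook first-order CMOS power relation; this is a generic equation, not something specific to SpyGlass Power:

```latex
P_{total} \;\approx\; \underbrace{\alpha \, C_{eff} \, V_{DD}^{2} \, f}_{\text{dynamic (switching) power}} \;+\; \underbrace{V_{DD} \, I_{leak}}_{\text{static (leakage) power}}
```

Clock gating and activity management attack the switching factor α, voltage and frequency choices attack V_DD and f, while Vt mix and power gating attack the leakage term; the webinar series covers how to find and verify those opportunities at RTL.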

REGISTER HERE

Web event: SpyGlass Power: Comprehensive Power Optimization Solution for Faster RTL Signoff (Part 3 of 4)
Date: September 21, 2016
Time: 10:00 AM PDT
Duration: 60 minutes

In an electronic world driven by smaller devices packed with more functions, power becomes a critical factor to manage. Power consumption leads to heat dissipation issues that can affect device reliability if not controlled, or if the device is not cooled. Moreover, for mobile devices such as smartphones or tablets that run on batteries, low power consumption is essential. For a holistic solution to the power problem, it is important that it is addressed at the source, i.e. the RTL design stage.

In this webinar, we will discuss how SpyGlass Power delivers an integrated early power analysis and exploration solution that includes estimation, profiling, reduction and exploration. SpyGlass Power leverages the industry-leading SpyGlass Platform and GuideWare methodology for an easy-to-use and comprehensive flow for RTL signoff.


GlobalFoundries Enhances FDSOI Roadmap with 12FDX
by Eric Esteve on 09-08-2016 at 4:00 pm

Last year, GlobalFoundries filled a competitive gap by offering FD-SOI technology at 22nm, with better performance than 28nm; you may have read about the news on SemiWiki. Timing is important: Samsung announced FD-SOI support one year before GlobalFoundries (in 2014), but at 28nm. The announcement made by GlobalFoundries today, “GLOBALFOUNDRIES Extends FDX™ Roadmap with 12nm FD-SOI Technology”, could open more doors for FD-SOI adoption, especially in mobile and automotive.

I say “could” because you also read this in the PR: “Customer product tape-outs are expected to begin in the first half of 2019”. Many things can happen in three years in the semiconductor industry, like Samsung deciding to also extend their FD-SOI roadmap or TSMC finally deciding to support FD-SOI, but this is pure speculation.

If you compile the various quotes from Linley Gwennap, founder and principal analyst of the Linley Group; G. Dan Hutcheson, chairman and CEO of VLSI Research; Handel Jones, founder and CEO of IBS, Inc.; Dr. Xi Wang, Director General and Academician of the Chinese Academy of Sciences; and Wayne Dai, president and CEO of VeriSilicon, 12FDX will allow addressing the intelligent systems of tomorrow across a range of applications: from mobile computing and 5G connectivity to artificial intelligence and autonomous vehicles, cost-sensitive mobile and IoT products, IoT and automotive, connected systems for intelligent clients, 5G, AR/VR, and automotive markets. I have piled these different quotes together on purpose, so we can synthesize and extract which applications will get the highest benefit from this low-power, cost-optimized technology.

12FDX could be the great opportunity for FD-SOI adoption in mobile, which is today the market segment generating the largest semiconductor sales. The two leading mobile manufacturers (who also design their own application processors), Samsung and Apple, are locked in a race which doesn’t leave room for an innovation like FD-SOI adoption, and FinFET provides undisputed performance together with lower power, if you can pay the incredibly high development cost and definitely higher IC production price. But we have seen the emergence of chip makers addressing the cost-sensitive mobile segment, which is growing at a faster pace than the pure performance-driven segment.

From a technical standpoint, better power efficiency is a great benefit offered by FD-SOI technology. By nature, FD-SOI has lower leakage than bulk. You can also take advantage of body biasing, delivering more performance when needed by applying forward biasing, and applying a reverse bias to reduce static leakage when high performance is unnecessary. From a marketing standpoint, you can roughly position a 22FDX FD-SOI device as offering (almost) the same performance as 14nm FinFET for a price similar to a 28nm bulk device. We can expect 12FDX to be cheaper than 14nm FinFET. Smarter power consumption and better price should be two important decision factors when selecting the technology to support cost-sensitive mobile applications.
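
To make the body-biasing tradeoff concrete, a first-order model (a generic textbook relation, not GlobalFoundries' published device data) treats the back-gate bias as a roughly linear shift of the threshold voltage:

```latex
V_{T} \;\approx\; V_{T0} \;-\; \gamma_{b}\, V_{BB}
```

where γ_b is the back-gate body factor. Forward body bias lowers V_T for extra speed at the cost of more leakage, while reverse bias raises V_T to cut static leakage when performance is not needed, which is exactly the dynamic tradeoff described above.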

Automotive ASICs supporting the applications of the future (ADAS and more) are in design today, and that will not stop anytime soon. Even if it could be surprising at first, these automotive applications will require computing-intensive devices to support complex algorithms, artificial intelligence or image recognition. The automotive market has always been a cost-sensitive market and this will stay true, ringing one bell for FD-SOI selection. Unlike in the data center, you don’t expect to spend a lot of money just to cool the device, so power consumption rings another bell.

I would add another FD-SOI advantage over FinFET: reliability. Reliability requirements in the automotive market are very stringent. That makes sense, as you are expected to use a car 10X longer than a smartphone. It seems that the fin shape of a FinFET transistor, very tall but narrow, could cause reliability issues… Let’s rank advanced automotive applications among the potential adopters of 12FDX.

What about IoT? If we talk about IoT devices, I have always been skeptical about the fit with any advanced node, FinFET or 12FDX. If IoT applications generate intensive computing in the cloud, then we are talking about server/storage applications, which are not FD-SOI friendly. If it appears that IoT systems will require local computing, intensive enough to justify advanced node usage, then part of the IoT system will benefit from the smart power efficiency offered by body biasing, the lower cost compared to FinFET, and the low power of FD-SOI.

All of the above would be pure speculation in the absence of a solid FD-SOI ecosystem. Just after the PR announcing the FD-SOI roadmap, this news came on the wire, titled “GLOBALFOUNDRIES Unveils Ecosystem Partner Program to Accelerate Innovation for Tomorrow’s Connected Systems”.

The goal is to show that the tools and IP to ease migration to FD-SOI from bulk nodes such as 40nm and 28nm are available:

  • tools (EDA) that complement industry-leading design flows by adding specific modules to easily leverage FDSOI body-bias differentiated features,
  • a comprehensive library of design elements (IP), including foundation IP, interfaces and complex IP, to enable foundry customers to start their designs from validated IP elements,
  • platforms (ASIC), which allow a customer to build a complete ASIC offering on 22FDX,
  • reference solutions (reference designs, system IP), whereby the Partner brings system-level expertise in emerging application areas, enabling customers to speed up time to market,
  • resources (design consultation, services), whereby Partners have trained dedicated resources to support 22FDX technology, and
  • product packaging and test (OSAT) solutions.

“22FDX is increasingly gaining momentum as the platform of choice to build differentiated, highly-integrated system solutions,” said Alain Mutricy, senior vice president of Product Management at GLOBALFOUNDRIES. “Now is the time to step up industry collaboration to enable our customers to accelerate adoption of 22FDX. FDXcelerator will extend the reach of the FD-SOI ecosystem by creating a market place for truly innovative FDX-tailored solutions and services.”

Last, but not least, this commitment from Marie Semeria, CEO of Leti, an institute of CEA Tech (as a reminder, Leti has been very active in supporting FD-SOI technology transfer to GlobalFoundries):

“12FDX development will deliver another breakthrough in power, performance, and intelligent scaling as 12nm is best for double patterning and delivers best system performance and power at the lowest process complexity,” said Marie Semeria, CEO of Leti, an institute of CEA Tech. “We are pleased to see the results of the collaboration between the Leti teams and GLOBALFOUNDRIES in the U.S. and Germany extending the roadmap for FD-SOI technology, which will become the best platform for full system on chip integration of connected devices.”

Even if STMicroelectronics and Samsung have supported FD-SOI before GlobalFoundries, the foundry is the first to offer a clear roadmap, with 12FDX to come after 22FDX. We expect this positioning to open doors for power-conscious, cost-sensitive applications like (low-cost) mobile, automotive or IoT.

From Eric Esteve, IPNEST


FPGAs at Deep Machine Learning
by Claudio Avi Chami on 09-08-2016 at 12:00 pm

The concept of machine learning is not new. Attempts at systems emulating intelligent behavior, like expert systems, go as far back as the early 1980s. And the very notion of modern Artificial Intelligence has a long history: the name itself was coined at a Dartmouth College conference (1956), but the idea of an “electronic brain” was born together with the development of modern computers. AI as an idea has accompanied us since the dawn of human history.

Three recent developments are pushing machine learning forward:

  • Powerful distributed processors
  • Cheap and high-volume storage
  • High-bandwidth interconnection to bring the data to the processors

As in many other fields, machine learning is also seeing development of algorithms that take advantage of the new hardware capabilities.

There are four types of algorithms used in machine learning:

  • Supervised – The vast majority of systems today. These systems are ‘trained’ on past data in an attempt to predict future outcomes (see the sketch after this list).
  • Unsupervised – These systems try to build models of the analyzed process by themselves.
  • Semi-supervised – A combination of the first two, where a small amount of data is ‘labeled’ (i.e. related to known training rules) and the machine uses this as a seed to label the rest of the data.
  • Reinforcement – The algorithm creates its rules through trial and error.
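
As a toy example of the supervised case, here is a minimal Python sketch that "trains" on made-up historical data and predicts a future value; the numbers and the linear model are hypothetical, chosen only to illustrate the train-on-history idea.

```python
import numpy as np

# Toy 'supervised' example: fit a model to past (labeled) data and use it
# to predict a future outcome. The numbers are invented purely to
# illustrate the train-on-history / predict-the-future idea.
hours_run = np.array([100, 200, 300, 400, 500])      # past sensor history
wear_level = np.array([1.1, 2.0, 3.2, 3.9, 5.1])     # measured 'labels'

slope, intercept = np.polyfit(hours_run, wear_level, deg=1)   # train
predicted = slope * 600 + intercept                            # predict an unseen point
print(f"predicted wear at 600 h: {predicted:.2f}")
```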

According to Wikipedia, Deep Learning is “a part of a broader family of machine learning methods based on learning representations of data. An observation (e.g., an image) can be represented in many ways such as a vector of intensity values per pixel, or in a more abstract way as a set of edges, regions of particular shape, etc. Some representations are better than others at simplifying the learning task”.

In the past, most Deep Learning solutions were based on the use of GPUs. However, FPGAs are being seen as a valid alternative to GPU-based Deep Learning solutions.

The main reason for that is the lower cost and lower power consumption of FPGAs compared to GPUs in Deep Learning applications.

Microsoft adopted Altera Arria 10 devices for their convolutional neural networks (CNNs), estimating that using FPGAs would increase their system throughput by roughly 70% at the same power consumption.
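
For a sense of the computation involved, here is a minimal NumPy sketch of the 2D convolution at the heart of a CNN layer; in an FPGA implementation these multiply-accumulates map onto DSP blocks, while plain Python here just shows the arithmetic. The image and kernel are made up for illustration.

```python
import numpy as np

# Core CNN operation: slide a small kernel over an image and accumulate
# multiply-adds at each position to produce an output feature map.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

if __name__ == "__main__":
    img = np.random.rand(8, 8)                         # stand-in for an input feature map
    edge_kernel = np.array([[1, 0, -1],
                            [2, 0, -2],
                            [1, 0, -1]], dtype=float)  # classic Sobel-style filter
    print(conv2d(img, edge_kernel).shape)              # (6, 6) output feature map
```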

A recent article on The Next Platform comments on how Baidu has also adopted FPGAs for deep learning solutions. Teradeep is another company (a startup) developing CNNs, and one of the first to adopt FPGAs as an alternative to GPUs. In May of this year Xilinx announced that it had invested in Teradeep and that the two continue to work closely together to optimize the technology.

My blog: FPGA Site

References:
Statistics and Machine Learning at Scale – Whitepaper
Deep Learning on FPGAs – Past, present and future
Wikipedia entry about Deep Learning
Microsoft CNN
FPGA based deep learning accelerators take on ASICs – The Next Platform article
Teradeep – FPGA accelerated Deep Learning solutions

Another article from me:
Keeping your design files organized