
Quantum Resistance on the Edge

by Bernard Murphy on 03-21-2017 at 7:00 am

I’ve written recently about the trend to move more technology to the edge, certainly to mobile devices but also to IoT edge nodes, driven particularly by latency, communications access and power considerations. One example is the move of deep-learning apps to the edge to handle local image and voice recognition, which would be completely impractical if recognition required a round trip to the cloud and back.


Another example concerns quantum computing and its potential to undermine cryptography, so that anything you think is secure (on-line shopping/banking, personal medical information, the national power grid, national security) will be easily accessed by anyone who can afford a quantum computer (nation states and criminal enterprises). If this is possible at the desktop/cloud level, it should be even more of a risk in mobile and IoT devices.

Conventional cryptographic methods rely on the significant difficulty of solving a mathematical problem, such as factoring an integer computed as the product of two large prime numbers. While the complexity of these problems can be made arbitrarily high, a combination of clever mathematics and distributing the solver over massive networks of PCs has broken some impressively large cryptographic keys. Cryptographers have been able to crank up the size of the key to stay ahead of the crackers, but quantum computing threatens to break through even that line of defense. (Other techniques based on elliptic curves and discrete logarithms are similarly limited.)
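To make the asymmetry concrete, here is a toy Python sketch (with tiny, illustrative primes, nothing like real key sizes): multiplying the two primes is a single operation, while recovering them by trial division already takes around a million steps even at this miniature scale.

```python
# Toy illustration only (not production crypto): building the key is cheap,
# recovering the primes from the product is brute-force expensive.
# Real RSA moduli are 2048+ bits; these primes are just above 10^6.

def trial_factor(n):
    """Return the smallest prime factor of n by trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

p, q = 1_000_003, 1_000_033   # small primes, for illustration
n = p * q                     # creating the key material: one multiply
assert trial_factor(n) == p   # cracking it: ~10^6 divisions even at this size
```

Real attacks use far better algorithms than trial division, but the same lopsidedness between building and breaking is what the security rests on.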

The mathematics of determining how hard it is to crack an algorithm is challenging and in practice leads to upper bounds based on the best-known cracking algorithms, bounds which are progressively refined over time as improved methods are discovered. The best-known solution to the general factorization problem has a complexity (in terms of time taken to solve the problem for a given key size) which grows at a rate slightly slower than exponential with key size.
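For a feel of that growth rate, the best-known classical method (the general number field sieve) has a heuristic running time of roughly exp((64/9)^(1/3) (ln n)^(1/3) (ln ln n)^(2/3)), sub-exponential but still super-polynomial in the key size. A small sketch of how that cost scales:

```python
import math

# Heuristic GNFS cost estimate: exp(c * (ln n)^(1/3) * (ln ln n)^(2/3))
# with c = (64/9)^(1/3). Illustrative arithmetic only, not a precise
# operation count.

def gnfs_cost(bits):
    ln_n = bits * math.log(2)   # ln(n) for an n of `bits` bits
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

# Doubling the key size multiplies the estimated work by many orders of
# magnitude, which is how defenders have historically stayed ahead.
assert gnfs_cost(2048) / gnfs_cost(1024) > 1e8
```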

Enter quantum computing (QC). Skipping the gory details, the point about QC is that for a given number of quantum bits, it can evaluate all possible settings of those bits at the same time. So if the QC had N bits, it could evaluate all 2^N possibilities in parallel. This makes QCs able to solve problems of exponential complexity in reasonable time. Certainly, the integer factorization problem (widely used in production cryptography today) would be completely exposed to machines with a large enough number of quantum bits.
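A toy classical simulation makes the 2^N point concrete: describing N qubits takes 2^N amplitudes, so a single (simulated) oracle application touches every bit pattern at once. This sketch is illustrative only; a real quantum computer does not enumerate the amplitudes this way, which is exactly why it wins.

```python
from itertools import product

# Classical statevector sketch: N qubits in uniform superposition are
# 2**N equal amplitudes. Note the exponential memory cost of simulating
# this classically -- the cost a real QC avoids.

N = 3
amp = 0.5 ** (N / 2)   # each amplitude is 2^(-N/2), so probabilities sum to 1
state = {bits: amp for bits in product([0, 1], repeat=N)}

def oracle(bits):
    """Hypothetical predicate marking the 'secret' setting we search for."""
    return bits == (1, 0, 1)

# One oracle pass acts on all 2**N bit patterns simultaneously,
# flipping the phase of the marked state.
marked = {b: (-a if oracle(b) else a) for b, a in state.items()}
assert len(marked) == 2 ** N
```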

So, RIP cryptography? Not so fast. I wrote about a year ago on lattice-based cryptography methods, specifically designed to be hard for QCs to crack. That’s the thing about math – you invent a way to crack a class of problems, then the mathematicians come up with a new algorithm which defeats your invention. Support for this method was added to OpenSSL back in 2014, though this seems to have eluded many people who write on QC threats to encryption.


But while that solution is good for desktops and the cloud, it’s more problematic for mobile and edge devices, which are much more power- and latency-sensitive (and therefore, as mentioned earlier, prefer to avoid or minimize tasks requiring cloud communication). To address this need, SecureRF now offers a solution using a Diffie-Hellman-like authentication protocol with a 128-bit key that is resistant to known QC attacks; SecureRF says it is also 60x faster than elliptic curve cryptography and uses 140x less energy. The digital signature algorithm is based on group-theoretic methods (built on braid groups), believed to be quantum-resistant. Arrow’s Chameleon96 Community board hosts this solution today.
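For readers unfamiliar with the protocol shape, here is a toy classical Diffie-Hellman exchange: each side publishes a value derived from its secret, and both arrive at the same shared key. Note this modular-exponentiation version is exactly what Shor’s algorithm breaks, which is why SecureRF swaps in a braid-group construction instead (not shown here; the parameters below are illustrative, not production-sized).

```python
import secrets

# Toy Diffie-Hellman over integers mod p. Illustrates the protocol
# pattern only; the modulus is far too small for real security, and
# this discrete-log version is the one quantum computers defeat.

p = 4294967291          # a prime just below 2**32 (illustrative size)
g = 5                   # generator

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)        # Alice's public value, sent in the clear
B = pow(g, b, p)        # Bob's public value, sent in the clear

# Each side combines its own secret with the other's public value
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # both derive the same session key
```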

A word on quantum resistance. Complexity bounds at these lofty heights are difficult to find and prove. What is known is that the QC algorithms which crack conventional encryption, such as Shor’s and Grover’s, are defeated by these resistant schemes. Also, all quantum-resistant approaches I have seen are at least NP-hard, which means they are expected to be very hard to solve. That doesn’t prove they can’t be cracked by some QC method, but no one knows of such a method and QC has very little wiggle room. If a resistant algorithm is even a little super-exponential in complexity, then the effort to crack it using QC grows faster than linearly as key size increases, taking us back to where we started.
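As a rough illustration of how little wiggle room QC has here: Grover’s algorithm, the generic quantum search speedup, reduces an unstructured search over 2^n keys to about sqrt(2^n) oracle queries, so it only halves the effective key length rather than collapsing it. A small sketch (illustrative arithmetic only):

```python
import math

# Grover's algorithm needs on the order of sqrt(2**n) oracle queries to
# search an unstructured space of 2**n keys. The practical consequence:
# a quadratic speedup, countered simply by doubling the key length.

def grover_queries(key_bits):
    """Approximate Grover query count for a key of `key_bits` bits."""
    return math.isqrt(2 ** key_bits)

# A 128-bit key still costs a quantum searcher ~2**64 queries,
# i.e. it resists like a 64-bit key does classically.
assert grover_queries(128) == 2 ** 64
```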

One last technology note for QC key-cracking enthusiasts. Commercial QC doesn’t yet exist beyond relatively small word sizes (e.g. at Google) though IBM and others claim they will release systems within the reasonably near future. Then again, the NSA is known to be working on QC and China has already launched a satellite to support quantum key exchange over long distances. Different technology, but it’s not a big stretch to assume they’re also working on QC. And we must assume Russia is doing the same. So yeah, you should probably assume that at least nation-state hackers will be able to crack your non-quantum-resistant cryptography today or very soon and therefore all responsible systems should plan to be resistant to quantum attacks. Solutions from vendors like SecureRF will necessarily be required in any system used in commerce, automotive, medical and other secure applications.

You can read the SecureRF press release HERE and get more information on their products HERE.



Joe Costello and Other Luminaries Keynote at DAC

by Daniel Payne on 03-20-2017 at 12:00 pm

The most charismatic EDA CEO that I have ever witnessed is Joe Costello, who formed Cadence by merging SDA (Solomon Design Automation) and ECAD (known for DRC with Dracula). You will be impressed with his Monday keynote at DAC on June 19th, starting at 9:15AM. Joe has long since left the EDA world and is currently the CEO of a company called Enlighted that is bringing the IoT to smart buildings, and yes, they actually have big-name customers.

Monday Keynote
IOT: Tales from the Front Line
Monday, June 19, 9:15AM – 10:00AM

Joe Costello, Chairman & Chief Executive Officer
Enlighted, Inc., Sunnyvale, CA

There is a lot of talk about the potential of the Internet of Things. But what is happening on the front lines? Where are the examples of real impact?

Enlighted CEO Joe Costello will discuss how the IoT is impacting commercial real estate, the largest asset class in the world, by giving buildings a “sensory system” akin to a human body. Once deployed, there are a multitude of new opportunities to improve business processes thanks to granular data that has never been available before.

Learn how this technology is currently being developed and applied, the challenges, along with predictions for the future of IoT in commercial buildings.

Tuesday Keynote
The Rise of the Digital Twin
Tuesday, June 20, 9:00am – 10:00am

Chuck Grindstaff, Executive Chairman
Siemens PLM Software Inc., Plano, TX

A new concept is sweeping the industrial machine market: the digital twin. Using high performance software, a digital copy of the machine is created and developed simultaneously with the actual physical product. This allows design ideas to be quickly tested and constantly refined throughout a machine’s entire lifecycle.

Sound familiar? EDA has of course been doing this for decades in electronics with integrated circuit design, even as designs became staggeringly complex with billions of transistors.

In his keynote, Chuck Grindstaff, Executive Chairman of Siemens PLM Software, will explore the crucial role of digitalization in assisting engineers to design, simulate and verify products that increasingly incorporate both mechanical and electronic capabilities. For example, the Industry 4.0 initiative is digitally transforming factories, using sophisticated electronics to boost efficiencies from concept through all stages of the product life cycle. Another area undergoing massive transformation is the automotive industry, where today’s cars are becoming digital platforms on wheels, with the electronics approaching 50% of the BOM costs. Join Mr. Grindstaff as he examines the fertile new intersection of electronics and mechanical design and how it will transform both industries.

Wednesday Keynote
Accelerating the IoT
Wednesday, June 21 | 9:00am – 10:00am

Tyson Tuttle, Chief Executive Officer
Silicon Laboratories, Inc., Austin, TX

The Internet of Things (IoT) has been hailed as the next frontier of innovation in which the everyday “things” in our homes, offices, cars, factories and cities connect to the Internet in ways that improve our lives and transform industries. The IoT market is poised to exceed 75 billion connected devices by 2025, but several challenges remain in achieving the market’s full potential. Tyson Tuttle, CEO of Silicon Labs, will explore what it will take to accelerate the promise of the IoT. In his keynote, Tyson will consider the market imperatives and engineering challenges of adding connectivity to electronic devices, including cost, ease of use, energy efficiency, interoperability, future extensibility, and security. Addressing these challenges will unleash the limitless possibilities of a more connected world.

Thursday Keynote
Emotion Technology, Wearables, and Surprises
Thursday, June 22 | 9:10am – 10:00am

Rosalind Picard, Professor
Massachusetts Institute of Technology, Cambridge, MA

Years ago, I set out to create technology with emotional intelligence, demonstrating the ability to sense, recognize, and respond intelligently to human emotion. At MIT, we designed studies and developed signal processing and machine learning techniques to see what affective insights could be reliably obtained. In this talk I will highlight the most surprising findings during this adventure. These include new insights about the “true smile of happiness,” discovering new ways cameras (and your smartphone, even in your handbag) can compute your bio-signals without using any new sensors, finding electrical signals on the wrist that reveal insight into deep brain activity, and learning surprising implications of wearable sensing for autism, anxiety, sleep, memory, epilepsy, and more. What is the grand challenge we aim to solve next?

Summary
DAC is still the premier event for everyone in the EDA, semiconductor IP, SoC and foundry business to attend, so I hope to see you in Austin this summer enjoying the keynotes, technical papers, exhibits and many networking events.


Recipes for Low Power Verification

by Bernard Murphy on 03-20-2017 at 7:00 am

Synopsys hosted a tutorial on verification for low power design at DVCon this year, including speakers from Samsung, Broadcom, Intel and Synopsys. Verification for low power is a complex and many-faceted topic so this was a very useful update. There is a vast abundance of information in the slides which I can’t hope to summarize in a short blog so I’ll just highlight a few points that stood out for me. I suggest you get your hands on the slides (Tutorial 6 from the DVCon 2017 set) for more detailed study.

The tutorial kicked off with a review (by Amol Herlekar of Synopsys) on trends in low-power design based on global SNUG 2016 surveys. One surprising observation is just how many advanced power-saving techniques are being used across a wide variety of applications, from cloud computing to PCs, digital home, mobile, auto, IoT, medical, mil-aero, industrial and test and measurement. I remember when, not so long ago, many designers thought the world of power management was bounded by clock gating, but now the survey shows widespread adoption of power gating, sequential clock gating, DVFS (wow!), state retention and use of multiple power and voltage domains. The bulk of respondents were using 10 or fewer voltage domains and 10 or fewer power domains, but that’s still a lot.

Users continue to migrate to UPF (>70% within a year) and especially UPF 2.0, while other formats continue to lose mindshare. And while UPF has made an impossible task possible, there are still many questions around best methodologies – how to deal with analog and other hard IP, how best to organize LP verification, how to get good coverage of low power states and transitions and how best to verify a PG netlist (remember all those switches for power and voltage gating?) Follow-on tutorials provided advice for verification engineers in the trenches on how they approach these problems.
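For readers who have never written power intent, a minimal UPF fragment of the kind these flows verify might look like the following. All names are invented and this is an illustrative sketch, not a validated UPF file: one always-on domain, one switchable domain with a power switch, isolation on its outputs and state retention.

```tcl
# Hypothetical, illustrative UPF sketch (names invented)
create_power_domain PD_TOP
create_power_domain PD_CPU -elements {u_cpu}

create_supply_port VDD
create_supply_net  VDD     -domain PD_TOP
create_supply_net  VDD_CPU -domain PD_CPU

# Power switch gating the CPU domain, controlled by sleep_n
create_power_switch sw_cpu -domain PD_CPU \
    -input_supply_port  {in  VDD} \
    -output_supply_port {out VDD_CPU} \
    -control_port       {ctrl sleep_n} \
    -on_state           {on_state in {sleep_n}}

# Clamp the gated domain's outputs while it is off, and retain state
set_isolation iso_cpu -domain PD_CPU -isolation_power_net VDD \
    -clamp_value 0 -applies_to outputs
set_retention ret_cpu -domain PD_CPU -retention_power_net VDD
```

Even a fragment this small implies switches, isolation cells and retention registers that only become explicit in the PG netlist, which is exactly why the later speakers focus verification there.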

Amol also presented the Synopsys top-level recommendation for a power verification flow – always get static verification clean, then proceed to dynamic verification and do this at RTL, post-synthesis and post-layout. Other speakers largely echoed these points (with a lot more elaboration on details), starting with Vikas Gupta from Samsung, who provided guidelines specifically on static verification. He talked about hierarchical UPF verification and the care required in getting this right, such as managing multiple instances of a block potentially appearing in different power configurations. He also stressed the importance of fully validating the UPF at each stage (RTL, post-synthesis, post implementation). And he stressed that in their environment, waivers are not allowed; you must get the UPF clean the right way, not by fudging.

One of my take-aways from this section was that static verification is manageable if (and possibly only if) you follow a disciplined approach to constructing and checking the UPF at each stage. My other take-away was that what we have today effectively solves half of the power intent problem; from UPF we can verify power intent, but the assembly of that intent is still (for most) largely manual. Users need tools/templates that will help build UPF following best-practice guidelines, moving assembly closer to correct-by-construction.

Broadcom followed with a discussion on verification challenges. This presentation was from YC Wong; if anyone can stress-test an EDA concept or tool, YC and his team will get there first, so I’m unsurprised that he got this piece of the tutorial. Incidentally, he called out VC LP, VCS-NLP and Verdi use in their flow. For YC, it’s ultimately all about PG netlist validation. That’s not the way many of us think about it, but you can’t fault the reasoning. The PG netlist contains all the power and voltage switches and other power connections which are only implied (through UPF) in earlier gate/RTL representations. So his team starts by building a PG netlist (mapped, no optimization) even before they hand off to synthesis, and they run static verification on that netlist. And of course, they repeat this on the LVS netlist before checking LP strategies.

He also re-emphasized that you should do everything (at each stage) to maximize static verification before you get into dynamic verification. Static in this context isn’t just UPF linting versus RTL/gate netlists. It also includes formal and X-prop analysis. Especially when you get to PG simulation, it is way too expensive to be finding problems that could have been found statically. He particularly stressed the value of X-prop analysis in finding potential sequencing issues before you get into dynamic verification.

Satya Ayyagari from Intel closed with a discussion on low power simulation (where they use VCS-NLP and X-prop). Satya gave a very detailed description of strategies to approach different kinds of IP and to approach full-chip LP simulation, both on a simulator and in emulation (and even prototyping). Attention to gate-level simulation was interesting, for mission mode, for power sequencing and for scan testing where scan paths cross power domains. Emulation was stressed as important for full-chip power verification across use cases, but he mentioned a point I didn’t realize – that he has seen no good method to handle power state tables in emulation. Satya suggested as a closing point that LP modeling would benefit from some level of understanding of voltage in logic verification; not the AMS kind of understanding, which would be too slow, but enough to trap potential mismatches in level shifter and voltage switching expectations.

Srinivasan Venkataramanan in the audience asked a bunch of questions. I learned afterwards that he works at a verification consulting group in Bangalore. He liked a topic on complex power switches, raised by YC (which will require further extensions to UPF). He also liked the detail (in the trenches he called it), especially from the Intel speaker. And he liked that Synopsys acknowledged the need for extended assertion capabilities beyond standard SVA. Overall he said he really found high value in this tutorial, a notable endorsement from an independent member of the audience.

You can find the slides HERE. To get these you will need to have registered as a participant and you will need your badge ID from the event. Or you can just talk to a colleague who already downloaded the slides 😎.



Tesla’s Cat in the Bag

by Roger C. Lanctot on 03-18-2017 at 8:00 pm

Some day soon, maybe this year or next, Tesla Motors is going to let the cat out of the bag that its cars are not only connected but are also subject to remote control. Remote control isn’t the sort of feature that consumers look for in their personal transportation, so it isn’t likely to be something Tesla is going to bring up. It also has a range of security, privacy and liability implications that make it a sticky topic to discuss.


Ethical and not-so-ethical hackers have already demonstrated unauthorized remote control of cars including Tesla’s, FCA’s Jeep Cherokees and OnStar-equipped vehicles from General Motors (on “60 Minutes”). In fact, OnStar offers “remote vehicle slowdown” as an anti-theft feature as part of OnStar which works through a cooperation with law enforcement agencies.

The topic of remote control is increasingly arising in connected car and automated driving conversations as companies, such as Local Motors, introduce driverless shuttle systems in Berlin and elsewhere, which come with remote monitoring. In fact, consumers are increasingly being offered smartphone applications that enable the equivalent of short-range vehicle remote control for parking cars in tight spots. Tesla already offers this. BYD in China has been showing off remote control of a car via smartwatch for several years.

At Mobile World Congress 2017 in Barcelona Ericsson demonstrated the use of 5G network technology for remote control of a car driving on a distant racetrack. The car was not moving quickly, but the low latency communication enabled by 5G connectivity was used to demonstrate remote control as a feature.

Operators of fleets of driverless vehicles realized early on that these vehicle fleets would require remote control. The integration of cameras on cars has enabled all-around-view technology, which, combined with high-speed LTE wireless networks, further assists the development of remote control as an application.

The concept came up during a panel discussion at CityCarSummit in Berlin yesterday, with an executive from Local Motors, which has become a fleet management company managing its Olli driverless shuttles, questioning whether car companies are capable of managing fleets. A Daimler executive sharing the stage at the time smiled wryly at the comment behind the back of the Local Motors exec. (Daimler manages both its Car2Go car sharing fleet and operates Fleetboard for commercial trucks. Daimler is also in the business of making and operating buses.)

Car companies have long known that if their cars were connected, they would some day bear the liability and obligation to take remote control of one of those cars, if they could, should that vehicle become involved in criminal or life-threatening activity. Let’s call it a moral obligation, because it’s never happened.

Law enforcement officials already take advantage of embedded connections to track vehicles, as occurred in the case of the Boston Marathon bombers who stole a telematics-equipped Mercedes. But remotely stopping a connected car in the midst of committing a terrorist act, for example, is a circumstance that has yet to arise.

Tesla is courting car insurers that are increasingly inclined to offer discounts to reward Tesla owners for driving their vehicles in autopilot mode – a behavior which Tesla CEO Elon Musk has claimed results in fewer crashes and insurance claims based on vehicle data collected by the company. But the prospect of remote control introduces an entirely different connected car value proposition for insurers.

Car makers building in vehicle connections are currently wrestling with cyber-security issues, meaning they are simultaneously introducing an attack surface while trying to prevent attacks. But preventing intrusions and enabling remote control are not mutually exclusive.

Once a car maker is connected to its cars it has become a fleet operator and thereby bears some responsibility for the knowledge of how its vehicles are being operated. It is the automated driving proposition that introduces the need for more active remote monitoring and control.

The legal issues can be sticky and vary from country to country. You can get a flavor of the debate from this report regarding OnStar:

https://www.techdirt.com/articles/20170116/09333936490/law-enforcement-has-been-using-onstar-siriusxm-to-eavesdrop-track-car-locations-more-than-15-years.shtml

No car maker has gratuitously taken control of its cars remotely. OnStar’s remote slowdown feature is already considered to be fairly mundane – even though it is the only such offering on the market in the world. Though mundane for OnStar today, remote control is controversial as a concept.

If hackers can take remote control of a car, then car makers will be expected to have the same capability – particularly in the case of self-driving cars. In fact, car makers currently deploying intrusion detection software such as Harman’s Towersec, Argus Cyber Security, NNG’s Arilou or even QNX’s Certicom will need to develop the means for remotely restoring control and responding to those intrusions. The automotive industry has yet to sort these issues out. As of today, if a car maker detects a cyber attack on a vehicle, the customer may not even be notified.

It is clear that car companies installing connections know more and more about how their vehicles are being operated and how they can be compromised. What is missing are the procedures and protocols for responding to the information that is being gleaned regarding driving behavior, vehicle performance and intrusion detection.

The fleets of driverless vehicles envisioned by “futurists” and debated at events such as CityCarSummit are proliferating – which means that remote control of vehicles is about to become a growth industry. In essence, every car maker – whether that auto maker likes it or not – is in the process of becoming a fleet operator with remote control over both driven and driverless vehicles.

Sooner or later Tesla will find a way to convey this “news” to its owners as an attractive new feature rather than a creepy technology over-reach. My favorite application will be the insurance company notification of an impending hailstorm which will send owners racing to remotely pilot their cars to covered parking.

Law enforcement certainly welcomes the prospect of remote control as a crime fighting tool – along with the potential to subpoena access to microphones built into cars for hands-free phone systems to listen in on suspects’ conversations. The only question that remains is whether consumers will regard remote control as a benign enhancement or as an invasion of the vehicle snatchers. Make no mistake, remote control is coming to a connected car in your driveway or garage sometime soon. In fact, it’s probably already there. It’s 10 p.m. Do you know where your car is?


Succeeding with 56G SerDes, HBM2, 2.5D and FinFET

by Daniel Nenni on 03-17-2017 at 4:00 pm

eSilicon presented their advanced ASIC design capabilities at a seminar last Wednesday evening. This event was closed to the press, bloggers and analysts, but I managed to get some details from a friend who attended. The event title was: “Advanced ASICs for the Cloud-Computing Era: Succeeding with 56G SerDes, HBM2, 2.5D and FinFET”. Lots of advanced technology loaded into that title. Here is the summary of the event:

A dramatic increase in network bandwidth and cloud-computing infrastructure is on the way. Fueled by applications such as deep machine learning and massive data volumes from a connected world, the performance demands of ASICs to support these new applications are daunting.

Join eSilicon, Rambus and Samsung Foundry for an overview of the advanced technologies being deployed to address these challenges. We’ll discuss HBM technology and the associated PHY, high-speed SerDes technology, 2.5D integration, high-performance ASIC design, interposer/package design and the manufacturing and packaging technologies available to address this class of FinFET-based designs.

It seems that the main message was that it takes teamwork throughout the ecosystem to build advanced ASICs. eSilicon presented an overview of their FinFET ASIC, interposer and package design skills along with a discussion of some of their enabling IP. I was able to get a few of their slides. A lot of these advanced designs use 2.5D integration for HBM memory stacks.

Slide 1 is an overview of what’s needed for a successful HBM-based design. Some of the points here are familiar – reduced design time and the need for silicon-proven IP, along with comprehensive silicon characterization. There are some new items as well, such as interposer design with electrical, thermal and mechanical analysis. Thermal and mechanical analysis for a substrate is new and seems to be an important element of success for these kinds of designs. Low cost is nothing new, but the need to manage inventory (i.e., memory) and the associated assembly of the complete bill of materials is new.

I found slide 2 quite interesting. It seems that eSilicon has been running test vehicles on 2.5D integration for about six years. That’s a lot of experiments. This slide summarizes a few of those experiments. The series of tests shown progress from simple tests on the substrate and package, to thermal analysis and then full system operation. Becoming proficient in these kinds of designs is definitely not a casual exercise.

Slide 3 is the obligatory marketing slide. It summarizes what eSilicon offers for interposer and package design and gives a nod to their willingness to be the “one chokeable neck” for product delivery. These designs look very challenging. If you’re thinking of diving into that end of the pool, I would give eSilicon a call, absolutely.

About eSilicon
eSilicon provides products and services to the global semiconductor industry. Our services include ASIC design services and the coordination of the global, outsourced manufacturing supply chain that implements those custom integrated circuits. We call this model semiconductor manufacturing services. We deliver the manufactured custom ICs in volume to our customers at a pre-negotiated price.

We also develop memory IP and I/O products, both off the shelf and custom. Our memories are optimized across the spectrum of performance, power, area, and yield to address your specific market requirements.

Our customers are semiconductor companies, integrated device manufacturers, original equipment manufacturers and wafer foundries that sell their products into a variety of end markets, including communications, computing, consumer, industrial and medical products.


What’s better than silicon-proven IP? Lab bench-proven!

by Tom Dillinger on 03-17-2017 at 12:00 pm

The SoC industry depends upon the availability of validated IP. SoC designs require a huge investment, and assume the external IP that is licensed from outside parties satisfies all functional and electrical specifications. To support that requirement, IP providers typically pursue a strategy to demonstrate their designs are silicon-proven — their IP is submitted as part of a pre-production shuttle tapeout to a specific foundry process node. The die from the shuttle wafer lots are returned, packaged, and the silicon IP is characterized. Yet, the question remains — is the IP truly suitable for use across a broad set of customer SoC applications and product environments?

I recently had the opportunity to review this question with Abhijit Abhyankar, Vice President of Silicon Engineering at Flex Logix, Inc., providers of embedded FPGA (eFPGA) IP. For silicon validation, they have the added complexity that the end functional application is not fixed, but rather defined in the field.

We talked about some of the deficiencies commonly present in current silicon-proven IP methodologies.

  • packaging technology for the silicon IP

The package parasitics strongly impact the measured performance of the shuttle IP die. “You can’t just add some general-purpose input receiver and output driver I/O cells (GPIO) to the shuttle testsite design and expect to adequately characterize high-performance IP,” Abhijit highlighted. “You have to add an architecture around the IP on the testsite, to provide stimulus and capture results at-speed, with a synchronous interface at the IP boundary.”

  • internal IP voltage

Hard IP designs include technical specifications for the required supply voltage at the IP power pins. The SoC customer is expected to provide a global supply distribution network that meets a maximum local voltage drop requirement. Applying a voltage to the shuttle package pins for validation does not reflect the local voltage present at the IP.

  • internal IP temperature

Similarly, the IP specification includes the temperature range over which functionality is validated. Specifically, this is the device junction temperature, which is a function of the ambient, the thermal resistance between package/die attach/substrate, and the IP switching activity.
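The junction-temperature relation behind this point is commonly approximated as T_j = T_a + θ_JA × P: ambient temperature plus the junction-to-ambient thermal resistance times the power dissipated. A quick sketch with invented numbers (not Flex Logix specifications):

```python
# Back-of-envelope junction temperature using T_j = T_a + theta_ja * P.
# All numbers below are illustrative, not vendor data.

def junction_temp(t_ambient_c, theta_ja_c_per_w, power_w):
    """Junction temp = ambient + (junction-to-ambient thermal resistance x power)."""
    return t_ambient_c + theta_ja_c_per_w * power_w

# A part dissipating 2 W through a 25 C/W package path in an 85 C
# environment already sits at 135 C at the junction -- which is why
# on-die sensors, not just chamber setpoints, are needed to know the
# true junction temperature during characterization.
assert junction_temp(85.0, 25.0, 2.0) == 135.0
```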

The customers for the Flex Logix eFPGA IP span the gamut, from very low-power IoT end products to high-speed network communications to mil-aero (please refer to the recent DARPA announcement here). As a result, the environmental voltage and temperature extremes required by customer applications are pushing the technology, whether it be 40nm, 28nm, or 16nm.

Abhijit described the approach that was taken to develop their shuttle design. “We needed to develop a validation strategy for the eFPGA IP that enabled us to accurately measure performance, as well as local voltage and temperature. We collaborated with other IP partners to integrate sensors on the validation testsite. Performance validation necessitated integrating a precision PLL to provide an internal, programmable (low skew) clock distribution. SRAM arrays surround the eFPGA IP, to provide the source test data and capture responses.”

The architecture for the eFPGA characterization testsite is illustrated in the figure below.

Note the presence of several eFPGA IP blocks on the shuttle design, reflecting the various eFPGA array types to be validated with device threshold voltage combinations to address customer power/performance applications. Voltage and temperature sensors are included around the IP blocks.

The ability to measure internal performance, while monitoring local voltage and temperature, is necessary but not sufficient to properly characterize an embedded IP block. The validation strategy requires applying environmental extremes, as well. Abhijit continued, “We partnered with package and board design firms, to develop a unique physical testbench. The eFPGA validation package is socketed to a board, which includes a fixture to attach an external thermal forcing source system.” (Please refer to the figures below.)

The thermal forcing system enables characterization over the temperature extremes (and temperature cycles) to meet mil-aero and automotive specifications, which are measured directly on-die using the sensor IP.

We chatted briefly about the unique temperature inversion phenomenon at advanced process nodes, where circuits operating at low supply voltages can actually run faster at higher temperatures, and thus the requirement to measure performance over the full temperature specification range.
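To illustrate why the full range matters, here is a toy alpha-power delay model (all coefficients are made up for illustration, not from any foundry data). At low supply voltage, the threshold-voltage drop with temperature outweighs mobility degradation, so the circuit is slower cold; at nominal voltage the opposite holds:

```python
def fmax_mhz(v_dd, temp_c, vth0=0.45, k_vth=8e-4, mu_exp=1.5, alpha=1.3):
    """Toy alpha-power performance model; all coefficients are illustrative.

    Carrier mobility degrades as temperature rises, but threshold voltage
    also drops; at low v_dd the gate overdrive (v_dd - vth) term dominates,
    so the circuit gets FASTER when hot -- temperature inversion.
    """
    t_ratio = (temp_c + 273.15) / 298.15
    mobility = t_ratio ** (-mu_exp)            # mobility relative to 25 C
    vth = vth0 - k_vth * (temp_c - 25.0)       # threshold voltage vs. 25 C
    return 1000.0 * mobility * (v_dd - vth) ** alpha / v_dd
```

With these illustrative numbers, at 0.6V the hot corner is faster than the cold corner, while at 1.0V the usual cold-is-fast behavior returns, so neither temperature extreme can be assumed to be the worst case.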

Then, Abhijit blew me away with the following insight. “eFPGA IP is unique. Our customers are seeking to measure the performance of their specific algorithms, when programmed on the IP. We provide delay calculation and static timing analysis tool support, which predicts performance with high accuracy, using (corner-based) foundry PDK extraction models. Yet, the customers want to explicitly measure performance in silicon, at their facility, with their specific eFPGA netlist.”

The validation strategy that Flex Logix has pursued for analysis also directly enables their customers to share the same PVT characterization approach at the customer’s site (potentially using foundry shuttle split lots). The validation report from Flex Logix illustrates how the clocking, SRAM stimulus/capture, and voltage/temp sensors are used to measure internal IP performance. This strategy takes the notion of silicon-proven IP to the next level, where customers can readily conduct their own lab bench characterization procedures, on isolatable IP from a foundry test shuttle.

For eFPGA IP, I learned that lab bench-proven is a customer expectation. My discussion with the Flex Logix team got me thinking that their approach may indeed be required for other complex IP blocks at advanced process nodes, as well. IP providers may need to invest the time and resources to provide customers with the collateral to be able to pursue their own unique silicon validation methodologies.

For more info on Flex Logix embedded FPGA IP, please refer to the following link.

-chipguy


Aldec Swings for the Fences
by Bernard Murphy on 03-17-2017 at 7:00 am

In today’s fast-moving technology markets, companies that are prepared to step up to opportunity can break out of traditional bounds to become players in bigger, faster-growing markets. It looks to me like Aldec is putting itself on that path. They have announced an end-to-end hardware/software co-verification solution, which they showed at Embedded World in Nuremberg recently.

The solution starts with a QEMU ARM emulation linked directly to HDL running on the Aldec Riviera-PRO simulator. Being a techie myself, I’m guessing other techies are going to say “but that’s not a big deal – others have virtual prototypes linked to simulators”. But business breakthroughs are usually not predicated on major technical leaps. It’s more important that they target hot problems with workable solutions, most often integrated around existing capabilities. Aldec also has a unique advantage here in their design-for-FPGA focus.

The solution currently targets Xilinx Zynq SoCs with dual ARM Cortex A9. As you know if you read the Mentor survey on functional verification, advanced verification methods are becoming much more common on these complex FPGA SoCs, where traditional “burn and churn” verification approaches have become impractical. So logic simulation coupled with QEMU system emulation is a very practical solution to managing hardware/software co-development. Hardware breakpoints can be set in Riviera-PRO, software breakpoints can be set through QEMU and concurrent debug can be managed through GDB and Riviera-PRO.
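While the Riviera-PRO linkage itself is Aldec's, the QEMU side of this co-debug flow uses standard QEMU mechanisms: the built-in GDB stub (`-gdb tcp::PORT`) plus `-S` to halt the cores at reset until the debugger attaches. A sketch of assembling that command line (the machine name is QEMU's stock Zynq-7000 model; the kernel path and port are illustrative):

```python
def qemu_gdb_cmd(kernel_image, machine="xilinx-zynq-a9", gdb_port=1234):
    """Build a QEMU command line that models a dual Cortex-A9 Zynq PS,
    halts at reset (-S), and exposes a GDB server on gdb_port."""
    return [
        "qemu-system-arm",
        "-M", machine,            # Zynq-7000 processing-system model
        "-kernel", kernel_image,
        "-S",                     # wait for the debugger before running
        "-gdb", f"tcp::{gdb_port}",
    ]
```

GDB then attaches with `target remote :1234`, sets software breakpoints on the embedded code, and the hardware side is debugged concurrently through Riviera-PRO.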

At the show, Aldec provided insight into using the solution to model the ARM core running in QEMU, together with a MIPI CSI-2 solution running in the FPGA. But Aldec didn’t stop there. They also showed off three reference designs developed using this flow and built on their TySOM boards.

The first reference design targets multi-camera surround view for ADAS (automotive – advanced driver assistance systems). Camera inputs come from four First Sensor Blue Eagle systems, which must be processed simultaneously in real-time. A lot of this is handled in software running on the Zynq ARM cores but the computationally-intensive work, including edge detection, colorspace conversion and frame-merging, is handled in the FPGA. ADAS is one of the hottest areas in the market and likely to get hotter since Intel just acquired Mobileye.
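The per-pixel kernels that get offloaded to the fabric are simple but bandwidth-hungry, which is exactly why they map well to FPGA. A minimal software model of two of the stages mentioned, BT.601 luma conversion and Sobel edge detection (illustrative only, not Aldec's implementation):

```python
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # Sobel horizontal-gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # Sobel vertical-gradient kernel

def rgb_to_gray(r, g, b):
    """BT.601 luma: a common colorspace-conversion step before edge detection."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def sobel_mag(img, x, y):
    """Gradient magnitude at an interior pixel of a 2-D grayscale image."""
    gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    return (gx * gx + gy * gy) ** 0.5
```

In hardware this becomes a streaming pipeline of line buffers and multiply-accumulates, one output pixel per clock, which is what makes four simultaneous camera streams feasible in real time.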

The next reference design targets IoT gateways – also hot. Cloud interface, through protocols like MQTT, is handled by the processors. The gateway supports connection to edge devices using wireless and wired protocols including Bluetooth, ZigBee, Wi-Fi and USB.
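To give a flavor of the gateway's protocol handling: MQTT routes messages by hierarchical topic, with `+` matching a single level and `#` matching all remaining levels. A minimal matcher following the MQTT 3.1.1 wildcard rules (a sketch, not the gateway's actual stack):

```python
def topic_matches(pattern, topic):
    """MQTT topic-filter matching: '+' = one level, '#' = rest of levels."""
    p, t = pattern.split("/"), topic.split("/")
    for i, seg in enumerate(p):
        if seg == "#":
            return True           # multi-level wildcard swallows the rest
        if i >= len(t):
            return False          # pattern is longer than the topic
        if seg != "+" and seg != t[i]:
            return False          # literal segment mismatch
    return len(p) == len(t)       # no wildcard left over, lengths must match
```

The gateway's job is essentially this kind of routing between cloud-side MQTT topics and the Bluetooth/ZigBee/Wi-Fi/USB edge devices below it.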

Face detection for building security, device access and identifying evil-doers is also growing fast. The third reference design is targeted at this application, using similar capabilities to those on the ADAS board, but here managing real-time streaming video at 1280×720 and 30 frames per second, from an HDR-CMOS image sensor.

So yes, Aldec put together a solution combining their simulator with QEMU emulation, and perhaps that wouldn’t justify a technical paper at DVCon. But business-wise they look like they are starting on a much bigger path. They’re enabling FPGA-based system prototyping and build-out in some of the hottest areas in systems today, and they make these solutions affordable for design teams with much more constrained budgets than are available to the leaders in these fields. And they provide reference boards with embedded development kits to get those teams started in ADAS, IoT gateway and face recognition systems. That looks to me like a swing for the fences.

You can read the press release HERE.

More articles by Bernard…


TSMC Talks About 22nm, 12nm, and 7nm EUV!
by Daniel Nenni on 03-16-2017 at 12:00 pm

The TSMC Symposium was jam-packed this year with both people and information. I had another 60 minutes of fame in the Solido booth where I signed 100 books, thank you to all who stopped by for a free book and a SemiWiki pen. SemiWiki bloggers Tom Dillinger and Tom Simon were also there so look for more TSMC Symposium blogs coming in the next few days. If you have specific questions ask them here and I will make sure you get answers.

Rick Cassidy, President of TSMC North America, again kicked off the conference with a nice overview of the semiconductor business. In fact, TSMC shipped 5.8M (12” equivalent) wafers in 2016 to more than 450 customers with 5,238 products. Approximately 71% of the resulting revenue went through Rick and the TSMC North America organization, so congratulations to them on a job well done.

One of the recurring points made by Rick and the other TSMC executives is that TSMC does not compete with their customers, which is the foundation of the pure-play foundry business and the key to the success of the fabless semiconductor industry, absolutely.


This year TSMC really focused on custom process platforms for key market segments of the semiconductor industry. I went into a bit more detail on this in my pre-symposium blog TSMC Design Platforms Driving Next-Gen Applications. That blog went viral with more than 10,000 views in one week, so you may want to check it out.

There were three big announcements yesterday in my opinion:
1. 22nm ULP
2. 12nm FFC
3. 7nm EUV
Most of us had advance knowledge of this, but it was nice to hear more details in front of more than 1,000 TSMC customers. Again, this is an invitation-only event with no recording or photography allowed, so much more information is made available than at open events or conferences.


TSMC formally introduced 22nm ULP (an optimized version of 28nm HPC+) and 12nm FFC (an optimized version of 16nm). 22nm ULP offers a 10% area reduction and either a 15% performance gain over 28nm or a 35% power reduction. TSMC also has 55nm ULP, 40nm ULP, and 28nm ULP all targeted at IoT and other low power and low cost applications. 12nm FFC offers a 10% performance gain or a 25% power reduction. 12nm also offers a 20% area reduction with 6T Libraries versus 7.5T or 9T.

TSMC 10nm is now fully qualified and in HVM at Giga Fabs 12 and 15. TSMC is scheduled to ship 400,000 wafers in 2017 so you can expect the next Apple iProducts to sport TSMC 10nm SoCs, definitely.

Other than that, 10nm was not discussed much because it is another short node like 20nm. Remember, TSMC introduced double patterning at 20nm then quickly followed with FinFETs at 16nm. This proved to be a very wise approach, since the same fabs were used for both 20nm and 16nm, which simplified the 16nm ramp. We will see the same with 10nm and 7nm. TSMC ramped 10nm without quad patterning and will add it with 7nm, again using the same fabs.

7nm was the focus of the technical discussions, of course, because it represents several firsts for our industry. 7nm will also represent TSMC’s biggest market share for a single node, with 28nm second, I believe. It would be easier to count the big semiconductor companies that are NOT using TSMC 7nm, and the only two I can think of are Samsung and Intel.

In comparison to 16FF+, TSMC 7nm is said to offer a 3.3x density improvement, a 30% speed gain, and a 60% power reduction. TSMC will again offer multiple versions of 7nm for platform-specific applications (Mobile, IoT, AI, and Automotive). The 7nm SRAM bit cell is 0.37x compared to 16nm, which I believe will be the smallest SRAM bit cell in production, so congratulations to the SRAM team in Hsinchu. 7nm will hit risk production in April and HVM in the first half of 2018, and yes, next year’s iProducts will sport TSMC 7nm SoCs.
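As a quick sanity check on those numbers: a 3.3x density gain implies per-element logic area shrinks to roughly 1/3.3 ≈ 0.30x, so the quoted 0.37x SRAM bit cell slightly trails logic scaling, as is typical for SRAM at recent nodes:

```python
def area_factor(density_gain):
    """An N-fold density gain means each element occupies 1/N the area."""
    return 1.0 / density_gain

logic_area_vs_16ffplus = area_factor(3.3)   # ~0.30x per logic element
sram_bitcell_vs_16nm = 0.37                 # quoted bit-cell scaling
```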

The big shocker to me was that TSMC is still committed to introducing EUV at 7nm in 2019. Based on what I saw at the SPIE conference last month, I thought EUV would miss 7nm completely. This will be another first for the industry (EUV in production), so I can see the incentive, but I highly doubt the ROI will be there at 7nm.

TSMC also stated that 5nm development is progressing according to plan with good SRAM yield. TSMC is still scheduling 5nm for 2020 but they did not say at what level EUV would be used. Probably because it depends on the EUV success at 7nm.

Also read: Top 10 Updates from the TSMC Technology Symposium, Part I


Six Reasons to Consider Using FPGA Prototyping for ASIC Designs
by Daniel Payne on 03-15-2017 at 12:00 pm

There’s no doubt that programmable logic in FPGAs has transformed our electronics industry for the better. If you do ASIC designs, then there’s always the pressure of getting first silicon correct, with no functional or timing bugs, because bugs will cause expensive re-spins and delay time to market. ASIC designers on the leading edge of design complexity have been adopting an FPGA prototyping approach to improve their chances of first-silicon success, and I wanted to list the top six reasons for using FPGA prototyping for ASIC designs:

1. Reduces risk
2. Shortens the design schedule
3. Enables early software development
4. Allows real time system verification
5. Boosts reliability
6. Increases design flexibility

I’ll be attending and blogging about a webinar on this topic next week, Tuesday, March 21st, “The Role and Benefits of FPGA Prototyping in the ASIC Design Cycle”, from 8AM to 9AM PDT.


    Related blog – Webinar: FPGA Prototyping and ASIC Design

    About the Webinar
This joint Open-Silicon and PRO DESIGN Electronic webinar, moderated by Bernard Murphy of SemiWiki, will address the benefits of FPGA-based prototyping in the ASIC design cycle, and the role it plays in significantly reducing the risk and schedules for specification-to-custom SoC (ASIC) development and the volume production ramp. Early software development and real time system verification, enabled by FPGA prototyping, offer a cost-efficient high-end solution that shortens process cycles, boosts reliability, increases design flexibility, and reduces risk and cost. The panelists will outline best practices to overcome technical design challenges encountered in FPGA prototype development, such as design partitioning, real-time interfaces, debug and design bring-up. They will also discuss the key technical advantages that FPGA-based prototyping offers, such as architectural exploration, IP development, acceleration of RTL verification, pre-silicon firmware and software development, proof of concept and demonstrations. They will also talk about its effect on performance, scalability, flexibility, modularity and connectivity.

    Who should attend this Webinar
    This webinar is ideal for hardware system architects, hardware designers, SoC designers, ASIC designers, and SoC firmware and software developers.

    Moderator

Bernard Murphy – Blogger
    SemiWiki

    Bernard is a blogger for SemiWiki, covering IP and SoC design. He has also written past blogs for EE Times and has contributed to Semiconductor Engineering as well. Prior to joining SemiWiki, Bernard served as CTO for Atrenta for 15 years.

    Speakers

    Philipp Ampletzer
    Director of Sales and Business Development
    PRO DESIGN Electronic GmbH

    Philipp serves as Director of Sales and Business Development for PRO DESIGN Electronic GmbH in Germany. He has been with the company for over ten years, where he started as a project manager.


    Sachin Jadhav
    Technical Lead, Systems and Software Engineering
    Open-Silicon

Sachin serves as Technical Lead of Systems and Software Engineering for Open-Silicon, where he manages the ASIC prototyping collateral operations. He has ten years of specialized experience in ASICs, architecture, embedded systems, debugging, embedded software, device drivers, communications protocols, shell scripting and kernel development.

    Related blog – Open-Silicon Update, 125M ASICs Shipped!

    Webinar Registration
This webinar requires online registration here, so I hope to see all of you next week as we learn together about all of the benefits of FPGA prototyping for ASIC designs.


Webinar: CEVA on basestation design for 5G NR
    by Bernard Murphy on 03-15-2017 at 7:00 am

    Conventional wisdom is that 5G is still somewhere on the hype curve – expected to arrive someday but still not a near-term technology. As is often the case, conventional wisdom seems to be wrong. Coming out of this year’s Mobile World Congress in Barcelona, semiconductor and carrier heavyweights have committed to accelerate deployment of 5G NR (New Radio) towards large-scale trials starting in 2019. Looks like this is based on a fast-track version of specs to come very shortly from 3GPP; Verizon and some other telcos are resisting, but it is not clear that will stop the acceleration, so best to assume the clock is now ticking for product teams who thought they had a lot more time before jumping into this area.

REGISTER HERE for the Webinar on March 29th at 9-10am PST

    5G NR is going to make basestation SoCs a lot more complex. They’re of course SDR-based, so software/hardware partitioning is going to get a lot more interesting and now you’ll be dealing with beamforming for massive MIMO support. That means an even bigger role for DSPs in any solution. And in case you were uncertain on this point, the solution has to be low power and low latency. Yes, you need to be looking at who can help you deliver 5G NR compliance, with IP and with expertise in this area. This Webinar is a good place to start.

    Summary
3GPP is currently actively working on 5G New Radio (NR). IMT-2020 is defining advanced technology to dramatically increase network capacity and coverage, in order to answer the ever-increasing demand for higher data rates and traffic with much-reduced end-to-end network latency. Such technology includes wider RF channels in licensed and unlicensed bands up to mmWave, aggregation of a large number of component carriers of various widths, and very short TTI (transmission time interval).

The need for flexible 5G base station implementations requires novel SDR SoC architectures and SW/HW partitioning strategies, optimized to solve the daunting challenges of beamforming with massive MIMO systems while maintaining very low latency and blazing-fast data rates within a low power budget.
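To make the beamforming challenge concrete: each antenna element is given a complex weight whose phase compensates the per-element path delay toward the target direction, so the signals add coherently there. A toy uniform-linear-array example using standard array-processing math (not CEVA's implementation):

```python
import cmath
import math

def steering_weights(n_elems, spacing_wl, theta_deg):
    """Per-element complex weights steering a uniform linear array toward
    theta_deg; spacing_wl is the element spacing in wavelengths."""
    theta = math.radians(theta_deg)
    return [cmath.exp(-2j * math.pi * spacing_wl * n * math.sin(theta))
            for n in range(n_elems)]

def array_gain_db(weights, spacing_wl, theta_deg):
    """Array response (dB) of the weighted array in direction theta_deg."""
    theta = math.radians(theta_deg)
    resp = sum(w * cmath.exp(2j * math.pi * spacing_wl * n * math.sin(theta))
               for n, w in enumerate(weights))
    return 20.0 * math.log10(abs(resp))
```

At the steered angle all phases cancel and the coherent gain is 20·log10(N), about 18 dB for 8 elements. A massive-MIMO basestation computes weights like these per subcarrier and per user at millisecond rates, which is why the DSP load (and the SW/HW partitioning question) is so severe.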

    Join CEVA experts to learn about:
    · Cellular 5G market overview
    · Introduction to 5G
    · 5G challenges
    · Impact on SDR architecture and SW/HW partitioning
    · CEVA’s solution for 5G baseband
    Target Audience
    Communication and system engineers targeting 5G segment
    Speakers

    Emmanuel Gresset
    Business Development Director, Wireless and Wireline Communications, CEVA


    Tomer Yablonka
    Senior Communication System Architect

    About CEVA, Inc.
    CEVA is the leading licensor of signal processing IP for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and machine-to-machine devices, advanced imaging, computer vision and deep learning for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry’s most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi (802.11 a/b/g/n/ac up to 4×4) and serial storage (SATA and SAS). Visit us at www.ceva-dsp.com and follow us on Twitter, YouTube and LinkedIn.

    More articles by Bernard…