
Succeeding with 56G SerDes, HBM2, 2.5D and FinFET

by Daniel Nenni on 03-17-2017 at 4:00 pm

eSilicon presented their advanced ASIC design capabilities at a seminar last Wednesday evening. This event was closed to the press, bloggers and analysts, but I managed to get some details from a friend who attended. The event title was: “Advanced ASICs for the Cloud-Computing Era: Succeeding with 56G SerDes, HBM2, 2.5D and FinFET”. Lots of advanced technology loaded into that title. Here is the summary of the event:

A dramatic increase in network bandwidth and cloud-computing infrastructure is on the way. Fueled by applications such as deep machine learning and massive data volumes from a connected world, the performance demands of ASICs to support these new applications are daunting.

Join eSilicon, Rambus and Samsung Foundry for an overview of the advanced technologies being deployed to address these challenges. We’ll discuss HBM technology and the associated PHY, high-speed SerDes technology, 2.5D integration, high-performance ASIC design, interposer/package design and the manufacturing and packaging technologies available to address this class of FinFET-based designs.

It seems that the main message was that it takes teamwork throughout the ecosystem to build advanced ASICs. eSilicon presented an overview of their FinFET ASIC, interposer and package design skills along with a discussion of some of their enabling IP. I was able to get a few of their slides. A lot of these advanced designs use 2.5D integration for HBM memory stacks.

Slide 1 is an overview of what’s needed for a successful HBM-based design. Some of the points here are familiar – reduced design time and the need for silicon-proven IP, along with comprehensive silicon characterization. There are some new items as well, such as interposer design with electrical, thermal and mechanical analysis. Thermal and mechanical analysis for a substrate is new and seems to be an important element of success for these kinds of designs. Low cost is nothing new, but the need to manage inventory (i.e., memory) and the associated assembly of the complete bill of materials is new.

I found slide 2 quite interesting. It seems that eSilicon has been running test vehicles on 2.5D integration for about six years. That’s a lot of experiments. This slide summarizes a few of those experiments. The series of tests shown progresses from simple tests on the substrate and package, to thermal analysis and then full system operation. Becoming proficient in these kinds of designs is definitely not a casual exercise.

Slide 3 is the obligatory marketing slide. It summarizes what eSilicon offers for interposer and package design and gives a nod to their willingness to be the “one chokeable neck” for product delivery. These designs look very challenging. If you’re thinking of diving into that end of the pool, I would give eSilicon a call, absolutely.

About eSilicon
eSilicon provides products and services to the global semiconductor industry. Our services include ASIC design services and the coordination of the global, outsourced manufacturing supply chain that implements those custom integrated circuits. We call this model semiconductor manufacturing services. We deliver the manufactured custom ICs in volume to our customers at a pre-negotiated price.

We also develop memory IP and I/O products, both off the shelf and custom. Our memories are optimized across the spectrum of performance, power, area, and yield to address your specific market requirements.

Our customers are semiconductor companies, integrated device manufacturers, original equipment manufacturers and wafer foundries that sell their products into a variety of end markets, including communications, computing, consumer, industrial and medical products.


What’s better than silicon-proven IP? Lab bench-proven!

by Tom Dillinger on 03-17-2017 at 12:00 pm

The SoC industry depends upon the availability of validated IP. SoC designs require a huge investment, and assume the external IP that is licensed from outside parties satisfies all functional and electrical specifications. To support that requirement, IP providers typically pursue a strategy to demonstrate their designs are silicon-proven — their IP is submitted as part of a pre-production shuttle tapeout to a specific foundry process node. The die from the shuttle wafer lots are returned, packaged, and the silicon IP is characterized. Yet, the question remains — is the IP truly suitable for use across a broad set of customer SoC applications and product environments?

I recently had the opportunity to review this question with Abhijit Abhyankar, Vice President of Silicon Engineering at Flex Logix, Inc., providers of embedded FPGA (eFPGA) IP. For silicon validation, they have the added complexity that the end functional application is not fixed, but rather defined in the field.

We talked about some of the deficiencies commonly present in current silicon-proven IP methodologies.

  • packaging technology for the silicon IP

The package parasitics strongly impact the measured performance of the shuttle IP die. “You can’t just add some general-purpose input receiver and output driver I/O cells (GPIO) to the shuttle testsite design, and expect to adequately characterize high-performance IP,” Abhijit highlighted. “You have to add an architecture around the IP on the testsite, to provide stimulus and capture results at-speed, with a synchronous interface at the IP boundary.”

  • internal IP voltage

Hard IP designs include technical specifications for the required supply voltage at the IP power pins. The SoC customer is expected to provide a global supply distribution network that meets a maximum local voltage drop requirement. Applying a voltage to the shuttle package pins for validation does not reflect the local voltage present at the IP.
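To make the voltage point concrete, here is a toy IR-drop calculation. All numbers are hypothetical illustrations, not Flex Logix figures: the idea is simply that the voltage the IP actually sees is the pin voltage minus the drop across the power-distribution network.

```python
# Illustrative only: why package-pin voltage overstates the voltage at the IP.
# All values below are hypothetical, not vendor specifications.

def local_supply_voltage(v_pin, i_supply, r_pdn):
    """IR drop across the power-distribution network: V_local = V_pin - I * R."""
    return v_pin - i_supply * r_pdn

v_pin = 0.80      # volts applied at the package pins
i_supply = 0.5    # amps drawn by the IP block (assumed)
r_pdn = 0.04      # ohms of effective PDN resistance (assumed)

v_local = local_supply_voltage(v_pin, i_supply, r_pdn)
print(f"V_local = {v_local:.3f} V")  # 0.80 - 0.5*0.04 = 0.780 V
```

Even a 20 mV gap like this one can matter when characterizing an IP block near its minimum operating voltage.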

  • internal IP temperature

Similarly, the IP specification includes the temperature range over which functionality is validated. Specifically, this is the device junction temperature, which is a function of the ambient, the thermal resistance between package/die attach/substrate, and the IP switching activity.
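The junction-temperature relationship in that paragraph can be sketched with the usual one-resistor thermal model; the θJA and power values below are illustrative assumptions, not IP specifications.

```python
# Hedged sketch of the standard one-resistor thermal model:
# junction temperature = ambient + (package thermal resistance * power).
# Numbers are hypothetical illustrations.

def junction_temp(t_ambient_c, theta_ja_c_per_w, power_w):
    """T_j = T_a + theta_JA * P."""
    return t_ambient_c + theta_ja_c_per_w * power_w

# e.g. 25 C ambient, 20 C/W package theta-JA, 2 W of switching activity
tj = junction_temp(25.0, 20.0, 2.0)
print(tj)  # 65.0
```

The switching-activity dependence is why applying a fixed ambient temperature alone, without on-die sensing, undercharacterizes the IP.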

The customers for the Flex Logix eFPGA IP span the gamut, from very low-power IoT end products to high-speed network communications to mil-aero (please refer to the recent DARPA announcement here). As a result, the environmental voltage and temperature extremes required by customer applications are pushing the technology, whether it be 40nm, 28nm, or 16nm.

Abhijit described the approach that was taken to develop their shuttle design. “We needed to develop a validation strategy for the eFPGA IP that enabled us to accurately measure performance, as well as local voltage and temperature. We collaborated with other IP partners to integrate sensors on the validation testsite. Performance validation necessitated integrating a precision PLL to provide an internal, programmable (low skew) clock distribution. SRAM arrays surround the eFPGA IP, to provide the source test data and capture responses.”

The architecture for the eFPGA characterization testsite is illustrated in the figure below.

Note the presence of several eFPGA IP blocks on the shuttle design, reflecting the various eFPGA array types to be validated with device threshold voltage combinations to address customer power/performance applications. Voltage and temperature sensors are included around the IP blocks.

The ability to measure internal performance, while monitoring local voltage and temperature, is necessary but not sufficient to properly characterize an embedded IP block. The validation strategy requires applying environmental extremes, as well. Abhijit continued, “We partnered with package and board design firms, to develop a unique physical testbench. The eFPGA validation package is socketed to a board, which includes a fixture to attach an external thermal forcing source system.” (Please refer to the figures below.)

The thermal forcing system enables characterization over the temperature extremes (and temperature cycles) needed to meet mil-aero and automotive specifications; the temperatures are measured directly on-die using the sensor IP.

We chatted briefly about the unique temperature inversion phenomenon at advanced process nodes, and thus the requirement to measure performance over the full temperature specification range.

Then, Abhijit blew me away with the following insight. “eFPGA IP is unique. Our customers are seeking to measure the performance of their specific algorithms, when programmed on the IP. We provide delay calculation and static timing analysis tool support, which predicts performance with high accuracy, using (corner-based) foundry PDK extraction models. Yet, the customers want to explicitly measure performance in silicon, at their facility, with their specific eFPGA netlist.”

The validation strategy that Flex Logix has pursued for analysis also directly enables their customers to share the same PVT characterization approach at the customer’s site (potentially using foundry shuttle split lots). The validation report from Flex Logix illustrates how the clocking, SRAM stimulus/capture, and voltage/temp sensors are used to measure internal IP performance. This strategy takes the notion of silicon-proven IP to the next level, where customers can readily conduct their own lab bench characterization procedures, on isolatable IP from a foundry test shuttle.

For eFPGA IP, I learned that lab bench-proven is a customer expectation. My discussion with the Flex Logix team got me thinking that their approach may indeed be required for other complex IP blocks at advanced process nodes, as well. IP providers may need to invest the time and resources to provide customers with the collateral to be able to pursue their own unique silicon validation methodologies.

For more info on Flex Logix embedded FPGA IP, please refer to the following link.

-chipguy


Aldec Swings for the Fences

by Bernard Murphy on 03-17-2017 at 7:00 am

In today’s fast-moving technology markets, companies who are prepared to step up to opportunity can break out of traditional bounds to become players in bigger and fast-growing markets. It looks to me like Aldec is putting itself on that path. They have announced an end-to-end hardware/software co-verification solution which they showed at Embedded World in Nuremberg recently.

The solution starts with a QEMU ARM emulation linked directly to HDL running on the Aldec Riviera-PRO simulator. Being a techie myself, I’m guessing other techies are going to say “but that’s not a big deal – others have virtual prototypes linked to simulators”. But business breakthroughs are usually not predicated on major technical leaps. It’s more important that they target hot problems with workable solutions, most often integrated around existing capabilities. Aldec also has a unique advantage here in their design-for-FPGA focus.

The solution currently targets Xilinx Zynq SoCs with dual ARM Cortex A9. As you know if you read the Mentor survey on functional verification, advanced verification methods are becoming much more common on these complex FPGA SoCs, where traditional “burn and churn” verification approaches have become impractical. So logic simulation coupled with QEMU system emulation is a very practical solution to managing hardware/software co-development. Hardware breakpoints can be set in Riviera-PRO, software breakpoints can be set through QEMU and concurrent debug can be managed through GDB and Riviera-PRO.
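As a toy illustration of the lockstep idea behind this kind of co-verification (this is not Aldec’s or QEMU’s implementation, just the concept): a “software” side issues bus transactions, a “hardware” side stands in for an HDL register model, and the two advance together one transaction at a time.

```python
# Toy sketch of hardware/software co-simulation lockstep (NOT the Aldec/QEMU
# implementation): software drives bus transactions into a hardware model,
# and each transaction advances the hardware's notion of time.

class HwModel:
    """Stand-in for the HDL simulator side: one memory-mapped register file."""
    def __init__(self):
        self.regs = {}
        self.cycles = 0

    def bus_write(self, addr, data):
        self.cycles += 1          # each transaction costs one "clock"
        self.regs[addr] = data

    def bus_read(self, addr):
        self.cycles += 1
        return self.regs.get(addr, 0)

def software(hw):
    """Stand-in for code running on the emulated CPU."""
    hw.bus_write(0x1000, 41)      # hypothetical peripheral address
    return hw.bus_read(0x1000) + 1

hw = HwModel()
result = software(hw)
print(result, hw.cycles)  # 42 2
```

In the real flow the two sides run in separate processes (QEMU and Riviera-PRO) and the synchronization happens over an inter-process channel, but the per-transaction handshake is the same idea.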

At the show, Aldec provided insight into using the solution to model the ARM core running in QEMU, together with a MIPI CSI-2 solution running in the FPGA. But Aldec didn’t stop there. They also showed off three reference designs designed using this flow and built on their TySOM boards.

The first reference design targets multi-camera surround view for ADAS (automotive – advanced driver assistance systems). Camera inputs come from four First Sensor Blue Eagle systems, which must be processed simultaneously in real-time. A lot of this is handled in software running on the Zynq ARM cores but the computationally-intensive work, including edge detection, colorspace conversion and frame-merging, is handled in the FPGA. ADAS is one of the hottest areas in the market and likely to get hotter since Intel just acquired Mobileye.

The next reference design targets IoT gateways – also hot. Cloud interface, through protocols like MQTT, is handled by the processors. The gateway supports connection to edge devices using wireless and wired protocols including Bluetooth, ZigBee, Wi-Fi and USB.
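As a sketch of the kind of cloud-facing traffic those processors handle, here is the wire format of an MQTT 3.1.1 CONNECT packet. A real gateway would use a client library rather than hand-packing bytes, and the client ID below is made up; this only shows what the protocol looks like on the wire.

```python
import struct

# Hedged sketch: hand-assembling an MQTT 3.1.1 CONNECT packet to show the
# wire format. Real designs use a client library; "tysom-gw" is a made-up ID.

def mqtt_connect_packet(client_id, keepalive=60):
    proto = b"\x00\x04MQTT"                 # length-prefixed protocol name
    var_header = proto + bytes([4, 0x02])   # protocol level 4, clean-session flag
    var_header += struct.pack(">H", keepalive)
    payload = struct.pack(">H", len(client_id)) + client_id.encode()
    body = var_header + payload
    # remaining-length is a varint; one byte suffices for short packets
    assert len(body) < 128
    return bytes([0x10, len(body)]) + body  # 0x10 = CONNECT fixed header

pkt = mqtt_connect_packet("tysom-gw")
print(pkt.hex())
```

Framing like this runs on the ARM cores, while the wireless and wired PHY-level edge protocols are better suited to dedicated hardware.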

Face detection for building security, device access and identifying evil-doers is also growing fast. The third reference design is targeted at this application, using similar capabilities to those on the ADAS board, but here managing real-time streaming video at 1280×720 and 30 frames per second, from an HDR-CMOS image sensor.
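A quick back-of-envelope calculation shows why that stream needs FPGA help. The 2 bytes/pixel below is an assumption (e.g. YUV 4:2:2); the article does not state the sensor format.

```python
# Raw bandwidth of a 1280x720 @ 30 fps stream, assuming 2 bytes/pixel
# (e.g. YUV 4:2:2 -- an assumption; the sensor format isn't stated).

width, height, fps, bytes_per_px = 1280, 720, 30, 2
mb_per_s = width * height * fps * bytes_per_px / 1e6
print(f"{mb_per_s:.1f} MB/s")  # 55.3 MB/s of raw pixels
```

Sustaining tens of megabytes per second through detection filters in software alone would saturate the ARM cores, which is why the pixel pipeline lives in the fabric.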

So yes, Aldec put together a solution combining their simulator with QEMU emulation and perhaps that wouldn’t justify a technical paper in DVCon. But business-wise they look like they are starting on a much bigger path. They’re enabling FPGA-based system prototype and build in some of the hottest areas in systems today and they make these solutions affordable for design teams with much more constrained budgets than are available to the leaders in these fields. And they provide reference boards with embedded development kits to get those teams started in ADAS, IoT gateway and face recognition systems. That looks to me like a swing for the fences.

You can read the press release HERE.

More articles by Bernard…


TSMC Talks About 22nm, 12nm, and 7nm EUV!

by Daniel Nenni on 03-16-2017 at 12:00 pm

The TSMC Symposium was jam-packed this year with both people and information. I had another 60 minutes of fame in the Solido booth where I signed 100 books, thank you to all who stopped by for a free book and a SemiWiki pen. SemiWiki bloggers Tom Dillinger and Tom Simon were also there so look for more TSMC Symposium blogs coming in the next few days. If you have specific questions ask them here and I will make sure you get answers.

Rick Cassidy, President, TSMC North America again kicked off the conference with a nice overview of the semiconductor business. In fact, TSMC shipped 5.8M (12” equiv) wafers in 2016 to more than 450 customers with 5,238 products. Approximately 71% of the resulting revenue went through Rick and the TSMC North American organization so congratulations to them on a job well done.

One of the recurring points made by Rick and the other TSMC executives is that TSMC does not compete with their customers, which is the foundation of the pure-play foundry business and the key to the success of the fabless semiconductor industry, absolutely.


This year TSMC really focused on custom process platforms for key market segments of the semiconductor industry. I went into a bit more detail on this in my pre-symposium blog TSMC Design Platforms Driving Next-Gen Applications. That blog went viral with more than 10,000 views in one week so you may want to check it out.

There were three big announcements yesterday in my opinion:
1. 22nm ULP
2. 12nm FFC
3. 7nm EUV
Most of us had advance knowledge of this but it was nice to hear more details in front of more than 1,000 TSMC customers. Again, this is an invitation-only event with no recording or photography allowed, so much more information is made available than at open events or conferences.


TSMC formally introduced 22nm ULP (an optimized version of 28nm HPC+) and 12nm FFC (an optimized version of 16nm). 22nm ULP offers a 10% area reduction and either a 15% performance gain over 28nm or a 35% power reduction. TSMC also has 55nm ULP, 40nm ULP, and 28nm ULP all targeted at IoT and other low power and low cost applications. 12nm FFC offers a 10% performance gain or a 25% power reduction. 12nm also offers a 20% area reduction with 6T Libraries versus 7.5T or 9T.

TSMC 10nm is now fully qualified and in HVM at Giga Fabs 12 and 15. TSMC is scheduled to ship 400,000 wafers in 2017 so you can expect the next Apple iProducts to sport TSMC 10nm SoCs, definitely.

Other than that, 10nm was not discussed much because it is another short node like 20nm. Remember, TSMC introduced double patterning at 20nm then quickly followed with FinFETs at 16nm. This proved to be a very wise approach since the same fabs were used for both 20nm and 16nm, which simplified the 16nm ramp. We will see the same with 10nm and 7nm. TSMC ramped 10nm without quad patterning and will add it with 7nm, again using the same fabs.

7nm was the focus of the technical discussions of course because it represents several firsts for our industry. 7nm will also represent TSMC’s biggest market share for a single node, with 28nm second, I believe. It would be easier to count the big semiconductor companies that are NOT using TSMC 7nm, and the only two I can think of are Samsung and Intel.

In comparison to 16FF+, TSMC 7nm is said to offer a 3.3x density improvement, a 30% speed gain, and a 60% power reduction. TSMC will again offer multiple versions of 7nm for platform-specific applications (Mobile, IoT, AI, and Automotive). The 7nm SRAM bit cell is 0.37x compared to 16nm, which I believe will be the smallest SRAM bit cell in production so congratulations to the SRAM team in Hsinchu. 7nm will hit risk production in April and HVM in the first half of 2018, and yes, next year’s iProducts will sport TSMC 7nm SoCs.
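As a quick sanity check on those scaling claims: a 3.3x logic density gain implies cells shrink to roughly 1/3.3 of their 16FF+ area, which is in the same ballpark as the quoted 0.37x SRAM bit cell (SRAM typically scales a little less aggressively than logic).

```python
# Consistency check on the quoted 7nm-vs-16FF+ scaling figures.
# A 3.3x density gain means cell area shrinks to about 1/3.3 = 0.30x,
# roughly in line with the quoted 0.37x SRAM bit cell.

logic_area_ratio = 1 / 3.3
sram_area_ratio = 0.37
print(f"logic cell area: {logic_area_ratio:.2f}x, SRAM bit cell: {sram_area_ratio:.2f}x")
```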

The big shocker to me was that TSMC is still committed to introducing EUV at 7nm in 2019. Based on what I saw at the SPIE conference last month, I expected EUV to miss 7nm completely. This will be another first for the industry (EUV in production) so I can see the incentive, but I highly doubt the ROI will be there at 7nm.

TSMC also stated that 5nm development is progressing according to plan with good SRAM yield. TSMC is still scheduling 5nm for 2020 but they did not say at what level EUV would be used. Probably because it depends on the EUV success at 7nm.

Also read: Top 10 Updates from the TSMC Technology Symposium, Part I


Six Reasons to Consider Using FPGA Prototyping for ASIC Designs

by Daniel Payne on 03-15-2017 at 12:00 pm

There’s no doubt that programmable logic in FPGAs has transformed our electronics industry for the better. If you do ASIC designs then there’s always the pressure of getting first silicon correct, with no functional or timing bugs, because bugs will cause expensive re-spins and delay time to market. ASIC designers on the leading edge of design complexity have been adopting an FPGA prototyping approach to improve their chances of first silicon success, and I wanted to list the top six reasons for using FPGA prototyping for ASIC designs:

1. Reduces risk
2. Shortens the design schedule
3. Enables early software development
4. Allows real time system verification
5. Boosts reliability
6. Increases design flexibility

I’ll be attending and blogging about a webinar on this topic next week, Tuesday, March 21st, “The Role and Benefits of FPGA Prototyping in the ASIC Design Cycle”, from 8AM to 9AM PDT.


Related blog – Webinar: FPGA Prototyping and ASIC Design

About the Webinar
This joint Open-Silicon and PRO DESIGN Electronic webinar, moderated by Bernard Murphy of SemiWiki, will address the benefits of FPGA-based prototyping in the ASIC design cycle, and the role it plays in significantly reducing the risk and schedules for specification-to-custom SoC (ASIC) development and the volume production ramp. Early software development and real time system verification, enabled by FPGA prototyping, offer a cost-efficient high-end solution that shortens process cycles, boosts reliability, increases design flexibility, and reduces risk and cost. The panelists will outline best practices to overcome technical design challenges encountered in FPGA prototype development, such as design partitioning, real-time interfaces, debug and design bring-up. They will also discuss the key technical advantages that FPGA-based prototyping offers, such as architectural exploration, IP development, acceleration of RTL verification, pre-silicon firmware and software development, proof of concept and demonstrations. They will also talk about its effect on performance, scalability, flexibility, modularity and connectivity.

Who should attend this Webinar
This webinar is ideal for hardware system architects, hardware designers, SoC designers, ASIC designers, and SoC firmware and software developers.

Moderator

Bernard Murphy – Moderator
Blogger, SemiWiki

Bernard is a blogger for SemiWiki, covering IP and SoC design. He has also written past blogs for EE Times and has contributed to Semiconductor Engineering as well. Prior to joining SemiWiki, Bernard served as CTO for Atrenta for 15 years.

Speakers

Philipp Ampletzer
Director of Sales and Business Development
PRO DESIGN Electronic GmbH

Philipp serves as Director of Sales and Business Development for PRO DESIGN Electronic GmbH in Germany. He has been with the company for over ten years, where he started as a project manager.

Sachin Jadhav
Technical Lead, Systems and Software Engineering
Open-Silicon

Sachin serves as Technical Lead of Systems and Software Engineering for Open-Silicon, where he manages the ASIC prototyping collateral operations. He has ten years of specialized experience in ASICs, architecture, embedded systems, debugging, embedded software, device drivers, communications protocols, shell scripting and kernel.

Related blog – Open-Silicon Update, 125M ASICs Shipped!

Webinar Registration
This webinar requires online registration here, so I hope to see all of you next week as we learn together about all of the benefits of FPGA prototyping for ASIC designs.


Webinar: CEVA on basestation design for 5G NR

by Bernard Murphy on 03-15-2017 at 7:00 am

Conventional wisdom is that 5G is still somewhere on the hype curve – expected to arrive someday but still not a near-term technology. As is often the case, conventional wisdom seems to be wrong. Coming out of this year’s Mobile World Congress in Barcelona, semiconductor and carrier heavyweights have committed to accelerate deployment of 5G NR (New Radio) towards large-scale trials starting in 2019. Looks like this is based on a fast-track version of specs to come very shortly from 3GPP; Verizon and some other telcos are resisting, but it is not clear that will stop the acceleration, so best to assume the clock is now ticking for product teams who thought they had a lot more time before jumping into this area.

REGISTER HERE for Webinar on March 29th at 9-10am PST

5G NR is going to make basestation SoCs a lot more complex. They’re of course SDR-based, so software/hardware partitioning is going to get a lot more interesting and now you’ll be dealing with beamforming for massive MIMO support. That means an even bigger role for DSPs in any solution. And in case you were uncertain on this point, the solution has to be low power and low latency. Yes, you need to be looking at who can help you deliver 5G NR compliance, with IP and with expertise in this area. This webinar is a good place to start.

Summary
3GPP is currently actively working on 5G New Radio (NR). IMT2020 is defining advanced technology for dramatically increasing network capacity and coverage, in order to answer the ever-increasing demand for higher data rates and traffic with much reduced end-to-end network latency. Such technology includes wider RF channels in licensed and unlicensed bands up to mmWaves, aggregation of a large number of component carriers of various widths, and very short TTI.

The need for flexible 5G base station implementations requires novel SDR SoC architectures and SW/HW partitioning strategies optimized to solve the daunting challenges of beamforming with massive MIMO systems while maintaining very low latency and blazing fast data rates within a low power budget.
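To give a feel for the math at the heart of the beamforming challenge, here is a toy sketch of a uniform linear array steering a beam: per-antenna phase weights make signals add coherently toward one angle. This is illustration only, not a CEVA implementation; the 64-element array and half-wavelength spacing are assumptions.

```python
import cmath, math

# Toy beamforming sketch (not a CEVA implementation): an N-element uniform
# linear array with half-wavelength spacing applies per-antenna phase
# weights so signals add coherently toward one steering angle.

def steering_weights(n, theta_deg, spacing_wl=0.5):
    phase = 2 * math.pi * spacing_wl * math.sin(math.radians(theta_deg))
    return [cmath.exp(-1j * k * phase) for k in range(n)]

def array_response(weights, theta_deg, spacing_wl=0.5):
    phase = 2 * math.pi * spacing_wl * math.sin(math.radians(theta_deg))
    return sum(w * cmath.exp(1j * k * phase) for k, w in enumerate(weights))

w = steering_weights(64, 30.0)            # 64 antennas steered to 30 degrees
on_beam = abs(array_response(w, 30.0))    # coherent gain, approximately 64
off_beam = abs(array_response(w, -10.0))  # off-beam energy is much lower
print(on_beam > 10 * off_beam)
```

Doing this per subcarrier, per user, at 5G data rates and within a tight power budget is exactly the DSP workload the webinar addresses.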

Join CEVA experts to learn about:
• Cellular 5G market overview
• Introduction to 5G
• 5G challenges
• Impact on SDR architecture and SW/HW partitioning
• CEVA’s solution for 5G baseband

Target Audience
Communication and system engineers targeting the 5G segment

Speakers

Emmanuel Gresset
Business Development Director, Wireless and Wireline Communications, CEVA

Tomer Yablonka
Senior Communication System Architect

About CEVA, Inc.
CEVA is the leading licensor of signal processing IP for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and machine-to-machine devices, advanced imaging, computer vision and deep learning for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry’s most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi (802.11 a/b/g/n/ac up to 4×4) and serial storage (SATA and SAS). Visit us at www.ceva-dsp.com and follow us on Twitter, YouTube and LinkedIn.

More articles by Bernard…


Securing Your IoT System using ARM

by Daniel Payne on 03-14-2017 at 12:00 pm

I’ll never forget reading about and experiencing the October 21, 2016 Distributed Denial of Service (DDoS) attacks which slowed and shut down a lot of the Internet. In that particular attack the target was to shut down the Domain Name System (DNS). Traffic for this massive DDoS attack came from IoT devices which were unsecured digital devices, like home routers and surveillance cameras. Hackers were able to infect these devices with malicious code to form a botnet.

The good news is that semiconductor IP companies like ARM can provide you with a well-thought-out approach to make your IoT projects secure by design. Consider attending their next webinar, “How to implement a secure IoT system on ARMv8-M”; it’s planned for Wednesday, March 29, 2017 at two different time slots.

Attacks on IoT devices are guaranteed – they will happen! Therefore, system security needs to be easy and fast to implement. With ARM’s newest embedded processors – the ARM Cortex-M23 and Cortex-M33 with TrustZone for ARMv8-M – developers can take advantage of hardware-enforced security. Now, system designers have the challenge to extend security throughout the whole system.

Join this technical webinar with Applications Engineer Ed Player and IoT Product Manager Mike Eftimakis. They will showcase what hardware and software you need to design a secure IoT system on ARMv8-M with TrustZone. The webinar will deep dive into an actual IoT system and share some of the products and tools available to create the most efficient, viable IoT system. During the webinar, ARM will share exclusive, exciting news which will help you accelerate your next IoT design.

Register for this webinar to learn:

• Hardware design considerations for building security into an SoC.
• Development techniques for generating secure software.
• How ARM TrustZone technology can be used to establish trust and security within an IoT system.

Registration is done online here, and I’ll be attending and blogging about it, so stay tuned.


Prototype-Based Debug for Cloud Design

by Bernard Murphy on 03-14-2017 at 7:00 am

Unless you’ve been in hibernation for a while, you probably know that a lot more chip design is happening in system companies these days. This isn’t just for science experiments; many of these designs are already being used in high-value applications. This development is captive – systems companies generally don’t want to get into selling chips – but there is enough value in their own needs for them to justify the design and manufacture of these parts.

An important motivation is for differentiated enhancements in cloud hardware. Cloud services are a very hot and competitive area; one estimate puts Amazon Web Services, considered standalone, as the 6th largest business in the US. Which is good news for cloud services providers and for those who sell hardware and software solutions to those providers. Climbing fast on this list is a Chinese company called Inspur. I’m guessing you’ve never heard of them. That may change; they’re a vertically integrated company, with offerings all the way from cloud services down to building and selling servers. In server sales, they rank 5th worldwide and top in China. Which makes them very interested in anything that can improve QoS for cloud applications.

Networking between servers is an especially hot domain for differentiation. Microsoft recently announced their work with Intel/Altera to optimize networking for Azure through software-defined networking on reconfigurable platforms. Inspur is working on their own routing control chip (details not available) and unsurprisingly they want to prototype it, presumably in-system, before they commit to silicon.

Inspur chose S2C for their prototyping solution because they wanted to be able to inspect and validate correct behavior in operation, while making high-volume/throughput packet transmissions. In particular, S2C’s Prodigy MDM (Multi-Debug Module) is being used to set trigger conditions and capture related packets for chip debug. The deep sampling depth supported by Prodigy MDM allows Inspur to grab as many packets as possible to then be analyzed for correctness.

Inspur cited especially the strength and ease of debugging across a multi-FPGA prototype in the Prodigy solution. Large designs (and routing controllers tend to be large) are unlikely to fit on a single FPGA and may have to span multiple boards. But from the designer’s point of view, that’s an implementation detail; they still want to observe and debug across the whole design, unimpeded by FPGA partition boundaries. Using traditional FPGA tools, you would have to debug each FPGA in isolation; problems spanning more than one FPGA become painful to trace to a root cause. Fixes are even more challenging – a fix on one FPGA, made without a clear perspective on behavior in the rest of the design, may create a new problem on another FPGA.

The Prodigy MDM solution addresses this fundamental problem in multi-FPGA debug by presenting a unified design view across 4 Prodigy logic modules (boards) simultaneously. When you set up probes and trigger conditions, they are based on the design and indifferent to partitioning. When you view waveforms, the same applies. The designer sees the design as a whole and can monitor and debug it as a whole – the FPGA implementation is transparent.

Inspur also mentioned that deep tracing support was very important to speeding up their debug process. They needed to grab as many packets as possible when looking for potential problems, and this can only be accomplished at reasonable speed if trace data can be buffered into sizeable on-board memory. To get at the details of those packets, Prodigy MDM supports many more probes on each FPGA than you are likely to need (and you can precompile up to 16k probes per FPGA, from which you can select and change candidates for tracing without needing to recompile). MDM can then store up to 16GB of probe traces on the MDM hardware, again a significant differentiator from traditional FPGA debug tools which offer limited internal memory to capture traces. Tracing supports speeds up to 40MHz and transfer to a host computer is accomplished through a high-speed Gigabit Ethernet interface.
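Some rough capture-window arithmetic using the figures in the article (16k probes, 40 MHz sampling, 16 GB buffer) shows why buffer depth matters. This assumes every probe is a 1-bit signal and all 16k are traced at once, which is a worst case; tracing fewer probes stretches the window proportionally.

```python
# Back-of-envelope capture window for deep tracing, using the article's
# figures. Assumes all 16k probes are 1-bit and traced simultaneously
# (worst case); fewer probes means a proportionally longer window.

probes = 16 * 1024                 # bits per sample
sample_rate = 40e6                 # samples per second
buffer_bytes = 16 * 1024**3        # 16 GB trace buffer

bytes_per_sample = probes / 8      # 2048 bytes per sample
seconds = buffer_bytes / (bytes_per_sample * sample_rate)
print(f"{seconds:.2f} s of full-width trace")  # ~0.21 s
```

Even worst case, that is hundreds of milliseconds of continuous full-width capture, versus the microseconds typical of on-chip FPGA debug memories.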

    Compile, probe setup, run-time control and debug are all steered through the Prodigy Player Pro cockpit, so you have one unified interface for ease of use. You can learn more about Prodigy MDM HERE, the complete Prodigy line (including support for logic boards based on a variety of FPGA options) HERE and you can read the joint S2C/Inspur press release HERE.

    More articles by Bernard…


    Unlocking Access to SOC’s for IoT Edge Product Developers

    Unlocking Access to SOC’s for IoT Edge Product Developers
    by Tom Simon on 03-13-2017 at 4:00 pm

    In the wake of the many mega-mergers and consolidation in the semiconductor and electronics space, it is easy to say that opportunities for smaller companies are shrinking. In fact, quite the opposite may be true. The larger companies, like Broadcom, ARM, Qualcomm, Analog Devices, Microchip, Maxim and Infineon (to name a few), are cranking out building blocks that actually make it easier to create innovative new consumer and industrial products. This in turn has fueled large growth in small, nimble companies that are building products for health, home security, home automation, automotive, convenience, recreation and other consumer-oriented applications.

    Most of these products are connected, contain a processor and have multiple sensors, user interface elements and actuators. In other words, they fit into the category of Internet of Things edge devices. What is compelling about these products is that they are in many ways returning the power of product development to individuals and small teams. I always conjure up Edison, Bell or Tesla working in their shop in my mind when I think of today’s innovators.

    What does it take to build an IoT edge device product? It’s one thing to talk about designing a new product and another to pull everything together to make it happen. If, as with many small teams, there are time and money constraints, it can be trickier than it sounds. A lot of teams opt for buying discrete chips and building a board to get to market. While it is a quicker path in some cases, it brings with it many limitations – larger footprint, higher unit costs, reliability issues, shorter battery life, etc. What is often needed for a long-term competitive win in the marketplace is a custom SOC for the product.

    It used to be that for a small company or team, the dream of building a custom SOC was a bridge too far. Fortunately, it’s not just the hardware companies that are assembling powerful options for product development – Mentor Graphics has put together an array of technologies to facilitate the migration from a board based system level design to an SOC based design.

    Mentor comes at the problem with a unique set of resources that create a complete solution for every aspect of the design problem: digital IP, logic design and prototype tools, analog design tools, embedded OS and software development tools, and simulation solutions to enable component and system verification.

    Mentor is touting its “Rapid SoC Proof-of-Concept for Zero Cost” idea, and it has some very interesting features. It starts with free access to IP models and integration tools for the ARM Cortex-M0. The M0 is an ideal choice because of its low power consumption and advanced 32-bit architecture. Along with this comes a low-cost, no-hassle commercial licensing model when it is time to have the design fabricated. Also, for the glue logic, there is a $995 FPGA prototype board option that allows rapid prototyping on real hardware.

    Mentor is also going out with a 30 day free evaluation license for Tanner EDA tools to design and simulate a proof of concept SoC. The Tanner EDA tools provide a mixed signal front end solution with schematic capture, analog simulation and digital design entry and behavioral simulation. There is a mixed signal simulation capability for verification of the entire design.

    Software for the SOC can be developed with the ARM Keil MDK-Lite software development toolkit. This is part of the Cortex-M0 DesignStart package mentioned above. Once the proof of concept is fully developed and verified, it is easy to move to full physical implementation using the rest of the Mentor flow. Mentor has published a white paper providing much more detail on the entire process.

    It’s gratifying to see that there are feasible avenues for teams with great ideas to get through the complex development process required for delivering new products. I always harken back to the Maker movement for the roots of the notion that development tools and building blocks can and should be readily available to those who want to build things. After all, who knows where the next Edison, Bell or Tesla is going to come from.


    Driver Assistance and Autonomous? Need ASIL D Ready Certified CPU!

    Driver Assistance and Autonomous? Need ASIL D Ready Certified CPU!
    by Eric Esteve on 03-13-2017 at 12:00 pm

    The automotive segment is moving from a kind of niche, filled with commodities and highly specialized low-complexity ICs, to an innovative and very dynamic segment attracting most of the big players, from Qualcomm to Nvidia or Intel. These chip makers are targeting automotive as they need to find new growth areas, and they have quickly adapted their application processor offerings from mobile to automotive, more specifically to ADAS and autonomous vehicles.

    But this automotive segment is completely different from the consumer segment, as most of the applications are safety critical. That’s why you need to understand standards like ISO 26262 or ASIL. To target applications like ADAS, radar, or safety-critical sensors, even if you market a CPU IP and not an IC, you need to have invested well in advance to offer an ASIL D compliant core…


    Synopsys made this investment several years ago for the ARC EM CPU IP family, targeting the most challenging automotive specification, ISO 26262 ASIL D. Meeting this specification means that less than 1% of faults in the entire system may be single-point faults. For processors going into ASIL D certifiable chips, this translates into several stringent requirements, such as:

    ·Caches and tightly coupled memories need error correction and detection
    ·Implement a redundant (or shadow) core running the same code
    ·Insert logic to monitor and compare results from the redundant cores
    ·Build extensive safety documentation for ISO 26262
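    The lockstep idea behind the second and third requirements can be modeled in a few lines of software. This is purely an illustrative sketch – real lockstep cores compare bus outputs in hardware every cycle, typically with the shadow core delayed by a couple of clocks – but it shows the essential mechanism: run the same work twice and trap any divergence.

    ```python
    # Toy model of dual-core lockstep: execute the same step on a main
    # and a shadow core, and flag any divergence via comparator logic.
    # (Illustrative only; hardware lockstep compares outputs per cycle.)
    class LockstepError(Exception):
        """Raised when main and shadow core results diverge."""

    def lockstep_step(main_core, shadow_core, inputs):
        out_main = main_core(inputs)
        out_shadow = shadow_core(inputs)
        if out_main != out_shadow:           # the monitor/compare logic
            raise LockstepError(f"divergence: {out_main} != {out_shadow}")
        return out_main

    # Matching cores pass; a faulty shadow core trips the comparator.
    alu = lambda x: x * 2 + 1
    print(lockstep_step(alu, alu, 20))       # prints 41
    ```

    The point of the shadow core is that a random hardware fault is very unlikely to corrupt both cores identically, so the comparator converts a silent single-point failure into a detected, reportable one.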

    The next picture describes the implementation of pre-built ARC EM Safety Islands: verified and ASIL D Ready certified dual-core lockstep processors with integrated safety monitors. Interestingly, ARC EM cores with the Safety Enhancement Package are the only microcontroller-class processors with ECC on memories and a lockstep interface…

    In fact, while the application processor chip makers are targeting the master processor for ADAS or even autonomous vehicle applications, there will be plenty of chips around this master to support sub-systems like Radar, Lidar or sensor fusion. But these sub-systems will also be part of the safety-critical system, and they will need to comply with the ISO 26262 ASIL D specification. In this case, the chip maker will select the lock-step implementation.

    If a customer is targeting an automotive application that is not safety critical, they may use the ARC EM IP core complying with the less stringent ASIL B specification and implement the Independent Dual Core mode.


    To reduce single points of failure, Synopsys has implemented a tightly coupled interrupt controller and options such as the MPU and mDMA to provide full redundancy. A CPU IP core is not just made of dedicated hardware; the compiler is part of the IP vendor’s offering, as it’s an essential piece supporting project development. The fact that the MetaWare compiler is also ASIL D Ready certified will significantly accelerate ISO 26262-compliant code development.

    The border between a microprocessor and a DSP is becoming blurry and the EM5D cores inside the EM5DSI support various DSP features like fixed point DSP, vector and single instruction/multiple data (SIMD) processing. They include a unified, single-cycle 32 x 32 MUL/MAC unit with 32-bit/64-bit accumulators. To deliver enhanced performance for filtering, FFT and other signal processing algorithms, the EM5D also features fractional support, rounding and non-rounding instructions, as well as divide, square root and fixed-point math functions. Vector and SIMD support provide greater processor efficiency by enabling multiple data values to be processed in a single operation.
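    The MUL/MAC behavior described above can be illustrated with a short software model of a multiply-accumulate into a 64-bit accumulator. The dot-product values are invented for illustration; only the 32 x 32 multiply with 64-bit accumulation reflects the feature list above.

    ```python
    # Sketch of a multiply-accumulate (MAC) loop into a 64-bit
    # accumulator, the core operation behind filters and FFT butterflies.
    # (Operand values are illustrative; widths follow the 32x32 -> 64 MAC.)
    MASK64 = (1 << 64) - 1

    def mac(acc, a, b):
        """acc += a * b, wrapping at 64 bits like a hardware accumulator."""
        return (acc + a * b) & MASK64

    acc = 0
    for a, b in [(3, 4), (5, 6), (7, 8)]:   # dot product 3*4 + 5*6 + 7*8
        acc = mac(acc, a, b)
    print(acc)  # prints 98
    ```

    A hardware MAC performs each of these multiply-and-add steps in a single cycle, which is exactly where a DSP-extended core like the EM5D gains its efficiency over a plain microcontroller executing separate multiply and add instructions.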

    The best way to describe an IP may not be to list a set of features, but rather to look at the type of applications which can be supported by this IP. For example, the EM5DSI can be used to process Vision ADAS, Radar and Smart Sensors, and each of these applications can potentially be on the safety critical path.

    Vision processing is needed to support driver assistance and autonomous driving, and the requirements are expected to move from ASIL B to ASIL D for the hardware supporting such safety-critical applications. This transition should be simplified when using the ARC EM Safety Islands, positively impacting the TTM.

    The ARC EM Safety Islands can provide front-end processing for Radar applications. If Radar technology is used for driver assistance and autonomous driving, the ASIL D requirements will be met faster when implementing such an ASIL D Ready certified solution, including the compiler.

    Because many sensors are in the safety-critical path for applications like braking or steering, the CPU core handling processing for smart sensors and controllers has to comply with the firm ISO 26262 ASIL D requirement. Selecting a certified IP solution will guarantee compliance and shorten the TTM.

    By Eric Esteve from IPnest

    The ARC EM4SI and EM5DSI Safety Islands and the MetaWare Development Toolkit for Safety are available now. The ARC EM6SI and EM7DSI Safety Islands will be available in Q2, 2017.

    More about DesignWare ARC Processor Solutions for Automotive Applications:

    ·ARC EM Safety Island IP
    ·Safety Option for ARC EM Processors