
Nvidia Handles Data, Senate with Care

Nvidia Handles Data, Senate with Care
by Roger C. Lanctot on 07-04-2017 at 7:00 am

When speaking before the U.S. Senate or Congress one has to choose one’s words carefully. The temptation, when one is speaking before legislators and microphones and cameras, is to tell it like it is and speak truth to power. The reality is that the power of the legislators and the microphones and the cameras must be respected and, consequently, words must be chosen carefully.
Continue reading “Nvidia Handles Data, Senate with Care”


HW and SW Co-verification for Xilinx Zynq SoC FPGAs

HW and SW Co-verification for Xilinx Zynq SoC FPGAs
by Daniel Payne on 07-03-2017 at 12:00 pm

It constantly amazes me how much FPGA companies like Xilinx have done to bring ARM-based CPUs into a programmable SoC along with FPGA glue logic. Xilinx offers the Zynq 7000 and Zynq UltraScale+ SoCs to systems designers as a way to quickly get their ideas into the marketplace. A side effect of all this programmability and flexibility to design a system is the classic challenge of how to debug the HW and SW system before committing to a prototype or production.

You could use a traditional, sequential development flow where hardware designers code their RTL and verify using testbenches, simulation and BFMs (Bus Functional Models). The software engineers would separately write applications and verify the SW. Once the hardware is stable enough, you could start to think about how the hardware and software integration should take place. A sequential development flow is going to take more time because of the number of iterations required, so this provides an impetus for a better approach.

The major point of a recent webinar from Aldec was to show a new co-simulation methodology that enables early communication between the hardware and software simulation environments. Here’s how co-simulation connects together the SW and HW worlds:

All of your programmable logic is modeled in RTL on the right-hand side within the HDL simulator named Riviera-PRO from Aldec. Your processor is emulated with the Open Source QEMU (Quick EMUlator) that supports the popular A9 and A53 series of ARM processors. Connecting the processing system with the programmable logic for co-simulation is the Aldec QEMU Bridge. Some of the benefits for using this co-simulation idea are:

  • HW and SW integration takes place quite early
  • Improved visibility during debug of HW in Riviera-Pro

    • Break points
    • Use of Data Flow / Code Coverage
    • Waveform inspection
  • Improved visibility during debug of SW in QEMU

    • Using the GDB debugger with both drivers and kernel modules
    • Setting break points
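For the QEMU side, the standard way to get the SW debug visibility described above is QEMU's built-in GDB server. A minimal sketch of the idea (the flags are generic QEMU/GDB options; the machine model, ELF name and breakpoint symbol are placeholders, not taken from the webinar):

```shell
# Launch QEMU with its GDB server listening on TCP port 1234 (-s)
# and the CPU halted at reset (-S) so breakpoints can be set first.
qemu-system-aarch64 -M xlnx-zcu102 -kernel my-zynq-image.elf -s -S &

# From another terminal: attach GDB, set a SW breakpoint, run.
gdb-multiarch my-zynq-image.elf \
    -ex "target remote :1234" \
    -ex "break start_kernel" \
    -ex "continue"
```

With the CPU halted at reset, breakpoints in both kernel and driver code are in place before the first instruction executes, which mirrors the break-point workflow Aldec showed.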

Related blog – Aldec Swings for the Fences

The requirements for running this type of co-simulation include using a Linux computer with Riviera-Pro 2016.10 or later, the Xilinx QEMU, Xilinx Vivado, SystemC, a co-simulation license, Zynq Linux distribution and a device tree. Here’s a more detailed view of the co-simulation and how it works:

Our webinar presenter Adam Taylor showed how you use QEMU to actually boot Linux and then configure a pulse width, viewing the hardware waveform results in Riviera-Pro:

A hardware break point was set to demonstrate how co-simulation could be interrupted when a particular RTL line was reached:

Software break points were also set using GDB. All of these features showed how to validate and debug what is happening between a hardware and software system. They even showed how you could use evaluation boards like the TySOM series of boards after your co-simulation work had validated the system. There are a slew of daughter boards with TySOM to fit the specific SoC, memory and interfaces that your system dictates.

Summary
Powerful and programmable SoCs from Xilinx that contain ARM cores along with FPGA fabric can be designed quickly and validated early by using a co-simulation approach from Aldec that connects their Riviera-Pro simulator to the QEMU emulator for Zynq processors. Co-simulation helps you uncover HW/SW bugs quicker and earlier in the development cycle.

View the entire 45-minute archived webinar here online.


Capture the Light with Integrated Photonics

Capture the Light with Integrated Photonics
by Mitch Heins on 07-03-2017 at 7:00 am


I wrote up a quick article in the weeks before the Design Automation Conference (DAC) letting readers know that Integrated Photonics was indeed coming to DAC again this year. As a follow-up, I attended the DAC presentation, ‘Capture the Light. An Integrated Photonics Design Solution from Cadence, Lumerical and PhoeniX Software’, given by Twan Korthorst, CEO of PhoeniX Software, at the Cadence Theater. Twan was also supported by Jonas Flueckiger of Lumerical Solutions, as both PhoeniX and Lumerical are part of the overall Electronic Photonic Design Automation (EPDA) flow that uses Cadence’s Virtuoso system as the cockpit for the flow. The presentation was well attended, with all seats at the theater filled. That, in and of itself, is significant given that the presentation started promptly at 10:00 AM, when the exhibit floor doors opened for the day.

Twan started his presentation by setting context for how and where integrated photonics is emerging onto the scene. Photonics has been used for decades in long-haul communications but is now getting a new life, driven first by the mega data centers. Active optical cables (AOCs) with photonic integrated circuits (PICs) are supplanting copper cables for the longer reaches required of the mega data centers. Photonics is also key to the low-cost, low-energy 100Gbps connections now being installed in these centers. Integrated photonics is also finding its way into high performance computing, aerospace and high-end RF applications as well as sensors (both biosensors and environmental sensors). The advent of the Internet-of-Things with huge numbers of sensors may in fact be the real long-term high-volume driver for integrated photonics. Photonics is also a key enabler for quantum computing, which is just now starting to emerge.

Twan also took the audience through a brief tutorial of what is meant by integrated photonics. In a nutshell, integrated photonics entails guiding laser light through waveguides on a chip (something akin to fibers on a chip). Chip manufacturing for integrated photonics comes in many flavors, including Indium Phosphide, Silicon, Silicon Nitride and variations on other III-V compounds. The big interest now is in the use of silicon as a medium, as it has the promise of leveraging the extensive manufacturing infrastructure already in place from more than three decades of CMOS processing. Silicon enables high-index-contrast waveguides with sub-micron dimensions and small bend radii. This translates into the capability to integrate tens of thousands of what were once expensive discrete optical components onto a single low-cost chip.

As mentioned earlier, Cadence, Lumerical and PhoeniX are now fielding a complete front-to-back EPDA flow and as is done in electronic design, the companies are leveraging the concept of process design kits (PDKs) to enable designers to capture and connect their photonic circuit design using standard photonic building blocks and waveguides. At the simplest level, photonic components include lasers, electro-optic modulators, waveguides and photodetectors.

There are also components that are used to split, couple and switch light between waveguides as well as to multiplex/demultiplex multiple wavelengths of light into or out of waveguides. The new EPDA design flow enables co-design of electronics and photonics including schematic capture, co-simulation of electronics and photonics, schematic-driven layout of curvilinear photonic components and waveguides, back-annotation, design rule checking and GDSII generation complete with polygon discretization for different foundries.

The EPDA design flow is different from standard EDA flows in that it has added features to deal with the fact that photons do not behave the same as electrons. At 1550nm, the common wavelength used for the communications market, the design flow is akin to working with electronics that would be running at 193THz (that’s Tera-Hertz with a T). Photonics requires dedicated simulation routines that can accurately simulate the bidirectional propagation (including reflections) of the light while also comprehending that the light is multi-modal, in multiple frequency bands and accumulates phase shifts. In some cases, polarization is also comprehended and used. Photonics also requires smooth curvilinear bends in layout and as such requires specialized algorithms to deal with the generation of layout structures that will properly contain the modes of the light as it is manipulated throughout the circuit.
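That 193THz figure is simply the 1550nm wavelength converted to a frequency; a quick sanity check (my arithmetic, not from the presentation):

```python
# Convert the 1550 nm telecom wavelength to an optical frequency.
C = 299_792_458  # speed of light, m/s

def freq_thz(wavelength_nm: float) -> float:
    """Optical frequency in THz for a wavelength given in nm."""
    return C / (wavelength_nm * 1e-9) / 1e12

print(f"{freq_thz(1550):.1f} THz")  # 193.4 THz
```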

As part of the presentation, Twan also ran a series of video snippets that showed the design flow in action, including co-simulation of electronics and photonics using Cadence’s Analog Design Environment (ADE) working in conjunction with Cadence’s Spectre spice simulator and Lumerical’s INTERCONNECT photonic circuit simulator. The companies have done a great job of integrating the tools as the test bench, stimulus and resulting waveforms were a natural extension of how analog electronic simulation is done today in the Cadence ADE flow. Similarly, video snippets were also shown for the layout portion of the flow where Virtuoso seamlessly called PhoeniX Software’s OptoDesigner product in the background to produce and manipulate all-angle curvilinear layouts without ever having to leave the Virtuoso layout GUI.

A step that was added for this demonstration was the addition of new interfaces between Cadence Virtuoso / PhoeniX layout and Lumerical’s component level simulation tools. These simulations include mode solvers, beam propagation and finite-difference time-domain (FDTD) algorithms that are used to model specific individual photonic components. The output of these simulations is used to create abstracted compact models that are used for the circuit level simulations that would be too large for component level algorithms.

Twan rounded out the presentation by announcing a two-day photonics summit that will be held at the Cadence campus in San Jose in early September. The event will include one day of technical presentations discussing the challenges and progress towards implementing integrated photonic systems for a variety of end applications. The event’s second day will include hands-on training on the EPDA framework, including a preview of how to do system design that combines electronic and photonic die with a laser using a photonic interposer, all within a single package.

It’s always good to see new technologies come along as it means growth and new opportunities for both companies and individuals. Seeing integrated photonics at DAC is yet another sign that this technology is here to stay.

See also:
Cadence Photonics Solutions Web Page
Lumerical Solutions Web Page
PhoeniX Software Web Page


Bosch to Build $1.1 Billion Fab for Automotive and IoT!

Bosch to Build $1.1 Billion Fab for Automotive and IoT!
by Daniel Nenni on 07-02-2017 at 7:00 am

There has been quite a bit of coverage on this already, but Bosch building a fab in Dresden is a big deal so let me share my experience, observation, and opinion as we bloggers do. The $1.1B question of course is: Why didn’t Bosch invest in GlobalFoundries FD-SOI fabs in Dresden instead? Automotive and IoT is perfect for FD-SOI, right? What is the backstory here? Scoop it in the comments section or email me and I will make it worth your while!


Originally, computer systems companies not only designed their own chips, they manufactured them too. My first employer, Data General, had a fab in Silicon Valley and had OEM customers in the semiconductor industry including two that I worked directly with: Perkin Elmer (E-Beam) and Calma (GDS Stations).

The first thing Data General did after hiring me was send me to the Design Automation Conference (#21DAC) which is how I ended up in EDA and IP. No regrets of course, especially when we went fabless which is documented in our book “Fabless: The Transformation of the Semiconductor Industry”. You can download the PDF version if you are a member.

Back then, systems companies ruled the semiconductor industry, then came the traditional semiconductor companies (IDMs: Intel, TI, Motorola, etc…) which were followed by fabless chip companies. In fact, Qualcomm started as a systems company and may be headed back that way. Intel is also into systems now with the acquisition of Mobileye, which is following in the footsteps of IDM systems company Samsung. The history of Apple, Samsung, and QCOM is in our book “Mobile Unleashed”. You can download the PDF version if you are a member.

Bottom line: We have now come full circle with systems companies like Apple, Huawei, Tesla, and others designing their own chips and driving the semiconductor industry. We also have internet companies such as Amazon, Google, Facebook, etc… designing chips for systems. Today these fabless systems companies buy more wafers, EDA tools, and IP than fabless chip companies, absolutely.

Taking things another step back to the future, Bosch is spending $1B+ on a new 12-inch fab. I’m not sure how many fabs they have built in total, but they have a 6-inch and an 8-inch facility still in production for ASICs and MEMS that have produced billions of chips for automotive and other electronic systems including IoT (yes, IoT). The new fab will be built in Dresden (versus Reutlingen), Germany and will start production in 2021. The target markets are automotive and the fastest-growing market, which is IoT (yes, IoT).

To date, more than 80,000 domains have hit SemiWiki.com, and the majority of those are now fabless systems companies. We track: Technologies (FinFET, FD-SOI, Photonics), Vertical Markets (EDA, IP, Foundries, Mobile, IoT, Automotive, Security, and AI), Companies and Events. In addition to domains, Google also provides us with device types, location, age, and even gender. Age is trending down and female is trending up (double digits now), which mirrors what I saw at #54DAC. Geographic breakout is what you might expect, with USA #1, Asia #2, and EU #3. Russia was actually #1 if you include hacking attempts… (too soon?)

It is interesting to note that more than 80% of our traffic is desktop and more than 40% is search, which to me means people are reading SemiWiki.com at work versus while driving to work. From the time SemiWiki.com went live in 2011 to today, more than 2,000,000 users have been recorded, so that is a decent sample set. A user is a device, not a person, but if you do the math about 80% of the users are probably people.


3D NAND Myths and Realities

3D NAND Myths and Realities
by Scotten Jones on 06-30-2017 at 9:00 am

For many years 2D NAND drove lithography for the semiconductor industry with the smallest printed dimensions and yearly shrinks. As 2D NAND shrank down to the mid-teens nodes (16nm, 15nm and even 14nm), the cells became so small that there were only a few electrons in each cell, and cross-talk issues made further shrinks very difficult and uneconomical.

Continue reading “3D NAND Myths and Realities”


Overcoming the Challenges of Creating Custom SoCs for IoT

Overcoming the Challenges of Creating Custom SoCs for IoT
by Mitch Heins on 06-30-2017 at 7:00 am

As Internet of Things (IoT) opportunities continue to expand, companies are working hard to bring System-on-Chip (SoC) solutions to market in the hopes of garnering market share and revenue. However, it’s not as easy as it may first seem. Companies are running into a series of issues that stand between them and capturing the market.

I had the chance to sit in on a panel session at the 54th Design Automation Conference (DAC) that tried to address how companies can overcome these challenges. The panel was chaired by Ed Sperling of Semiconductor Engineering and was held at the Mentor booth on the DAC exhibit floor. On the panel were Mike Eftimakis from ARM, Jeff Miller from the Mentor Tanner group and John Tinson of Sondrel. Ed started off the discussion asking the panelists what they felt the main challenges were for IoT SoC designers.

Mike Eftimakis responded that ARM sees three main challenges for SoCs targeted at IoT applications.

  • The first is cost, especially for edge devices, whose number is expected to grow to over 50 billion by the year 2020.
  • The second is the ability to customize devices for specific end application markets. The number of end applications seems to be boundless, and designers will want to address multiple different end applications with the same SoCs to amortize cost and increase revenue.
  • The third is the ability to cut down on the amount of data being sent from edge devices into the Cloud. Our existing internet infrastructure is already struggling to keep up, and we are just dipping our toes into IoT applications. To keep from overwhelming the infrastructure, ARM believes that more pre-processing of data will need to be done closer to or at the edge devices.
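As an illustration of that third point, edge pre-processing can be as simple as shipping per-window summaries upstream instead of raw samples. A minimal sketch (the function and the fake sensor feed are mine, not ARM's):

```python
# Reduce a raw sensor stream to per-window (min, mean, max) summaries
# before sending anything to the cloud.
def summarize(samples, window=10):
    """Return one (min, mean, max) triple per full window of samples."""
    out = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        out.append((min(chunk), sum(chunk) / window, max(chunk)))
    return out

raw = [20.0 + (i % 5) * 0.1 for i in range(100)]  # fake temperature feed
print(len(raw), "->", len(summarize(raw)))  # 100 -> 10 readings sent upstream
```

A 10x reduction in upstream traffic from a trivial loop; real edge devices would add sensor fusion and event filtering on top of this.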

Mike sees ARM helping designers meet these challenges by offering simple processors for edge devices that can be manufactured in high volumes at low cost. Having one of these processors in the edge device gives the SoC designer the ability to enable customization for derivative markets while at the same time doing sensor fusion and some analysis to cut down on the amount of data that must be sent to the Cloud.

To be able to build these SoCs quickly and to pivot with the changing IoT market, ARM also believes that having pre-verified design elements and subsystems will enable designers to rapidly build prototypes that can be tested in the market. Also, by modularizing the SoC architecture, designers could quickly customize the same base SoC with different interfaces for different end applications.

Jeff Miller of Mentor agreed with Mike’s assessment and suggested that another enabling technology is electronic design automation (EDA) software that can be used for analog and mixed-signal designs. Jeff pointed out that almost all edge devices will be dealing with the real world, which is, in fact, analog. Edge devices will need to be able to integrate and possibly control analog signals while also analyzing and converting their data into digital representations that can easily be sent to the Cloud. Many of these devices will also be communicating to gateways using wireless technologies, which means that the design and associated EDA tools will need to comprehend RF technologies as well.

John Tinson of Sondrel pointed out that designing these types of SoCs is a daunting task, as it cuts across multiple engineering domains. While this might be manageable for some of the larger enterprise-level SoC providers, it is not so easy for cash-strapped start-up companies. Time to market will be key for these companies, and Sondrel’s mission is to help them by providing design services to reduce their time to market. Sondrel brings with it a significant amount of experience in putting together SoCs, which can help reduce and mitigate risks for start-up companies and their investors.

Ed Sperling pointed out that while pre-assembled and verified design blocks are useful, he wondered whether the IoT market is mature enough for designers to know which blocks should be offered. He also questioned whether it was really feasible to reuse designs across such widely differing end-use markets. Mike from ARM responded that there are definitely sub-segments of the IoT application space for which design reuse would certainly be possible, and he made the point that these are the market segments people should target to ensure they get a return on their investment. Jeff from Mentor agreed and pointed out that the key would be to start with a set of building blocks that are fundamental to all designs and then add more over time as more functions become integrated at the edge devices. A couple of good examples of these types of blocks would be security hardware for the edge devices and communications interfaces.

Mike from ARM also pointed out that because IoT is still not well defined, it makes a lot of sense to build flexibility into your SoC designs so that you can pivot with the market when needed. An example of this would be the requirement to be able to do over-the-air updates to the firmware of the edge devices.

Ed Sperling brought the group around to another hot topic for all things IoT: the question of security. Ed pointed out that the threat surface is continually changing and asked how companies should deal with this challenge. Mike from ARM suggested that designers think about segregating their design domains to have a clean, watertight separation between the critical functions of the device (boot-up, firmware memory, communications and control) and the applications side of the device. The Cortex-M processors from ARM with built-in TrustZone hardware help designers do this when using ARM-based processors in their designs.

Jeff from Mentor agreed. He suggested what he called defense in depth, including making sure the design company is in control of the SoC as it moves through different stages of the ecosystem. Once the design has left your company to be fabricated, packaged and tested, it can be vulnerable to hacking. Every stop along the way is a possible attack point, and designers need to have test suites to ensure that the final packaged devices are not doing something they were not intended to do. These kinds of checks must be engineered into the system before the SoC design is started.

All in all, this was an excellent panel session. A job well done goes out to Ed, Mike, Jeff and John, who covered a lot of information in a short amount of time (a lot of which I wasn’t even able to capture in this short article). Double kudos go to Mentor for sponsoring the event.

See Also:
ARM TrustZone
Mentor/Tanner AMS Solution
Sondrel Design Services


Memories for the Internet

Memories for the Internet
by Tom Simon on 06-29-2017 at 12:00 pm

In 1969 the Internet was born at UCLA when a computer there sent a message to a computer at Stanford. By 1975, there were 57 computers on the ‘internet’. Interestingly, in the early seventies I actually used the original Xerox Sigma 7 connected to the internet in Boelter Hall at UCLA. A similar vintage computer is now in this room commemorating that first internet message on October 29, 1969. Internet traffic has of course skyrocketed, with the major impetus coming from web usage. Statistics from back in 1991 showed global internet traffic of 100 GB per day. In 2016 it was 26,000 GB per second, and in 2020 it is estimated to be 105,800 GB per second.

According to Cisco, in 2015 there were estimated to be 3 billion users, with 16.3 billion connected devices. Video is already 70% of all internet traffic, and it is expected to grow to 82% by 2020. The internet started out using Internet Protocol Version 4 (IPv4) around 1981. This familiar system uses 32 bits of addressing, providing for 4.3 billion unique addresses. Despite its surprisingly long run, IPv4 is running out of steam, even though it is still widely used.

In the early 1990s work began on IPv4’s replacement, IPv6. By 1996, RFC 1883 was approved, the first in a series of RFCs covering IPv6. IPv6 uses 128 bits and therefore provides an address space of 3.4×10^38 addresses. The protocol is not compatible with IPv4, and thus many devices need dual protocol processing capabilities. Additionally, many nodes must provide tunneling to permit interoperability.
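The address-space jump behind those figures is easy to verify:

```python
# Address-space sizes for 32-bit IPv4 versus 128-bit IPv6.
ipv4 = 2 ** 32
ipv6 = 2 ** 128
print(ipv4)           # 4294967296 (~4.3 billion addresses)
print(f"{ipv6:.1e}")  # 3.4e+38 addresses
```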

Wikipedia states that as of 2014 IPv4 was still used for 99% of worldwide web traffic. However, in June 2017 almost 20% of the users accessing Google did so using IPv6. Also, mobile networks have adopted it wholeheartedly. IPv6 growth is real and accelerating.

What does all this mean for network switch designers? At DAC this year in Austin I had a chance to sit down with Lisa Minwel, Senior Marketing Director for eSilicon’s IP Business Unit. She told me that the growth in data rates, connected devices and address space – courtesy of IPv6 – are all creating an unprecedented need for optimized memory IP of all kinds.

Data center chips can have total areas of over 400 mm^2, with over 900Mb of embedded SRAM. Data centers require high clock rates and low power to avoid cooling issues or thermal stress. eSilicon sees a wide palette of solutions for use by chip architects – among them are larger die, High Bandwidth Memory (HBM), TCAM, advanced FF nodes, dense multiport memory, high speed interfaces, and 2.5D and other complex packaging techniques.

eSilicon marshals all these technologies to deliver some of the most complex data center chips available today. Lisa talked about a chip they recently put into production that supports 3.6 Terabits per second in 60 lanes of 28Gbps. There is over 40Mb of TCAM in this particular design.

Indeed, for these packet handling chips, TCAM is the silver bullet. Even though IPv6 optimized some aspects of packet inspection and routing, it still means larger and more complex searches. eSilicon has TCAM memory compilers that are proven at 28HPM, 16FF+GL, 16FF+LL and 14LPP. Lisa explained that the development and validation of their memory compilers can take over a year. As a result, eSilicon works with chip architects very early to discuss needs and options for future generations of chips in advance of their implementation. Lisa said this kind of interaction is highly beneficial because availability of specific memory configurations can create significant architectural advantages.
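For readers unfamiliar with TCAM: each entry stores a value plus a mask, and masked-out (wildcard) bits are ignored during the match, which is what makes ternary lookups ideal for prefix routing. A behavioral sketch in software (a real TCAM evaluates every entry in parallel in a single hardware cycle; the table contents here are made up):

```python
# Software model of a ternary CAM lookup: first matching entry wins,
# with entry order encoding match priority (longest prefix first).
def tcam_lookup(key: int, table):
    """Return the name of the first entry whose masked value matches key."""
    for name, value, mask in table:
        if (key & mask) == (value & mask):
            return name
    return None

# Prefix routing table: 1-bits in the mask must match exactly.
table = [
    ("10.1.0.0/16", 0x0A010000, 0xFFFF0000),
    ("10.0.0.0/8",  0x0A000000, 0xFF000000),
    ("default",     0x00000000, 0x00000000),
]
print(tcam_lookup(0x0A010203, table))  # 10.1.0.0/16
print(tcam_lookup(0x0B000001, table))  # default
```

The sequential scan here is exactly what the dedicated TCAM hardware avoids, which is why it is the silver bullet for line-rate packet classification.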

Internet data growth is a given, so larger and faster data center chips are going to be a necessity. Memory IP, and related IP for data transfer, play a central role. Look for SRAM to continue to be a major percentage of chip area. Also, expect special purpose memories, such as TCAM and multiport, to be major contributors to system level performance. For more information on IP building block technology offered by eSilicon, look at their website.


DSRC: The Road to Ridiculous

DSRC: The Road to Ridiculous
by Roger C. Lanctot on 06-29-2017 at 7:00 am

Stupid has a home, and that home is in Macomb County, Michigan. It is here, we learn from The Detroit News, that General Motors Co. has decided to test the use of wireless technology in conjunction with roadside QR code signs to transmit vital traffic information to passing cars. Those messages will only be communicated to cars equipped with Wi-Fi-based Dedicated Short Range Communication (DSRC) technology, currently being contemplated for mandated fitment in U.S. cars by the U.S. Department of Transportation beginning as soon as 2019 and currently only available in the MY17 Cadillac CTS.

GM Testing Smart Road Tech with MDOT, Macomb County – The Detroit News

Macomb County and GM are describing the technology as a safety feature in spite of the fact that it will introduce a distracting alert message into the dashboards of passing DSRC-equipped MY17 Cadillac CTS vehicles. The Detroit News tells us the first connected construction zone in the nation, on Interstate 75 in Oakland County, will allow test cars to read roadside bar codes which communicate approaching lane closures. Additionally, reflective strips on workers’ safety vests contain information identifying them as people instead of traffic barrels, according to the Detroit News report.

This technology is expected to speed the development of self-driving cars by enabling vehicle-to-infrastructure and vehicle-to-vehicle communications. It’s worth noting that none of the leaders in autonomous vehicle development are currently exploring the use of DSRC.

This non-cellular wireless tech qualifies the Michigan implementation as “smart” in the words of The Detroit News, even though cellular technology is not being used to transmit the same information to traffic apps like Waze, Telenav Scout, HERE, TomTom, Google Maps or NNG’s iGo. For some reason the Michigan Department of Transportation and Macomb County believe that talking to cars in a specialized language using specialized and expensive hardware is “smart.”

The multimillion dollar exercise in exclusivity raises many questions. The most important question is why the State and the County have seen fit to share what they claim to be potentially life-saving information solely over a private network accessible only to a single new car model instead of opening up the broadcast to all traffic-related communication platforms.

This extraordinary feat of transportation exclusion extends beyond this highway work zone alert solution. The Detroit News tells us the state has established at least 100 miles of “connected” highway corridors with roadway sensors and plans for 350 more miles – all speaking in wireless electronic tongues – instead of cellular.

The technology is also being used to communicate the signal phase and timing of traffic signals, though, again, only to an appropriately equipped MY17 Cadillac CTS or so-equipped test vehicles. This “smart” approach to connecting cars to infrastructure ignores the fact that no more than 200 traffic lights nationwide make use of DSRC technology, while thousands of traffic lights are connected using cellular technology and are accessible using the ConnectedSignals Enlighten app, which works on any smartphone and is integrated in the BMW Apps platform in most new BMWs. Audi of America offers a similar solution via the embedded cellular connection.

More importantly, for fixed communications opportunities between infrastructure and cars, such as the Oakland County work zone scenario, cellular is the superior solution. A growing chorus of states is rising up against the USDOT, which is insisting on the use of DSRC for most transportation projects incorporating connectivity. That is some USDOT regulatory over-reach we can all do without.

To be clear, there is nothing smart about sending valuable construction zone and traffic light information exclusively via a communication channel requiring expensive hardware with limited availability. The system as currently deployed is not even integrated with emergency responders and law enforcement, to say nothing of commercial vehicles.

Were Macomb and Oakland counties and the State of Michigan to transmit the same information via cellular, the solution would not only be smart, but revolutionary. It would also align the State with the growing cadre of cities and states around the world that are sharing vital roadside traffic information over existing wireless networks for consumption via widely available consumer devices and in-vehicle integration platforms.

In this context, the roadside QR codes are the latter-day equivalent of the clever Burma Shave signs from the middle of last century. Modern networks and cloud service delivery platforms have enabled edge computing technology such that alerts regarding approaching highway hazards and traffic information can be communicated at sufficiently low latency to be useful to drivers without requiring any additional infrastructure.

The onset of wireless technologies such as LTE Advanced Pro and 5G means that collision avoidance applications will be enabled via embedded modems within five years – enabling direct communications without the network and at no cost. In this context, the creation of an expensive, dedicated network unsupported by any consumer device technology is a path to saving lives that is narrow indeed. Worse, it is a road to ridiculous and a waste of taxpayer dollars. There’s nothing smart about that.

Roger C. Lanctot is Director, Automotive Connected Mobility in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here:
https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


    ARM, Infineon, Synopsys, SK Hynix talk AMS Simulation

    ARM, Infineon, Synopsys, SK Hynix talk AMS Simulation
    by Daniel Payne on 06-28-2017 at 12:00 pm

    Every SoC that connects to an analog sensor or device requires AMS (Analog Mixed-Signal) circuit simulation for design and verification, so this year at #54DAC the organizers at Synopsys hosted another informative AMS panel session over lunch on Monday. What makes this kind of panel so refreshing is that the invited speakers are all users of EDA circuit simulators and responsible for AMS IP or chip design. The panel moderator was Farhad Hayat of Synopsys, and he gave a brief overview of the SPICE and FastSPICE circuit simulators and how Custom Designer is being used for IC layout. The mantra for 2017 at Synopsys for AMS design is:

    • Physically aware IC design (early layout parasitics into SPICE)
    • Visually assisted IC layout (templates make you more productive)
    • Reliability aware (Monte Carlo simulations, EM and IR analysis)

    SK Hynix
    Sibaek Jung from the CAD Engineering Group was the first panelist, presenting on the challenges of DRAM design. SK Hynix is #2 in the DRAM market, behind #1 Samsung and ahead of #3 Micron. SK Hynix also designs HBM (High Bandwidth Memory), NAND storage and CMOS image sensors.

    Circuit design challenges for DRAM include:

    • Coupling capacitors (198.1M parasitic capacitors)
    • Slower run times with SPICE
    • Simulation of an 8GB design is 2.3X slower than for a 4GB design
    • Power-up simulations can take up to 168 hours

    To meet these challenges they used the FineSim circuit simulator (acquired from Magma in November 2011) and traded off simulation speed versus accuracy using the ccmodel simulator setting. Running FineSim on 8 cores they saw speedups of 2X to 3.4X and brought long simulation run times down to a more manageable 17 hours. Even the power-up simulations, which used to take 4 days, can now be sped up by 4X to 10.9X using partitioning and event control.
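    As a sanity check on those multicore numbers, Amdahl's law relates an observed speedup to the fraction of a workload that parallelizes. The sketch below plugs in the 8-core figures quoted above; the inferred parallel fractions are my own back-of-envelope arithmetic, not anything FineSim reports.

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup for a workload whose
    parallel_fraction can be spread across `cores` cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

def parallel_fraction_from_speedup(speedup: float, cores: int) -> float:
    """Invert Amdahl's law to estimate the parallelizable fraction
    implied by an observed speedup on `cores` cores."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / cores)

# Observed FineSim speedups on 8 cores were 2X to 3.4X, implying
# that roughly 57% to 81% of the runtime was parallelized.
low = parallel_fraction_from_speedup(2.0, 8)
high = parallel_fraction_from_speedup(3.4, 8)
print(f"implied parallel fraction: {low:.0%} to {high:.0%}")
```

A 3.4X ceiling on 8 cores is typical of SPICE workloads, where the serial matrix-solve portion limits scaling.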

    ARM
    This company is most famous for its processor IP business, and Tom Mahatdejkul revealed a technique called Large Scale Monte Carlo (LSMC) used with the HSPICE circuit simulator.

    LSMC is a new feature in HSPICE to manage and dramatically reduce the amount of data created during Monte Carlo runs. At ARM they use this feature for circuit simulation with under 1 million runs.

    The accuracy of LSMC versus non-LSMC is similar, and so are the run times; the big difference is the amount of disk space consumed by the output files. They report a 30,000X smaller total file size (3.3GB vs. 112KB) for a logic test cell with 26 transistors, 68 nodes and one .meas statement.

    On a memory cell they saw the output file size with LSMC get reduced to just 707KB, versus the non-LSMC size of 2.8GB.
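    Synopsys hasn't documented how LSMC condenses its output, but the general technique of accumulating statistics on the fly, rather than writing every run's raw data to disk, can be sketched with Welford's online algorithm. This is a generic illustration of the idea, not the actual HSPICE implementation:

```python
import random

class RunningStats:
    """Welford's online algorithm: accumulate the mean and variance of
    a measurement across Monte Carlo runs in O(1) memory, instead of
    storing every run's raw output to disk."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

# 1M runs of a hypothetical .meas-style delay measurement:
# constant memory, no per-run output files.
random.seed(0)
stats = RunningStats()
for _ in range(1_000_000):
    stats.update(random.gauss(100e-12, 5e-12))  # nominal 100ps, 5ps sigma
print(f"mean={stats.mean:.3e}  sigma={stats.variance ** 0.5:.3e}")
```

Storing two accumulators per .meas statement instead of a value per run is exactly the kind of trade that turns gigabytes of waveform data into kilobytes of statistics.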

    Standard cell libraries are characterized with HSPICE across multiple PVT corners, so disk space is a big deal, and ARM really needs data efficiency for a Monte Carlo approach to be viable. With LSMC they could characterize a 100-cell library using only 78.4MB of disk space, versus the previous approach which bloated out to 2.31TB of disk usage.

    With LSMC they are able to run more cells under more corners than before; they can now do 1M to 10M SPICE runs using LSMC.

    Measurements showed that RAM usage is the same with or without LSMC. LSMC still provides statistical results; it simply avoids saving every per-run data point.

    Synopsys
    There’s an internal physical IP group at Synopsys and Marco Oliveira talked about their CAD flows and methodologies to support 2,700 engineers worldwide. Marco’s background includes A2D and D2A converters.

    For high-yield design they need to simulate across multiple process corners; however, full Monte Carlo simulations simply take too long, so instead they limit their sample size and extrapolate the results. Their approach is called sigma scaling. In one example, for an RX termination circuit, they did 1,000 Monte Carlo runs with no scaling, then re-ran needing just 200 runs with sigma scaling.

    As a best practice they use sigma scaling with a factor of up to 2 and a minimum of 200 simulation runs. This technique works with all of their circuit simulators: HSPICE, FineSim and CustomSim.
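    The math behind sigma scaling wasn't spelled out in the talk, but the general idea can be sketched as importance sampling: draw samples with an inflated sigma so that rare tail failures show up within a couple hundred runs, then re-weight each failure by the ratio of the true density to the inflated one. The spec values below are hypothetical, and this is not the actual Synopsys algorithm:

```python
import math
import random

def failure_prob_sigma_scaled(spec, mu, sigma, scale=2.0, runs=200, seed=1):
    """Estimate P(X > spec) for X ~ N(mu, sigma^2) by sampling from a
    distribution whose sigma is inflated by `scale` (so tail failures
    appear in few runs), re-weighting each failing sample by the
    likelihood ratio of the true density to the sampling density."""
    rng = random.Random(seed)
    wide = sigma * scale
    total = 0.0
    for _ in range(runs):
        x = rng.gauss(mu, wide)
        if x > spec:
            # likelihood ratio: true Gaussian pdf / inflated-sigma pdf
            w = (wide / sigma) * math.exp(
                -0.5 * ((x - mu) / sigma) ** 2
                + 0.5 * ((x - mu) / wide) ** 2
            )
            total += w
    return total / runs

# A 4-sigma spec (true failure probability ~3.2e-5): plain Monte Carlo
# would rarely see even one failure in 200 runs, while 200 sigma-scaled
# runs with a factor of 2 yield a usable estimate.
est = failure_prob_sigma_scaled(spec=4.0, mu=0.0, sigma=1.0)
print(f"estimated failure probability: {est:.2e}")
```

With scale=1.0 every weight is 1 and this degenerates to plain Monte Carlo, which is exactly why an inflated sigma is needed to reach the tail in only 200 runs.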

    Infineon
    Our final panelist was Haiko Morgenstern, a mixed-signal (MS) verification engineer from Infineon, based in Munich, Germany. The company is large, with some 40,000 people, and has products in Mobility (Auto), Security (ID cards, NFC) and Energy Efficiency.

    One of their big challenges is how to verify MS designs. They have used UVM testbenches, real number modeling, and UPF for implementation and verification. For verification they are running Monte Carlo simulations with CustomSim and VCS AMS.

    On a recent ADC verification there were up to 1 million elements; CustomSim handles this capacity and can simulate the design in about 30 minutes of run time. The verification engineers can define which block uses which modeling abstraction (transistor, SPICE behavioral, Verilog-A, RTL). Variation block MC is now available in CustomSim runs.

    The output results include statistical values as text along with the typical waveforms. Infineon uses scripts to automate the simulation process and uses LSF to distribute jobs across a compute farm. They can do 200 MC runs overnight using LSF on a 1-million-element netlist, but they haven't tried sigma scaling yet.

    Summary
    Synopsys has a family of three circuit simulators: HSPICE, FineSim and CustomSim. The SPICE and FastSPICE market continues to be fiercely competitive, so to stay viable each vendor has to show constant improvement with each new release of their tools. HSPICE got started back in the 1980s, and over the decades it has stayed relevant amidst newer tools by adding new features and refactoring the code to be more efficient. FineSim was an early SPICE simulator to exploit parallelism, and CustomSim is the newest simulator offered by Synopsys in the FastSPICE space.


    TSMC Unveils More Details of Automotive Design Enablement Platform

    TSMC Unveils More Details of Automotive Design Enablement Platform
    by Mitch Heins on 06-28-2017 at 7:00 am

    At this year’s Design Automation Conference (DAC), TSMC unveiled more details about the design enablement platforms that were introduced at their 23rd annual TSMC Technology Symposium earlier this year. I attended a presentation on TSMC’s Automotive Enablement Platform held at the Cadence Theater where TSMC’s Tom Quan gave a great overview of their status. Before diving into automotive, as a quick review, Tom updated us on all four of the segments covered by their enablement platforms, those being Mobile, High Performance Computing, Automotive and Internet of Things. Compound annual growth rate of wafer revenue from each of these areas was 7%, 10%, 12% and 25% respectively. Mobile consumes wafers from 28HPC+, 16FFC, 10nm and is now seeing some 7nm starts. HPC is in production at 16FF+ with newer designs targeting 7nm. IoT has the broadest breadth of wafer usage including 90nm, 55ULP, 40ULP, and 28HPC+ with 7nm ready for design starts.

    Automotive, the subject of Tom’s presentation, is ready for design starts using the 16FFC process. Tom started his presentation by giving a quick overview of the different types of ICs now being used in the automotive space. The biggest driver of platform complexity comes from infotainment and the growing space of ADAS (Advanced Driver Assistance Systems). ADAS alone has several categories of applications and associated ICs, including using vision, radar and audio capabilities for detection, avoidance, varying degrees of autonomous driving features, voice recognition, natural language interfaces, vision enhancement, and the list goes on. Overlaid on the traditional areas of power-train, engine control, chassis and suspension, communications and infotainment are now safety and security. All these functions are represented by more than 40 customers who have done over 600 tape-outs to TSMC, with more than one million 12-inch-equivalent wafers' worth of ICs being shipped.

    TSMC has put a tremendous amount of work into capturing this market, building upon their successful Open Innovation Platform, better known to many of us as TSMC OIP. The whole idea of OIP is to bring together the thinking of customers and partners to enable an ecosystem that speeds time-to-market and ultimately shortens time-to-money for all involved. TSMC OIP boasts over 16 years of collaboration with more than 100 ecosystem partners and spans 13 technology generations that include over 14,000 IPs, 8200+ tech files and 270 PDKs for 90+ EDA tools. The enablement platforms build on this foundational work, ensuring that all of the right building blocks and tools are in place to enable designs in a given end market – in this case automotive.

    As an example, and since TSMC was presenting at the Cadence Theater, we can look at the collaboration between TSMC and Cadence. Their collaboration in automotive started in 2015 with a focus on identifying needs and solutions to ensure conformance with the two main standards in this space, which are AEC-Q100 and ISO 26262. Functional safety was a key area of collaboration, and Cadence and TSMC started by training their engineers on functional safety requirements for the automotive space. Within the last two years, Cadence alone has trained over 100 engineers, many of whom have been officially certified by an outside agency. Together, TSMC and Cadence have engaged with customers doing automotive ICs and IPs and, as a result, Cadence developed a portfolio of interface IPs in TSMC’s 16FFC process supporting those customers. Many of these IPs already meet AEC-Q100 requirements for the Grade 2 temperature range, and Cadence has committed to qualify their controller IPs to be ISO 26262 ASIL-ready.

    With respect to design tools and flows, in the second half of 2016, TSMC and Cadence worked to define a methodology for fault injection simulation and functional safety campaign management. In that time frame Cadence gained ISO 26262 tool compliance on 30+ tools in analog-mixed-signal, digital verification and front-end digital implementation and signoff flows. This work has also now prompted the collaboration to work on ‘reliability-centric’ design flows for 16nm and below including features such as aging simulations, self-heating, electro-migration analysis, FIT (failures in time) rate calculations and yield simulations.

    TSMC wraps this effort up under another TSMC umbrella called TSMC9000. TSMC9000 and associated programs for TSMC Library and IP are quality management programs that aim to provide customers with a consistent, simple way to review a set of minimum quality requirements for libraries and IP designed for TSMC process technologies. The TSMC9000 team monitors ongoing IP quality and their requirements are documented and constantly revised to keep IP quality requirements up-to-date. TSMC IP Alliance members submit required data to TSMC for assessments. Assessment results are posted online so that customers can see the results and scores and understand the IP confidence level and/or risk of using a given IP. Having these assessment results readily available can significantly shorten design lead time and lower total cost of ownership for automotive IC and systems providers.

    TSMC9000A (A for automotive) is based on requirements from ISO 26262 and AEC-Q100 to cover IP quality, reliability and safety assessment. It includes automotive-grade IP at the 16FFC node targeted to automotive ADAS and infotainment applications. Most of the current automotive IP has completed technology qualification for AEC-Q100 Grade 1 up to 150°C (Tj) and has been re-qualified with automotive-specific DRC/DRM decks. These IPs are also ISO 26262 ASIL ready, including safety manuals, FMEA/FMEDA, and ASIL B(D) certification.

    In summary, TSMC’s automotive design enablement platform on 16FFC is ready to go. It will be interesting to see by the next DAC how far this platform has progressed, both in terms of content and usage, as the world progresses towards autonomous vehicles.

    See also:
    TSMC Design Platforms Driving Next-Gen Applications