
Making IP Reuse and SoC Integration Easier

by Daniel Payne on 07-31-2014 at 2:00 pm

The last graphics chip that I worked on at Intel was functionally simulated with only a tiny display size of 16×16 pixels, because that size allowed a complete regression test to be simulated overnight. Our team designed three major IP blocks: Display Processor, Graphics Processor and Bus Interface Unit. We wanted to also integrate an 80186 core, but our IP team in Japan couldn’t meet the schedule, so we had to be less ambitious and rely upon an external CPU. Needless to say, when silicon came back the only display size that worked properly was 16×16 pixels.

Today we have SoCs that contain hundreds of IP blocks and cores that can come from other departments, other divisions, or commercial IP vendors. Yet the challenge remains the same – how do I know that I’ve done enough functional verification, integrated all the IP properly, and uncovered enough corner-case bugs?

Assertions are one technology that promises to meet those challenges. An assertion is a logic statement that defines the intended behavior of signals in a design. You can either write an assertion manually based upon your own understanding, or have a tool automate the creation of some assertions to assist your verification efforts. Atrenta has an EDA tool called BugScope that can automatically create assertions for an RTL design by reading in your functional stimulus and RTL source:
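To make the idea concrete, here is a minimal, language-agnostic sketch in Python of what an assertion expresses: a check on intended signal behavior, evaluated on every cycle of a trace. The req/ack protocol and the trace format are hypothetical; in practice assertions are written in SVA or PSL and evaluated by the simulator or formal tool.

```python
# Minimal sketch of an "assertion": a check evaluated on every cycle of a
# signal trace. Hypothetical protocol rule: every request (req) must be
# acknowledged (ack) on the very next cycle.

def check_req_ack(trace):
    """Return the cycle numbers where the assertion fails."""
    failures = []
    for cycle in range(len(trace) - 1):
        if trace[cycle]["req"] and not trace[cycle + 1]["ack"]:
            failures.append(cycle)
    return failures

# A 4-cycle trace: the request at cycle 2 is never acknowledged.
trace = [
    {"req": 1, "ack": 0},
    {"req": 0, "ack": 1},   # req from cycle 0 acked here: OK
    {"req": 1, "ack": 0},
    {"req": 0, "ack": 0},   # no ack for the req at cycle 2: violation
]
print(check_req_ack(trace))  # → [2]
```

A real assertion language adds temporal operators (sequences, implication, clocking), but the essence is the same: a machine-checkable statement of intent that fires when the design trace violates it.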

Methodology for Assertion Reuse in SoC – MARS

Users of BugScope can run a few different apps that provide assertion synthesis.


BugScope Assertion Synthesis Applications

One app has the acronym MARS – Methodology for Assertion Reuse in SoC. With MARS an engineer can verify that each new IP block is integrated correctly at the SoC level by:

1. Verifying that the IP is configured properly at the SoC level
2. Flagging any design issues at the SoC level
3. Measuring coverage that may have been missed at the IP level

Assertion synthesis thus complements traditional functional simulation in the goal of verifying that an SoC is both functionally correct and ready for tape-out. With assertion synthesis you can actually measure progress toward your verification goals. With assertion synthesis added, your digital design flow looks like this:


    Assertion Synthesis Flow

    Design engineers create RTL source code and then add test stimulus as inputs to the assertion synthesis tool.

    In MARS, once the properties are generated, they represent the known good state space of the design. BugScope then synthesizes assertions which create an executable specification for use in SoC integration. Note that no property classification was required at all. In fact, the entire MARS flow is fully automated.

During IP development, the output of assertion synthesis is a set of properties that will always be true for that stimulus. If you see lots of new coverage properties when you scan through them, it may mean that your existing stimulus has holes that verification runs have not covered. Some of these properties may also indicate that you’ve found a new design bug.

IP designers review the properties for obvious bugs and coverage holes – not full-blown classification, since that is difficult and time-consuming. They would do this in combination with the Progressive App, which provides continuous, instant verification status.

    Once the IP team is ready to hand RTL to the SoC team, they generate assertions. You then bind these assertions to your favorite HDL simulator, emulator or even a formal verification tool. In simulation and emulation, these act like an executable specification. Once again, when you simulate or emulate and an assertion is triggered, you have just found either a specification error, an IP-level coverage hole, or a new bug to fix.

    With all of this feedback a verification engineer can then write new stimulus to fill any coverage holes, provide debug info to hardware and software engineers to fix specification errors, and ultimately debug info to fix corner-case bugs.

    Live Webinar

Learn more about automated assertion-based verification methodologies for IP-SoC integration at the next live webinar on Thursday, August 7th, where Larry Vivolo will be presenting. There’s a brief registration process, and at the end of the one-hour webinar you can ask questions about how this methodology might improve verification quality on your present or next IC design.



    Ensuring ESD Integrity
    by Daniel Payne on 07-31-2014 at 10:00 am

Electrostatic Discharge (ESD) is a fact of life for IC design, and has been ever since electronics were first created and started failing because of sudden, large currents flowing through the design caused by human, processing or machine contact. It’s just too expensive to lay out an IC today, fabricate it, test for ESD compliance, and then iterate when the layout fails. Fortunately for us, EDA vendors have developed tools that can analyze an IC layout for ESD compliance prior to tape-out. These tools can:

• Find missing or under-sized vias
• Detect current crowding in metal layers and vias
• Pinpoint excessive bus resistance that can lead to voltage stress over protected devices
• Flag wrong or missing ESD protection devices
• Calculate ESD device burnout
• Simulate if a protected device has reached burnout
• Report if oxide breakdown is being caused by voltage stress
• Alert if a protected device triggers before a parallel ESD device
• Show if there’s an imbalanced current distribution over the fingers of an ESD device

One vendor that I talked with at DAC this year was Magwel, and I recently had a chance to get an update on their new EDA tool for ESD analysis called ESDi. This tool runs on the chip-level IC layout, extracting a netlist with layout parasitics and running a simulation-based ESD verification. The static simulations use both the Human Body Model (HBM) and the Machine Model (MM) on all of the pad-to-pad combinations:


    Human Body Model (HBM)


    Machine Model (MM)
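For a sense of scale, the standard HBM network discharges a 100 pF capacitor through a 1.5 kΩ resistor into the pin under test, so a first-order RC estimate is easy to sketch. The numbers below are only illustrative back-of-the-envelope values; the actual waveform requirements are defined by the JEDEC/ESDA HBM standards, and tools like ESDi use measured TLP table models instead.

```python
# First-order sketch of an HBM discharge (standard network: 100 pF
# capacitor discharged through 1.5 kohm). Illustrative only; real
# waveform specs come from the JEDEC/ESDA standards.

C = 100e-12       # HBM capacitance, farads
R = 1.5e3         # HBM series resistance, ohms
V = 2000.0        # precharge voltage: a 2 kV HBM zap

i_peak = V / R    # first-order peak current into the pin
tau = R * C       # RC decay time constant

print(f"peak current ~ {i_peak:.2f} A")     # ~1.33 A
print(f"decay tau    ~ {tau * 1e9:.0f} ns") # ~150 ns
```

Even this crude estimate shows why amps-level currents and sub-microsecond events make metal widths, via counts and clamp trigger voltages so critical.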

    There’s another ESD model called the Charged Device Model (CDM) that is planned for introduction later in the year, so stay tuned for that.


    Charged Device Model (CDM)

    For each specific IC technology there are Transmission Line Pulse (TLP) techniques used to create table models for ESD events, so you don’t have to do your own modeling. Your IC design can contain multiple power domains and the pads can be grouped. During ESD simulation the currents inside of each ESD cell are calculated, so that you can read reports on things like:

    • Bus resistances
    • Voltage stress in core circuits
    • Electro-migration violations
    • Device burnout

    Extracting a netlist for all of the power and ground nets is computationally challenging, so the ESDi tool uses multi-threading to help deliver results in a reasonable amount of time.


    Current Density Report

The actual path that an ESD discharge event may travel inside an IC is a tricky thing to uncover, so this tool is smart enough to trace multiple, parallel paths which include parasitic bipolar devices, self-protecting devices, plus stacked or parallel ESD devices:


    ESD Path Tracing

To be thorough, ESDi will automatically find and then simulate all current paths using the TLP models. Triggering of multiple parallel power clamps is accurately simulated in ESDi, taking into account IR drops in path resistances and trigger voltages. Some competing tools assume that only one power clamp will be triggered at any given time, which often leads to overestimation of path resistances and false positives. The latter need to be manually checked and possibly waived, which is a time-consuming and error-prone process.

You can also use different ESD specifications for each test run. The GUI makes the tool easy to learn, for both viewing results and design debug tasks.

    Actual ESD tool results from a mixed-signal FPGA chip IO ring using 83 pads designed in a 40nm technology show just how fast the ESD analysis is:

    • Extract a netlist from layout in 10 minutes
    • Run 4,160 HBM tests and Electro-Migration tests in 30 minutes
    • Using 4 threads on an i7 processor with 8GB of RAM

    A mixed-signal FPGA chip, IO ring

    You can now analyze your IC layout for ESD compliance prior to tape out and know that it will pass HBM and MM testing by using an EDA tool like ESDi from Magwel. This approach will eliminate silicon re-spins and keep your customers happy because they won’t be returning failed parts.


    How Much Power to Allocate to IoT Connectivity?

    by Eric Esteve on 07-31-2014 at 8:28 am

Ensigma is the low-power radio processing unit (RPU) architecture that completes the Imagination Technologies (IMG) portfolio, alongside the well-known PowerVR graphics processing unit (GPU) family and the MIPS CPU core products. If we take a look at the “Ensigma Series4 Explorer RPU” block diagram, we can easily identify the radio frequency (RF) interfaces, namely Bluetooth, WiFi and FM. Dedicated WiFi, FM or Bluetooth ICs have been available for years as standalone chips (ASSPs), but the great innovation is to build a multi-standard modem, MAC and Modulation and Coding Processors (MCP). Such an approach allows defining a block of functions (everything integrated into the grey area), namely the Ensigma Explorer RPU IP, to be integrated into a larger System-on-Chip (SoC). By bringing this functionality on-chip, customers can minimize power consumption, silicon footprint and system cost compared with conventional solutions. The Ensigma Series4 ‘Explorer’ RPU is the high-performance solution for implementing all of the connectivity and broadcast reception requirements of tomorrow’s complex SoCs and advanced chipsets.

If the goal is to design for a new generation of wearables, IoT and other connected devices where ultra-low power consumption and low cost are key, the Ensigma Series4 ‘Whisper’ low-power radio processing unit (RPU) architecture is the right solution, fully optimized for such applications.

Tomorrow’s IoT, wearables and other connected embedded devices demand more than the evolution of existing connectivity chipsets; they require solutions designed specifically for these emerging product categories. According to Tony King-Smith, EVP marketing, Imagination: “Many devices claiming to address IoT today are in fact repurposed from other markets such as mobile or embedded, but don’t fully address the extreme low power and cost requirements that this market will demand.” If you now take a look at the Ensigma ‘Whisper’ RPU, you immediately realize that ‘Whisper’ RPUs have been fully optimized to support next-generation ultra-low power, cost-sensitive devices, offering configurability, power efficiency, reusability and processing efficiency:

    • Configurable: the ‘Whisper’ architecture supports a broad range of lower bitrate connectivity standards needed in IoT and other small-footprint connected devices
    • Power-efficient: ‘Whisper’ RPUs feature highly optimized, configurable hardware with Imagination’s PowerGearing for Ensigma that optimizes both static and dynamic power consumption in the cores
• Reusable: ‘Whisper’ and Explorer RPUs share a common API, so customers can reuse software across the platforms
    • Processing-efficient: the tightly coupled modem/processor in ‘Whisper’ can eliminate the host processor or alleviate its load, for the smallest embedded system size and lowest power consumption

Last but not least, even though the Ensigma ‘Whisper’ RPU has just been launched, customers can rely on a silicon-proven architecture, as Ensigma RPUs are shipping today in numerous low-power audio and connectivity devices.

IoT is a very popular acronym, so popular that every chip maker or IP vendor probably has its own definition of the related applications! Imagination considers that Ensigma ‘Whisper’ RPUs should allow customers to create a “whole new generation of life-enhancing and efficiency-focused devices targeting markets as diverse as eHealth, energy, agriculture, security and many other areas.” In fact, whatever the final application, one common requirement of an IoT device is connectivity. While the Ensigma Series4 Explorer RPU supports more than thirty different deployable standards available today, an IoT-dedicated RPU like Whisper only has to support lower-bitrate connectivity standards such as Wi-Fi, Bluetooth Classic, Bluetooth Smart, NFC, GNSS and other existing or emerging low-power connectivity technologies.

In fact, consumer IoT will be a heterogeneous combination of devices, sensors, and computing power, integrated together with the main objective of ultra-low power consumption. What is the best approach for minimizing power consumption while offering efficient connectivity and computing power? Integration! Even if, for development cost reasons, such SoCs are unlikely to target advanced technologies like 28nm or even 14nm FinFET, integrating the RPU and CPU functions plus on-chip memory on a more mature technology node will still be more power efficient than building a system from many individual ICs.
Integrating the Ensigma Series4 ‘Whisper’ RPU IP in a larger System-on-Chip, along with a differentiating logic block, CPU core, on-chip memory and ROM, is a good strategy for keeping the silicon footprint and cost low, supporting low-bitrate connectivity standards, and keeping power consumption at the lowest level, as required by consumer IoT applications.

Availability: Imagination will start to roll out a line of Ensigma ‘Whisper’ cores in Q4 2014. Contact info@imgtec.com for more information.

    Imagination RPU Blog: Ensigma

    From Eric Esteve from IPNEST

    More Articles by Eric Esteve…..


    EUV Pellicles

    by Paul McLellan on 07-31-2014 at 8:01 am

Shakespeare reckoned that a man went through seven stages in his life: “All the world’s a stage, And all the men and women merely players. They have their exits and their entrances, And one man in his time plays many parts, His acts being seven ages.”

    Well, an EUV mask seems to only go through three main stages:

1. The blank supplier manufactures and inspects the blank, a silicon-molybdenum multi-layer Bragg interference mirror.
2. The mask shop patterns the mask, then cleans, repairs and inspects it.
3. The wafer fab stores it, puts it in the scanner, exposes wafers, and cleans it from time to time.


I’ve written before about one of the other big problems for EUV beyond the light-source power issue: the fact that EUV masks currently don’t have a pellicle. Just in case you have no idea what that means, here is the quick explanation. With refractive masks (traditional and immersion lithography, where the light passes through the mask), a thin transparent membrane known as a pellicle is mounted over the top of the mask. The stepper focuses its 193nm light so that the mask itself is the focal plane. If any contamination gets onto the mask, it sits on top of the pellicle, out of the focal plane, and unless it is gross contamination it will not print. EUV masks, however, are reflective (mirrors). There are lots of challenges in developing a pellicle system for EUV, largely being driven by ASML like all things EUV. One challenge is that almost everything absorbs EUV, so you can’t just make a pellicle out of some random material that is transparent at normal wavelengths.

    How important is having a pellicle? ASML reported that users of the NXE3300 (ASML’s current EUV scanner) say:

    • Required for high-volume manufacturing (HVM)
    • Essential
    • Must for production
    • High level of interest

    At SEMICON there was a half-day meeting of the Industry EUV Pellicle Working Group. I didn’t attend it but I have been passed several of the presentations.

The current state of the art is a polysilicon (pSi) pellicle with SiN caps on both sides in a frame, with 80% transmission. The table below has more details, accurate as of July 2014, for prototype pellicles.
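Because an EUV mask is reflective, the illumination crosses the pellicle twice, once on the way in and once after reflection, so single-pass transmission compounds. A quick back-of-the-envelope check (illustrative arithmetic only, not from the working group's tables):

```python
# EUV masks are reflective mirrors, so light crosses the pellicle twice.
# Double-pass throughput for a pellicle with 80% single-pass transmission:
t_single = 0.80
t_double = t_single ** 2
print(f"double-pass transmission: {t_double:.0%}")  # 64%
```

That 36% light loss compounds directly with the already-tight EUV source power budget, which is why pellicle transmission targets matter so much for HVM throughput.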

    And this second table shows the status for HVM. Note that for some reason the targets and the status are in the other order from the prototype status table.

Of course, bringing pellicles into HVM requires a whole ecosystem, including inspection, cleaning, testing and so on. This table shows who the participants are in the various areas.

    At the end of the workshop, participants were asked when the predicted EUV insertion would take place. Most people said 2017. And on being asked what they thought the biggest problems were they picked light source, mask defectivity and resists in that order. So maybe everyone expects the pellicle issue to become a non-issue by then.


    More articles by Paul McLellan…



    Then, Python walked in for verification
    by Don Dingee on 07-31-2014 at 12:00 am

    Go ahead – type “open source” into the SemiWiki search box. Lots of recent articles on the IoT, not so many on EDA tools. Change takes a while. It has only been about five years since the Big Three plus Aldec sat down at the same table to work on UVM. Since then, Aldec has also gotten behind OS-VVM, and is now linked to a relatively new open source verification project: Cocotb.

    While technologists debate the suitability of C/C++ and Java in application development, universities are churning out new programmers at an increasing rate, spurred on by the learn-to-code movement and MOOCs and a perceived need to create billions of things. According to a recent study, what language are most schools now teaching for introductory programming?

Python. (Yes, it is named in honor of those funny guys.) Like any other language, it isn’t made for everything, and as purists will point out, its interpreted nature means code isn’t lightning fast. In an age of faster microprocessors and SoCs, that is becoming less of a factor in getting results. For web-based application programming, or host-based code like we’d find in a typical EDA verification platform, it’s solid.

    This is not just admiring a silly walk; it’s about solving problems with software. Given its straightforward syntax and lack of compilation, programmers are more productive in Python, meaning they produce more quality lines of code more quickly. When software programmers are in high demand and short supply, applications will tend to be written in what they are most comfortable with. For the next generation, Python will hit the mainstream.

    If the idea of Python in verification sounds vaguely familiar, it has been tried several times before – Tom Sheffler walks through the highlights of a decade of prior attempts at using Verilog and Python together. ScriptEDA, Scriptsim, APVM and Oroboro, and PyHVL were all research attempts to make Verilog callbacks and scheduling less messy. Chris Higgs describes how things are still messy:


    Cocotb, which stands for Coroutine Cosimulation Testbench, has entered the stage. With a modern Python implementation, learning from the prior attempts and much more knowledge about the gap between Verilog and SystemVerilog and UVM, Cocotb is an all-new look at the verification problem. (Did I mention it’s open source, on GitHub?)

    Wait, we just had a new look in UVM, right? How many UVM programmers do you know? A few hundred, maybe, a couple thousand spread across the industry? On the premise that “everybody knows Python” (not true for earlier attempts), Higgs has redesigned the testbench for verification by introducing Python as the abstraction for interaction with a simulator. It could be a short walk from millions of programmers to faster code to more chip designs that work.


Higgs details a real-world example of an IP block for layer 2 / layer 3 packet parsing, about 500 lines of RTL that took about a week to design and synthesize. To verify it, he spent 1 month and 5000 lines of code in UVM. The same verification effort in Cocotb took less than 500 lines of code and 1 day of effort – and he claims it found bugs missed by UVM. That is an indicator of both improved productivity and of the risk in the highly specialized knowledge required to drive other verification languages effectively.
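To illustrate the coroutine style Cocotb is built on, here is a standalone sketch of the pattern. This is not cocotb code and not from Higgs' design: real cocotb tests are decorated with @cocotb.test() and await simulator triggers such as RisingEdge or Timer over VPI/VHPI, so a tiny hypothetical Python "DUT" stands in for the HDL side here to keep the example runnable anywhere.

```python
# Standalone sketch of the coroutine-cosimulation idea behind Cocotb.
# A fake 1-deep register stage stands in for the simulated DUT; the
# testbench is a coroutine that drives, yields control, then checks.

import asyncio

class FakeDut:
    """Hypothetical stand-in for a DUT handle: a registered pipeline stage."""
    def __init__(self):
        self.d = 0   # input
        self.q = 0   # registered output

    def clock_edge(self):
        self.q = self.d   # capture the input on the "rising edge"

async def drive_and_check(dut, stimulus):
    """Coroutine testbench: drive a value, wait a clock, check the output."""
    for value in stimulus:
        dut.d = value
        await asyncio.sleep(0)   # stand-in for: await RisingEdge(dut.clk)
        dut.clock_edge()
        assert dut.q == value, f"q={dut.q}, expected {value}"
    return "PASS"

dut = FakeDut()
result = asyncio.run(drive_and_check(dut, [1, 0, 1, 1]))
print(result)  # PASS
```

The appeal is that the testbench reads as straight-line Python: no callbacks, no phase machinery, just coroutines that suspend at clock boundaries while the simulator advances.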

    How does Aldec enter this picture? One of the simulators Cocotb supports is Riviera-PRO. A personal opinion: as a smaller player, Aldec has been more open to open source initiatives as a way to drive new innovation and create value faster. Also, Cocotb claims to be well suited for FPGA designs, a key area for Aldec which should draw even more interest.

    I’ve spared a lot of the technical details to avoid stealing the punchline. For your consideration, a recent webinar Aldec hosted with Chris Higgs describing Cocotb:

    High-level thinking: Using Python for rapid verification of VHDL and Verilog designs

    What will be interesting to watch here is how the dynamics of open source and the popularity and productivity of Python fare versus the existing field of HDL choices in verification tools. What do you think about the concept of Cocotb?


    Pizza con Questa

    by Paul McLellan on 07-30-2014 at 11:01 am

Last week I went to a lunch-and-learn at Mentor about their Questa formal product, given by Kurt Takara. Like most EDA tools these days, Questa is packaged as a number of Apps for doing different tasks. Formal verification is different from other EDA tools in that different approaches can be used for different sub-tasks. There are three outcomes: the tool succeeds in proving correctness (you are done), it succeeds in finding a bug (you have to fix it), or it doesn’t complete. Once one approach either proves a property or finds a bug, you don’t need to try other approaches. Indeed, you can extend this to products from different vendors: if one vendor’s tool succeeds in proving something correct, you don’t need to try it in another tool.

Kurt structured his talk as a top-10 list. Verification is a big deal: Mentor funded a study that found the ratio of verification engineers to design engineers is now 1:1. On top of that, the design engineers spend half their time on verification too, so roughly 75% of total effort goes into verification.


    The Apps in Questa can be split up into groups. There are automatic formal Apps such as clock-domain crossing checks. There are traditional formal Apps such as protocol checking. There are assertion-based verification Apps such as bug-hunting. And then there are some specialized Apps (that were not covered in the lunch and learn) for things like cache-coherency checking.

    The top 10 list was:

  • Clock domain crossing (CDC)

    • this checks that when different clock domains interact that appropriate precautions have been taken to handle the asynchronous nature of the signals. An important additional issue is that in modern designs where IEEE 1801 (UPF) may contain the power policy, the power intent could introduce CDC paths that need to be checked
  • AutoCheck

• this is push-button formal for debugging, requiring no assertions or testbench. It runs multiple engines and explores the design state space looking for corner-case bugs such as unreachable states or transitions in state machines
  • X-check

    • simulation doesn’t handle X values very well, and is often pessimistic (it transmits an X value when the value is well defined) or optimistic (it transmits a value when it should propagate an X). X-check finds these problem areas
  • CoverCheck

    • this App automates code coverage checks, discovering what is coverable and highlighting areas that are not coverable (there is no sequence of inputs that can reach that line of code) where waivers need to be granted
  • PropGen

    • automation of assertion-based verification. Creates assertions to check design implementation and finds coverage holes, comparing results to design intent. For IP blocks it can create self-checking blocks that detect misuse of the IP at the SoC level
  • Connectivity checking

    • connectivity checking seems like something where formal might be a sledgehammer to crack a nut, but with designs having 1000s of I/Os, often heavily multiplexed, just checking that everything is hooked up right using simulation is tedious and error-prone. The formal approach can catch not just the obvious errors but also corner cases that simulation would probably miss
  • Control and status register (CSR) checking

    • reads the RTL and the register description file and automatically creates assertions for CSR verification (and then checks them), for example ensuring that a read-only register is never written
  • Interface protocol compliance checking

    • checking that bus protocols (for example a bus bridge from processor to an AMBA bus in an ARM-based design) are correctly implemented
  • IP block design assurance

    • for blocks where it makes sense, automatically checking the integrity of the outputs from the block based on the inputs, basically that the block meets its high level requirements
  • Bug hunting

    • formal is not always the first thing that comes to mind post-silicon, but it is a good weapon in the search for a bug that previous verification (such as simulation) obviously missed. The problem is probably deep and an obscure corner case that simulation never exercised. Capturing the bug as an assertion and then unleashing Questa to find what’s wrong often quickly produces a waveform that exhibits the problem
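Several of these Apps (AutoCheck and CoverCheck in particular) boil down to exhaustively exploring the design's reachable state space rather than sampling it with stimulus. The sketch below illustrates that core idea on a hypothetical 2-bit state machine; it is only a toy breadth-first search, not a representation of Questa's actual engines, which work symbolically on far larger spaces.

```python
# Toy sketch of reachable-state exploration, the idea behind Apps like
# AutoCheck and CoverCheck. Hypothetical FSM: state -> {input: next_state}.
from collections import deque

# State 3 has no incoming transition, so it is unreachable from reset.
transitions = {
    0: {0: 0, 1: 1},
    1: {0: 2, 1: 1},
    2: {0: 0, 1: 1},
    3: {0: 0, 1: 0},
}

def reachable(reset_state):
    """Breadth-first exploration of every state reachable from reset."""
    seen = {reset_state}
    queue = deque([reset_state])
    while queue:
        state = queue.popleft()
        for nxt in transitions[state].values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

unreachable = set(transitions) - reachable(0)
print(unreachable)  # {3}
```

A coverage tool that knows state 3 is formally unreachable can grant a waiver instead of letting the team burn simulation cycles trying to hit it.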


But the proof of the pudding (formally proved, of course) is in the eating. As NamDo Kim of Samsung says (and has also presented in papers at the last two DVCons): “We have successfully deployed the wide range of Questa applications, from fully automatic formal to property checking, to improve productivity and design quality.”


    More articles by Paul McLellan…



    Wipe that smile off your device
    by Don Dingee on 07-30-2014 at 8:00 am

    Privacy is a tough enough question when using a device – but what about when we’re done with it? In a world of two year service agreements with device upgrades and things being attached to long-life property like cars and homes, your data could fall into the hands of the next owner way too easily.

    “Oh, it’s OK, I wiped the phone with a factory reset.”


    image courtesy droid-life.com

    Well, that’s comforting. Until you read the recent story about what security software firm Avast found when they bought 20 phones on eBay and started sniffing around. (Let’s keep in mind, Avast is in the business of selling security software, so raising alarm is somewhat self-serving – but the facts remain in this scenario.) I’ll spare the gory details of what they discovered, but uncovering very personal information the previous owners thought was gone was relatively easy.

Deleting a file, whether on a hard drive or in a flash file system, usually doesn’t actually get rid of the data immediately. A typical delete operation goes into the file directory, removes the file name, and invalidates the pointer to where the file was stored – effectively saying that space is now free for use by other data. If your storage system is relatively full, some application will likely come along and overwrite some or all of the deleted content soon.

    Until that happens, the file is still out there for the taking, unless something else is done. There are secure wipe routines available, which typically go over free storage space and rewrite a sequence of something like all 1s followed by all 0s to positively erase latent data. If a user loads such an application, there is a downside: it can take a very long time, and the operating system is pretty much consumed while the app beats on the storage system. But, the data is now gone.
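A toy model makes the delete-versus-wipe distinction concrete. The flat byte store and directory below are hypothetical; real file systems, and especially flash with wear-leveling, complicate the picture considerably, but the principle is the same.

```python
# Toy model of delete vs. secure wipe on a flat byte store (hypothetical;
# real file systems and flash wear-leveling are far more complicated).

storage = bytearray(b"\x00" * 32)
directory = {}   # filename -> (offset, length)

def write_file(name, data):
    directory[name] = (0, len(data))
    storage[0:len(data)] = data

def delete_file(name):
    # Ordinary delete: only the directory entry goes away.
    del directory[name]

def secure_wipe_free_space():
    # Overwrite every byte not referenced by a directory entry:
    # all 1s, then all 0s, to positively erase latent data.
    used = set()
    for offset, length in directory.values():
        used.update(range(offset, offset + length))
    for pattern in (0xFF, 0x00):
        for i in range(len(storage)):
            if i not in used:
                storage[i] = pattern

write_file("selfie", b"secret")
delete_file("selfie")
print(bytes(storage[:6]))   # b'secret' <- the data survives a plain delete
secure_wipe_free_space()
print(bytes(storage[:6]))   # b'\x00\x00\x00\x00\x00\x00' <- gone after wipe
```

Note the cost hinted at in the article: the wipe touches every free byte twice, which on a many-gigabyte flash device is exactly why the routine takes so long and monopolizes the storage system.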

    Some devices or applications encrypt hard drives or flash storage – oh, say an Apple device running iOS 5 or later. The Apple version of factory reset on a later generation device supporting hardware encryption takes out the encryption key, making it a lot harder to use any data found. Those phones Avast pilfered for their study? All Android, where security is a variable; an OEM could certainly take steps to secure a device better.

    This raises a question for the Internet of Things: how secure is a device when it changes hands? If you decide to leave your Nest thermostat in your house when you move, are your personal settings and Wi-Fi secure passwords really gone – or just invalidated? How about that car you just traded in, especially if it uses an OEM in-dash system based on QNX Auto or Microsoft SYNC? No disparagement or rumor-mongering intended; these particular systems may or may not actually implement flash encryption or secure overwrite at the device – if someone has more details on how device reset is handled for these examples, I’d welcome a comment.

    The point is, you don’t want to be designing an IoT device that leaves data to be easily found after the new owner takes possession. Nothing is absolutely secure, but it seems with hardware encryption relatively easy to implement in SoCs and MCUs these days, personal info should be encrypted, and the key reestablished as the first step when a new service account is set up.

    If that new encryption key is stored in non-volatile memory with emulated multiple time programmability (eMTP), such as Sidense SiPROM, initiating an account with a new secure key becomes a customer service advantage – for both the new owner and the previous one. One advantage of storing encryption keys in Sidense 1T-OTP is they are virtually impossible to decipher by reading bit-cell states, meaning both old and new keys are more secure against reverse engineering efforts.

    Again, nothing is absolutely safe. As the Apple versus Android experience in the Avast study shows, encryption provides a layer of protection that blocks the vast majority of simple access attempts. NVM IP adds value to the encryption strategy. We are not going to stop the selfie-on-Snapchat phenomenon, but we should be able to keep the next device owner from seeing it after the fact.


    How to Trim Automotive Sensor?

    by Eric Esteve on 07-30-2014 at 5:21 am

The electronic content in automotive is exploding: the market for automotive electronics systems is expected to grow from $170 billion in 2011 to $266 billion by 2016 (Strategy Analytics). When you sit in a brand-new car, you immediately see the difference from a ten or even five-year-old vehicle, as you can exercise MP3 music players, GPS, or a smartphone-like touch screen, functions collectively called infotainment. But there is a segment representing an even faster-growing market, namely the automotive sensing system market. When driving a car, you certainly activate sensor-based functions. Sensors are by nature analog functions, at the edge of the real world, even if the processing is handled later in the data flow by a digital processor or MCU, and a sensor needs to be calibrated. As we can see below, this calibration has to be done multiple times during the sensor’s life cycle, and the related data safely stored. The ideal location is a Non-Volatile Memory (NVM). In fact you need only a small memory, from 128 bits to 2K bits, and to optimize system real estate, power consumption and overall cost, an NVM IP instance integrated into the main IC is the best option.
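Two-point trim is a typical example of the calibration data such a small NVM instance would hold. The sketch below is hypothetical (real trim schemes, word formats and reference points are device-specific): measure the sensor at two known reference conditions, derive a linear gain/offset correction, and store those words in NVM for use at every subsequent reading.

```python
# Sketch of two-point sensor trim: derive a gain/offset correction from
# two reference measurements and store the result as the calibration
# words an on-chip NVM instance would hold. Hypothetical scheme; actual
# trim formats and reference points are device-specific.

def compute_trim(raw_lo, raw_hi, ref_lo, ref_hi):
    """Linear correction: corrected = gain * raw + offset."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return gain, offset

# Calibration run: the sensor reads 510 counts at 0.0 units
# and 1490 counts at 100.0 units.
gain, offset = compute_trim(510, 1490, 0.0, 100.0)
nvm = {"gain": gain, "offset": offset}   # stands in for the stored NVM words

raw = 1000                                # a later field measurement
corrected = nvm["gain"] * raw + nvm["offset"]
print(round(corrected, 2))  # 50.0
```

Because trim may be repeated at wafer sort, final test and even in the field over temperature, the storage has to be rewritable and retain data for the life of the part, which is exactly the multiple-time programmable, long-retention requirement discussed below.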

This automotive sensing system market is attractive, due to the strong growth rate expected over the next five years, but it is also very demanding in terms of quality standards, data retention, test coverage… and cost. According to John Koeter, vice president of marketing for IP and prototyping at Synopsys, “Designers developing automotive ICs increasingly expect NVM IP providers to support the Grade 0 temperature range (-40 to +150C) and AEC-Q100 standards while reducing IP area and cutting test times.”

    Synopsys DesignWare AEON Trim NVM IP is available in standard 180-nanometer (nm) 5V CMOS and Bipolar CMOS DMOS (BCD) processes without a need for additional masks or process steps. In addition, faster programming times reduce NVM test times by 3X compared to alternative NVM solutions, enabling designers to reduce production test times and minimize test costs for automotive and industrial ICs.

    I recommend listening to this webinar, Designing with Non-Volatile Memory for High-Volume Automotive ICs, as it gives you the opportunity to go very deep into the NVM technology and better understand the relation between data retention requirements and test strategy, as well as how stringent the automotive AEC-Q100 quality standard is. There is also a good white paper available on this topic: Developing High-Reliability Reprogrammable NVM IP for Automotive Applications.

    Just take a look at the automotive sensing system market evolution, the growth rate is impressive: from 20 million units in 2013 up to 150 million in 2020!

    Coming back to the DesignWare NVM IP cell, it is important to note that the architecture is not based on a one-time programmable (OTP) or even a few-time programmable (FTP) cell, but is a true multiple-time programmable NVM, offering endurance up to 10K cycles for 128-bit to 2K-bit instances and over 15 years of data retention in automotive applications. The NVM IP targets 180-nanometer (nm) 5V CMOS and Bipolar CMOS DMOS (BCD) processes with no mask adder, and it operates from a single core supply (1.8V), thanks to an integrated charge pump. The chip designer can integrate this NVM IP as a black box (no need to be an NVM expert); it offers a 3X improvement in NVM test time due to faster programming and a 75% improvement in area, keeping the overall cost as low as the automotive market requires.

    Did you know that more than 500 million ICs with DesignWare NVM IP ship each year? The company is committed to delivering high-quality, high-reliability IP through an extensive test and qualification methodology developed over more than 10 years in NVM, and is now attacking the sensing system market with this AEON Trim NVM IP.

    From Eric Esteve from IPNEST

    More Articles by Eric Esteve…..


    High Tech Headwinds and Project/People Management

    High Tech Headwinds and Project/People Management
    by Peter Gasperini on 07-30-2014 at 4:00 am

    In previous posts, we discussed the growing set of challenges and threats faced by the semiconductor industry. From saturating & stagnant systems markets to the gears starting to seize up in that engine of growth we’ve been calling Moore’s Law, chip revenues are – with the exception of memory price boosts from supply shortages – stalling more or less across the board.

    The industry is responding with its usual can-do attitude on the technology front, with a great deal of work going into new memory capabilities such as ReRAM and MRAM, 2.5D packaging & test, and calls for fundamental process improvements such as FD-SOI and monolithic 3D-IC. All of these are being pursued in hopes of adding several more years of viability for silicon on the 3P (price, performance and power) improvement curve and pushing off the inevitable moment when the technology simply runs out of steam.

    It is, of course, vital that the industry pursue these enhancements while awaiting the arrival of whatever it is that will replace silicon. Companies must take every step they possibly can to reduce the rate of price erosion on their offerings. Yet this is only one aspect of what needs to change in the chip industry. The way high tech companies are organized needs to be completely overhauled.

    And each man stands with his face in the light
    Of his own drawn sword,
    Ready to do what a hero can.
    – Elizabeth Barrett Browning

    Consider the matter from the following perspective. All enhancements, innovations, upgrades, feature improvements, customer support, product maintenance and new product development originate from the working stiffs at a semiconductor enterprise. Stated differently: the source of value-add in any chip company is, ultimately, its people.

    That source of wealth and prosperity is already under severe stress in every High Tech company, whether in systems, software or semiconductors. Quite a few firms since 2008 have had to reduce staff and shrink operations through closing or selling off divisions. This week alone, Microsoft – which has been doing rather well over the years despite chronically negative press coverage – announced it was cutting 18,000 employees.

    This leaves company workforces, already squeezed for maximum productivity, in a critically overburdened condition. The difference between enterprises that survive this interim period of technology transition in a relatively healthy state and those which struggle or wither will be determined by the strength, skill and sagacity of their management. Put another way, firms that can utilize their constrained and limited resources most effectively will be in a position to dominate when the new technology that replaces silicon becomes available.

    Change is not made without inconvenience, even from worse to better.– Richard Hooker

    Which brings us to an important question: how do technology firms organize their people, projects and programs nowadays? All of us have been called into meetings with members of a Task Force, Core Team or what have you. Someone produces a Gantt chart and passes copies around, either on paper or digitally. Someone also takes meeting notes to distribute, and there is a spreadsheet generated for things like budgets. Thus, the most common tools employed include scheduling software along with a couple of selections from MS Office. If the team is meeting to develop a new product, they will use these tools to create a project plan that includes a request for resources and a timeline commitment. After the project goes through a review and approval cycle, executive management will then assign personnel as available, pull in the schedule by at least 20% and drive the team to execute.

    Of course, this almost never works – features get cut along the way and deadlines get missed. It’s a ludicrous state of affairs with which we have all learned to cope. Nonetheless, such inefficiencies will no longer be tolerable in this period where price-protecting feature improvements will be harder to come by and have a much shorter shelf life. Thus, the tools and methodologies for developing new products and enhancing existing offerings will need to evolve and improve dramatically.

    High Tech is a realm of engineers – people who improve and fix broken old things and develop new things. Ever true to their nature, some engineers out there are now creating offerings to fix the problems of current product development tools and overcome their inadequacies.

    These new tools capture a company’s entire institutional history of product development in order to specifically identify all of its characteristics – including inefficiencies, skillsets, impediments, strengths and weaknesses. From such a comprehensive database, a much more sophisticated method of developing product, project and resource planning becomes possible – one where teams can be assembled with a more appropriate mix of abilities for a given project, programs can be scheduled in detail with much higher probabilities of meeting milestones, and organizational frictions can be much more precisely diagnosed and consequently reduced across the board.

    Whereas the previous approach has been based on a fundamentally seat-of-the-pants assessment, the new tools and their employment methods are quantitative. This allows a company to become smarter about assigning people with the right skills, training and hiring, planning for resource availability, shoring up the organization’s deficiencies, determining accurate schedules and then meeting those schedules.
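    As a purely hypothetical illustration of the quantitative staffing idea described above (no specific vendor tool works this way), one could score each engineer's skill profile against a project's requirements and rank candidates for a team:

```python
# Hypothetical sketch: rank engineers by how well their skill levels
# cover a project's weighted requirements. All names and the scoring
# scheme are invented for illustration.

def match_score(skills, requirements):
    """Fraction of required skill-weight the engineer covers."""
    covered = sum(min(skills.get(s, 0), lvl) for s, lvl in requirements.items())
    total = sum(requirements.values())
    return covered / total

engineers = {
    "alice": {"rtl": 3, "verification": 2},
    "bob":   {"verification": 3, "scripting": 1},
    "carol": {"layout": 3},
}
needs = {"rtl": 2, "verification": 3}

ranked = sorted(engineers,
                key=lambda e: match_score(engineers[e], needs),
                reverse=True)
print(ranked[0])  # best-matched engineer for this project -> alice
```

    The point of building such scores from an institutional database, rather than from memory, is that hiring gaps and chronic schedule risks become visible as numbers instead of anecdotes.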

    There are many more details regarding this new resource and program/project management methodology. The issue is explored in depth at http://vigilfuturi.blogspot.com/2014/07/high-tech-people-management-in-hard.html.


    Enterprise IP Management – A Whole New Gamut in Semiconductor Space

    Enterprise IP Management – A Whole New Gamut in Semiconductor Space
    by Pawan Fangaria on 07-29-2014 at 8:00 pm

    The world of IPs has completely changed the semiconductor design scenario, specifically the fabless design space. Today IPs are key components of any large semiconductor design, much as ancillaries are in automotive design. And this is just the beginning: in the days to come we will see SoCs as little more than collections of interconnected IPs. Design activity will concentrate on creating robust, high-quality IPs, while SoC providers concentrate on selecting and integrating the best of them. This hypothesis is supported by the rapid growth of the IP business and its providers across the world, in design IPs and even verification IPs. Does managing IPs look like a simple task amid this ever-growing volume? It is no longer a ‘design data management only’ task; that is only the tip of the iceberg, and a much larger problem hides beneath. Managed well, IPs can pay rich dividends to SoC providers; managed poorly, they can incur heavy losses.

    This reminds me of Dassault’s semiconductor strategy, which I blogged about a few months ago, and which moves the semiconductor space from productivity to profitability. ‘Enterprise IP Management’ is a key thrust area in that strategy. Use of IPs can definitely increase productivity to a large extent; these days there are VIPs to accelerate design verification as well. However, that may not necessarily translate into profitability. So what needs to be done?

    It was a pleasant occasion when I came across a video interview of Michael Munsey, Director of Semiconductor Strategy at Dassault Systèmes, organized by EDACafe at the 51st DAC. Although Michael talked about four major strategies, viz. Design Collaboration, Enterprise IP Management, Requirements Driven Verification and Manufacturing Collaboration, in the context of the silicon thinking experience for profitability, it was Enterprise IP Management, an area about to explode, that caught my attention most; I wanted some insight into this strategy.

    Michael rightly says that Dassault’s emphasis is on tying the engineering domain to the business domain to bring the value up at the business level, i.e. make it profitable for all parties in the value chain. While it is essential for IPs to be robust with production quality, it is a bigger challenge to select and use the right IPs in the right designs to augment that value in totality. Amid the large volume of IPs held by companies (suppliers as well as consumers), criteria to select, use and manage those IPs abound. Among the must-have criteria are quality, the validation process, the legacy carried by an IP, its past usage, security, safety, licensing and reliability. With the increased use of IPs in SoCs, third-party IPs are multiplying, making the problem domain more critical; one must look at the authenticity of an IP, whether it has been partnered with someone, who has used it, what the guarantee of quality is, and so on.

    So, what is the IP solution Dassault is working on to provide at the Enterprise level? Although this space will see continuous enrichment from Dassault with the progress of IP world, the key capabilities to manage the whole process of IP selection, validation, and use (both from internal as well as external sources) for SoC integration include areas such as –

    IP Cataloging – This includes operations such as stocking (both from internal as well as third party), indexing, searching, attaching information tags which may include a host of information about the IP, track record of its usage and defect resolution, the third party and so on. The SoC integrator can easily select an IP with appropriate criteria and properties which can add desired value to the design up the design chain.

    IP Governance – This includes the process of creating IPs, their deliverables, the validation process, qualification criteria for third-party IPs, licensing, authentication of third parties, etc. Standards to pass the gate for use must be in place for both internal and external IPs.

    Issue & Defect Tracking – This is a complete system for tracking issues and defects and the process of change management for their resolution. It’s linked with the cataloging for propagating necessary tracking information up the design chain.

    IP Security – With the growth of IPs across the world, IP security has become one of the most important criteria for the success of semiconductor designs. In order to secure IPs and their authenticated use, a proper access control mechanism must be in place. Also, when IPs are used, their security tags must be propagated up the design chain to secure the design as well.

    IP Sourcing – Although some aspects of third-party IPs from registered vendors are controlled under IP Cataloging and IP Governance, a whole set of processes is required for sourcing IPs from multiple vendors across geographies. Selecting the vendors itself can be a critical task; add to that the comparison of various properties, quality, cost and other criteria in choosing the proper source for IPs.
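    The cataloging idea above can be made concrete with a small sketch. This is an illustrative model only, not Dassault's actual API: entries carry metadata (vendor, quality grade, license, usage track record), and an SoC integrator filters the catalog on selection criteria.

```python
# Illustrative IP catalog sketch (not any vendor's real API): stock
# entries with metadata tags, then search on selection criteria such
# as quality grade, prior usage, and licensing.

from dataclasses import dataclass, field

@dataclass
class IPEntry:
    name: str
    vendor: str          # internal team or third party
    quality_grade: str   # e.g. "production", "beta"
    license: str
    tape_outs: int = 0   # prior silicon usage, part of the track record
    open_defects: int = 0
    tags: set = field(default_factory=set)

class IPCatalog:
    def __init__(self):
        self.entries = []

    def stock(self, entry):
        """Add an internal or third-party IP to the catalog."""
        self.entries.append(entry)

    def search(self, **criteria):
        """Return entries matching every given attribute exactly."""
        return [e for e in self.entries
                if all(getattr(e, k) == v for k, v in criteria.items())]

catalog = IPCatalog()
catalog.stock(IPEntry("usb3_phy", "VendorA", "production", "per-use",
                      tape_outs=12))
catalog.stock(IPEntry("ddr4_ctrl", "internal", "beta", "internal"))

hits = catalog.search(quality_grade="production")
print([e.name for e in hits])  # -> ['usb3_phy']
```

    In an enterprise system the same entries would also carry links to governance status, defect-tracking records and security tags, so a search result reflects the full selection criteria rather than a bare parts list.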

    There can be many other areas to manage, such as royalties on IPs and metadata management across large enterprises. Dassault enables a new platform-based design management style which can be modular with respect to a particular design: its management with the IPs used, cataloging, sourcing and procurement, governance, security, and different variants of the design itself. With ENOVIA DesignSync and IP Management solutions, it creates a single environment to manage internal and external IPs as well as links to designs and processes.

    Stay tuned to hear more about profitable strategies for semiconductor design solutions. Dassault has successfully deployed such strategies in other industries, such as mechanical and industrial design. Now it is the semiconductor industry’s turn, where Dassault has a large install base among top semiconductor companies.

    More Articles by Pawan Fangaria…..