
Semicap Thoughts: ASML AMAT INTEL SAMSUNG TSMC MICRON
by Robert Maire on 07-27-2017 at 12:00 pm

ASML reported results in line with, and slightly ahead of, expectations, which helped push ASML and the other semicap stocks back to the valuations they held before the two-step pullback that lasted about a month. We are now back at relatively high, in many cases record, valuations not seen in a long time.

Rather than the negative performance we saw with Micron’s earnings it seems the market is more receptive and less cautious in the current earnings cycle.

Memory, both NAND & DRAM continue to be the primary incremental spenders that are powering the current strong up cycle.

While we remain bullish on the strength of the current cycle and think it has legs, we remain on guard for any signs of weakness in the memory market that could end the party.

Separately, Applied announced the transition of its CFO from Bob Halliday to Dan Durn, formerly CFO of NXP. It's as good a time as any to make the change, as things are about as good as it gets in the industry and Applied is humming.

Thanks for the memories…
Looking at ASML and all that we heard at Semicon, it's very, very clear that memory and its associated spending is making all the difference in the world to the semicap space. TSMC's spending will be down by 50% in H2, and Intel isn't lighting any fires in spending, although we may see an uptick once the 10NM process is locked down in Israel. Samsung remains the spender du jour, with memory being their primary target.

Memory always makes us nervous, as DRAM spending seems to start and stop without much warning; slowing DRAM has been the cause of prior cyclical downturns.

Good Memories and Bad Memories….NAND & DRAM

In our view, which we have talked about many times, NAND remains an almost perfectly price elastic market with almost infinite demand while DRAM is a finite, somewhat price inelastic market with limited demand. Demand for NAND always seems to soak up supply while DRAM has without doubt gotten into oversupply situations that take a while to wring out excess capacity.

We get more nervous when ASML talks about more DRAM business but seems to have a less clear idea of the length or depth of that demand.

Semicap companies don’t have a lot of choice in the matter. They have to sell equipment while the sun shines, whether it’s to logic, memory or foundry, and no one is going to turn down business because it’s DRAM business. But as the amount of DRAM increases, so does the exposure to the most volatile end market.

Investors seem resilient and receptive
Given the bounce back we have seen in the stocks, the pull back now seems like just a passing, bad dream. The speed of the bounce back seems to indicate the underlying support for the stocks and the long term positive semiconductor story.

Intel – Lost in translation?

We will be very curious as to Intel’s report and how it relates to 10NM. We haven’t heard a lot about 10NM progress. We were the first to report on the delays in 10NM process, and we thought that things had gotten nailed down but something seems to have happened on the way to Intel’s fab in Israel. Perhaps the recipe got lost in translation.

What we would like to hear is that Intel has worked out the bugs and will ramp to volume and ramp capex but we don’t think so…..perhaps we will be pleasantly surprised.

Samsung wants to triple foundry business

Samsung has publicly said it wants to expand its foundry business over the next 5 years to close the gap with TSMC. We think at least part of the reason that Samsung has been leading the EUV parade is to try to gain any sort of process advantage over TSMC to win foundry business. We don’t think it’s going to help Samsung a lot, and we think Samsung is doing a bit of wishful thinking about foundry upside, as many customers will still be concerned about using a potential competitor as a foundry….and that starts with the biggest semiconductor buyer of all….Apple.

The stocks
As long as CEOs and CFOs don’t say anything stupid on their conference calls we will likely have a pretty good earnings season given the tone in the stocks and ASML’s lead. Business remains super strong and the long range radar is also in good shape. We are more positive exiting Semicon as the negative tone from investors has also vaporized.

We would expect most to report a beat and raise. Sub suppliers in the space seem to be doing best out of the group.

Applied passes the CFO torch

Bob Halliday, AMAT’s CFO, is transitioning out to an eventual retirement, and he is going out on a very high note at a great time. Following in his footsteps will be hard, but Applied got about the best we could imagine. Dan Durn brings experience from NXP & Freescale, GlobalFoundries and ATIC, a Naval Academy degree in control systems, and Columbia business school. Sounds like someone from central casting; you can’t get much better than that CV. The cherry on top is an M&A stint at Goldman, the training ground for much of the financial world. We think this could certainly help AMAT work on some inorganic growth, which will be difficult after the TEL fail and KLAM kollapse.

We wish both Bob and Dan great success in their new respective roles with the knowledge that both will likely excel given their track records and the AMAT surroundings.


A Functional Safety Primer for FPGA – and the Rest of Us
by Bernard Murphy on 07-27-2017 at 7:00 am

Once in a while I come across a vendor-developed webinar which is so generally useful it deserves to be shared beyond the confines of sponsored sites. I don’t consider this spamming – if you choose, you can ignore the vendor-specific part of the webinar and still derive significant value from the rest. In this instance, the topic is design for functional safety, particularly as applied to FPGA design, and how these techniques can be implemented using Synopsys’ Synplify Premier. In fact, in this very detailed review I saw little that isn’t of equal relevance in ASIC design, though typically implemented using different tools and methodologies.


The webinar kicks off with a review of standards requiring functional safety, refreshingly going beyond ISO 26262 to mention IEC 61508 for industrial applications, IEC 60601 for medical applications and DO-254 for avionics, and even applications in datacenters which must support very high up-times (to 99.999%). The webinar speaker (Paul Owens, Sr. TME in the Synopsys Verification Group) then set up the context for the discussion by noting that each of these standards measures functional safety through assessment of risks with or without mitigation or detection.

You probably think “yeah, yeah, triplication or ECC and stuff like that”. In fact, ISO 26262 doesn’t specify what safety mechanisms to implement – you are free to invent custom methods if you want, but there are widely-accepted approaches which Paul spells out in detail. He mentions as one example the commonly used dual-core lock-step computing where two CPUs perform the same calculation in parallel and results are compared to detect faults.
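To make the idea concrete, here is a minimal Python sketch of the lock-step principle (illustrative only; in real dual-core lock-step hardware, the comparison happens every cycle on register and bus outputs):

```python
def lockstep_step(core_a, core_b, inputs):
    """Run the same computation on two 'cores' and compare the results;
    a mismatch flags a fault. Models only the principle."""
    result_a = core_a(inputs)
    result_b = core_b(inputs)
    return result_a, result_a != result_b

healthy = lambda x: 2 * x + 1
faulty = lambda x: 2 * x        # models a stuck-at fault in one core

assert lockstep_step(healthy, healthy, 5) == (11, False)  # cores agree
assert lockstep_step(healthy, faulty, 5)[1] is True       # fault detected
```

Note that detection alone does not tell you which core is wrong – that is why lock-step is usually paired with a recovery mechanism, or extended to three cores with voting.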

Paul addresses design principally for two classes of fault in this webinar – stuck-at faults caused by physical damage (perhaps through electromigration) and soft errors initiated say by radiation or EMI which cause a transient in a signal (single event transient) which may, in turn, cause transition of a register into an unexpected state (single event upset). Each mechanism Paul describes is intended to mitigate or at least detect logic errors caused by these types of problem.


He kicks off with safety techniques for finite-state machines (FSM). FSMs are particularly sensitive to soft error problems, since an incorrect bit flip can send the FSM in an unexpected direction, causing all kinds of issues downstream. Paul describes two recovery mechanisms: “safe” recovery where a detected error takes the state machine back to the reset state and “safe case” where detection takes the FSM back to a default state (In Synplify Premier, this also ensures the default state is not optimized out in synthesis – you would need to guide similar behavior in other tools).
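The “safe case” idea can be sketched in a few lines of Python (a hypothetical one-hot FSM; in RTL this would be the default branch of the state-transition case statement, which the synthesis tool must preserve rather than optimize away):

```python
# Hypothetical one-hot FSM: any state value outside the legal set
# (e.g. after a single-event upset) falls through to a recovery state
# instead of locking up -- the "safe case" behavior described above.
IDLE, RUN, DONE = 0b001, 0b010, 0b100

def next_state(state, start, finished):
    if state == IDLE:
        return RUN if start else IDLE
    if state == RUN:
        return DONE if finished else RUN
    if state == DONE:
        return IDLE
    return IDLE  # illegal/corrupted state: recover to a default state

assert next_state(IDLE, start=True, finished=False) == RUN
assert next_state(0b011, start=False, finished=False) == IDLE  # corrupted state recovered
```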

It is also possible to support error correction in FSMs where state encoding is such that the distance between current state and next state is 3 units. In this case single-bit errors can be detected and corrected without needing to restart the FSM or recover from a default state.
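The distance-3 property can be illustrated with a small Python sketch (the encoding below is hypothetical, chosen only so that every pair of valid states differs in at least 3 bits):

```python
# Hypothetical state encoding with pairwise Hamming distance >= 3, so a
# single-bit upset lands closer to its original codeword than to any
# other valid state and can be corrected in place.
VALID_STATES = {0b00000, 0b00111, 0b11001, 0b11110}

def hamming(a, b):
    return bin(a ^ b).count("1")

def correct(state):
    for s in VALID_STATES:
        if hamming(state, s) <= 1:
            return s
    return None  # multi-bit error: detect only, fall back to safe recovery

assert correct(0b00101) == 0b00111  # single-bit flip corrected
assert correct(0b01010) is None     # double-bit error only detected
```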


Now for memories. FPGA and IP vendors provide ECC (error-correcting code) RAMs and of course you can use those IPs. In some cases you may choose to use triple-modular redundancy (TMR) for RAMs that do not support error correction in the configurations you need. (TMR triplicates a function and follows it with majority voter logic; this allows two correctly operating functions to override one function with an error.) Also, something that was new to me, you can use error detection to trigger “RAM scrubbing”, a technique commonly used on configuration RAMs to force a re-program of that memory.
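The majority voter at the heart of TMR is simple enough to show directly – a bitwise two-out-of-three vote (Python sketch):

```python
def tmr_vote(a, b, c):
    """Bitwise two-out-of-three majority: each output bit is 1 iff at
    least two of the three replicated copies drive it to 1, so a fault
    in any single copy is outvoted."""
    return (a & b) | (a & c) | (b & c)

assert tmr_vote(0b1011, 0b1011, 0b0011) == 0b1011  # faulty third copy outvoted
```

Note the voter itself is a single point of failure, which is one reason some TMR flavors replicate the voters as well.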

IOs are as prone to faults as any other part of the circuit, and mitigation in some cases requires triplication. This is implemented through Distributed TMR (DTMR). TMR comes in multiple flavors – local, block, distributed and global. Here’s one useful reference.


TMR can be used on individual registers and register banks and it can also be applied to blocks of logic in the design, but here there are wrinkles. This is again viewed as distributed TMR but usage differs for cyclic, non-cyclic and some other approaches. In non-cyclic cases, there’s no feedback path from internal registers in the block; in this case triplication is straightforward. In cyclic cases, where there are internal feedback loops, those loops can (optionally) be broken to insert voter logic to limit accumulation of errors, in addition to triplicating the blocks and following those structures with majority voter logic.
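For the cyclic case, the effect of inserting a voter into the feedback path can be sketched as follows (Python, illustrative; `seu_mask` models a single-event upset hitting one of the three copies):

```python
def vote(a, b, c):
    # bitwise two-out-of-three majority
    return (a & b) | (a & c) | (b & c)

def cycle(regs, next_fn, seu_mask=0):
    """One clock of a triplicated register with a voter in the feedback
    loop: the voted value feeds all three copies, so an upset in one
    copy is corrected on the next cycle instead of accumulating."""
    voted = vote(*regs)
    nxt = next_fn(voted)
    return [nxt ^ seu_mask, nxt, nxt]   # upset (if any) hits copy 0

inc = lambda x: (x + 1) & 0xF           # a trivial 4-bit counter

regs = [0, 0, 0]
regs = cycle(regs, inc, seu_mask=0b0100)  # SEU corrupts one copy
regs = cycle(regs, inc)                   # voter masks the error
assert regs == [2, 2, 2]
```

Without the voter in the loop, the corrupted copy would feed back on itself and the three copies would diverge permanently – which is exactly the error accumulation the broken-loop voters prevent.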

Finally, there’s a physical constraint you may not have considered in TMR. Radiation-induced soft-errors can trigger not just the initial error but also secondary ionization (a neutron bumps an ion out of the lattice, which in turn bumps more ions out of place, potentially causing multiple distributed transients). If you neatly place your TMR device/blocks together in the floorplan, you increase the likelihood that 2 or more of them may trigger on the same event, completely undermining all that careful effort. So triplicates should be physically separated in the floorplan. How far depends on free-ion path lengths in secondary ionization.

Naturally many of these methods can be implemented in FPGA designs using Synplify Premier; Paul calls out commands and shows generated logic examples during the course of the webinar to illustrate each case. But whether or not you are an FPGA designer, I recommend you set aside some time for personal skills improvement by watching the webinar HERE.


Connected Car as Data Access Battleground
by Roger C. Lanctot on 07-26-2017 at 12:00 pm

The big news over the weekend was accusations flying in Europe over German car company anti-competitive collusion. While it’s true that European companies of all sorts are able and inclined to legally share information in a manner that is foreign to U.S.-based companies, proving anti-competitive activity will be a high bar for German authorities to clear. One also has to wonder at the motivation for German regulators to blow the whistle, should anti-competitive behavior be identified.

A less visible clash has broken out within the wider industry over ownership and access to vehicle data. The main German automotive industry organization, the Verband der Automobilindustrie (VDA), recently released guidelines for data collection and sharing based on two principles:


  • Each OEM has the role of a system administrator and takes the responsibility for the safe and secure transfer of vehicle generated data from the vehicle to a standardized and maintained business-to-business (B2B) OEM interface.
  • Third parties can access vehicle data directly over the OEM B2B interface or via neutral server(s) which gather data from the OEM servers. Behind the neutral server, providers can attach any services they choose.

    A third, less publicized corollary is that car companies are using the release of this data policy to justify the eventual restriction of access to the on-board diagnostic port (OBD-II) while the vehicle is in operation. Car makers and their suppliers are motivated by both cybersecurity concerns and control of customer engagement during real-time operation of the vehicle.

    The initiative reflects the increasing emphasis on vehicle connectivity for data gathering via the embedded modem. Car makers want to control the information related to vehicle performance and diagnostics – especially real-time operational information essential to usage-based insurance and the development of automated vehicle technology.

    Car makers and their suppliers are saying: “If you want vehicle data, you must come to us.” It is the dawn of the vehicle data broker.

    Not much data is collected today. The expectation, though, is that much larger amounts of data will be collected in the future and car makers want to control the marketplace for that information.

    The two simple VDA principles cloak a complex scheme via which German car makers and their suppliers are seeking to short-circuit European Union privacy concerns by creating the infrastructure for data sharing. The precise nature of this infrastructure was made clearer by an announcement released by BMW at the end of May: BMW will disclose its data-collecting activities to its customers and give those customers the ability to control what data is collected, what data is shared and with whom.

    This program is called BMW CarData: https://www.bmw.com/en/topics/fascination-bmw/connected-drive/bmw-cardata.html Access to the program is currently live only in Europe and is managed by BMW, possibly with the assistance of IBM. Daimler is expected to offer a similar program developed in cooperation with Israeli startup Otonomo. Essential to both programs will be the participation and support of third parties, of which there are none at the outset. Insurance companies offering usage-based insurance are candidates to be early participants.

    The VDA-led neutral server strategy was quickly adopted by the European Automobile Manufacturers’ Association (ACEA) and the European Association of Automotive Suppliers (CLEPA) – marking a Europe-wide effort to control and monetize vehicle data with customer consent. Almost as quickly, the effort was publicly opposed by a large coalition of aftermarket service providers (dealers, repair shops, leasing companies, and insurance companies) for what was considered to be its less-than-fully-transparent approach to data sharing.

    Statement of the Coalition for interoperable data access:

    http://www.leaseurope.org/uploads/documents/press-releases/pr161212-Coalition%20for%20Interoperable%20Data%20Access.pdf

    “The undersigned call upon the European institutions to create a robust regulatory framework for an interoperable, standardized, secure and safe digital in-vehicle telematics platform as intended by the eCall mandate, to maintain true consumer choice, independent entrepreneurship, competition and innovation for all services ‘around the car.’”

    What all parties involved seem to forget is the fact that European car makers have demonstrated their willingness and ability to alter or mask car data – as revealed by the calamitous diesel scandal still unfolding in Europe and ensnaring a growing number of senior executives across multiple auto makers. One can hardly blame the aftermarket coalition for opposing the VDA/CLEPA/ACEA data sharing initiative given this lack of trust.

    These organizations are seeking direct access to vehicle data so that they may engage directly with vehicle owners without the interference of the auto maker. The neutral server approach is intended to allow the vehicle owner the ability to direct the automaker to share vehicle data with chosen service providers.

    This assumes that the auto maker has established data sharing connections with those service providers and still allows the auto maker to control the interaction. The principle behind the VDA approach is that only data currently being collected will be shared. The suggestion from VDA is that not much data is actually being collected.

    But it comes back to trust. Service providers will want data from individual vehicles and some data gathered by car makers is only available as an aggregate and may not be collected at sufficient frequency to fulfill the third party organization’s needs.

    Third parties want complete transparency – presumably opted into by the consumer. Delivering such transparency will require auto makers to open up their wireless access to the vehicle. Unlike the smartphone market, where such access is routine, wireless data plans for cars are usually limited due to cost and data access is limited due to cybersecurity and customer access concerns.

    Third parties in Europe may feel that the eCall mandate – as a public safety requirement – justifies enabling third party access to vehicle data. Thanks to the dormant SIM profile created by the GSMA, most eCall modules are not being designed to transmit vehicle data under any circumstances other than a vehicle crash.

    The kind of customer control being pursued by the aftermarket includes having roadside assistance provided by a car club or insurance company rather than the auto maker, or sharing vehicle location data with Waze, TomTom or INRIX instead of HERE. Make no mistake, that is a radical departure from current practice.

    The VDA/CLEPA/ACEA proposition is clearly a compromise to empower consumers and enable access to data already being gathered. But the situation has even more severe implications when one considers BMW’s collaboration with IBM. Imagine a Watson-infused neutral server volunteering information to drivers or car buyers based on freely accessible vehicle data tied to an automated speech engine.

    Car makers might be uncomfortable about a car able to give voice to unbiased vehicle information on-demand regarding service history, performance, driving behavior of the owner, outstanding recalls, and unperformed service. “Hey, Watson, what can you tell me about the current owner of this vehicle?”

    It’s clear that more transparency regarding vehicle data is better than less. But no one is prepared to open the door to full transparency. If you have any doubt, imagine marketers, regulators and law enforcement with unbridled access to vehicle data. Does anyone really want that? Is there anything we can do to stop it?

    So, the neutral server is a flawed but important step forward in data transparency and customer empowerment. But much work remains to be done. Full freedom and flexibility will only arrive when the customer takes full control of their vehicle data. The VDA proposition is clearly a defensive gesture intended to delay or forestall complete consumer control of vehicle data while fostering the creation of open data exchanges capable of enabling new applications, business models and vehicle use cases.

    Mobility service providers will ultimately manage vehicle data across a broad range of transportation service providers including insurance companies, service stations, parking garages, tolling authorities, rental car companies and public transportation authorities. It remains to be seen how many car makers will successfully make this transition. Increasing access to vehicle data is an important first step.


    CEO Interview: Chris Henderson of Semitracks
    by Daniel Nenni on 07-26-2017 at 7:00 am

    In looking at the SemiWiki analytics over the last six years it is clear that the average age of our readers is trending down. (Yes, Google knows how old we are). The 25-35 age group now represents our largest readership and that is supported by the conferences I have attended recently. At the Design Automation Conference last month I met with Chris Henderson and we talked about the next generation of semiconductor professionals and how to better prepare them for the future.

    Chris is the founder and President of Semitracks, Inc. Semitracks is a privately-held company that provides training solutions for the semiconductor industry, its suppliers, and users. Chris started Semitracks in 2001 and began operating it full-time in 2004. Semitracks is probably best known for their push into online training. They are the only company I know of to offer a comprehensive library of training courses for product realization (manufacturing engineers, product engineers, test engineers, etc.). This topic is for the greater good, absolutely, so please spend some time here.

    Chris, why did you decide to start Semitracks?
    One interest that I have had for many years is developing methods and techniques to facilitate learning about semiconductor technology. I was trained as an electrical engineer, with specialization in VLSI design back in the 1980s. I did my thesis work on the electrical behavior of defects, and how to test for them. I spent quite a bit of time early in my career at Honeywell training failure analysts on how to analyze failed ICs. Our field is extremely broad, and also requires deep knowledge in one’s specialty area. However, that is not sufficient. Interactions between disciplines (for example, wafer fab manufacturing and packaging) require that one understands adjacent disciplines with some competency. The problem is how to provide that knowledge. Searching Google is useful, but the information is not available in a format that lends itself to quick learning. Searching through IEEE Xplore is useful for deep knowledge, but it is not effective for understanding context or background on a topic. In a sense, you really want a learning tool that can provide information that is 10 feet wide and 10 feet deep, as opposed to a mile wide and an inch deep, or a mile deep and an inch wide. Nobody was addressing this problem, so I decided to launch Semitracks and see if we could.

    But isn’t the best way to learn sitting with an instructor?
    I realized that some training would need to be face-to-face. A face-to-face interactive training session cannot be beat in terms of comprehension and satisfaction. However, this form of training is very expensive and doesn’t lend itself well to today’s fast-paced environment. That’s where online training can help. This approach can be done at a much lower cost point, and be provided directly to the scientist, engineer or technician at the time when he/she needs the training. Online training may not be as compelling as face-to-face training, but this environment is continually improving. Better bandwidth, better intelligence in web applications, and virtual/augmented reality will all help make online training more compelling. Our industry will continue to require knowledgeable engineers, but many of the universities have moved on from semiconductor technology to newer “more exciting” topics, like cyber security, AI, and biotechnology. More generally, if our education system is going to provide value to today’s employees, it must move towards increased use of online training. Education is becoming much too expensive for the average student, and the value proposition is decreasing.

    How does your Online Training System work?
    We use an open source Learning Management System (LMS) known as Moodle. We can house presentations, videos, documents, quizzes, tests, and interactive applications within the environment to help people learn using 3 modalities (seeing, hearing, and doing). We have customized the software extensively to allow us to create multiple customer systems that can share common content if need be, while allowing for customized content that is customer-specific. We have also created customized publishing routines that allow us to seamlessly publish new content into as many of the systems as need be. This flexibility also allows us to develop custom training solutions for our customers. For example, we are currently developing a custom solution for the Silicon Integration Initiative (Si2) to help them provide training for their clients on the OpenAccess standard and OAscripting.

    Where do you see this field going in the future?
    Right now, we still do about 75% of our courses as face-to-face courses. Many individuals and some companies still prefer this method. Increasingly though, we are getting requests to do more online training. Companies are beginning to realize that face-to-face training is expensive and difficult to scale, whereas online training can provide an adequate alternative, even though it is not perfect. I believe that 10 years from now, the ratio might be reversed, with 75% of training online, and 25% face-to-face. We believe we’re at the front end of this revolution, and that training will change radically over the next decade – it must if we want our industry to continue to move forward.

    Also Read:

    CEO Interview: Stanley Hyduke, founder and CEO of Aldec

    CEO Interview: Vincent Markus of Menta

    CEO Interview: Alan Rogers of Analog Bits


    Emulation makes the What to See List
    by Daniel Payne on 07-25-2017 at 12:00 pm

    The analysts at Gary Smith EDA produce an annual What To See List for DAC, and I quickly noticed that all three major EDA vendors were included on the list in the specific category of emulation. The big problem that emulation addresses is the ability to run an early version of your SoC in hardware so that software developers can get access and run lots of code or boot an OS, while the hardware team can more quickly verify and debug the performance of their system to see that it will meet all specifications prior to first silicon. On Wednesday at DAC I was able to watch a booth discussion at Mentor where Samsung, ARM and Starblaze engineers talked about their emulation experiences, moderated by Lauro Rizzatti.

    The three panelists were:

    • Rob Kaye, ARM
    • Nasr Ullah, Samsung
    • Bruce Cheng, Starblaze

    Q&A

    Q: There is In-Circuit Emulation mode and Virtual deployment mode for emulation. Which emulation use model have you tried and why?

    A: Samsung – we are using emulation in three areas: Performance, are the projections correct? Did our implementation meet the requirements? Can we add a late feature? Our team needs to verify that the quality of service is sufficient so that they can make a phone call along with streaming video on a mobile device. Emulation helps us with meeting the power constraints, knowing that the chip runs cool enough and that the current levels fit the battery.

    A: Starblaze – we need to run our real application firmware on the SSD controller. We have a PCIe connection in order to match the speed of our host. The NAND interface is affected by aging. We’re using 100% virtual peripherals with the VirtuaLAB toolkit, where PCIe is virtual and a simulator connects to the emulator through DPI (Direct Programming Interface).

    The PCIe PHY in our ASIC is modeled with a virtual PCIe PHY in emulation. On the NAND interface we use fast memory models running on the host, making simulation fast. This is a pure virtual approach and allows us to change configurations or system performance.

    A: ARM – both our IP validation and IP development teams use emulation. Our SW group does early development with virtual prototypes, then moves key parts into emulation to run SW at a fully-timed level. Software-driven validation is another approach, where virtual and emulation combine to allow C code to run on the CPU model while debugging drivers, and we develop validation flows for parts on the emulator.

    Related blog – Listening to Veloce Customers: Emulation is Thriving

    Q: Describe a verification challenge and how emulation came to the rescue.

    A: Samsung – we had our design running in emulation, after four days of running Android and apps the performance slowly declined. Emulation uncovered a counter bug where the counter didn’t reset. Simulation would never have found that bug.

    A: Starblaze – an application in our ASIC would run for five days then crash. We ran the modified application in our emulator to trace and debug the problem. Prototypes have been built with FPGAs, however they provide little visibility inside to help with debug and recompile times can be 10 hours or longer, so with emulation we have more convenient debugging.

    A: ARM – running GPU benchmarks is sped up with emulation.

    Q: Any recommendations to emulation users or vendors?

    A: Samsung – start your emulation as early as possible. Have verification engineers start early with emulation in order to find and fix more issues.

    A: Starblaze – emulation vendors need to make emulators run faster, because we’re not as fast as the ASIC yet. Keep improving and adding debug tools like VirtuaLAB PCIe. We would love a NAND or CPU analyzer.

    A: ARM – we want context switching between virtual and emulator and then run to some point in the simulator or emulator.

    Related blog – The Rise of Transaction-Based Emulation

    Q: Will emulation be here in five years and will it be different?

    A: Samsung – I like what ARM asked about context switching. We can run high-level simulation and mix/match it with emulation. We need the ability to switch between lower design details and the highest abstraction layer.

    A: Starblaze – Our customers use a behavioral model of the chip to run firmware; it’s an SSD simulator. We want to link the SSD simulator to the emulator and be able to switch context. Verification is moving to higher levels of abstraction, so emulation has to follow that trend.

    A: ARM – we see emulation being used in SW/HW integration for SW regression testing. Embedded software has grown in size, even Android uses about 20 billion instructions. Emulation lets us run lots of these tests of our software for both function and performance. More of our software development teams will use emulation in the future.

    Related blog – Mentor Plays for Keeps in Emulation

    Summary
    After the panel questions concluded I did ask my own question, “Are you using other vendor emulators?”

    Both Samsung and ARM replied yes, while Starblaze only uses the Mentor emulator.

    Perhaps it is time for your group to consider using an emulator on the next SoC project and start getting the same kind of benefits that ARM, Samsung and Starblaze reported seeing in their design and verification flows.


    Virtualizing ICE
    by Bernard Murphy on 07-25-2017 at 7:00 am

    The defining characteristic of In-Circuit-Emulation (ICE) has been that the emulator is connected to real circuitry – a storage device perhaps, and PCIe or Ethernet interfaces. The advantage is that you can test your emulated model against real traffic and responses, rather than an interface model which may not fully capture the scope of real behavior. These connections are made through (hardware) speed bridges which adapt emulator performance to the connected device. And therein lies (at times) a problem. Hardware connections aren’t easy to virtualize, which can at times impede flexibility for multi-interface and multi-user virtual operation.


    A case of particular interest, where a different approach can be useful, arises when the “circuit” can be modeled by one or more host workstations; where, say, multiple GPUs modeled on the emulator may be communicating through multiple PCIe channels with host CPU(s). Cadence now supports this option through Virtual Bridge Adapters for PCIe. This is a software adapter, allowing the OS and user applications on a host to establish a protocol connection to the hardware model running on the emulator. As is common in these cases, one or more transactors running on the emulator manage transactions between the emulator and the host.

    I wrote about this concept earlier in a piece on transaction-based emulation, but of course a general principle is one thing – a fully-realized PCIe interface based on that principle is another. This style of modeling comes with multiple advantages: low-level software can be developed and debugged against pre-silicon design models; multiple users can run virtualized jobs on the emulator; and users can model multiple PCIe interfaces to their emulator model. Also, and this is a critical advantage, the adapter provides a fully static solution: clocks can be stopped to enable debug/state dump or to insert faults without the interface timing out, something which would be much more challenging with a real hardware interface.
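    The transactor idea can be sketched in miniature. The following is a toy Python model, not Cadence’s implementation – every class and function name here is invented for illustration. It shows why a transaction-level link lets the emulated clock stop without the interface timing out: the host exchanges untimed transactions with the emulator side through queues, so a paused clock simply delays service rather than breaking a wire-level protocol.

    ```python
    import queue
    import threading

    class VirtualBridge:
        """Toy model of a transaction-based virtual bridge: the host side
        exchanges untimed transactions with an emulator-side transactor,
        so stopping the emulated clock never times out the link."""

        def __init__(self):
            self.to_emulator = queue.Queue()
            self.from_emulator = queue.Queue()
            self.clock_running = threading.Event()
            self.clock_running.set()

        def host_write(self, addr, data):
            # Host driver issues a write; it is queued, not wired to pins.
            self.to_emulator.put(("WR", addr, data))

        def host_read(self, addr):
            self.to_emulator.put(("RD", addr, None))
            return self.from_emulator.get()  # blocks until the transactor replies

        def stop_clock(self):
            # Debug or state dump: transactions simply wait; no protocol timeout.
            self.clock_running.clear()

        def start_clock(self):
            self.clock_running.set()

    def transactor(bridge, memory):
        """Emulator-side transactor: converts queued transactions into
        accesses to the emulated design (modeled here as a plain dict)."""
        while True:
            op, addr, data = bridge.to_emulator.get()
            bridge.clock_running.wait()  # honor a stopped clock
            if op == "WR":
                memory[addr] = data
            elif op == "RD":
                bridge.from_emulator.put(memory.get(addr, 0))

    mem = {}
    bridge = VirtualBridge()
    threading.Thread(target=transactor, args=(bridge, mem), daemon=True).start()
    bridge.host_write(0x1000, 0xABCD)
    print(hex(bridge.host_read(0x1000)))  # prints 0xabcd
    ```

    A hardware speed bridge, by contrast, must keep real PCIe link timing alive, which is why clock-stop debug is so much harder in ICE mode.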


    Frank Schirrmeister pointed out how this fills a verification IP hole in the development flow. In IP and subsystem development, you’ll validate protocol compliance against simulation VIPs or accelerated equivalents running on an emulator. When you want high confidence that your design behaves correctly in a real system handling real traffic, you’ll use an ICE configuration with speed-bridges. In-between there’s a place for virtual emulation using virtual bridge adapters. In the early stages of system development, there’s a need to validate low-level software (e.g. drivers) for those external systems, before you’re ready to move to full ICE with external motherboards and chipsets. Modeling using virtual bridge adapters provides a way to support this.

    Frank offered two customer case-studies in support of this use model. Mellanox talked at CDNLive in Israel about using virtual adapters and speed bridges in a hybrid mode for in-circuit-acceleration (ICA). They indicated that this provides the best of both worlds – speed and fidelity in the stable part of the system circuit and flexibility/adaptability in software development and debug for evolving components.

    Nvidia provided a more detailed view of how they see the roles of ICE and virtual bridging. First, for them there is no question that (hardware-based) ICE is the ultimate reference test platform: they find it to be the fastest verification environment, proven and ideal for software validation, with the flexibility and fidelity to test against real-world conditions, notably including errata (something that might be difficult to fully cover in a virtual model). However, applying only this approach is becoming more challenging in the development phase, as they must deal with an increasing number of PCIe ports, more GPUs and more complex GPU/CPU traffic, along with a need to support new and proprietary protocols.

    For Nvidia, virtual bridge adapters provide help in emulation modeling for these needs. Adding more PCIe ports becomes trivial since they are virtual. They can also provide adapters for their own proprietary protocols and support both earlier versions and the latest revisions. As mentioned above, the ability to stop the clock greatly enhances ease of debug during development. At the same time, Nvidia were quick to point out that virtual-bridge and speed-bridge solutions are complementary: speed bridges give higher performance and ensure traffic fidelity, while virtual bridges provide greater flexibility earlier in the development cycle. Together they fill critical, complementary needs.

    The big emulation providers have at times promoted ICE over virtualization or vice-versa; perhaps unsurprisingly, the best solution now looks like a combination of the two. As always, customers have the final say. You can watch Nvidia’s comments on the Palladium-based solutions HERE.


    Seeking Autonomy
    by Tom Simon on 07-24-2017 at 12:00 pm

    I’d wager that if I mention autonomous vehicles, the first thing you would think of is autonomous cars. The truth is that we will see many other kinds of autonomous vehicles in the years ahead. Their applications will range from package delivery to saving lives on the battlefield. Of course, to some extent they are already used on the battlefield for less than benevolent purposes.


    We have heard that Amazon is experimenting with the idea of package delivery using drones. Their Prime Air service is already delivering packages within 30 minutes in England. Their website features several videos of the service operating today. They have even gone so far as to watermark the videos with “Not Simulated”. The drones they are using presently fly only in clear weather and at an altitude of 400 feet.

    Another fascinating application for autonomous vehicles is battlefield rescue. The Army has been working on this for over a decade. The most talked-about system is called BEAR, for Battlefield Extraction-Assist Robot. While these are not going to be fully autonomous, they will be able to receive commands and execute them with some degree of autonomy. An interesting human-engineering aspect of the rescue robots is the discovery that soldiers felt vastly more comfortable with robots that look less mechanical and more lifelike. The prototypes have faces and appendages that look like arms, yet the propulsion mechanisms are hardly anthropomorphic and are highly optimized for moving over rough or uneven terrain.

    The final category of autonomous vehicles I want to touch upon is flying cars. In March it was reported that Dubai is planning to offer an autonomous airborne taxi service, using the Ehang 184, which is being developed in China. It can carry one person and has 8 rotors on 4 arms, hence “184”. Pilotless air flight raises many questions about safety and practicality. Nevertheless, it seems that we are headed in that direction and it is only a matter of time. In congested urban areas, autonomous flying taxis would be highly sought after.

    I am sure that in reading about the potential applications for autonomous vehicles above, reliability and safety are the two things that immediately come to mind. It’s not hard to imagine many potential sources of errors, causes of failures and other factors that could cause safety and reliability issues. Sidense has been talking about security and reliability for autonomous systems for quite a while. Their non-volatile memory (NVM) can help contribute to improved reliability and safety in a number of ways. It’s important to understand the role that NVM can play in these systems.

    NVM is used to store boot code, encryption keys, trim information, unique identifiers and many other sorts of critical information. System design requirements often dictate constraints on area, power, process technology, and durability. Instead of adding off-chip NAND flash or resorting to exotic processes for storing mission-critical information, Sidense One-Time Programmable (OTP) NVM uses minimal real estate and can be implemented inside SoCs on standard planar and FinFET processes. It also offers impressive durability due to its uniquely designed 1-T bitcell. In fact, it can tolerate extremely harsh operating environments.

    Data stored with Sidense OTP NVM is extremely secure. There is no way to physically examine the silicon to determine its contents. The write operation causes atomic level disruption to the oxide layer that is impossible to detect through mechanical or visual means. Reverse engineering is thwarted by numerous features that defeat techniques like side channel attacks or other electronic hacking.

    Designers of autonomous systems are pushed to meet multiple and potentially mutually exclusive design goals. At every step in the design process conflicting criteria and objectives need to be balanced. It’s good that for many NVM needs in these systems there is a robust, reliable, secure and low overhead solution. Sidense works with foundries to develop comprehensive qualification reports and information to ensure that their technology works well within spec. If you want to learn more about how Sidense OTP NVM can be applied to demanding applications like autonomous vehicles, I recommend looking at their published article on their website.


    The Transformation of Silvaco!
    by Daniel Nenni on 07-24-2017 at 7:00 am

    Founded in 1984, Silvaco is now the largest privately held EDA company, with a rich history including a recent transformation that is worth a blog if not a book. Coincidentally, I started my career in Silicon Valley in 1984 and have had many dealings with Silvaco over the years, including a personal relationship with Silvaco founder Ivan Pesic. The transformation I am speaking of started when David Dutton became CEO in 2014 and covers the last three years. Silvaco joined SemiWiki in 2013, so we have had a front-row seat.

    You can see a brief history of Silvaco on SemiWiki HERE. Interestingly, the views on this blog are comparable to the views on the brief history of Cadence, Synopsys, and Mentor blogs. You can also read the CEO interview we did in January with David HERE. This was also a well-read blog.

    David and I are on the same page with the transformation EDA is currently undergoing. Semiconductor design is getting harder with each new node and with fabless systems companies in the mix, time-to-market pressures continue to compress the design cycle forcing EDA customers to focus on a much smaller number of vendors. It is called a “fewer throats to choke when something goes wrong” strategy. Given that, take a look at the acquisition spree Silvaco has gone on in the last two years:

    Silvaco to Acquire SoC Solutions
    (June 16th, 2017)

    Silvaco Accelerates Characterization Business with Agreement to Acquire Paripath
    (June 14th, 2017)

    Silvaco Enters IP Market With Acquisition of IPextreme
    (June 3rd, 2016)

    Silvaco Group Acquires edXact for SPICE Simulation Speed-up
    (June 2nd, 2016)

    Silvaco Extends SPICE Product Portfolio to Address Advanced Variation-Aware Design with Acquisition of Infiniscale
    (December 15th, 2015)

    Silvaco Acquires Invarian to Accelerate Adoption of Concurrent Power-Voltage-Thermal Analysis
    (March 19th, 2015)

    This month they launched a worldwide series of SURGE events. SURGE stands for Silvaco UseRs Global Event, which shows the company’s commitment to expanding their customer base.

    “Our inaugural Silvaco UseRs Global Event, SURGE, in Hsinchu Taiwan exceeded our expectations with strong attendance and user participation. The keynote speech on PixelLED Development by Dr. Charles Li, CEO of Playnitride, was well received by the audience showing the challenges of leading-edge LED display design. The power of bringing our technology experts to our users is a further step in Silvaco’s commitment to provide solutions to our customers for their ever-increasing challenges in display and semiconductor design. We are looking forward to hosting SURGE’s worldwide user base throughout 2017 and to building them stronger in the years ahead.” – David Dutton, CEO of Silvaco.

    These types of gatherings are what make EDA great: the ability to collaborate directly with the people who use the tools. Be sure to check the schedule and attend the one closest to you. I will be at the one in Silicon Valley and it would be a pleasure to meet you!

    About Silvaco, Inc.

    Silvaco, Inc. is a leading EDA provider of software tools used for process and device development and for analog/mixed-signal, power IC and memory design. Silvaco delivers a full TCAD-to-sign-off flow for vertical markets including: displays, power electronics, optical devices, radiation and soft error reliability and advanced CMOS process and IP development. For over 30 years, Silvaco has enabled its customers to bring superior products to market with reduced cost and in the shortest time. The company is headquartered in Santa Clara, California and has a global presence with offices located in North America, Europe, Japan and Asia.


    Webinar: Ansys on Multi-Physics PDN Optimization for 16/7nm
    by Bernard Murphy on 07-22-2017 at 12:00 pm

    On the off-chance you missed my previous pieces on this topic: at these dimensions conventional margin-based analysis becomes unreasonably pessimistic and it is necessary to analyze multiple dimensions together. People who build aircraft engines, turbines and other complex systems have known this for quite a long time. You can’t analyze fluid dynamics, temperature and mechanical factors separately against margins on the other factors, at least not if you want to build competitive solutions.


    REGISTER NOW for this webinar at 8:00am PDT on August 3rd

    Guess what – we now have a similar problem. The important dimensions for semiconductor design are somewhat different but, at 16nm and below, just as multi-faceted, as design teams are already finding in the significant deltas between margin-based analyses and multi-physics analyses. The margin-based approach analyzes timing, for example, with margins on operating voltage. But increased power-noise sensitivity as operating voltages get closer to threshold voltage (as they do in these advanced technologies) can cause nominally safe critical paths to fail, thanks to both increased path delay and clock jitter.

    Margining this away becomes impractical – why should the whole PDN pay for one unusually large power dip in one use-case in one part of the circuit? Conversely, how do you know you didn’t miss that power dip in one otherwise unremarkable simulation while building your margins?
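    A toy numeric example makes the pessimism concrete (all numbers and the delay model here are invented for illustration, not from Ansys): margin-based signoff applies the single worst supply dip to every path, while a joint multi-physics view pairs each path with the dip it actually sees.

    ```python
    # Hypothetical numbers: delay grows as supply voltage approaches threshold.
    VDD, VT = 0.75, 0.35          # nominal supply and threshold (volts)

    def path_delay(base_ps, v):
        # Toy model: delay rises sharply as local supply v nears VT.
        return base_ps * (VDD - VT) / (v - VT)

    # (base delay in ps, actual local supply seen by that path);
    # only one path experiences the big power dip.
    paths = [(480, 0.74), (500, 0.73), (470, 0.62)]

    worst_v = min(v for _, v in paths)   # 0.62 V, the single worst dip

    # Margin-based: every path is assumed to suffer the worst dip.
    margin_based = max(path_delay(b, worst_v) for b, _ in paths)
    # Joint analysis: each path evaluated at its own local supply.
    joint = max(path_delay(b, v) for b, v in paths)

    print(f"margin-based worst delay:   {margin_based:6.1f} ps")  # ~740.7 ps
    print(f"joint-analysis worst delay: {joint:6.1f} ps")         # ~696.3 ps
    ```

    The gap between the two numbers is margin you would have spent on the PDN for a dip that the critical path never actually sees; in a real design that over-design multiplies across every signoff corner.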

    Ansys will talk about their SeaScape-based approach, using big data analytics and elastic compute technology to enable multi-physics analysis and solve this problem the right way. Big data and elastic compute are an emerging wave in design. You might want to check it out.

    REGISTER NOW for this webinar at 8:00am PDT on August 3rd

    Ansys Summary
    Next-generation automotive, mobile and high-performance computing systems demand the use of 16/7nm SoCs that are bigger, faster and more complex than ever. For these SoCs, the margins are smaller, schedules are tighter and costs are higher. Faster convergence with exhaustive coverage is imperative for on-time silicon success. The growing interdependencies among multiphysics attributes such as timing, power and thermal properties in N16/N7 designs pose significant challenges for design closure. Existing solutions are not architected to solve such a multidimensional optimization problem.

    Join us for this webinar to learn how to maximize design coverage and accelerate convergence for SoC power signoff using the latest ANSYS SeaScape platform in big data systems. With unparalleled scalability across hundreds of cores using big data techniques, SeaScape helps you sign off on 1 billion+ instance designs within a few hours on commodity hardware. You will also learn how you can leverage multivariable analytics to achieve significantly better signoff confidence and drive meaningful design optimization.


    Does Elon Musk Hate Artificial Intelligence?
    by Matthew Rosenquist on 07-22-2017 at 7:00 am

    Elon Musk, the tech billionaire and CEO of Tesla, was quoted as saying Artificial Intelligence (AI) is the “Greatest Risk We Face as a Civilization”. He recently met with the National Governors Association and advocated for government involvement and regulation. This seems to be a far cry from the government-should-leave-the-market-alone position high-tech firms normally take. At first glance, it seems awkward: the head of Tesla, who has aggressively invested in AI for self-driving cars, is worried about AI and wants bureaucratic regulation?

    Is Musk driven by unwarranted fear, or is he taking this brash position as part of a marketing stunt? What is he actually saying? Well, I think he is being rational.

    Translating Technology Fear
    Mr. Musk is a brilliant technologist, engineer, and visionary (I am a fan of his work). I have never sat down and had a chat with him, but from what I understand, his concerns seem informed and grounded, as they would be for any technology that has great power. AI will bring tremendous value and will extend computing beyond mere analysis of data to manifest in manipulation of the physical world. Autonomous transportation is a great example, where AI will enable vehicles to eventually be in total control. Therefore, the life-safety of passengers and pedestrians will be in the balance.

    History teaches many lessons. Alfred Nobel’s invention, dynamite, was revolutionary in fueling the global industrial and economic revolutions. It was designed to accelerate the mining of resources and the building of infrastructure while improving safety during transport and use. Ultimately, to Nobel’s displeasure, it also became the preferred compound for destruction and the taking of lives in wars across the globe.

    More recently, advances in genetics emerged with the potential for medical breakthroughs and sweeping cures for afflictions that cause massive suffering. But again, such power could be misused and result in unintended consequences (destruction of our species, ravaging of planetary ecosystems, etc.). Scientists and visionaries spoke up over a decade ago to support controls that throttled certain types of research. Such regulations and oversight have given the world time to understand certain ramifications and to be more cautious in moving forward with research.

    Race to Destruction

    Business competition is fierce, and the race for innovation often casts aside safety. Government involvement can slow down the process, allowing more attention to avoiding catastrophes and giving society time to debate the right level of ethical standards.

    There was little need to argue for the regulations enacted to control the research and development of chemical, biological, and nuclear weapons. It was obvious: nobody wants their neighbor brewing anthrax in their bathtub. But for cases where the risks are not apparent, and are potentially obscured by the great benefits, it becomes more problematic. Marie Curie, the famed chemist, made great advances in modern medicine with little regulatory oversight, and ultimately died from her discoveries. Nowadays, we don’t want just anyone playing around with radioactive isotopes; there is government oversight. The same is true for much of the medical and pharmaceutical world, where research has boundaries to keep the population safe.

    Artificial Intelligence, aside from science fiction movies where computers become self-aware and attempt to destroy mankind, is a vague term. It can encompass so much, yet it remains difficult to describe exactly what it can and cannot do. This is where technology visionaries play a role. Some have a keen insight into the risks. Elon Musk, Stephen Hawking, and Bill Gates have all discussed publicly their concerns about runaway AI.

    “AI’s a rare case where we need to be proactive in regulation, instead of reactive. Because by the time we are reactive with AI regulation, it’s too late,”– Elon Musk


    Innovation and Caution

    I believe Musk wants to raise awareness and establish guard-rails to make sure innovation does not recklessly run away to the detriment of safety, security, and privacy. He is not saying AI is inherently bad. It is just a tool, one which can be used benevolently or with malice, and one that runs the risk of being mistakenly wielded in ways that create severe unintended consequences. Therefore, his message to legislators is that we must respect this power and move with more forethought as we improve our world.

    Interested in more? Follow me on LinkedIn, Twitter (@Matt_Rosenquist), Information Security Strategy, and Steemit to hear insights and what is going on in cybersecurity.