Communication with Smart, Connected Devices and AI
by Daniel Payne on 04-12-2017 at 12:00 pm

I’ve lived and worked in Silicon Valley for 13 years, but since 1995 I’ve been in the Silicon Rainforest (aka Oregon), where Intel, the world’s number one semiconductor company, has a large presence along with dozens of smaller high-tech firms. In the past year I’ve started to attend events organized by the SEMI Pacific Northwest Chapter. On April 21st they are presenting an interesting breakfast forum entitled “The Future of Communication: from Smart & Connected Devices to Artificial Intelligence and Beyond”. I’ll be attending and blogging about this forum, so stay tuned for my April blog.

Here’s what to expect in this SEMI breakfast forum:

When: Friday, April 21st, 2017 starting at 7:30AM

Where: Qorvo, 2300 NE Brookwood Pkwy, Hillsboro, OR 97124

Thanks in large part to recent advances in semiconductor technology, the world is on the verge of an unprecedented volume of information exchange that promises to reshape our future. From smart cars to smart cities, to artificial intelligence and beyond, the so-called 4th industrial revolution will provide us with the means to create new methods of communication with unprecedented capability.

Please join us for the SEMI breakfast forum to hear our distinguished guest speakers explore both the technology breakthroughs required to realize this future and the potential changes that it portends. It is also a great event to network with leaders in the local community.

Agenda
07:30 – 08:00   Breakfast, Check In
08:00 – 08:10   Moderator Welcome
08:10 – 08:45   Keynote Speaker: Glen Riley, General Manager, Filter Solutions Business Unit, Qorvo
08:45 – 09:10   Claire Troadec, Activity Leader for RF Devices and Technologies, Yole Développement
09:10 – 09:35   Rob Topol, General Manager, 5G Business and Technology, Intel Corporation
09:35 – 10:00   Networking Break
10:00 – 10:15   Startup Companies CEO Panel: Moderated by Jon Maroney, Partner, Oregon Angel Fund
10:15 – 10:25   Mounir Shita, Entrepreneur, CEO & Founder, Kimera Systems
10:25 – 10:35   Eimar Boesjes, CEO, Moonshadow Mobile
10:35 – 10:45   Stephen A. Ridley, CEO/CTO and Founder, Senr.io
10:45 – 11:30   Startup Companies CEO Panel Discussion, moderated by Jon Maroney, Oregon Angel Fund, with Kimera Systems, Moonshadow Mobile, and Senr.io

Registration
Registration through April 14th is $55 for SEMI members and $75 for non-members. Register online here.


Synchronizing Collaboration
by Bernard Murphy on 04-12-2017 at 7:00 am

Much though some of us might wish otherwise, distributed development teams are here to stay. Modern SoC design requires strength and depth in expertise in too many domains to effectively source from one site; competitive multi-national businesses have learned they can very effectively leverage remote sites by building centers of expertise to service company-wide needs. Multi-national operations aren’t going to go away, which means we need to get better at multi-site and multi-time-zone development.


Years of experience have shown that multi-site development can be effective, but it requires rather more management overhead than you might expect and much more care in ensuring that intent is clearly communicated and cross-checked at each sync-up. The problem is never in broad expectations – it’s most commonly in the details, especially around implicit assumptions we think we all share. I’ve done this for 20 years, so I have some experience of what can go wrong.

The most common way to synchronize understanding and flush out those implicit assumptions is crude and painful, but generally effective: lots of early-morning and late-night group meetings generating mountains of status and update documents, and if you do enough of it maybe no balls get dropped. It works (mostly) but it’s not very efficient (witness continued schedule overruns), pulling many people into meetings to which any one participant might contribute 10% or less of the discussion. There must be a better way – and there is, at least for implementation teams.

Implementation is an area that particularly lends itself to distributed development, thanks to the deep expertise needed (among others) in each of timing analysis, placement and power distribution network design. Distributing tasks like these between different sites is common today, especially on large designs (billions of gates). But of course, while each domain is specialized, these objectives are very interdependent. Getting and keeping all these teams on the same page the old-fashioned way is what spawns all the meetings, PowerPoints and spreadsheets.

Which is kind of ironic. In this 21st-century Web-based, instant-access world we’re building the very latest in electronic systems using 20th-century management methods. Pinpoint from Consensia aims to change that, particularly in implementation management. You may have heard about this tool when it was first developed by Tuscany Design; the organization is now part of the Dassault ENOVIA PLM group. They have an impressive list of customers, though the only one I can find publicly cited is Qualcomm, who have been using the tool for many years (an impressive reference in its own right).


Pinpoint isn’t replacing any of your favorite/process-of-record tools for implementation. Each of those continues to play its full role in whichever design center-of-expertise has responsibility for that function. What Pinpoint provides is effectively a real-time consolidated web-based view of results and status across a variety of disciplines. This starts with a dashboard view across all monitored projects. You can drill down into a project to see progress on metrics and trends by run versions, across covered analyses.

This alone provides an important sanity check and management tool for how the design is progressing. We all know that the average project consumes 30-40% of (actual) schedule at around 90% complete (funny how that happens). Much of that time is spent diagnosing problems and negotiating suggested fixes, many of which require tradeoffs across domains. The first synchronization time-saver is that you can all look at the Pinpoint status page without needing to first build presentations; which blocks are in good shape and which are struggling is immediately obvious. In fact, if your block is doing well, you may not need to turn up to the meeting at all, a second time-saver for at least some of you. Not that you don’t love extra meetings.


All that’s great to reduce management overhead, but can it help get the job done as well? Yes it can. From the dashboard reports, block teams can drill down from a current or earlier analysis to connectivity-aware layout views overlaid with IR-drop heat maps and critical paths. And they can filter paths to display, based on all the usual criteria. But instead of one tool expert sharing screens from a process-of-record tool with others who aren’t expert in that tool, all teams with an investment in the problem can look at and experiment with the data, before, during and after the meeting.

Now you have a basis for a productive conference call between smaller hands-on teams. You’re all seeing the same thing, you can debate in real time how to triage the analysis results and what to do next. You can look back at previous runs to see if a suggested fix made the problem better or worse. You converge faster because you’re all working from the same page (literally). You’re synchronizing in real time on fixing problems, without needing to get to a larger meeting.

If you’re at a small company, all working around the same table, this probably isn’t for you. But if you’re building big designs across multiple time-zones, think a moment on why Qualcomm and other big companies are using this software. For the managers, it saves time and money (~$1M in allocated headcount cost in one project, just by pulling in the release date); for the workers, it reduces time spent in soul-sucking meetings and lets you spend more time wrapping up your part of the design, and more quality time with your family. You can learn more from this Consensia Webinar.

Also Read

Behind the 3DEXPERIENCE for silicon

Latest Pinpoint release tackles DRC and trend lines

Sustainability, Semiconductor Companies and Software Companies


Calibre Can Calculate Chip Yields Correlated to Compromised SRAM Cells
by Tom Simon on 04-11-2017 at 12:00 pm

It seems like I have written a lot about SRAM lately. Let’s face it, SRAM is important – it often represents a large percentage of the area on SoCs. As such, SRAM yield plays a major role in determining overall chip yields. SRAM is vulnerable to defect related failures, which unlike variation effects are not Gaussian in nature. Fabrication defects are discrete, random events, and as a consequence they follow Poisson distributions. So, modeling them is distinctly different from modeling things like process tilt or variation. Modeling is important for every part of the design, but it is especially important for SRAM: if a failure is likely, redundant SRAM resources can be allocated to serve in its stead.

But how much redundancy should be provisioned? If none is provided, a single failure will render the chip useless. This in effect doubles the cost of the part – a new part is needed to replace the failed device. At the other extreme, if 100% redundancy is provided, we again are looking at nearly double the cost per part. So, where is the happy medium?

The rate of failures depends on what happens in so-called critical areas – regions where defects of known types and established rates can cause failures. Whether a defect causes a failure depends on its size. Some are too small to cause harm, others are so massive that they render the entire chip inoperative. Usually the foundry has extensive data on the kinds and sizes of defects that cause recoverable issues with SRAM.

Of course, if we are talking about a defect that causes a power-to-ground short or a malfunction in a sense amp, we are going to have a hard time managing that. For the other class of failures – a row or column failure – alternative resources can be mapped in to take its place. A great many papers have been written on techniques for implementing these replacements. However, design teams still face a judgment call as to just how much redundancy to implement.

Fortunately, Mentor offers an option in Calibre called the YieldAnalyzer tool that can help translate defect density data into yield projections for a specific design. It starts by taking the defect density information for each layer to calculate the average number of failures. Calibre YieldAnalyzer then uses yield models to calculate yield. There are special cases, such as vias, where a single defect may not alter connectivity due to the large number of duplicate elements in a structure like a via array.
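
As a rough illustration of the basic math (a minimal sketch with invented critical areas and defect densities, not Calibre YieldAnalyzer’s actual algorithm), the average defect count can be accumulated per layer and fed into a Poisson yield model:

[CODE]
import math

# Minimal sketch of a Poisson limited-yield estimate; NOT Calibre YieldAnalyzer's
# actual algorithm. Critical areas and defect densities are invented for
# illustration only.

# Per-layer (critical_area_cm2, defect_density_per_cm2), hypothetical values
layers = {
    "M1":   (0.020, 0.05),
    "M2":   (0.015, 0.04),
    "via1": (0.010, 0.06),
}

# Average number of yield-limiting defects (lambda) is the sum over layers of
# critical area times defect density.
lam = sum(area * d0 for area, d0 in layers.values())

# Poisson yield model: probability of zero defects in the critical area.
yield_estimate = math.exp(-lam)

print(f"expected defects per die: {lam:.4f}")
print(f"Poisson yield estimate:   {yield_estimate:.2%}")
[/CODE]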

Calibre YieldAnalyzer must also be aware of the specific defects associated with row or column failures for each memory block. This is usually layer dependent, and is specified in a configuration file. The tool also uses information on the available repair resources. Of course, these resources are themselves subject to failures, so a second-order calculation is needed to determine how many repair resources are actually functional.

Because of how Calibre YieldAnalyzer works, it is possible to easily perform what-if analysis to zero in on the optimal amount of repair resources. As was mentioned at the outset, due to the large area of SRAM and the expense of adding repair resources, it is desirable to find the optimal balance between too many and too few.
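
A what-if sweep over the number of spare rows might look conceptually like the sketch below. The failure rates are hypothetical and the repair model is deliberately simplified (each defect-free spare row can repair one failed primary row); it is meant only to show how the knee of the curve can be found, not to reproduce the tool’s algorithms:

[CODE]
import math  # Python 3.8+ for math.comb

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def block_repair_yield(lam_rows, lam_per_spare, num_spares):
    """Probability a memory block is usable: the number of failed primary rows
    must not exceed the number of spare rows that are themselves defect-free
    (a simplified second-order treatment of repair-resource failures)."""
    p_spare_ok = math.exp(-lam_per_spare)   # a spare survives its own defects
    total = 0.0
    for ok_spares in range(num_spares + 1):
        p_this_many_ok = (math.comb(num_spares, ok_spares)
                          * p_spare_ok ** ok_spares
                          * (1 - p_spare_ok) ** (num_spares - ok_spares))
        p_rows_coverable = sum(poisson_pmf(k, lam_rows)
                               for k in range(ok_spares + 1))
        total += p_this_many_ok * p_rows_coverable
    return total

# Hypothetical numbers: 0.3 expected row failures per block, each spare row
# carrying a much smaller exposure; sweep the spare count to find the knee.
for spares in range(5):
    y = block_repair_yield(lam_rows=0.3, lam_per_spare=0.0003, num_spares=spares)
    print(f"{spares} spare row(s) -> block repair yield {y:.4%}")
[/CODE]

Sweeping the spare count this way shows the yield gain flattening quickly, which is exactly the balance point between too many and too few that the tool helps locate.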

It’s easy to think of Calibre as a rule checking program; however, its capabilities have expanded well into the area of DFM. Helping to optimize repair resources goes way beyond physical checking and encompasses sophisticated statistical analysis. Mentor has a white paper on their website that goes into much more detail about the process and algorithms used to provide these results.


SPIE 2017: Irresistible Materials EUV Photoresist
by Scotten Jones on 04-11-2017 at 7:00 am

Irresistible Materials (IM) is a spin-out of the University of Birmingham in the United Kingdom that has been doing research on Photoresist and Spin-On Carbon hard masks for 10 years, most recently with Nano-C on chemistry development. IM has developed a unique EUV photoresist and they are now looking for partners to help bring it to commercialization.
Continue reading “SPIE 2017: Irresistible Materials EUV Photoresist”


TSMC Design Enablement Update
by Tom Dillinger on 04-10-2017 at 12:00 pm

A couple of recent SemiWiki articles reviewed highlights of the annual TSMC Technical Symposium held recently in Santa Clara (links here, here, and here). One of the captivating sessions at every symposium is the status of the Design Enablement for emerging technologies, presented at this year’s event by Suk Lee, Senior Director at TSMC. In the broadest sense, design enablement refers to both EDA tools and design IP, developed specifically for the new process node.

TSMC focuses on early engagement with EDA vendors, to ensure the requisite tool features for a new process node are available and qualified, on a schedule that supports “early adopter” customers. As the prior semiwiki articles have mentioned, N10 tapeouts will be ramping quickly in 2017, with N12FFC and N7 soon to follow. So, it was no surprise that the EDA tool status that Suk presented for these nodes was green, usually for multiple EDA vendors (e.g., 3 or 4).

The unique part of Suk’s presentation is the description of the key EDA tool requirements imposed by the new process nodes. These offer insights into the additional complexities and design characteristics being introduced. Here are some of the new features that struck me as particularly interesting.

stacked vias and via pillars

There are two characteristics of each new process node that are always troublesome for designers, and for the optimization algorithms applied during physical implementation. The scaling of metal and via pitches (for the lowest metal layers) results in increased sheet and via resistance. Correspondingly, this scaling also exacerbates reliability concerns due to electromigration — this issue is magnified due to the increased local current density associated with FinFET logic circuits.

SoC designs at these new nodes need an efficient method to utilize the upper level layers in the overall metallization stack, for reduced RC delay and/or improved electromigration robustness. Suk presented two options that are being recommended for N7 — stacked vias and via pillars. Design rules enabling stacked vias are leveraged by the TSMC Mobile platform, while the expectation is that the High-Performance Computing (HPC) platform designs will need to regularly use via pillars. A via pillar is depicted in the figure below.
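
As a back-of-the-envelope illustration of why a pillar of parallel vias helps with both resistance and electromigration (the resistance, current and EM-limit numbers below are invented for illustration, not TSMC data):

[CODE]
# Back-of-the-envelope illustration of why via pillars help with resistance and
# electromigration; all values are invented for illustration, not TSMC data.

r_single_via_ohm = 30.0     # assumed resistance of one lower-level via
i_driver_ma = 0.5           # assumed current through the connection
em_limit_ma_per_via = 0.2   # assumed electromigration limit per via

for n_vias in (1, 2, 4):
    r_eff = r_single_via_ohm / n_vias   # parallel vias lower the resistance
    i_per_via = i_driver_ma / n_vias    # and share the current between them
    status = "OK" if i_per_via <= em_limit_ma_per_via else "violates EM limit"
    print(f"{n_vias} via(s): R = {r_eff:5.1f} ohm, "
          f"I/via = {i_per_via:.3f} mA -> {status}")
[/CODE]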

Suk highlighted some of the unique EDA tool algorithms needed to support the prevalent use of via pillars:

  • physical synthesis, clocktree synthesis, APR

Physical implementation algorithms need to assess where via pillars are needed — a significant tradeoff assessment is required between interconnect timing improvement and route track blockage.

  • parasitic extraction, static timing analysis, EM, and I*R

The via pillar is a unique geometry. RC extraction tools need to translate this topology into a suitable model for subsequent electrical analysis (EM, I*R), specifically capturing how the current will spread throughout the pillar. EDA vendors have addressed this via-pillar insertion and analysis requirement for N7 — support is fully green.

One area that has me curious that Suk didn’t mention is the yield impact of using via pillars. Commonly, yield enhancement algorithms are exercised near the end of physical implementation, often by attempting to add redundant vias where feasible — perhaps, a via pillar insertion strategy will evolve as a new DFM/DFY option.

“cut metal” masks and coloring
Advanced process nodes have replaced traditional metal interconnect lithographic patterning with spacer-based mandrels and cuts, to realize more aggressive pitch dimensions. The drawn metal layout shapes are translated into drastically different mask implementations, involving the addition of: mandrel shapes (for spacer-based damascene metal etching); “cut masks”; and, metal/cut decomposition color assignment (associated with multi-patterning and successive litho-etch steps). There are optimizations available to reduce the need for multi-patterning of cuts, by adjusting the cut spacing through the addition of metal extensions — the figure below illustrates a simple example.

(From: “ILP-based co-optimization of cut mask layout, dummy fill, and timing for sub-14nm BEOL technology”, Han, et al., Proc. SPIE, October, 2015. Note the metal extensions added to align cuts.)

TSMC has worked with EDA vendors to optimize metal and cut mask generation, and multi-patterning decomposition. Flows impacted include physical implementation, LVS, and extraction. Suk’s presentation also briefly mentioned that ECO flows with cut metal and metal extensions needed to be updated as well.

dual pitch BEOL
At the symposium, TSMC introduced an aggressive technology roadmap, including the new N12FFC offering. This technology is intended to offer a migration path for existing 16FF+/16FFC designs.

N12FFC includes an improved metal pitch on lower levels, as compared to N16. Logic blocks would be re-implemented with a 6T cell library, from TSMC’s Foundation IP for N12FFC. Other hard IP would be re-characterized, without new layout. As a result, EDA vendors need to support dual-pitch back-end-of-line (BEOL) IP pin and routing implementations, integrating both new 12FFC and existing 16FFC blocks.

Suk highlighted that the Design Enablement team at TSMC is also introducing technology model support (and qualified EDA tools) to address the reliability challenges of new process nodes, especially the more stringent targets of automotive applications — e.g., advanced electromigration analysis rules, advanced (self-heat) thermal models for local die temperature calculations, device parameter end-of-life drift due to BTI and HCI mechanisms.

The close collaboration between TSMC and the EDA tool developers is fundamental to early customer adoption for emerging technologies. Each new node introduces physical implementation and electrical analysis challenges to conquer. It will be interesting to see what new EDA tool and flow capabilities the N5 process node will require.

-chipguy


Webinar: Chip-Package-System Design for ADAS
by Bernard Murphy on 04-10-2017 at 7:00 am

When thinking of ADAS from an embedded system perspective, it is tempting to imagine that the system can be designed to some agreed margins without needing to worry too much about the details of the car environment and the larger environment outside the car. But that’s no longer practical (or acceptable) for ADAS or autonomous systems. The complexity of control challenges and environmental interference inside and outside the car (see e.g. my earlier blog on 5G) requires that modeling for design at the total system level begin well before component implementation (and perhaps even architecture) is locked down.

REGISTER HERE for the Webinar, at either 6am PDT or 1pm PDT (both on April 20th)

The way to get there is through comprehensive driving-scenario simulations, conducted with a system-level behavioral model of an autonomous or semi-automated vehicle. This model includes all sensors, antennas, control systems, drive systems and vehicle body, placed in situ in a virtual driving environment of roads, buildings, pedestrians, road-signs, etc. In this simulated environment, thousands of driving scenarios can be evaluated rapidly, to test whether the vehicle’s sensors, control algorithms, and drive systems perform as expected in each situation.

REGISTER HERE for the Webinar, at either 6am PDT or 1pm PDT (both on April 20th)

Sensors, antennas and electronics are the brains behind today’s intelligent Advanced Driver Assistance Systems (ADAS). Advances in integrated antenna design, image sensing and integrated circuit design are quickly transforming automotive vehicles into autonomous vehicles. These advances are also helping build cheaper, safer and more intelligent ADAS systems. As the design of these ADAS systems becomes more complex, though, design engineers must rigorously simulate multiple components and systems for functionality, reliability and safety.

About the Presenters

Larry Williams
Larry is Director of Product Management at ANSYS’s Electronics Business Unit. He is responsible for the strategic direction of the company’s electrical and electronics products, including the High Frequency Structure Simulator (HFSS) finite element simulator, and is an expert with over 20 years of experience in the application of electromagnetic field simulation to the design of antennas, microwave components, and high-speed electronics.

Jerome Toublanc
Jerome is a Business Development Manager for the ANSYS Semiconductor Business Unit. He has over 15 years of experience in SoC power integrity and reliability challenges across a wide range of technologies, from RTL to GDSII level, and from chip to package to system level.

Arvind Shanmugavel
Arvind is senior director of application engineering at ANSYS.


The Driver in the Driverless Car
by Vivek Wadhwa on 04-09-2017 at 8:00 am

What is the likelihood that the people building Uber’s self-driving technologies did not know that their software was highly imperfect and could endanger lives if the cars were let loose on public streets? Or that employees of Theranos did not know that their equipment would produce inaccurate diagnostics?

San Francisco has had some close calls with the self-driving Uber vehicles, though no damage has resulted. But Theranos did negatively affect the lives of tens of thousands of people. Should the Uber and Theranos employees who remained silent share the burden of guilt? I would argue that they should and that anyone who stays silent when they see wrongdoing is complicit in the injustice.

I know that I am taking a strong stand, and that employees have to worry about their livelihoods and families; that they may believe that they don’t have the power to change anything, it being the job of the CEO to make the difficult decisions. And, yes, I know that these examples are extreme.

But as technology advances, its reach and power grow exponentially. Even its creators don’t understand the use cases and long-term impacts of their products. What makes it worse is that CEOs are responsible to shareholders and obsess over making money, and workers are responsible to their employers. Who is watching out for humanity itself?

As I explain in my forthcoming book, The Driver in the Driverless Car, technologies are advancing exponentially. Our smartphones are already more powerful than the supercomputers of yesteryear. By 2023, at computers’ present rate of advancement, the iPhone 11 or 12’s ability to process and store information will exceed that of the human brain (I am not kidding).

This growth applies not just to smartphones and PCs but to every technology, including sensors, networks, artificial intelligence, synthetic biology, and robotics.

We could, within two or three decades, be in an era of abundance, in which we live long and healthy lives, have unlimited clean energy and education, and have our most basic wants and needs met. Because of these advances it is becoming possible to solve the grand challenges of humanity: hunger, disease, education, and energy.

Yet these advances have a potential dark side. As easily as we can edit genes, we can create killer viruses and alter the human germ line. Self-driving cars can bring mobility to the blind, but they can also take lives. And we could lose whatever is left of our privacy as connected devices take over our homes.

This is why we all need to learn to see the big picture and to understand our responsibilities. We need to be aware of our technologies’ potential for misuse and to build safeguards. We need to speak up when we see wrongdoing and to document the risks.

In my free LinkedIn Learning course, I share the key lessons that product managers, developers, and designers must pay attention to, and I explore their roles and responsibilities—for instance, what responsibility Facebook employees have for use of their technologies to spread fake news and disrupt elections.

In The Driver in the Driverless Car, I go much further and discuss why this is the most amazing—and scary—period in human history. I illustrate a broad range of technologies and discuss their value to society and mankind. I ask you to consider whether they have the potential to benefit everyone equally, the balance between their risks and potential rewards, and whether they more strongly promote autonomy or dependence. It is fairness and equality that are at the heart of these questions. Many technologies are going to disrupt present-day industries, causing our lives to change for the better and for the worse. Just one consequence of this will be the loss of tens of millions of jobs. If we manage that loss equitably and ease the transition and pain for the people who are most affected and least prepared, we can get to the utopian world of the TV series Star Trek. The alternative is the dystopia of Hollywood’s Mad Max.

It is the choices we make that will determine the outcome—beginning with the choices we make at work.

You can also follow me on Twitter: @wadhwa and visit my website: www.wadhwa.com


The Fate of Autonomous
by Roger C. Lanctot on 04-09-2017 at 7:00 am

The latest installment in the “Fast and Furious” franchise will debut, bringing the concept of remote control of cars into the mainstream. Suffice it to say that remote control plays a major role in the script.

This will only be the latest chapter of a long-running effort to demonize autonomous vehicle technology in mass media – preceded just this week by an Uber self-driving car being upended by a human-driven vehicle in Arizona, leading to the temporary suspension of Uber’s testing program.

– “Uber Halts Self-Driving Car Tests After Arizona Crash”


“Fast and Furious” is all about humans driving cars, though the movies glamorize human driving for illicit purposes. So I guess the virtues of both human and machine driving are equally disparaged in the films.

About 10 years ago, a now-retired BMW executive told me that car makers arguably bear a responsibility to seize control of their vehicles remotely if 1) they have the technology capable of doing so and 2) the vehicle is being used with ill intent, the driver is incapacitated or the vehicle is malfunctioning. The terror attack in London last week highlighted just such a scenario.

Khalid Masood deliberately drove his rented SUV into pedestrians on Westminster Bridge before crashing the car and attempting to enter Parliament, where he was killed by responding officers. The entire incident consumed 82 seconds and took five lives, according to press reports.

Khalid Masood’s rented SUV post-crash. SOURCE: BBC


There are a number of important implications here for safety systems and autonomous vehicle operation. Police officers in the UK and elsewhere in the world note that some of the latest safety systems, such as collision avoidance, actually prevent them from using so-called “PIT” maneuvers to disable fleeing felons. Noting Masood’s path down the sidewalk on Westminster Bridge, one can envision a future world where driving onto the sidewalk is rendered impossible by safety systems or autonomous driving technology.

Straying onto the sidewalk might also be prevented or corrected remotely in the future. From human-piloted rovers on the moon mankind has proceeded to remotely-piloted rovers on Mars. The same technology was demonstrated by Nissan at the CES show in Las Vegas in January and by Ericsson at Mobile World Congress in Barcelona.

Described by Nissan as “teleoperation,” executives from Nissan’s Sunnyvale Tech Center demonstrated the company’s Seamless Autonomous Mobility platform – remotely operating a car using LTE wireless connectivity. The application demonstrated by Nissan was remotely taking control of a car that is experiencing an unexpected and perhaps dangerous event such as an incapacitated driver.

– Nissan Uses Rover Tech to Remotely Oversee Autonomous Car

The Nissan demonstration was compelling, but it highlighted the limitations of remote human operation of an autonomous vehicle. Taking control remotely can be as terrifying as re-taking control locally in the car. More often than not, events are occurring too rapidly for a human to respond – as in the case of the overturned Uber vehicle in Arizona.

The point is that remote operation used in conjunction with autonomous driving technology and advanced safety systems can prevent crashes and criminal activity. Hyundai recently had the opportunity to show off its BlueLink immobilization application to prevent a vehicle theft in Atlanta.

– “Atlanta Police Make Quick Arrest Thanks to Technology in Grandmother’s Stolen Car”

Autonomous technology and remote control are introducing profound changes in how humans interact with their machines. Nowhere is this shift more pronounced than in Tesla Motors’ Model S vehicles equipped with Autopilot 2.0.

Owners of Model S vehicles that upgraded from Autopilot 1.0 to 2.0 saw significant changes in the operation of their vehicles – losing access to the function in certain areas (geo-fencing of the feature) and experiencing new speed restrictions. Over time, as the vehicles were able to take advantage of machine learning and further software upgrades, some performance has been restored.

At the same time, though, Autopilot in the Model S now requires the operator to periodically put his or her hands on the steering wheel thanks to software updates. Failure to comply with this requirement results in temporary loss of access to the Autopilot mode.

In essence, humans have been teaching cars how to drive for the past 10 years and now cars are returning the favor. In the future, if you cannot obey the rules of the road or at least the rules for operating your particular motor vehicle you may lose the privilege of operating that car at least temporarily.

The horrific incident that occurred in London last week might well have been prevented either by appropriately tuned safety systems designed to prevent the car from leaving the roadway or by a vigilant remote monitoring system capable of taking control of or immobilizing the errant vehicle. “The Fate of the Furious” may demonstrate the terroristic potential of massive remote vehicle operation, but the reality is that technology is ultimately mankind’s friend if developed and deployed appropriately.

With a little luck and some clever algorithms we humans will come to view the arrival of autonomous driving as the onset of a helping hand rather than robots gone wild. In the end we’re less concerned with criminal activity and terror and more interested in the ability of autonomy to make everyday driving more pleasing.

Roger C. Lanctot is Director, Automotive Connected Mobility in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here:

https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


14nm 16nm 10nm and 7nm – What we know now
by Scotten Jones on 04-07-2017 at 7:00 am

Last week Intel held a manufacturing day where they revealed a lot of information about their 10nm process for the first time and information on competitor processes continues to slowly come out as well. I thought it would be useful to summarize what we know now, especially since some of what Intel announced was different than what I previously forecast.
Continue reading “14nm 16nm 10nm and 7nm – What we know now”


The Rise of Transaction-Based Emulation
by Bernard Murphy on 04-06-2017 at 12:00 pm

One serious challenge to the early promise of accelerating verification through emulation was that, while in theory the emulator could run very fast, options for driving and responding to that fast model were less than ideal. You could use in-circuit emulation (ICE), connecting the emulation to real hardware and allowing you to run fast (with a little help in synchronizing rates between the emulation and the external hardware). But these setups took time, often considerable time, and had poor reliability; at least one connection would go bad every few hours and could take hours to track down.


Alternatively, you could connect to a (software-based) simulator testbench, running on a PC, but that dragged overall performance down to little better than running the whole testbench + DUT (device under test) simulation on the PC. For non-experts in the domain, testbenches mostly run on a PC rather than the emulator, because emulators are designed to deal with synthesizable models, while most testbenches contain logic too complex to be synthesizable. Also emulators are expensive, so even if a testbench can be made synthesizable, there’s a tradeoff between cost and speed.

ICE for all its problems was the only usable option but was limited by cost and value in cases where setup might take nearly as much time as getting first silicon samples. More recently, ICE has improved dramatically in usability and reliability and remains popular for live in-circuit testing. Approaches to software-based testing have also improved dramatically and are also popular where virtualized testing is considered a benefit, and that’s the subject of this blog. (I should note in passing that there are strong and differing views among emulation experts on the relative merits of virtual and ICE-based approaches. I’ll leave that debate to the protagonists.)

There are two primary reasons that simulation-based testbenches are slow – low level (down to signal-level, cycle-accurate) modeling in the testbench and massive amounts of signal-level communication between the testbench and the DUT. Back in the dawn of simulation time, the first problem wasn’t a big deal. Most of the simulation activity was in the DUT and the testbench accounted for a small overhead. But emulation (in principle) reduces DUT time by several orders of magnitude, so time spent in the testbench (and PLI interfaces) becomes dominant. Overall, you get some speed-up but it falls far short of those orders of magnitude you expected.

This problem becomes much worse when you think of the testbenches we build today. These are much more complex thanks to high levels of behavioral modeling and assertion/coverage properties. Now it is common to expect 50-90% of activity in the testbench (that’s why debugging testbenches has become so important); as a result traditional approaches to co-simulation with an emulator show hardly any improvement over pure simulation speeds.
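
The arithmetic behind that disappointment is a simple Amdahl’s-law-style calculation; a quick sketch with assumed numbers makes the point:

[CODE]
# Amdahl's-law style illustration of why accelerating only the DUT helps little
# when the simulated testbench dominates; fractions and speedup are assumed.

def overall_speedup(tb_fraction, dut_speedup):
    """Total speedup when only the DUT portion of the run is accelerated."""
    dut_fraction = 1.0 - tb_fraction
    return 1.0 / (tb_fraction + dut_fraction / dut_speedup)

for tb_fraction in (0.1, 0.5, 0.9):   # share of runtime spent in the testbench
    s = overall_speedup(tb_fraction, dut_speedup=10_000)  # assume ~10,000x on the DUT
    print(f"testbench at {tb_fraction:.0%} of runtime -> overall speedup {s:,.1f}x")
[/CODE]

Even with an effectively unlimited DUT speedup, a testbench that consumes 90% of the runtime caps the overall gain at little more than 1.1x.
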
One way to fix this problem is to move up the level of abstraction for the testbench to C/C++. This is a popular trend, especially in software-driven/system level testing where translating tests to SV/UVM may become challenging and arguably redundant. (SV/UVM still plays a role as a bridge to emulation.) Now testbench overhead can drop down to a very small percentage, delivering much more of that promised emulation speedup to total verification performance.

But you still must deal with all that signal communication between testbench and emulator. Now the bottleneck is defined by thousands of signals, each demanding synchronized handling of cycle-accurate state changes. That signal interface complexity also must be abstracted to get maximum return from the testbench/emulator linkage. That’s where the second important innovation comes in – a transaction-based interface. Instead of communicating signal changes, you communicate multi-cycle transactions; this alone could allow for some level of speed-up.

But what really makes the transaction-based interface fly is a clever way to implement communication through the Standard Co-Emulation Modeling Interface (SCE-MI). SCE-MI is an Accellera-defined standard based on the Direct Programming Interface (DPI) extension to the SV standard. This defines a mechanism to communicate directly and portably (without PLI) between an abstracted testbench and an emulator.

The clever part is splitting communication into two functions, one running on the emulator and the other on the PC. On the emulator, you have a synthesizable component assembling and disassembling transactions. On one side, it’s communicating with all those signals from the DUT and can run at emulator speed because it’s synthesized into emulator function primitives. On the other side, it communicates transactions to a proxy function running on the PC.
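
As a purely conceptual sketch of that split (plain Python rather than actual SCE-MI/DPI code, with invented class and protocol details), the host-side proxy below sends one message per transaction while a stand-in for the synthesizable transactor accounts for the many signal-level cycles it would drive:

[CODE]
from dataclasses import dataclass

# Purely conceptual model of the transactor split; not SCE-MI or DPI code.
# One transaction crosses the host/emulator boundary, and the transactor side
# turns it into many signal-level cycles.

@dataclass
class BusWriteTxn:
    addr: int
    data: list          # a burst of data words

class BusTransactor:
    """Stands in for the synthesizable side: drives the DUT pins cycle by cycle."""
    def apply(self, txn):
        # Simplified protocol: one address phase plus one data phase per beat.
        return 2 * len(txn.data)

class HostProxy:
    """Runs on the workstation: sends whole transactions, not signal toggles."""
    def __init__(self, transactor):
        self.transactor = transactor
        self.messages_sent = 0

    def write_burst(self, addr, data):
        self.messages_sent += 1   # one message across the host/emulator link
        return self.transactor.apply(BusWriteTxn(addr, data))

proxy = HostProxy(BusTransactor())
cycles = proxy.write_burst(0x1000, list(range(64)))
print(f"{proxy.messages_sent} message across the link covered {cycles} signal-level cycles")
[/CODE]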

Now you have fast performance on the emulator, fast (because greatly compressed) communication between PC and emulator, and fast performance in the testbench. All of which makes it possible to rise closer to the theoretical performance that the emulator can offer. It took a bunch of work and a couple of standards but the payback is obvious. What’s more, tests you build should be portable across emulation platforms. Pretty impressive. Mentor has a white-paper which gives more details.