
2018 Semiconductor Forecasts Revised Up

by Bill Jewell on 06-14-2018 at 12:00 pm

Forecasts for 2018 semiconductor market growth have been revised upward from earlier in the year. In January to February 2018, projections ranged from 5.9% from Mike Cowan to our 12% at Semiconductor Intelligence. Forecasts released in March to June range from Gartner’s 11.8% to our 16%. The latest forecasts average an increase of 5.4 percentage points from the previous forecasts.


What has changed in the last few months? One factor is that the first quarter 2018 semiconductor market was not as weak as expected. The first quarter is the seasonally weakest and is usually down from the fourth quarter of the previous year. In the last five years, the first quarter has been down four times, three of those times declining 5% or more. 1Q 2018 declined only 2.4% from 4Q 2017.

The memory market accounted for most of the 21.6% semiconductor market growth in 2017. Memory grew 61.5% while the rest of the market grew 9.9%. The memory market remains healthy in 2018. The three major memory companies (Samsung, SK Hynix and Micron Technology) all report a continued strong DRAM market. The three companies see some softening in the market for NAND flash, but do not expect significant declines.

The major semiconductor companies are cautious in their revenue guidance for 2Q 2018. Guidance ranges from a decline of 1.2% from Qualcomm to an increase of 16% from MediaTek (bouncing back from a 17.8% decline in 1Q 2018). The high end of guidance ranges from Broadcom’s 2.2% to MediaTek’s 20%.


The overall outlook for key electronic equipment has not changed significantly from early in 2018 to recently. In January, Gartner projected the combined unit growth of PCs and Tablets in 2018 would be 0%. In April, Gartner revised the number up slightly to 0.5% – not a significant change. In December 2017, IDC expected smartphone units would increase 1.2% in 2018. In May, IDC revised it to a decline of 0.2%. Again, not very significant. The International Monetary Fund (IMF) has kept its projected 2018 GDP forecast at 3.9% growth, unchanged from January to April.


The increased optimism for the semiconductor market in 2018 can primarily be attributed to the memory market continuing strong, not weakening as many expected. There is still the possibility the memory market could collapse in the second half of 2018. However, we at Semiconductor Intelligence believe the most likely scenario is a softening of the memory market, not a collapse. Moderate quarter-to-quarter growth for 2Q 2018 to 4Q 2018 supports our 16% annual forecast.


Foundry Partnership Simplifies Design for Reliability

by Bernard Murphy on 06-14-2018 at 7:00 am

This builds on a couple of topics I have covered for quite a while from an analysis point of view – integrity and reliability. The power distribution network and some other networks like clock trees are particularly susceptible to both IR-drop and electromigration (EM) problems. The first can lead to intermittent timing failures, the second to permanent damage to the circuit. There are a number of products to support analysis for power integrity and EM risks, but then what? You need to modify your design to mitigate those risks; the analysis tools won’t do that for you.

In both cases the fix is to reduce resistance in the relevant part of the network. Within a layer you can widen the interconnect, but at points where the network crosses between layers you need to add more vias (or via stacks) – more points of contact between layers → lower resistance → lower IR-drop and EM. But this is a very design-, location- and use-case-specific optimization not commonly found in implementation build tools, so implementation teams have often built their own Calibre-based applications to handle adding these vias.

Providing the ability to create your own apps is part of the value-add of the Calibre family, particularly through PERC (programmable electrical rule-check), which provides the infrastructure to scan the design to find named nets and where these cross between layers, flagging where vias must be added. The more sophisticated teams have probably also automated, to some level, adding those vias, perhaps through Calibre-YieldEnhancer.

But as always there’s a challenge. When processes were simpler, adding vias was relatively straightforward, but in advanced processes (e.g. 7nm) ensuring DRC and LVS correctness in changes becomes a lot more complex. Managing this complexity has become an iterative flow: automate via additions as well as you can, re-run DRC and LVS, then fix violations as required. Doable, but this can become very painful when you may need to implement changes at many nodes across large networks (such as a power distribution network).
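As a rough sketch of what that looks like in practice, the scripted flow amounts to an iterate-until-clean loop along these lines (the callables are placeholders for a team’s own tool invocations, not actual Calibre commands):

```python
def harden_power_network(layout, target_nets, add_vias, run_drc, run_lvs,
                         fix_violations, max_passes=5):
    """Iterate-until-clean: insert vias, re-check, repair, repeat.

    All callables are placeholders for whatever scripted tool invocations a
    team actually uses; none of them are real Calibre APIs.
    """
    add_vias(layout, target_nets)              # best-effort via insertion
    for _ in range(max_passes):
        violations = run_drc(layout) + run_lvs(layout)
        if not violations:                     # DRC/LVS clean: done
            return layout
        fix_violations(layout, violations)     # repair the flagged via sites
    raise RuntimeError(f"still not clean after {max_passes} passes")
```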

Of course a better way would be to make the edits correct by construction, but that requires a very good understanding of the process and of course the tools. The Calibre team already has an app, PowerVia, to do this and has partnered with GLOBALFOUNDRIES to add rules in support of their 7LP process (GF has plans to extend it to other processes), a capability which is now a part of their Manufacturing Analysis and Scoring (MASplus) and which they presented at the recent Mentor U2U user-group conference.

Analysis starts with target nets for enhancement – ports (like power and ground nets) and other user-selected nets. The flow uses PERC to identify nets not already labeled in the layout, along with where those nets cross layers, then YieldEnhancer/PowerVia adds DRC/LVS clean vias at those intersections to the extent they can be added (it will not remove existing user/tool-added vias). The GF flow adds as many vias as possible (consistent with not increasing area) at each of these intersections. I would guess the reasoning is that even if this might be overkill in some cases, when it comes to reliability overkill is not so bad.

How big a deal is this? GF showed benchmark info in which they boosted vias up to 120% over the starting point and the design was DRC/LVS clean, so no need for iteration. That’s a lot of vias if you had to add them manually, and still a lot of work if you had to iterate DRC/LVS following less than perfect fixes. The GF speaker (Nishant Shah) added a few more details in Q&A. The flow is GDS-based, so should be thought of today as a finishing step, not designed to take back (yet) to P&R. The app requires you to define which nets to address, though it will assume port nets by default. And the flow is designed primarily to address IR-drop and EM on high current-density nets. GF handles via-doubling for reliability in other nets in a different flow.

In discussion with Jeff Wilson (Director of Marketing for the DFM solution in Calibre) and Matt Hogan (PMM for Calibre design solutions), their enthusiasm for this type of solution – foundry-based apps on top of Calibre – was clear. They see design companies struggling to handle the workload added by these kinds of reliability enhancements. While they can script these solutions themselves using PERC and YieldEnhancer, the effort required from implementation and CAD teams – to develop and prove out scripts and to iterate to get to DRC/LVS clean – is becoming intolerable. A better solution is a foundry-sponsored app, building on the same platform, allowing for designer control on which nets to address, and then automating fixes correct by construction in one pass.

For more information on Calibre YieldEnhancer and Calibre PERC (where they handle a lot of other interesting checks – ESD, EOS, voltage-dependent DRC rules for multi-power/voltage domains and more), stop by Mentor’s DAC booth (#2621) June 25-28. You can view a full list of their booth and conference sessions HERE.


Looking Ahead: What is Next for IoT

by Ahmed Banafa on 06-13-2018 at 12:00 pm

Over the past several years, the number of devices connected via the Internet of Things (IoT) has grown exponentially, and that number is expected to keep growing. By 2020, 50 billion connected devices are predicted to exist, thanks to the many new smart devices that have become standard tools for people and businesses to manage many of their daily tasks.

Smart connected devices boost customer engagement, increase visibility, and streamline communications, especially with new human-machine interfaces such as the Voice User Interface (VUI), the favorite interface for the new digital assistants like HomePod, Alexa and Google Assistant – and for good reason: 80 percent of our daily communication is conducted via speech.

In the future, IoT will continue to advance at an extraordinarily rapid pace, with remarkable growth in many directions. The ultimate goal is a smart and completely secure IoT system; however, many obstacles will need to be overcome before that goal can become a reality.

IoT and Blockchain convergence
The current centralized architecture of IoT is one of the main reasons for the vulnerability of IoT networks. With billions of devices connected and more to be added, IoT is a big target for cyber attacks, which makes security extremely important.

Blockchain offers new hope for IoT security for several reasons. First, a blockchain is public: everyone participating in the network of nodes can see the blocks and the transactions stored in them and approve them, although users can still have private keys to control their own transactions. Second, a blockchain is decentralized, so there is no single authority that approves transactions, eliminating the single-point-of-failure (SPOF) weakness. Third, and most importantly, it is secure—the database can only be extended and previous records cannot be changed.
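That last property is easy to see in a toy sketch of the append-only, hash-linked structure (illustrative only; a real blockchain adds consensus, signatures and peer-to-peer replication on top of this):

```python
import hashlib, json, time

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class ToyLedger:
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "data": "genesis",
                       "time": time.time()}]

    def append(self, data):
        """The ledger can only be extended; nothing is ever rewritten."""
        self.chain.append({"index": len(self.chain),
                           "prev": block_hash(self.chain[-1]),
                           "data": data, "time": time.time()})

    def verify(self):
        """Editing an earlier block breaks every later 'prev' link."""
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = ToyLedger()
ledger.append({"device": "sensor-42", "reading": 21.5})
ledger.append({"device": "sensor-42", "reading": 21.7})
assert ledger.verify()
ledger.chain[1]["data"] = {"device": "sensor-42", "reading": 99.9}  # rewrite history...
assert not ledger.verify()                                          # ...and the chain no longer validates
```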

In the coming years manufacturers will recognize the benefits of having blockchain technology embedded in all devices and compete for labels like “Blockchain Certified.”

IoT investments on the rise
IoT’s indisputable impact has lured, and will continue to lure, venture capital toward highly innovative projects in hardware, software and services. Spending on IoT will hit $1.4 trillion by 2021, according to the International Data Corporation (IDC).

IoT is one of the few markets that interests emerging as well as traditional venture capitalists. The spread of smart devices, and customers’ increasing dependence on them for many of their daily tasks, will add to the excitement of investing in IoT startups. Customers will be waiting for the next big innovation in IoT—such as smart mirrors that analyze your face and call your doctor if you look sick, smart ATMs that incorporate smart security cameras, smart forks that tell you how to eat and what to eat, and smart beds that turn off the lights when everyone is sleeping.

Fog computing & IoT
Fog computing is a technology that distributes the processing load and moves it closer to the edge of the network (the sensors, in the case of IoT). The benefits of using fog computing are very attractive to IoT solution providers: it allows users to minimize latency, conserve network bandwidth, operate reliably with quick decisions, collect and secure a wide range of data, and move data to the best place for processing, with better analysis and insight into local data. Microsoft just announced a $5 billion investment in IoT, including fog/edge computing.

AI & IoT will work closely
AI will help IoT data analysis in the following areas: data preparation, data discovery, visualization of streaming data, time series accuracy of data, predictive and advanced analytics, and real-time geospatial and location (logistical) data. Here are a few examples.

Data Preparation: Defining pools of data and cleaning them, which will take us to concepts like Dark Data and Data Lakes.

Data Discovery: Finding useful data in defined pools of data.

Visualization of Streaming Data: On-the-fly dealing with streaming data by defining, discovering data, and visualizing it in smart ways to make it easy for the decision-making process to take place without delay.

Time Series Accuracy of Data: Keeping confidence in the collected data high, with strong accuracy and integrity of the data.

Predictive and Advanced Analytics: Making decisions based on data collected, discovered and analyzed.

Real-Time Geospatial and Location (Logistical Data): Maintaining the flow of data smoothly and under control.

Standardization battle will continue
Standardization is one of the biggest challenges facing growth of IoT—it’s a battle among industry leaders who would like to dominate the market at an early stage. Digital assistant devices, including HomePod, Alexa, and Google Assistant, are the future hubs for the next phase of smart devices, and companies are trying to establish “their hubs” with consumers, to make it easier for them to keep adding devices with less struggle and no frustrations.

But what we have now is a case of fragmentation, without a strong push by organizations like IEEE or government regulations to have common standards for IoT devices.

One possible solution is to have a limited number of devices dominating the market, allowing customers to select one and stick with it for any additional connected devices, similar to the situation with operating systems today—Windows, Mac and Linux, for example—where there are no cross-platform standards.

To understand the difficulty of standardization, we need to deal with all three categories in the standardization process: platform, connectivity, and applications. The platform covers UX/UI and analytics tools, connectivity covers the customer’s contact points with devices, and the application layer is home to the software that controls, collects and analyzes the data.

All three categories are inter-related and we need them all; missing one will break the model and stall the standardization process.

IoT skills shortage
The need for more IoT skilled staff is rising, including a growing need for those with AI, big data analytics and blockchain skills.

Universities cannot keep up with the demand, so to deal with the shortage, companies have established internal training programs to build their own teams, upgrading the skills of their existing engineering staff and training new talent. This trend will continue, representing an opportunity for new engineers and a challenge for companies.

Original article was published on R&D Magazine : https://www.rdmag.com/article/2018/05/looking-ahead-whats-next-iot

Ahmed Banafa Named No. 1 Top Voice To Follow in Tech by LinkedIn in 2016

Read more articles at IoT Trends by Ahmed Banafa


Mentor Emulation Platform Now available on Amazon Web Services

by Daniel Nenni on 06-13-2018 at 7:00 am

Emulation is a hotly contested EDA market segment (which is being won by Mentor) and EDA in the cloud is a trending topic, so putting the two together is a very big deal, absolutely.

The following is a quick email Q&A with Jean-Marie Brunet, Director of Marketing, Emulation Division, Mentor, a Siemens Business. If you have other questions for Jean-Marie let me know and I will get them answered. It has been an honor to work with the Mentor emulation team on our ebook and other topics and I can tell you from personal experience that they are consummate emulation professionals.

Mentor introduced Veloce Strato last year, and earlier this spring announced expanded footprint choices and configuration options. How does today’s news of Veloce as first hardware emulation available in the cloud fit into your larger strategy for this product line?
When we launched the Veloce Strato platform in 2017, we focused on customers’ current and future needs for greater capacity. Those needs for more capacity drive the Veloce roadmap to stay a step ahead of the size of designs our customers are building. We also began to communicate a strategy to deliver the best cost of ownership available. The Veloce StratoT family is about capacity scaling and options. So the announcement of Veloce on the Cloud is an extension to our strategy to add another option to our portfolio of offerings. Customers are asking for capacity on the cloud to be available. They really do not care what precise HW configuration is made available. They care about capacity, uptime, latency and use models. We will decide to put whatever Veloce HW is required to enable and grow the Veloce on the cloud business.

Why emulation on the cloud? What kind of demand do you see, and what about it do you think will be attractive to customers?
If we are able to put an emulator on the cloud, we can put pretty much anything on the cloud! We started with the emulator because logistically it is by far the most difficult piece of equipment to put on the cloud. The demand is currently limited to companies that do not have IT infrastructure access, so we are essentially at the early stages. The cloud-based access to Veloce capacity is eliminating a big barrier of entry related to infrastructure challenges. The attractive aspect is the scalable access to capacity on demand.

Why Amazon Web Services?
We started with AWS because of the scale of their deployment. Their Latency zone aspect was key to this enablement. The main thing to consider for an emulator to be on the cloud is that the design database should not be too far from the box, otherwise you are crippled by the latency challenge. Working in partnership with AWS we addressed this very efficiently.

Security is a major concern for any customer looking at cloud services, and this seems especially true with the sensitive nature of hardware emulation. What measures have you taken to address security?
It was our concern as well when we started this adventure over a year ago. We spent a considerable amount of time on this topic. I recommend that customers meet with AWS and look at the security measures they take. It is very impressive. Another way to look at this is by understanding which companies are already using AWS. Just the names of current AWS customers make it obvious that security is handled seriously. And, as is always the case with security, everyone should do their own homework…

Is there anything specific or inherent to Veloce that makes it particularly amenable to cloud use models?
Emulation on the cloud is all about virtualization. Using emulation in a virtual mode opens the door to the creation of many different emulator use models, or Apps, where each App targets very specific verification challenges. Apps are growing the emulation user community by simply bringing more challenges and the associated engineers to the list of tasks emulation can accomplish. This expanding user base for emulation, outside the traditional use model of simulation acceleration, means increased reliance on emulation resources. The Veloce emulator is the clear leader in virtualization use models so being on the cloud is a perfect fit for our technology.

Mentor Veloce hardware emulation platform now available on Amazon Web Services


ISO 26262 Traceability Requirements for Automotive Electronics Design

by Daniel Payne on 06-12-2018 at 12:00 pm

Reading the many articles on SemiWiki and in other publications, we find experts talking about the automotive market, mostly because it is in growth mode, has large volumes and vehicles consume more semiconductors every year. OK, that’s on the plus side, but what about functional safety for automotive electronics? Every time an autonomous car has an accident or a fatality it makes front-page news on CNN and across social media, so we’re keen to understand how safety is supposed to protect us from driving mishaps. The automotive industry has already published a functional safety standard known as ISO 26262, which is a necessary first step, and those of us in the design community need to be aware that this standard mandates traceability requirements.

An automotive system starts out in the conceptual phase with a set of written requirements; then, as the system is refined into hardware, software and firmware, we need to be able to trace and document how every requirement gets designed, implemented and tested. Because of time-to-market pressure and complexity, no company uses a purely manual system for traceability of requirements; instead each team uses some form of software automation.
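As a hedged illustration of what that automation boils down to (my own sketch, not any particular vendor’s schema), each requirement carries bidirectional links to the artifacts that implement and verify it:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One ISO 26262-style requirement with links to downstream artifacts."""
    req_id: str
    text: str
    design_items: list = field(default_factory=list)   # RTL blocks, IP versions
    tests: list = field(default_factory=list)          # verification items
    results: dict = field(default_factory=dict)        # test name -> pass/fail

    def is_traced(self):
        """Covered only if it is designed, tested, and every test passes."""
        return (bool(self.design_items) and bool(self.tests)
                and all(self.results.get(t) == "pass" for t in self.tests))

# Hypothetical usage: every untraced requirement is an audit finding.
req = Requirement("SR-102", "Brake ECU shall detect sensor loss within 10 ms")
req.design_items.append("brake_monitor_rtl@v2.3")
req.tests.append("tb_sensor_loss")
req.results["tb_sensor_loss"] = "pass"
assert req.is_traced()
```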

So the ISO 26262 standard covers the following range of activities, which are part of a 10-part guideline:

  • Conceptual phase
  • System level design
  • Design
  • Verification
  • Testing

Everything that a team does in these five activities must be traced back to a requirement. Here’s a snapshot of what the 10-part ISO 26262 guideline looks like:

Design Challenges
Some IC design teams can have hundreds of engineers assigned to a single project, and across such teams there is a variety of software being used, like:

  • Data management systems (Perforce) – tracking design files
  • Bug tracking (Bugzilla)
  • Verification plans and continuous integration (Jenkins)
  • Workspace management tools
  • IC point design tools (Cadence, Mentor, Synopsys, Ansys, etc.)

Semiconductor IP is purchased, re-used and created for teams designing an SoC, so this all has to be managed and tracked. Here’s a simplified work flow for IC design teams, and the large blue arrow pointing back to the left means that all this data needs to be traced back to requirements:

Modern IC designs can consume terabytes of design, verification and test data across thousands to millions of files, which makes compliance with ISO 26262 traceability sound rather daunting. Having a software platform to manage all of this for us sounds ideal: a way to manage all of the data and IP blocks across an entire team and its geographies. I’d call this IP Lifecycle Management (IPLM).

One commercial IP Lifecycle Management system out there is called Percipient, from Methodics, and I’ve blogged about them before over the past year or so. The Percipient tool is engineered to meet the needs of traceability requirements for the ISO 26262 standard, and here are some of the other industry tools it works well with:


With Percipient a design team can now connect high-level requirements to documentation to design to manufacturing, with traceability built-in. This methodology will keep track of all your semiconductor IP, cell libraries, scripts, verification files, results, bugs and even bug fixes. You use Percipient in all six phases of design to manufacturing:

The abstraction model used by Percipient enables it to find and track all data from your SoC design project; it even knows low-level details like file permissions, bug tracking, locations for all IP instances, who is using each IP block, who owns an IP block, who created a cell, and even which workspace it all belongs to. Industry tools play well together with Percipient, by design. Here’s a quick summary of useful features in Percipient:

  • Workspace tracking
  • IP usage tracking
  • Release management
  • Bug tracking
  • IP versioning
  • Tracking design files
  • Tracking file permissions
  • Uses labels and custom fields
  • Handles hierarchy
  • Hooks for integrating any other tool

Self-documentation is a goal of the ISO 26262 standard and using a tool like Percipient really automates that process.

Summary
Driving a car today provides us mobility and we all want to arrive at our destination safely and without drama, so the automotive industry has wisely created and followed the ISO 26262 standard for functional safety requirements. The traceability part of the standard is now automated for semiconductor design through the use of a tool like Percipient. With its unique IP abstraction model, this approach provides traceability across design, verification and testing.

The folks at Methodics will be attending the ISO 26262 To Semiconductors USA conference, June 11-14 in Michigan.

Read the complete White Paper here, after a brief registration process.



Thermal and Reliability in Automotive

by Bernard Murphy on 06-12-2018 at 7:00 am

Thermal considerations have always been a concern in electronic systems but to a large extent these could be relatively well partitioned from other concerns. Within a die you analyze for mean and peak temperatures and mitigate with package heat-sinks, options to de-rate the clock, or a variety of other methods. At the system level you might rely on passive cooling or plan for forced air or even liquid cooling. These methods treat heat as more or less a bulk property to be managed. But that approach alone is breaking down in a number of modern applications, for which automotive (in ADAS and autonomy) provides good examples.

What changed? Ambient temperatures in a car (up to 150°C) are a lot more stressful than mobile devices have to consider. This isn’t new, but we’re now packing those mobile technologies and more into the car, with much higher safety expectations. That’s just to start. Automakers need higher levels of integration of heterogeneous technologies, in part driving a trend to advanced packaging where we now have to consider not only thermal effects within a die but also between stacked die. System builders are also moving much more aggressively to advanced processes because they need the performance and lower standby power. But this means gates and wires crammed closer together, with more heat concentrated in smaller areas. Worse yet, FinFETs with their wrap-around gates are unable to dissipate heat as effectively as traditional planar gates.

FinFETs have higher drive strengths, which is good for performance, but they push that current into narrower interconnects, which increases the risk of electromigration (EM) and impacts device reliability. Local heating also accelerates EM, and it increases power consumption and the risk of timing failures. Heat can cause mechanical problems too: in 3D stacks, in 2.5D-on-interposer packages and on the board, heating can lead to warping between layers (Toyota had a recent problem with cracking solder joints caused by thermal stress). None of this is acceptable in automobile safety-critical functions.

OK, you get it. We need to analyze thermal more carefully, but there’s another challenge. In product design we like to split our analysis into different domains: timing, power, thermal, EM, die-level, package-level, board-level and system-level. It’s just too hard to do it any other way, right? We do detailed analysis in one domain at a time and we handle inter-dependencies between domains using margins. But increasingly the margin approach requires impractical over-design. More importantly, the automakers/Tier 1s are demanding more cost-effective high-reliability solutions, which can only be accomplished through co-design and optimization from the system down to the die (incidentally, this is also driving closer collaboration between the semiconductor companies and the OEMs/Tier 1s). Effective thermal analysis has to span all of these domains, though here I’ll just touch on thermal analysis from system to die and related mechanical analysis.

Using ANSYS products, analysis spans a wide range of technologies, from RTL power analysis and RedHawk thermal, up to computational fluid dynamics (CFD) to model cooling at the system level, and ANSYS/Mechanical to model thermal-induced warping. Many of these are multi-physics analyses, pulling together fine-grained data from multiple domains (thermal, fluid modeling, mechanical, …) to provide accurate analytics for potential hotspots, rather than the approximations inherent in a domain-by-domain approach.

ANSYS starts with profiling at RTL, these days often driven through emulation-based modeling, so you might characterize for power profiles (developed in PowerArtist) during OS boot versus 4K streaming. From this they develop block power profiles and then chip profiles based on the floorplan. RedHawk CTA then builds a chip thermal model (CTM) containing understanding of hotspots in that die. In a multi-die package these analyses can be combined to provide a package-level thermal analysis, combined with a mechanical analysis of stress (and potential warping) in that configuration.

Up at the system level, thermal models for each of the components (chips, voltage regulators, sensors, …) are combined in an Icepak CFD analysis to assess steady state and transient thermal profiles, including whatever passive or active cooling may be provided. Naturally this analysis is iterative; you model system-level thermal profiles and take this back to the die for refined modeling. That gives you improved data on EM and other risks across the die, to which you can respond with appropriate design optimizations. Which in turn should provide a more accurate handle on thermal-related failure rates across the system. I don’t know if anyone in the automotive value chain is looking at this yet but based on what I’ve heard about rising expectations in ISO 26262, I wouldn’t be surprised to see this kind of analysis become a requirement at some point.
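Conceptually, that die-to-system refinement is a fixed-point iteration; a deliberately simplified sketch of the loop looks like this (the two solver callables are stand-ins for the detailed analyses, not ANSYS APIs):

```python
def cothermal_iterate(die_power_map, boundary, die_model, system_model,
                      tol_c=0.5, max_iters=10):
    """Fixed-point loop between die-level and system-level thermal analysis.

    die_model(power_map, boundary)    -> {hotspot: temperature_C}
    system_model(die_temps, boundary) -> updated boundary conditions for the die
    Both callables are placeholders for the real solvers.
    """
    die_temps = die_model(die_power_map, boundary)
    for _ in range(max_iters):
        boundary = system_model(die_temps, boundary)    # package/CFD step
        new_temps = die_model(die_power_map, boundary)  # refined die step
        delta = max(abs(new_temps[k] - die_temps[k]) for k in new_temps)
        die_temps = new_temps
        if delta < tol_c:                               # converged
            break
    return die_temps, boundary
```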

You can watch the recorded webinar (delivered by Karthik Srinivasan, Sr. Corporate AE Manager in the Semiconductor BU at ANSYS) HERE. He covers a lot more detail, including local thermal effects and doing power/thermal loop simulations using SIwave and Icepak at the system level. There is also some interesting discussion on where these methods are important beyond automotive. Well worth watching.


RAL, Lint and VHDL-2018

by Alex Tan on 06-11-2018 at 12:00 pm

Functional verification is a very effort-intensive and heuristic process which aims at confirming that system functionality meets the given specification. While pushing cycle-time improvement on the back-end part of this process is closely tied to the compute-box selection (CPU speed, memory capacity, parallelism options), the front end involves a lot of painstaking setup preparation and coding. As such, any automation and incremental checks on the quality of work, for both the design and the code used to verify it, should help prevent unnecessary iterations and shorten the overall front-end setup time.

UVM Register Generator
The Register Abstraction Layer (RAL) is one of the features introduced with the Universal Verification Methodology (UVM) in 2011. It provides a high-level abstraction for manipulating the content of registers in your design. All of the addresses and bit fields can be replaced with human-readable names. RAL attempts to mirror the values of the design registers in the testbench, so one can use the register model to access those registers. A RAL model comprises fields grouped into registers, which along with memories can be grouped into blocks and eventually into systems.

Aldec’s Riviera-PRO™ verification platform enables testbench productivity, reusability, and automation by combining a high-performance simulation engine with advanced debugging capabilities at different levels of abstraction. Its latest release (2018.02) introduces RAL support.

As illustrated in figure 1a, automatic RAL model generation involves taking the register specification of a design (Riviera-PRO supports either IP-XACT or CSV spreadsheet formats) and generating the equivalent register model in SystemVerilog code.
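To give a feel for what such a generator consumes, here is a minimal sketch that parses a register spreadsheet into a register map; the column names and example registers are illustrative assumptions, not Aldec’s required format:

```python
import csv, io

# Illustrative spreadsheet; the columns here are assumptions, not Aldec's format.
SPEC = """name,offset,width,access,reset
CTRL,0x00,32,RW,0x00000000
STATUS,0x04,32,RO,0x00000001
IRQ_MASK,0x08,32,RW,0xFFFFFFFF
"""

def load_register_map(csv_text):
    """Turn a CSV register spec into {name: descriptor} for later code generation."""
    regs = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        regs[row["name"]] = {
            "offset": int(row["offset"], 16),
            "width": int(row["width"]),
            "access": row["access"],
            "reset": int(row["reset"], 16),
        }
    return regs

reg_map = load_register_map(SPEC)
assert reg_map["STATUS"]["offset"] == 0x04 and reg_map["CTRL"]["access"] == "RW"
```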

To better appreciate how this register model is used in the UVM environment, consider how it interacts with the rest of the components in the verification ecosystem, as illustrated in figure 1b.

The creation of the register model is normally followed by the creation of an adapter in the agent, which makes the abstraction possible. It acts as a bridge between the model and the lower levels; its function is to convert register model transactions to lower-level bus transactions and vice versa. The predictor receives information from the monitor so that changes in the values of the DUT registers are passed back to the register model.

Because the register model is captured at a higher level of abstraction, it does not require knowing either the targeted protocol or the type of register to be accessed. Hence, from the testbench point of view, one can directly access the registers by name, without having to know where and what they are. Instead, the register model stores the details of all the registers, their types as well as their locations. It is the responsibility of the RAL to convert these abstracted accesses into read and write cycles at the appropriate addresses using the bus functional model. This convenient approach also makes tests more reusable.
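The abstraction is easy to picture with a toy model (written in Python purely to keep the sketch short; it is not UVM code): the test asks for a register by name, an adapter turns that into a bus-level transaction, and the model keeps its mirror in sync.

```python
class ToyRegisterModel:
    """Name-based register access with a mirror, loosely in the spirit of RAL."""
    def __init__(self, reg_map, bus_adapter):
        self.reg_map = reg_map            # {name: {"offset": ..., "reset": ...}}
        self.adapter = bus_adapter        # converts abstract ops to bus cycles
        self.mirror = {name: spec["reset"] for name, spec in reg_map.items()}

    def write(self, name, value):          # "active" access: touches the DUT
        self.adapter.bus_write(self.reg_map[name]["offset"], value)
        self.mirror[name] = value          # keep the mirror in sync

    def read(self, name):
        value = self.adapter.bus_read(self.reg_map[name]["offset"])
        self.mirror[name] = value
        return value

    def predict(self, name, value):        # "passive" update from a monitor
        self.mirror[name] = value

class FakeBusAdapter:
    """Stand-in for the protocol-specific adapter; a real one drives the bus."""
    def __init__(self):
        self.memory = {}
    def bus_write(self, addr, value):
        self.memory[addr] = value
    def bus_read(self, addr):
        return self.memory.get(addr, 0)

model = ToyRegisterModel(reg_map={"CTRL": {"offset": 0x00, "reset": 0}},
                         bus_adapter=FakeBusAdapter())
model.write("CTRL", 0x3)                   # no addresses or protocol in the test
assert model.read("CTRL") == 0x3 == model.mirror["CTRL"]
```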

Another component in the ecosystem is the sequencing part, as shown in figure 1c. Sequences are built to house the register access method calls, or access procedural interfaces (APIs). Users may categorize these APIs into three types: active (read, write, mirror), passive (set, reset, get, predict) or indirect (peek, poke). The registers are referenced hierarchically in the body task to call write() and read(), while peek() and poke() are utilized for individual field accesses.

Unit Linting
Linting is a prerequisite for good coding practice. It is common to have this done at the end of system code completion, prior to checking in the release. Unit linting, which was previously done stand-alone from the Active-HDL Workspace, has been integrated into the Riviera-PRO user interface. Launching unit linting from the Riviera-PRO dashboard is done through a new button that runs unit linting on the open file and displays the violations in the console. This integration lets designers work on a design, run simulations and run linting from the same interface. It provides a cross-probing facility that cross-references each violation with the affected line of code, as illustrated in figure 2.

Productivity Improvements and Partial Support to VHDL 2018
Code elaboration takes time as well as memory. In this Riviera-PRO release the memory footprint during elaboration is reduced by as much as 80%, while a 25% improvement in simulation speed is noted for SystemVerilog constrained-random designs and up to 10x faster simulation for assertion-based designs with multiple clocks.

Proactive partial support is also available for the VHDL-2018 extension, which is going through the formalization process. This includes handling the conditional analysis directives and inferring constraints from initial values of signals and variables.

Furthermore, fewer restrictions are imposed on handling these constructs:
– An optional component keyword after the end keyword closing a component declaration
– Denoting the end of an interface list with an optional semicolon
– Allowing empty record declarations, qualified expressions, and signatures of formal generic subprograms in a generic association list.

I had the chance to talk with Louie De Luna, Aldec’s Director of Marketing, for further comment on these recent updates.

Is the SystemVerilog code automatically generated for the register models and their associated adapters correct by construction, or does it need to be validated?
We generate the UVM-RAL from the user-provided spreadsheet (*.csv). Successful generation is highly dependent on the quality of the input. Once generated, it is correct by construction. Users do not need to review it and can easily attach it to their testbench.

Can designers skip units that have already passed unit linting when they do full design linting?
Unit linting provides a facility to conveniently perform code checks while the code is being constructed. Unit linting may have simpler rules compared with full linting, which requires a different ruleset. Designers have the option to include or exclude particular checks. Linting is good, but since too many rules may clutter the process, these filtering options should help.

What reference point was used for the performance comparison, and is there any plan to extend support beyond this list once VHDL-2018 is final?
The comparison made for the Riviera-PRO 2018.02 release is with respect to 2017.10. We plan to fully support VHDL-2018 when it is formally published.

For more detailed discussions on these features please check HERE


DRC is all About the Runset

by Daniel Nenni on 06-11-2018 at 7:01 am

EDA companies advertise their physical verification tools, aka DRC (Design Rule Check), mostly in terms of specific engine qualities such as capacity, performance and scalability. But they do not address an equally if not more important aspect: the correctness of the actual design rules.

Put bluntly: It’s not about how fast you check, it’s about what you check. To draw an analogy from a slightly different EDA domain: Designers can implement their RTL design in an FPGA device from vendor-A or vendor-X. Sure, there are differences between the two, but in essence, if the logic circuit fits in both devices and they are fast enough, either will do. What absolutely cannot be compromised is the correctness of the RTL code. If there are bugs in the RTL code, extra capacity and higher speed of the FPGA device will not help; functional verification is of the essence.

DRC is all about the runset
The same holds for DRC. One tool may run faster or take less memory than another, but in the era of abundant and low-cost cloud computing these are not critical factors anymore. The quality of the check is determined less by the engine that runs it, and more by the correctness of the DRC runset code:

  • Is the DRC runset code correct? Does it accurately represent the design rule intent?
  • Are the rule checks complete? Does the DRC code cover all possible layout configurations?

In practice the likelihood and severity of potential issues vary from case to case; broadly one can distinguish between the following three situations:

  • A physical design implemented in a mature technology that has “seen” hundreds or thousands of designs already. In this case chances are that “holes” and bugs have already been (painfully) found in previous designs and have been fixed by the foundry. Using a 5-years-in-volume-production vanilla flavor process? Probably no need to worry.
  • A design that targets a relatively new technology, or a customized process flavor. There is a reasonable chance that the runset still has inaccuracies or hidden “holes”. In this case it makes sense to inquire about the QA process for the runset and how many different designs have used the exact same process in production. If some design rules are customized and specific to this product or design style, runset verification can save a lot of cost and grief.
  • An early technology project: be it a foundry-internal technology enablement project, an IP partner project, or an early-access customer design team working closely with the foundry. In this case the runsets are clearly work in progress and must go through a rigorous verification and QA process.

The next obvious question is what tools and methodologies are available to address these issues and verify the DRC runset?

Runset verification with DRVerify
DRVerify is the leading commercial tool for DRC runset verification. Here is why:

  • Spec driven: The tool reads the foundry design rule descriptions (from any spreadsheet or text file) and allows rules to be entered using a GUI. This rule description, which represents the rule intent and is independent of any particular implementation, is the source for creating the tests and the reference for their correctness and accuracy.
  • Powerful enumeration: DRVerify has a pattern enumeration engine that automatically generates pass/fail variations exercising all boundary conditions of each design rule (illustrated in the toy sketch at the end of this post). The generated fail and pass cases cover all situations based on the design rule description (spec).
  • Flexibility: In addition to automatic layout creation, the tool accepts additional layout patterns or actual design clips. Given such clips, the tool will automatically recognize the rules under test and use them as seeds for further pattern enumeration.
  • Independent reference: The tool uses an internal, independent, spec-driven DRC engine that examines all these test cases, sorts them into fails and passes, and places markers that are later compared to the markers created by the target DRC tool runset.
  • Coverage: DRVerify has a coverage measurement engine that analyzes all fail/pass cases and determines the level of coverage accomplished for each rule.
  • Detailed error report: Once the target runset or DUT (deck under test) runs on the QA test layout, DRVerify runs a comparison between each DUT marker and its own markers and measurements, and provides a detailed comparison report pointing to any mismatch or potential error in the runset.
  • No limitations: DRVerify is not limited to any given set of rules or specific technologies.
  • Tool independence: DRVerify is tool agnostic and can be used to verify any runset for any DRC tool or language.

To learn more about DRVerify: DRVerify White Paper
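As a closing illustration of what spec-driven boundary enumeration means in practice, here is a toy example for a minimum-spacing rule; the rule value and the off-by-one bug are invented for illustration and this is not DRVerify’s actual engine:

```python
# Toy spec: metal1 minimum spacing of 50 nm (an invented number).
MIN_SPACE_NM = 50

def enumerate_boundary_cases(min_space, step=1, span=2):
    """Generate spacings just below, at, and just above the rule value, each
    tagged with the verdict the rule spec itself demands."""
    return [{"spacing_nm": min_space + d * step,
             "expected": "fail" if min_space + d * step < min_space else "pass"}
            for d in range(-span, span + 1)]

def runset_under_test(spacing_nm):
    """Stand-in for the foundry runset being verified; imagine it hides an
    off-by-one bug ('>' where the spec says '>=')."""
    return "pass" if spacing_nm > MIN_SPACE_NM else "fail"

mismatches = [case for case in enumerate_boundary_cases(MIN_SPACE_NM)
              if runset_under_test(case["spacing_nm"]) != case["expected"]]
print(mismatches)   # only the exactly-at-minimum case exposes the bug
```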

Michelin Moving On: Deep Diving on Autonomous Driving

by Roger C. Lanctot on 06-10-2018 at 12:00 pm

When it comes to autonomous mobility – things are changing awfully fast. A “deep dive” workshop at Michelin’s Movin’ On 2018 event in Montreal (concluding today) dug into the issue revealing hopes and anxieties shared by executives culled from a wide range of industry constituencies. The overall sense of the room was that autonomous technology is coming – perhaps sooner than some think but not as soon as some have claimed or desire. All agree there is work to be done on the technology and the regulations and maybe even on public education.

Executives from Keolis (Andreas Mai), Bestmile (Lissa Franklin) and SAE International (William Gouse) led the group-based discussion with the help of Jason Thompson from Jigsaw. The debate focused on three key issues:

  • Approval
  • Certification
  • Monitoring

Approval – Innovation or regulation?

50 states in the U.S. 10 provinces across Canada. A myriad of jurisdictions spanning Europe. And every one of these governments has an idea how autonomous vehicle features should be addressed and regulated.

What is the right balance? Too much regulation can stifle innovation. Too great a focus on full-speed-ahead innovation can increase safety risks. The participants attempted to answer these questions:

  • Do we need an approval process to put vehicles on the road?
  • What are the most critical components of an approval process?
  • How do you balance “innovation” against “regulation?”
  • How do you address liability?
  • Should aftermarket autonomous driving gear require approvals?
  • Who should bear responsibility?
  • National strategy or provincial/state strategy?

(A sampling of the responses is pictured above the headline of this blog from Post-It notes.)

Certification – What’s in the box?

Autonomous vehicles – whether commuter trains or SUVs – require reams of data to test and improve algorithms. Companies in the space tend to be highly protective of algorithms and the datasets that populate them. One expert remarked, “I go to a lot of meetings with these companies, and when it’s time to share information, the room gets awfully quiet.”

That said, certification is becoming increasingly important to heighten consumer trust, improve safety and define what an autonomous vehicle is and where it can drive.

Experts spent time describing scenarios that could accomplish the goal of certification. Chief among them was the need to test, in the same way that crash test dummies are used to test products today. Car companies don’t want to share their specific details on enabling safety – what’s important is to know it works.

To get attendees thinking about certification, the experts asked seven new questions:

  • How do you balance “innovation” against “regulation?”
  • How do you certify an always-learning black box?
  • Should “transparency” be required?
  • How do you create “trust” for certification?
  • How does certification apply for commercial vehicles?
  • What kind of testing should be required?
  • How do you pursue the Good Housekeeping Seal of Approval?

The audience was split on the idea of opening the black boxes of proprietary IP. Some felt that creating transparency was vital for safety. Others felt it would cause fewer companies to innovate and compete.


Monitoring – Worth the (privacy) trade off?

Earlier this year, an Uber vehicle in Arizona was involved in a fatal crash with a pedestrian. The vehicle was part of Uber’s pilot program for autonomy, and the driver was reviewing data on a tablet and hadn’t looked at the road for more than six seconds.

How does Uber know this? There were cameras in the vehicle monitoring the driver. What they didn’t do was warn the driver of the impending collision, or take action to avoid the problem. It appears the vehicle’s emergency braking was not enabled while the autonomous driving system was in control.

How can monitoring – cameras, radar or thermal – make for a safe autonomous driving experience? Experts discussed the concept of monitoring in a broader context, expanding beyond driver monitoring to car monitoring or commercial vehicle monitoring. They asked about the role and value of monitoring – and touched on the trade-off between safety and privacy.

Ultimately, the group identified a series of technologies and issues for the audience to discuss with these results:


This exercise got audience members dreaming about the future – particularly when it came to the potential for 5G technology. Ideas flew fast and furious throughout expert and attendee exchanges, and if there was one shared thought, it was this – this is a rapidly moving industry that’s thriving on big thinking, but at what cost and for what benefit?

Editor’s note: Thanks to Jason Thompson of Jigsaw for his moderation and for sharing his detailed notes. And thanks to Michelin and the event sponsors, of course.


Is This the Death Knell for PKI? I think so…

by Bill Montgomery on 06-10-2018 at 7:00 am

It was 1976 when distinguished scholars Whitfield Diffie and Martin Hellman published the first practical method of establishing a shared secret key over an authenticated communications channel without using a prior shared secret. The Diffie-Hellman work laid the foundation for what became known as Public Key Infrastructure, or PKI.
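For readers who have never seen it, the core of that 1976 idea fits in a few lines (toy parameters only; real deployments use 2048-bit-plus primes or elliptic curves):

```python
import secrets

# Toy public parameters; production systems use much larger primes or ECC.
p = 0xFFFFFFFB   # a small prime, for illustration only
g = 5            # public generator

a = secrets.randbelow(p - 2) + 1     # Alice's private exponent
b = secrets.randbelow(p - 2) + 1     # Bob's private exponent

A = pow(g, a, p)                     # Alice sends A over the open channel
B = pow(g, b, p)                     # Bob sends B over the open channel

shared_alice = pow(B, a, p)          # g^(ab) mod p, computed by Alice
shared_bob   = pow(A, b, p)          # g^(ab) mod p, computed by Bob
assert shared_alice == shared_bob    # both ends now hold the same secret
```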

That was a long time ago. Do you even remember 1976? If you’re over the age of 50 you likely recall some things about this era, but if you’re under 40, your knowledge of the ‘70s is probably stuff you’ve seen on television or read in history books.

In 1976 the USA average annual income was $16,000, gas was $0.39 a gallon, and the median price for a new home was $43,600. In the world of technology, Steve Jobs and Steve Wozniak formed the Apple Computer Company and, months later, Bill Gates registered Microsoft with the Office of the Secretary of the State of New Mexico. Matsushita launched VHS video recorders, and the first commercially-developed supercomputer – the Cray-1 – was installed in the US at Los Alamos National Laboratory.

In 1976, the Internet didn’t exist, at least not in the way that it does today. There were no personal computers, no mobile phones – and of course no smartphones. It was a largely electro-mechanical, analogue world that was on the cusp of experiencing what over time was dubbed “the digital revolution.”

A technology guru who had gone into a deep sleep in 1976 only to awaken 42 years later would be shocked by the massive technological advances that have forever changed our planet. The guru would see a connected world with close to four billion Internet users, five billion mobile phones, and myriad applications that render global communication instantaneous. He or she would see a world with billions of connected things, with millions being added daily, extending well beyond consumer products to mission-critical business and government infrastructure. The ’70s guru would see that the very pillars of civilized society – nations’ energy grids, financial systems, and national security networks – are all deeply ingrained in and reliant on our connected world. And he’d also see a connected world constantly under attack by cyber criminals. He’d see that the average cost of a data breach was $3.62m in 2017. He’d see nations under constant siege as enemy states and others work tirelessly to hack and destroy the digital foundation upon which we rely so heavily.

The world today would surely stun our tech guru, but what would absolutely shock him would be to learn that virtually every person, place and thing on the planet, and every mission-critical application, is protected by 1970s technology: PKI. And remarkably, enterprises and governments worldwide are paying an average of $75 per year in total cost of ownership for each PKI-“protected” cryptographic unit (CU).
It begs the question: how could our world have achieved so much in the way of technical advancement without addressing the issue that can bring everything down?

I won’t drone on about the perils of PKI – not the protocol, per se, but the vulnerabilities that a world full of fake and unrevoked certificates has created – but if you want to learn more I suggest you read Lipstick on the Digital Pig. What I do want to highlight is how one country – Singapore – is tackling the problem head on through an exciting initiative called Project GRACE.

A Tectonic Shift in Digital Security
I love the term “tectonic shift.” Its origins are rooted in geological descriptions of the 15 or so tectonic plates that make up the Earth’s crust. They are constantly moving, and when they move more dramatically, bad things happen – like earthquakes.

In the world of business, tectonic shifts are usually defined by the emergence of new technology that completely alters the landscape. Consider the tectonic shift from analogue to digital technology, which eliminated entire industries and ushered in the dawn of a new era. Apple, for example, obliterated the portable personal music player industry (remember the 1970s “Walkman”?) when it introduced the iPod.

The Government of Singapore – which is ranked #1 in the world in the IAC International E-Government rankings – is leading the way by creating a tectonic shift in digital security. Through Project GRACE, it will eliminate the many threats posed by PKI by erasing the dated Diffie-Hellman scheme from its digital equation.

GRACE has been entered in the co-sponsored US NIST and Homeland Security 2018 Global City Teams Challenge, an event which this year is focused on Cybersecurity. The GRACE initiative is described as follows:

“The present Public Key Infrastructure (PKI) is known to be inadequate for the current scale of the Internet and the situation is exacerbated with the advent of IoT. Project GRACE (Graceful Remediation with Authenticated Certificateless Encryption) implements a security architecture using an advanced form of pairing-based cryptography called Verifiable Identity-based Encryption (VIBE) to provide a simple, scalable and secure key management for the cloud services, the IoT and indeed the Critical Information Infrastructure (CII) which are otherwise vulnerable to the extant and new cyber-physical attacks.”

One of the partners in GRACE, the University of Glasgow, is the lead agent in a city project to secure all smart meters over public wi-fi networks using the certificateless approach (i.e. no PKI) inherent in GRACE.
GRACE is led by Singapore-based QuantumCeil, which describes the project’s deliverables as follows:

  • Provide a simple scheme where it is difficult to commit errors of implementation.
  • Provide a scalable scheme to address very large networks (centralized, distributed or mesh – billions of entities) at a great reduction in complexity – O(N), versus PKI’s O(N²).
  • Provide a secure scheme rooted in hardware, with counter-measures against crippling side-channel attacks [author’s note: this eliminates threats due to critical hardware vulnerabilities in modern processors, such as those exposed in Meltdown and Spectre].

The time has come for our connected world to migrate from PKI and embrace security technology and cryptographic schemes designed in this era for this era.

I just sent a text to the 1976 technology guru. He agrees. Do you?

p.s. the GRACE system and its operation are certifiable to ISO 27001:2013.