
Where Circuit Simulation Model Files Come From

by Daniel Payne on 02-07-2019 at 7:00 am

I started my engineering career doing transistor-level circuit design, using a proprietary SPICE circuit simulator. One thing I quickly realized was that the accuracy of my circuit simulations depended entirely on the model files and parasitics. Forty years later, the accuracy of SPICE circuit simulations still depends on the model files and parasitics, but with the added tasks of using 3D field solvers to get accurate parasitic values, and even 3D TCAD tools to model the complex physics of nanometer-scale IC designs built with FinFET transistors.

Foundries and IDMs both need a proven, accurate flow from TCAD to SPICE simulation so that simulated FinFET behavior matches silicon measurements. This blog considers such a flow using Silvaco tools like DeckBuild and the Virtual Wafer Fab (VWF). I will start with an example circuit, a 9-stage ring oscillator in a FinFET technology with 20nm gate lengths:

The next step is to create an annotated layout that identifies active devices (FinFET transistors) and the interconnect between the devices:

The initial IC layout is performed on a 2D representation, so the TCAD tool takes this 2D info as a starting point in doing a 3D process simulation where we get to choose process parameters like:

  • Fin height
  • Equivalent gate oxide thickness
  • Source-drain diffusion times

With just a handful of input parameters the process simulator then automates the device meshing and creates a 3D representation of a p-channel FinFET including the SiGe source-drain stressors for mobility enhancement:

The 3D TCAD simulator Victory then automatically creates the SPICE model files based on the p-channel and n-channel devices’ physical and electrical characteristics. Process engineers can iterate to investigate how different process parameters, strain, and layout effects impact circuit performance (aka Design Technology Co-Optimization).

With a transistor model, you can then visualize the I-V curves for p and n transistors:

When running device simulations the process engineer decides which device physics to include:

  • Standard model set
  • Strain effects
  • Gate tunneling
  • Band to band tunneling
  • User-defined effects

Finally, the BSIM-CMG models are created from these TCAD I-V curves and the underlying 3D physics. In the 1970s we only had proprietary models and SPICE simulators, but today we have model standards like BSIM-CMG, which is approved by the Compact Model Coalition.

You can customize how your model cards are created, if needed, by using the Utmost IV GUI, but for this example a standard script was used without any tweaking. One quality-control step is to check the difference between the original I-V curves and those of the generated SPICE model; look at how closely the two sets of curves match:

When I compare the TCAD curves against the SPICE model curves, the values are consistent. Utmost IV includes a full SPICE simulator that is used to fit the curves this closely, saving time while keeping the desired accuracy. With the transistors well modeled in this tool flow, it’s time to look at the interconnect between devices.
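As a sketch of that quality-control step, the fit error between two sets of I-V curves can be summarized as an RMS percentage of the maximum current. This is a generic illustration, not Utmost IV’s actual metric, and the current values below are invented:

```python
# Hypothetical quality-control check: compare a TCAD-simulated I-V curve
# against the curve reproduced by the fitted SPICE model. The data points
# are illustrative, not real device data.

def rms_percent_error(i_tcad, i_model):
    """RMS error of the model currents, as a percent of the largest
    TCAD current (a common normalization for I-V fits)."""
    n = len(i_tcad)
    i_max = max(abs(i) for i in i_tcad)
    sq = sum((a - b) ** 2 for a, b in zip(i_tcad, i_model))
    return 100.0 * (sq / n) ** 0.5 / i_max

# Illustrative drain currents (amps) at a few gate-voltage points
i_tcad  = [1.0e-9, 5.2e-6, 4.1e-5, 1.18e-4, 2.05e-4]
i_model = [1.1e-9, 5.0e-6, 4.3e-5, 1.15e-4, 2.02e-4]

err = rms_percent_error(i_tcad, i_model)
print(f"RMS fit error: {err:.2f}% of max current")
```

A fit error of a percent or two across the operating range is the kind of agreement the overlaid curve plots convey visually.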

Just as the FinFET devices required 3D modeling, the interconnect between FinFET transistors also requires a 3D field solver, Clever, to extract accurate resistance and capacitance values. The 3D Back End Of Line (BEOL) structure for the nine-stage ring oscillator layout is shown below, where Metal 2 is red and Metal 1 is purple:

This 3D field solver produces the SPICE netlist which has both FinFET transistors and RC interconnect values.
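The geometry trade-offs behind those RC values can be sketched with a first-order parallel-plate estimate. This ignores the fringing and coupling effects that a 3D field solver like Clever captures, and all dimensions and material constants here are assumed for illustration:

```python
# Back-of-envelope interconnect parasitics for a metal line, using the
# parallel-plate approximation. All dimensions are assumed.

EPS0 = 8.854e-12      # F/m, vacuum permittivity
K_LOW_K = 2.7         # assumed low-k dielectric constant
RHO_CU = 1.7e-8       # ohm*m, bulk copper resistivity

def line_rc_per_um(width_nm, thickness_nm, dielectric_nm):
    """Resistance (ohm) and capacitance-to-ground (F) per micron of line."""
    w = width_nm * 1e-9
    t = thickness_nm * 1e-9
    h = dielectric_nm * 1e-9
    length = 1e-6  # 1 um
    r = RHO_CU * length / (w * t)          # R = rho * L / A
    c = EPS0 * K_LOW_K * w * length / h    # C = eps * A / d
    return r, c

r1, c1 = line_rc_per_um(40, 80, 60)    # narrow line
r2, c2 = line_rc_per_um(80, 80, 60)    # doubled width
# Doubling the width halves R but doubles C to ground.
print(f"narrow: R={r1:.2f} ohm/um, C={c1 * 1e18:.1f} aF/um")
```

Even this crude model shows why a circuit can end up capacitance limited rather than resistance limited as lines get wider.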

An engineer can even run a large Design Of Experiments (DOE) with the Virtual Wafer Fab tool and use the built-in statistical features to fit response surface models, relating input variables to predicted outputs.

Running the SPICE netlist in the SmartSpice circuit simulator shows that the 9 stage ring oscillator is functioning properly (left plot), and we know the average power consumption (right plot).
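The post-processing behind those two plots can be sketched in a few lines: oscillation frequency from threshold crossings, and average power from the time-averaged supply current. The waveform below is synthetic, not SmartSpice output:

```python
# Post-processing a simulated ring-oscillator waveform (synthetic data).

def osc_frequency(times, values, threshold):
    """Frequency from rising-edge threshold crossings."""
    crossings = [times[i] for i in range(1, len(values))
                 if values[i - 1] < threshold <= values[i]]
    if len(crossings) < 2:
        return 0.0
    period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return 1.0 / period

def avg_power(times, currents, vdd):
    """Average power: VDD times the time-averaged supply current
    (trapezoidal integration of the current waveform)."""
    total_q = sum(0.5 * (i0 + i1) * (t1 - t0)
                  for t0, t1, i0, i1 in zip(times, times[1:],
                                            currents, currents[1:]))
    return vdd * total_q / (times[-1] - times[0])

# Synthetic 1 GHz square-ish oscillation, sampled every 0.1 ns
times = [i * 1e-10 for i in range(41)]
values = [1.0 if (i % 10) < 5 else 0.0 for i in range(41)]
f = osc_frequency(times, values, 0.5)
print(f"Oscillation frequency: {f / 1e9:.2f} GHz")
```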

This same TCAD to SPICE model file flow was also run on a D-type Flip Flop from the Nangate digital library, here’s the 3D interconnect view:

All possible logic states are simulated with SmartSpice on the Flip Flop netlist using extracted parasitics:

A CAD engineer could run many of these steps in the DeckBuild environment, or, for further automation, use the Virtual Wafer Fab (VWF) tool instead. One example use of VWF is a Design Of Experiments in which a virtual split-lot tree is built and run across many computers, with the final results fitted to a Response Surface Model (RSM):

One insight from the RSM is that the maximum frequency is inversely proportional to the metal line width, so this particular circuit is parasitic-capacitance limited rather than resistance limited. The circuit designer now knows how to make better trade-offs in meeting the requirements.
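A minimal version of that RSM fit is an ordinary least-squares regression of frequency on line width. The split values and frequencies below are made up to mimic a capacitance-limited circuit; VWF’s actual statistical engine is more sophisticated:

```python
# A minimal response-surface fit: quadratic regression of ring-oscillator
# frequency on metal line width. Data points are synthetic.
import numpy as np

width = np.array([40.0, 60.0, 80.0, 100.0, 120.0])   # nm, assumed splits
freq  = np.array([9.8, 7.1, 5.6, 4.6, 3.9])          # GHz, synthetic

# Design matrix for f(w) = b0 + b1*w + b2*w^2
X = np.column_stack([np.ones_like(width), width, width**2])
coef, *_ = np.linalg.lstsq(X, freq, rcond=None)

# A negative linear coefficient means frequency drops as lines widen,
# i.e. line capacitance, not resistance, is the limiter.
print("b0, b1, b2 =", coef)
```

In a real DOE, the same fit is done over many input variables at once, and the signs and magnitudes of the coefficients point to the dominant effects.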

Summary
I’ve laid out the steps used by process engineers and CAD engineers to create SPICE model files:

  • 3D TCAD simulation (Victory)
  • SPICE model parameter extraction (Utmost IV)
  • 3D netlist extraction (Clever)
  • SPICE simulation (SmartSpice)

Silvaco has all four tools needed in this flow, plus plenty of automation (VWF) to save your team precious time when performing a Design Of Experiments. For more details, read the recent article in Silvaco’s Simulation Standard online.



Machine Learning and Gödel

by Bernard Murphy on 02-06-2019 at 7:00 am

Scanning ACM tech news recently, I came across a piece that spoke to my inner nerd; I hope it will appeal to some of you also. The discovery will have no impact on markets or investments or probably anyone outside theories of machine learning. Its appeal is simply in the beauty of connecting a profound but obscure corner of mathematical logic to a hot domain in AI.

There is significant activity in theories of machine learning, to figure out how best to optimize neural nets, to understand what bounds we can put on the accuracy of results and generally to add the predictive power you would expect in any scientifically/mathematically well-grounded discipline. Some of this is fairly close to implementation and some delves into the foundations of machine learning.

In foundational theory, one question is whether it is possible to prove, within some appropriate framework, that an objective is learnable (or not). Identifying cat and dog breeds is simple enough – just throw enough samples at the ML and eventually you’ll cover all the useful variants. But what about identifying patterns in very long strings of numbers or letters? Since we can’t easily cross-check that ML found just those cases it should and no others, and since potentially the sample size could be boundless – think of data streams in a network – finding a theoretical approach to validate learnability can look pretty attractive.

There’s a well-established mathematical framework for this analysis called “probably approximately correct (PAC) learning” in which a learning system reads in samples and must build a generalization function from a class of possible functions to represent the learning. The use of “functions” rather than implementation details is intentional; the goal is to support a very general analysis abstracted from any implementation. The target function is simply a map between an input sample (data set) and the output value, match or no match. There is a method in this theory to characterize how many training samples will be needed for any given problem, which apparently has been widely and productively used in ML applications.
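To make the “how many training samples” question concrete, the classic PAC bound for a finite hypothesis class can be computed directly. This is the standard textbook bound for the realizable (consistent-learner) case, not anything specific to the paper discussed here:

```python
# Classic PAC sample-complexity bound for a finite hypothesis class H:
#     m >= (1/epsilon) * (ln|H| + ln(1/delta))
# samples suffice so that, with probability at least 1 - delta, a learner
# consistent with the samples has true error at most epsilon.
import math

def pac_samples(hypothesis_count, epsilon, delta):
    """Sufficient sample size for a finite class, realizable case."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta))
                     / epsilon)

# e.g. one million hypotheses, 5% error tolerance, 99% confidence
m = pac_samples(10**6, 0.05, 0.01)
print(m)  # 369
```

The undecidability result says that for certain learning problems no such finite bound can even be proved to exist within standard mathematics.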

However – when a theory uses sets (of data) and functions on those sets, it strays onto mathematical logic turf and becomes subject to known limitations in that domain. A group of mathematicians at the Technion-Israel Institute of Technology in Haifa have demonstrated that there exist families of sets together with target learning problems for which learnability can neither be proved nor disproved within the standard axioms of mathematics; learnability is undecidable (or more precisely, independent of the base mathematical system, to distinguish this from computability undecidability).

If you ever read “Gödel, Escher, Bach” or anything else on Gödel, this should sound familiar. He proved, back in 1931, that it is impossible for any consistent mathematical system to prove all truths about the integers. There will always be statements about the integers that cannot be proved either true or false within the system. The same restriction applies to ML, it seems; there are learning problems for which learnability cannot be proved or disproved. More concretely, as I understand it, it is not possible to determine for this class of problem an upper bound on the number of training samples you would need to supply to adequately train the system. (Wait, what about proving this from the halting problem? The authors used Gödelian methods, so that’s what I describe here.)

This is unlikely to affect ML as we know it. Even in mathematics, Gödelian traps are few and far between, and mostly quite specialized, although a few, like Goodstein’s theorem, are quite simple. And of course we know other problems, like the traveling salesman problem, which are theoretically intractable yet are still managed effectively every day in chip physical design. So don’t sell your stock in ML-based enterprises; none of this will perturb their efforts even slightly. But it is pretty, nonetheless.


Open Letter to the FTC Bureau of Consumer Protection

by Matthew Rosenquist on 02-05-2019 at 12:00 pm

In December 2018 the FTC held hearings on Competition and Consumer Protection in the 21st Century. A number of people spoke at the event, and the FTC has graciously opened the discussion to public comments. The Federal Trade Commission has an interest in, and certain responsibilities for, how data security evolves, and can effect changes to it. This is an opportunity for the public to share its thoughts and concerns. I urge everyone to comment and provide your viewpoints and expertise to the FTC committee.

Comments can be submitted electronically no later than March 13, 2019.
Below is my Open Letter to the FTC – Bureau of Consumer Protection, which has been submitted. As always, I am interested in your thoughts. Feel free to comment.

Open Letter to the FTC – Bureau of Consumer Protection
I would like to make clear as an important preface to the following that I do not speak for or on behalf of Intel Corporation. In this regard, all of this material, the opinions and positions expressed and the conclusions drawn are my own and do not reflect the material, positions or conclusions of Intel Corporation.

In response to your hearing on Competition and Consumer Protection in the 21st Century, I respectfully provide the following insights and recommendations:

Protecting consumer data continues to grow in importance, while technology challenges expand the complexity and the risks. The difficulty will sharply increase with emerging innovations able to analyze and aggregate vast amounts of consumer data in new ways.

The challenges are as significant as the benefits that new technology adoption brings. Insightful and calculated strategic action now is necessary to establish a solid foundation to allow technology benefits to prosper, while instituting the frameworks that will protect consumer data in ways that maintain alignment to public expectations as the risks increase.

The technology industry focuses on providing innovative solutions for profit, but it must also build trust in how its products protect consumers. Incidents that victimize users inhibit long-term adoption and business viability. Consumers are placing increasing purchasing weight on security factors.

Trust will be key. Consumers must be confident in the security, privacy, and safety of technology. Brand reputation for cybersecurity will emerge as a competitive differentiator. It is in everyone’s best interest to have a healthy technology market where competition drives toward the optimal balance of risks, costs, and usability to meet consumers’ needs.

Observations, inputs, and recommendations:
Digital technology connects and enriches the people and prosperity of the world. Innovation is hugely beneficial, but it also brings risks. In a symbiotic manner, as technology grows in capability and reach, so do the accompanying risks.

  • Society wants the benefits that technology enables, but with manageable controls to protect their security, privacy, and safety in the face of ever increasing, creative, and motivated threats. In short, consumers want innovation at the lowest price, without additional risk.
  • The risks to consumers will increase with the processing of more personal data. Information is the fuel powering future digital technology and services. Soon, automated and intelligent systems will be the preferred tools to make sense of, and derive new value from, the vast oceans of collected data. It is crucial that data Confidentiality, Integrity, and Availability be protected, and that its usages align to the benefits of the users. These risks must be taken into account now as part of any future control framework.
  • The acceleration of widespread consumer victimization is the driving force for expectation changes and regulatory oversight. Consumers want better protective standards and the ability to act through their own choices.
  • It is difficult, for all parties, to identify and deliver the optimal balance of security. For consumers, the ambiguity and complexity of risks are harder to understand than the tangible benefits of the technologies they desire. This gap has traditionally led to blind acceptance of risks as a tradeoff for benefits. As impacts rise, however, that tolerance will not hold, and consumers will want action and uncomplicated empowerment to choose a better balance for themselves. The concept of ‘trust’ will emerge as an easy way to help consumers with buying decisions and brand loyalty. Trusted technology, vendors, and service providers, which protect users’ security, privacy, and safety, will have a business advantage over less-trustworthy competitors. The value of ‘trust’ can be a healthy and sustainable model for market-reinforcing incentives that continuously align to consumer data protection needs.

In order for a sustainable ecosystem to maintain parity between risks, costs, and usability, an optimized set of incentives, controls, and oversight must be established.

  • Partnership between the public and private sector is crucial. Academia, business, and government must work together in strategic ways to achieve both the adoption of beneficial technology and the mitigation of risks to acceptable levels.
  • Consumers also have an important role to play in taking responsibility for their data, and should be given the visibility, tools, and ability to seek remedies to protect information that could be used to their detriment. Efforts that educate them over time support better trade-off decisions and a more informed culture of consumer data protection.

The goal should be to establish a sustainable environment where good data practices benefit ethical market players and overall trust in technology is elevated through transparency and accountability requirements, empowering users to make informed choices about beneficial trade-offs.

  • Governmental oversight is well suited to regulate and enforce the adherence of businesses to transparency and accountability requirements. This takes minimal effort and primarily targets offenders, ensuring a fair and competitive playing field. Market forces then drive beneficial behaviors across the system and quickly adapt priorities to align with evolving risks.
  • One mistake we must avoid is unnecessarily constraining innovation or attempting prescriptive controls on behalf of consumers whose expectations shift. Inhibiting innovation is counterproductive: technology can contribute more protections for consumers, and constraining it undermines the compounding benefits of iterative advancement and growth. Regulations are not as nimble as the evolving threats and should not attempt to define specific controls to mitigate attacks, but rather establish a framework that encourages the ecosystem to respond rapidly to new risks through healthy competition for consumer loyalty.
  • Fostering market forces, to reward ethical behaviors of businesses, is key in building a sustainable and self-supporting environment for technology prosperity and better consumer data protections.

I believe that ‘trust’ in technology is key to both prosperity and consumers realizing the tremendous benefits of innovative products and services. Technology providers should compete for consumers’ trust by providing secure, private, and safe products. The FTC can play an important role in facilitating healthy competition for consumers’ benefit, while ensuring a fair playing field by targeting organizations that undermine the transparency and accountability consumers need to understand the risks and to compare organizations’ trustworthiness.

Technology organizations, which possess expertise in technology innovation and support ethical competitive practices, should be enlisted to assist the FTC, peer organizations, and academia in establishing a sustainable regulatory structure that maximizes market incentives for protecting consumers.

Respectfully,

Matthew Rosenquist, Cybersecurity Strategist


How Apple Became a Force in the Semiconductor Industry

by Daniel Nenni on 02-05-2019 at 7:00 am

From our book “Mobile Unleashed”, this is the semiconductor history of Apple Computer:

Observed in hindsight after the iPhone, the distant struggles of Apple in 1997 seem strange, almost hard to fathom. Had it not been for the shrewd investment in ARM, Apple may have lacked the cash needed to survive its crisis. However, cash was far from the only ingredient required to conjure up an Apple comeback.
Continue reading “How Apple Became a Force in the Semiconductor Industry”


Getting to 56G Takes The Right Stuff

by Tom Simon on 02-04-2019 at 12:00 pm

During the 1940s, when aerospace engineers were attempting to break the sound barrier for the first time, they confronted a slew of new technical issues that had never been dealt with before, and in some cases never seen before. In subsonic flight, airflow was predictable and well understood. In crossing the sound barrier, they were confronted with new physical effects and issues that had to be resolved. Today there is a similar challenge facing designers of chips and systems that communicate at data rates of 56G and 112G. Circuits operating at millimeter-wave (mmWave) frequencies are not unlike aircraft flying at supersonic speeds. New behaviors, including reduced margins, increased noise, and electromagnetic effects, can dominate the performance of these circuits.

Designing SerDes for 56G and 112G is complex and challenging. In fact, there are only a few companies with working silicon. eSilicon is one of them and made an announcement at DesignCon this year that highlights another huge challenge for these designs – how to test them and measure their performance. What is necessary to test these new high speed circuits is a collaboration of companies that can all contribute essential pieces of the puzzle. eSilicon worked with Wild River Technology to design and build a test board that allows de-embedding up to and beyond 70GHz. Also involved were Keysight’s Advanced Design System and Samtec’s Bulls-Eye Test Point System.

eSilicon’s 56G PAM4 & NRZ DSP-based 7nm SerDes was used to drive the channels in the test system. Without proper de-embedding the actual circuit operation cannot be accurately characterized. PAM4 introduces multi-level signaling, which decreases the noise margin, making characterization and validation of the SerDes more difficult. For eSilicon this marks an important milestone on the way to a fully qualified, production-ready 56G SerDes.
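The PAM4 noise-margin penalty is easy to quantify: splitting the same peak-to-peak swing across four levels leaves each of the three eyes one third of the amplitude. A quick sketch with normalized numbers:

```python
# Why PAM4 shrinks the noise margin relative to NRZ: the same swing is
# split across three eyes instead of one. Illustrative, ideal numbers.
import math

SWING = 1.0                    # normalized peak-to-peak swing

nrz_eye = SWING                # NRZ: one eye spans the full swing
pam4_eye = SWING / 3.0         # PAM4: four levels -> three eyes of 1/3 swing

penalty_db = 20.0 * math.log10(nrz_eye / pam4_eye)
print(f"PAM4 per-eye amplitude: {pam4_eye:.3f} x swing")
print(f"Ideal SNR penalty vs NRZ: {penalty_db:.2f} dB")
```

The ideal penalty works out to about 9.5 dB, which is why characterization margins get so much tighter, even before channel loss and electromagnetic effects are counted.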

Central to this project is the upcoming IEEE P370 standard which addresses the quality of measured S-parameters for PCBs and related interconnect. Wild River has expertise in this area. Their founder and chief technologist, Al Neves, has contributed a great deal of time and effort to this developing standard. Within the standard there are three groups: test fixture, de-embedding, and S-parameter quality.

Prior to the announcement I had a chance to talk to Al Neves. He said they did a number of things to ensure their success. One of them was to create three teams to work on the problem to validate the results. He took inspiration from the Apollo program, where independent teams were given the assignment of calculating the launch trajectories. If they all agreed then the likelihood of success improved, as witnessed by their overall operational record.

Another important factor that Al identified was their tool selection. Projects like this live or die by their analysis tools. They used both ANSYS HFSS and Simbeor THz software, among others. Al said they push their tools pretty hard and often have to consult with the vendors to get fixes and to confirm portions of the methodology.

eSilicon now plans to move to the next step with their 56G SerDes by building a test socket suited for 70GHz. While no test pilots’ lives are at stake, the risks are not insignificant. This stuff is hard, but some companies have the right stuff. The eSilicon website has more information on this announcement and other aspects of their 56G/112G SerDes project.


Jeephack Repercussions

by Roger C. Lanctot on 02-04-2019 at 7:00 am

Automotive cybersecurity is an intractable nightmare with significant though inchoate implications for consumers and existential exposure for auto makers. This reality became painfully clear earlier this month when the U.S. Supreme Court declined to hear Fiat Chrysler Automobiles’ appeal in a class action lawsuit over allegations of vulnerabilities in its Jeeps and other trucks.

The case will go to trial in March.

At the core of the lawsuit, arising from the infamous 2015 so-called “Jeephack” orchestrated by Chris Valasek and Charlie Miller (now General Motors employees), is the issue of FCA’s liability and responsibility for the hack or future hacks. The litigation raises the question of whether truck buyers can sue over hypothetical future injuries without being actual victims of a cybersecurity breach.

Approximately 200,000 FCA vehicle owners are parties to the class action, and the penalty they are asking the court to apply is $2,000 per vehicle, which puts FCA’s exposure in the litigation at potentially $400M. Given that 1.4M vehicles are implicated in the alleged FCA vulnerability, the full exposure is $2.8B.

The consumers have said that, had the defects been disclosed, they never would have purchased the vehicles in the first place, or would have paid less for them. They also said the defects reduce their vehicles’ resale value. A U.S. District judge certified the class action for claims of fraudulent concealment, unjust enrichment and violation of various state and federal consumer protection laws.

The significance of the lawsuit is that it quantifies the potential value of cybersecurity to the consumer at $2,000/car. It also suggests that large numbers of consumers are starting to care enough and understand enough about cybersecurity to take legal action where and when it is found to be wanting.

The action is the latest step in the process of the automotive industry coming to grips with the implications of cybersecurity. Every month brings word of yet another vehicle hack. Usually these “reveals” are accompanied by a description of the remedy being offered by the car maker – often in the form of a software update delivered either wirelessly to the car or during a dealer visit.

The cost of a dealer visit for a software update can be between $200 and $300; the lawsuit’s $2,000-per-vehicle figure is roughly ten times that, elevating cybersecurity to a board-level concern for most auto makers. Members of the Auto-ISAC, which is coordinating the industry’s reaction to the cybersecurity dilemma, acknowledge a steady flow of reported hacking attempts on cars.

But, as an industry, we appear to be whistling past the graveyard.

The simultaneous announcements from car makers of hacks and fixes reflect the reality that most automotive hacks, of late, have been conducted by ethical or white hat hackers. Hackers working as individuals or as employees of organizations such as IOActive, Lab Mouse, or Tencent’s Keen Labs have used automotive hacks to build their reputations and relationships with auto makers.

A wide range of cybersecurity suppliers have also used hacks of auto makers or their suppliers as “door openers.” Multiple automotive cybersecurity companies have hacked cars and components as part of educating auto makers to the scope of the cybersecurity problem.

The choreographed nature of these hack-fix announcements further reflects the limited preparedness of the industry. The pending lawsuit calls into question the adequacy of the existing process of consumer notification regarding vehicle vulnerabilities.

The reality is that cars may never be certifiably secure. In fact, it is known that car companies’ enterprise operations have been hacked via their connected cars, and vice versa. Knowing that is enough to cause some lost sleep among senior auto executives while motivating others to elevate cybersecurity to a board-level responsibility.

The automotive industry has one of the most complex supply chains and is further compromised by its dependency on networks of franchise dealers. Add car sharing networks and autonomous vehicles into the mix and you have a gargantuan challenge.

If FCA can be sued on the grounds of potential vulnerability, successfully or not, the time has arrived to prioritize cybersecurity counter-measures. The challenge for auto makers is that consumers assume their vehicles are secure. The reality is that nearly any car can be hacked given enough time and determination on the part of the hacker.

The saving grace is that, to date, most hacks have required significant time and effort, including disassembly of vehicle systems and reverse engineering of firmware. It is also somewhat reassuring that the potential opportunity derived from the hacking appears to boil down to vehicle or identity theft. Thus far terrorism remains nothing more than a boogeyman.

The onset of near-universal vehicle connectivity, along with collision avoidance, self-parking, and other automated driving features, demands better cybersecurity preparedness in the industry. As to whether a lack of preparedness exposes auto makers to billions of dollars in liability, the courts will decide. One thing is clear: the Supreme Court of the United States chose not to dismiss the lawsuit, so the entire industry will be paying attention this March.

Roger C. Lanctot is Director, Automotive Connected Mobility in the Global Automotive Practice at Strategy Analytics. Roger will be participating in the Future Networked Car Workshop, Feb. 7, at the Geneva Motor Show – https://www.itu.int/en/fnc/2019.

More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.


MENTOR at DVCON 2019

by Daniel Nenni on 02-04-2019 at 7:00 am

The semiconductor conference season has started out strong, and the premier verification gathering is coming up at the end of this month. SemiWiki bloggers, myself included, will be at the conference covering verification so you don’t have to. Verification is consuming more and more of the design cycle, so I expect this event to be well worth our time. Mentor, of course, is the premier verification company, so first let’s see what they are up to:

Mentor, a Siemens Business, will have experts presenting conference papers and posters, as well as hosting a luncheon, panel, workshop, and tutorial at DVCon 2019, February 25-28 in San Jose, CA. You’ll also find experts on the exhibit floor in booth #1005. Special guest Fram Akiki, VP Electronics Industry at Siemens PLM, will deliver Tuesday’s keynote, “Thriving in the Age of Digitalization”.

KEYNOTE
Thriving in the Age of Digitalization
Tuesday, February 26, 1:30pm-2:30pm
Presented by Fram Akiki, Siemens PLM
Semiconductor technology continues its relentless advance with shrinking geometries and increased capacity stressing design and verification of today’s most demanding System on Chip solutions. While these challenges alone are significant, future challenges are rapidly expanding as market demand for Internet of Things, Automotive electronics and autonomous systems, 5G communication, Artificial Intelligence and Machine Learning technologies – and more – show explosive growth. These advances add significant complications and drive exponential growth in design and verification challenges of these solutions. This exponential development not only expands the market; it sparks new competitive pressures as new companies look to challenge the more established businesses that have long defined the industry. These factors explain why it’s important to have an integrated digitalization strategy to succeed in today’s semiconductor market.

SPONSORED LUNCHEON
A Tale of Two Technologies: ASIC & FPGA Functional Verification Trends
Tuesday, February 26, 12:00pm-1:15pm
The IC/ASIC market in the early-to-mid-2000s underwent verification process growing pains to address increased design complexity. Similarly, due to increased complexity, today’s FPGA market is being forced to address its processes. What solutions are working? What separates successful projects from less successful ones? And how do you measure success anyway? At this luncheon we will address these and other questions. Please join Mentor, a Siemens Business, as we explore the latest industry trends and what successful projects are doing to address growing complexity.

PANEL
Deep Learning –– Reshaping the Industry or Holding to the Status Quo?
Wednesday, February 27, 1:30pm-2:30pm
Participating Companies: Advanced Micro Devices, Babblelabs, Arm, Achronix Semiconductor, NVIDIA
Moderator Jean-Marie Brunet from Mentor, a Siemens Business, will take panelists through various scenarios to determine how AI and deep learning will reshape the semiconductor industry. They will look carefully at the chip design verification landscape to assess whether it’s equipped to handle this new and potentially exciting area. Audience members will be encouraged to bring questions and opinions to ensure a lively and thought-provoking panel session.

WORKSHOP
It’s Been 24 Hours – Should I Kill My Formal Run?
Monday, February 25, 3:30-5:00pm
In this workshop we will show the steps you can take to make an informed decision to forge ahead, or cut your losses and regroup. Specifically, we will describe:

  • How to set yourself up for success before you kick off the run by writing assertions, constraints, and cover properties in a “formal friendly” coding style
  • Which types of logic in your DUT are likely to cause trouble (in particular, deep state space creators like counters and RAMs), and how to handle them effectively via non-destructive black boxing or remodeling
  • How to match the run-time multicore configuration and formal engine specifications to the available compute resources
  • How to monitor the formal engines’ “health” in real time once the job(s) start
  • How to confirm the relevance of the logic “pulled in” by your constraints
  • How a secure mobile app can be employed to monitor formal runs when you are away from your workstation
  • How to judge whether a run’s behavior is consistent with the expected alignment between the DUT’s structure and the formal engines’ algorithmic strengths
  • How to leverage all of the above to make the final “continue or start over” decision
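The “deep state space” warning in the list above can be made concrete with some back-of-the-envelope arithmetic (an illustrative sketch of my own, not material from the workshop): every extra state bit can double the worst-case state space a formal engine must reason about, which is why black-boxing or remodeling a wide counter pays off.

```python
# Back-of-the-envelope arithmetic (my own illustration, not workshop
# material): why deep state space creators like counters hurt formal.

def state_space(flop_count: int) -> int:
    """Upper bound on reachable states for `flop_count` state bits."""
    return 2 ** flop_count

# A small control FSM with 6 state bits: 64 states, trivial to explore.
fsm = state_space(6)

# Leave one free-running 32-bit counter in the cone of influence and
# the worst-case product state space grows by a factor of 2**32.
with_counter = state_space(6 + 32)

print(fsm)                   # 64
print(with_counter // fsm)   # 4294967296, i.e. 2**32
```

The real reachable state space is usually smaller than this upper bound, but the exponential trend is what makes counters and RAMs so expensive for formal engines.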

TUTORIAL
Next-Gen System Design and Verification for Transportation
Thursday, February 28, 8:30am-11:30am
In this tutorial, Mentor experts will demonstrate how to use next-generation IC development practices to build and validate smarter, safer ICs. Specifically, the tutorial will look at:

  • How to use High-Level Synthesis (HLS) to accelerate the design of smarter ICs
  • How to use emulation to provide a digital twin validation platform beyond just the IC
  • How to develop functionally safe ICs

CONFERENCE PAPERS
Tuesday, February 26, 3:00pm-4:30pm
5.1 UVM IEEE Shiny Object
5.3 Fun with UVM Sequences – Coding and Debugging

Wednesday, February 27, 10:00am-12:00pm
9.2 A Systematic Take on Addressing Dynamic CDC Verification Challenges
9.3 Using Modal Analysis to Increase Clock Domain Crossing (CDC) Analysis Efficiency and Accuracy
10.1 Unleashing Portable Stimulus Productivity with a PSS Reuse Strategy
10.2 Results Checking Strategies with the Accellera Portable Test & Stimulus Standard

Wednesday, February 27, 3:00pm-4:30pm
11.1 Supply Network Connectivity: An Imperative Part in Low Power Gate-level Verification
12.2 Formal Bug Hunting with “River Fishing” Techniques

EXHIBIT HALL – MENTOR BOOTH #1005
Mentor, a Siemens Business, has pioneered technology that closes the design and verification gap to improve productivity and quality of results. Its technologies include Catapult® High-Level Synthesis for C-level verification and PowerPro® for power analysis; Questa® for simulation, low power, VIP, CDC, and formal, with support for UVM and Portable Stimulus; and Veloce® for hardware emulation and system-of-systems verification, unified with the Visualizer™ debug environment.

Join the Mentor booth theater sessions to watch technology experts discuss a broad range of topics including Portable Stimulus, emulation for AI designs, verification signoff with HLS, functional safety, accelerating SoC power analysis, and much more!

Theater sessions include:

  • Portable Stimulus from IP to SoC – Achieve More Verification with Questa inFact
  • Accelerate SoC Power, Veloce Strato – PowerPro
  • Exploring Veloce DFT and Fault Apps
  • Mentor Safe IC: ISO 26262 & IEC 61508 Functional Safety
  • Adding Determinism to Power in Early RTL Using Metrics
  • Scaling Acceleration Productivity beyond Hardware
  • Verification Signoff of HLS C++/SystemC Designs
  • An Emulation Strategy for AI and ML Designs
  • Advanced UVM Debugging

POSTER SESSIONS
Tuesday, February 26, 10:30am-12:00pm
Mentor experts will be representing the following poster sessions:

  • SystemC FMU for Verification of Advanced Driver Assistance Systems
  • Transaction Recording Anywhere Anytime
  • Multiplier-Adder-Converter Linear Piecewise Approximation for Low Power Graphics Applications
  • Verification of Accelerators in System Context
  • Introducing your Team to an IDE
  • Moving Beyond Assertions: An Innovative Approach to Low-power Checking using UPF Tcl Apps

DVCon is the premier conference for discussion of the functional design and verification of electronic systems. DVCon is sponsored by Accellera Systems Initiative, an independent, not-for-profit organization dedicated to creating design and verification standards required by systems, semiconductor, intellectual property (IP) and electronic design automation (EDA) companies. In response to global interest, in addition to DVCon U.S., Accellera also sponsors events in China, Europe and India. For more information about Accellera, please visit www.accellera.org. For more information about DVCon U.S., please visit www.dvcon.org. Follow DVCon on Facebook at https://www.facebook.com/DvCon or @dvcon_us on Twitter, and use #dvcon_us to comment.


CES 2019 Stormy Weather for IBM

CES 2019 Stormy Weather for IBM
by Roger C. Lanctot on 02-03-2019 at 12:00 pm

Ginni Rometty, chairman, president and CEO of IBM, was kind enough to take on the task of an hour-long keynote at CES 2019 in Las Vegas last week. She used the opportunity to highlight three areas of computational innovation at IBM – deep data, broad AI and quantum systems – with the help of three partners: Delta Air Lines, Walmart and ExxonMobil.

CES 2019: CTA State of the Industry Address and IBM Keynote

Rometty proclaimed IBM’s advances in weather forecasting and the importance of building trust, transparency and security around data analytics. She also talked about the positive societal impacts of these technological advances and how artificial intelligence, while changing everything about how humans work in the future, will be a force for good.

Unfortunately, there were two big gaps in Rometty’s comments. First, Rometty had nothing to say about the automotive industry – an industry where IBM is deeply embedded and invested, and therefore implicated in the nearly 40,000 annual fatalities on the nation’s highways. Second, she had nothing to say about the Weather Channel’s alleged sharing of location data via its smartphone application, which has 45 million monthly active users.

Rometty correctly pointed out that IBM is not thought of as a consumer technology company, yet IBM underpins many if not most consumer interactions with technology on a daily basis. A week before her keynote at CES, the city of Los Angeles attorney filed a lawsuit claiming that the Weather Company, which is owned by IBM, “unfairly manipulated users into turning on location tracking by implying that the information would be used only to localize weather reports,” according to a report in the New York Times.

“Yet the company, which is owned by IBM, also used the data for unrelated commercial purposes, like targeted marketing and analysis for hedge funds, according to the lawsuit,” reported the Times. The lawsuit alleges a violation of California’s Unfair Competition Law.

(Two years before, IBM announced a partnership with General Motors to create a contextual marketing platform in GM vehicles to be called OnStar Go. The cooperation was announced at CES 2017, but by CES 2018 IBM had handed the opportunity over to a company called Xevo which now manages the application as the “GM Marketplace.”)

The report in the New York Times sounds like a sad tale of corporate malfeasance unlikely to end well for all involved. For its part, IBM has asserted its innocence.

For me, though, the bigger issue is the underlying question of weather and automotive safety. In Nate Silver’s “The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t,” the author notes that weather is a rare area where prediction and forecasting has shown steady improvement in accuracy.

There is a lot of promise in weather forecasting, and it may be for that reason that IBM acquired The Weather Company in 2016. In fact, weather has become a contentious issue around enabling automated vehicle operation, with some analysts – including professors at Michigan State – suggesting that autonomous vehicles will never be able to overcome the challenges posed by weather.

Interestingly enough, a growing cadre of private weather researchers including Global Weather Corporation, Foreca, Weather Cloud, Weather Telematics and others are applying their analytic platforms to better understand and predict road surface conditions. These efforts are integrating data sources including roadside and on-vehicle sensors along with atmospheric indicators to enable automated vehicles to advise drivers when it may be necessary for a human operator to intervene.

IBM’s keynote focused on how IBM was integrating new weather data forecasting sources including inputs from airplanes to enhance the accuracy and granularity of global weather forecasts. The focus for IBM is its Global High-Resolution Atmospheric Forecasting System, or GRAF, which is using IBM supercomputers to aggregate data from millions of sources, including the smartphones of Weather Channel app users (on an opt-in basis). GRAF will be rolled out later this year, IBM says.

Clearly, the app is playing a role in the larger weather-related message for IBM. In an age of European Global Data Protection Regulation and California’s new privacy legislation, one would expect IBM to make sure it gets privacy and disclosure right.

Rometty told USA Today: “Your ATM doesn’t work without us, you can’t get an airline ticket without us, you cannot fill your car with gas without us, you (won’t have a) supply at Wal-mart without us. We really are underneath almost all of it.”

IBM wants and deserves credit for changing our lives every day – maybe even saving our lives and making our lives better. It is important, therefore, that IBM take on the most challenging and important tasks facing society and avoid the trivialities of potential legal violations via smartphone apps.

Better weather forecasts are a powerful value proposition – but accountability matters too. At a trade event dominated by personal transportation transformation, IBM was oddly silent on its contribution to resolving automotive safety issues and automated driving, in particular. How can the company solve these big challenges if it can get tripped up by an app-related end user disclosure?

Roger C. Lanctot is Director, Automotive Connected Mobility in the Global Automotive Practice at Strategy Analytics. Roger will be participating in the Future Networked Car Workshop, Feb. 7, at the Geneva Motor Show – https://www.itu.int/en/fnc/2019.

More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.


Open-Silicon SiFive and Customizable Configurable IP Subsystems

Open-Silicon SiFive and Customizable Configurable IP Subsystems
by Daniel Nenni on 02-01-2019 at 12:00 pm

After 8 SemiWiki years, 4,386 published blogs, and more than 25 million blog views, I can tell you that IP is by far the most-read semiconductor topic, and that trend continues. A correlated trend (from IP Nest): semiconductor IP revenue as a share of the semiconductor market (minus memory) more than doubled from 2006 to 2016 and is set to double again by 2026.

If you really want to know how important IP is, ask an ASIC expert like Open-Silicon, which specializes in chip differentiation using customizable and configurable IP subsystems. Here is a quick look at the proprietary and third-party IP that is an integral part of the customizable system and physical design solutions offered by Open-Silicon:

By using an HBM2 IP Subsystem (controller + PHY + I/O), with silicon validation completed in TSMC’s FinFET and CoWoS technologies, customers can minimize the integration risk. Open-Silicon can also do a complete ASIC for you if time-to-market is a challenge.

Hybrid Memory Cube (HMC) is an innovative memory architecture in terms of performance, bandwidth, power efficiency, and reliability: 15x the performance of a DDR3 module, 70% less energy per bit than DDR3 DRAMs, and 90% less space than today’s RDIMMs.

Open-Silicon, a founding member of the Interlaken Alliance formed in 2007, launched the 8th generation of its Interlaken IP core, supporting up to 1.2 Tbps bandwidth. This high-speed chip-to-chip interface IP features an architecture that is fully flexible, configurable and scalable. Open-Silicon provides a complete Networking IP Subsystem (MAC IP + FlexE IP + PCS IP + MCMR FEC IP + Interlaken IP) for ease of integration and as a one-stop solution for customers designing ASICs in TSMC FinFET technologies.

Investigating, evaluating, and integrating IP is rapidly becoming the biggest challenge in the SoC/ASIC industry. The success of a chip depends on the careful selection of reliable IP. Open-Silicon has a dedicated IP team that works with a wide variety of IP providers, continually qualifying and ranking IP and updating a portfolio of recommended IP. The goal is to help you make informed IP decisions that differentiate your product, assure IP quality and reusability, and deliver first-time working silicon.

As you may have read, Open-Silicon is now a SiFive company. The acquisition was a disruptive move that greatly accelerated SiFive’s mission of becoming a fabless custom SoC powerhouse by leveraging Open-Silicon’s large customer base and ASIC implementation expertise.

The SiFive Tech Symposiums start this month in North America where you can spend time learning about the latest RISC-V offerings differentiated by customizable and configurable IP subsystems.

The RISC-V ISA has spawned a worldwide revolution in the semiconductor ecosystem by democratizing access to custom silicon with robust design platforms and custom accelerators. SiFive is fueling the momentum with myriad hardware and software tools for new and innovative RISC-V based solutions for IoT, AI, networking and storage applications. Attendance is free and includes lunch and plenty of time to meet and network with the speakers.

Mohit Gupta, SiFive Vice president of SoC IP, will be talking about all the offerings described above and more at the SiFive Tech Symposium at the Computer History Museum in Mountain View.

About Open-Silicon
Open-Silicon is a system-optimized ASIC solution provider that innovates at every stage of design to deliver fully tested IP, silicon and platforms. To learn more, please visit www.open-silicon.com

About SiFive
SiFive is the leading provider of market-ready processor core IP based on the RISC-V instruction set architecture. www.sifive.com

Also Read:

Ethernet Enhancements Enable Efficiencies

RISC-V End to End Solutions for HPC and Networking

A 2021 Summary of OpenFive


How to be Smart About DFT for AI Chips

How to be Smart About DFT for AI Chips
by Tom Simon on 01-31-2019 at 12:00 pm

We have entered the age of AI specific processors, where specialized silicon is being produced to tackle the compute needs of AI. Whether they use GPUs, embedded programmable logic or specialized CPUs, many AI chips are based on parallel processing. This makes sense because of the parallel nature of AI computing. As a result, in silicon for these applications we are seeing large numbers of replicated processing elements and distributed memories. These large AI designs fortunately lend themselves to advanced DFT solutions that can take advantage of their architectural characteristics.

Mentor has produced a white paper, titled “AI Chip DFT Techniques for Aggressive Time to Market”, that talks about how the properties of many large AI chips can be leveraged to save DFT, ATPG and test time. The first step they recommend is to take advantage of AI chip regularity. They propose doing test insertion and pattern generation/verification at the core level. Hierarchical DFT, like that found in Mentor’s Tessent, can use hierarchically nested cores that are already signed off for DFT to run DFT on the entire design from the top level. Higher level blocks can include blocks or cores that have already had DFT sign-off. These in turn can be signed off and used repeatedly within a chip.

Tessent’s IJTAG allows plug-and-play core replication and integration. It also offers automation for chip-level DFT configuration and management. This flexibility allows for some interesting optimizations. One such case is when there is a large number of very small cores. Mentor suggests hierarchically grouping cores for test to reduce overhead and save time – a happy middle ground between too-granular and completely flat ATPG.

Another optimization that their approach allows is channel broadcasting. This allows the same test data to be used for identical groups of cores. It reduces test time and the number of pins required. Tessent is smart enough to help optimize the configuration for channel broadcasting.
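To see why broadcasting the same patterns to identical cores pays off, here is a rough, hypothetical model of scan-load cycles for a bank of identical cores (the pattern counts and chain lengths are my own illustrative numbers, not Tessent’s actual bookkeeping):

```python
# Hedged, illustrative model of scan-channel broadcasting savings
# (hypothetical pattern counts and chain lengths, not Tessent's
# actual bookkeeping).

def scan_test_time(cores: int, patterns: int, chain_len: int,
                   broadcast: bool) -> int:
    """Approximate scan-load cycles for a bank of identical cores.

    With broadcasting, one pattern stream feeds every identical core
    simultaneously; without it, each core is loaded in turn.
    """
    loads = patterns * chain_len
    return loads if broadcast else loads * cores

serial = scan_test_time(cores=16, patterns=1000, chain_len=500,
                        broadcast=False)
bcast = scan_test_time(cores=16, patterns=1000, chain_len=500,
                       broadcast=True)
print(serial // bcast)  # 16: test data shrinks by the core count
```

The simplification is that identical cores accept identical stimulus, so in this idealized model the data volume and load time shrink by roughly the number of cores sharing the broadcast channel.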

In addition to repeating logic elements, AI chips have a large number of smaller distributed memory elements. If each memory core had its own BIST controller this would require a large area overhead. With Tessent it is possible for one BIST controller to be shared among multiple memory cores. To go along with this they offer a shared-bus interface to optimize the connections to the BIST controller.
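The area argument for sharing a BIST controller can likewise be sketched with assumed unit costs (the controller and bus-tap areas below are invented for illustration; real figures depend on the library, memory sizes, and controller features):

```python
# Rough area comparison for dedicated vs. shared memory BIST
# (unit areas below are invented for illustration only; real costs
# depend on the library, memory sizes, and controller features).

CONTROLLER_AREA = 5000.0  # assumed gate-equivalents per BIST controller
BUS_TAP_AREA = 200.0      # assumed per-memory shared-bus interface cost

def bist_area(memories: int, shared: bool) -> float:
    """Total BIST overhead for a chip with `memories` memory cores."""
    if shared:
        # One controller plus a lightweight bus tap at each memory.
        return CONTROLLER_AREA + memories * BUS_TAP_AREA
    # A dedicated controller at every memory.
    return memories * CONTROLLER_AREA

dedicated = bist_area(64, shared=False)
shared = bist_area(64, shared=True)
print(round(dedicated / shared, 1))  # roughly 18x less overhead shared
```

The trade-off, of course, is test time: a single shared controller typically tests the memories in sequence or in limited groups rather than all in parallel.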

Another topic the white paper covers is Mentor’s move to RTL-level test insertion. With this approach, test verification can run before synthesis. RTL verification runs much faster than gate-level verification, and debug is easier. Moving test debug and verification to the RTL level means that synthesis is not required each time a test fix is made. Mentor has also implemented a number of testability checks at RTL that can save downstream iterations during ATPG.

While AI is making the lives of end users easier, it is certainly creating a demand for increasingly powerful silicon for processing. Despite this growing complexity of silicon, there is a bright spot in the test arena. Mentor clearly has been investing in their DFT product line. The good news is that many of the characteristics of these AI chips create opportunities for improving the efficiency of the design process and the resulting design, particularly in the area of test. If you want to delve into the specifics of how Mentor proposes designers take advantage of DFT optimizations for AI chips, the white paper is available on their website.