
Functional Safety in Delhi Traffic
by Bernard Murphy on 04-24-2018 at 7:00 am

While at DVCon I talked to Apurva Kalia (VP R&D in the System and Verification group at Cadence). He introduced me to the ultimate benchmark test for self-driving – an autonomous 3-wheeler driving in Delhi traffic. If you’ve never visited India, the traffic there is quite an experience. Vehicles of every type pack the roads and each driver advances by whatever means they can, including driving against oncoming traffic if necessary. 3-wheelers, the small green and yellow vehicles in the picture below, function very effectively as taxis; they’re small so can zip in and out of spaces that cars and trucks couldn’t attempt. The whole thing resembles a sort of directed Brownian motion, seemingly random but with forward progress.

India city traffic

Making autonomy work in Western traffic flows seems trivially simple compared to this. But that’s exactly what an IIT research group in Delhi has been working on. Apurva said he saw a live example, in Delhi traffic, on a recent visit. We should maybe watch these folks more closely than the Googles and their kind.

After sharing Delhi traffic experiences, Apurva and I mostly talked about functional safety, a topic of great interest to me since I just bought a new car loaded with most of the ADAS gizmos I’ve heard of, including even some (minimal) autonomy. To start, he noted that safety isn’t new – we’ve had it for years in defense, space and medical applications. What is new is economical safety (or, if you prefer, safety at scale). If you add $4k of electronics to a $20k car, you have a $24k total cost. If you duplicate all of that electronics for safety, you now have a $28k total cost, less attractive for a lot of buyers. The trick to safety in cars is to add just enough to meet important safety goals without adding cost for redundancy in non-critical features.

A sample FMEDA table showing potential failure modes with expected failure rates, mitigating safety mechanisms and expected diagnostic coverage

For Apurva, ensuring this boils down to two parts:

  • Analysis to build a safety claim for the design
  • Verification to justify that the safety claim is supportable

In my understanding, the first step is analysis plus failure mitigation. You start with failure mode effect and diagnostic analysis (FMEDA), decomposing the design hierarchically and entering (as in the table above) expected failure rates (FIT), planned safety mechanisms to mitigate (dual-core lockstep in this example) and the diagnostic coverage (DC) of failures expected from that mechanism. I don’t know how much automation can be found today in support of building these tables; I would guess that this is currently largely a manual and judgement-based task, though no doubt supported by a lot of spreadsheets and Visual Basic. Out of this exercise comes the overall safety-scoring/analysis Apurva refers to in his first step.
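
For a feel of the arithmetic behind such a table, here is a minimal roll-up sketch. The block names, FIT rates and coverage figures are invented for illustration and are not taken from any real FMEDA.

# Minimal illustration of the arithmetic behind an FMEDA roll-up.
# Failure rates (FIT = failures per 10^9 device-hours) and diagnostic
# coverage (DC) values below are made up for illustration only.

fmeda_rows = [
    # (block,        FIT,  safety mechanism,        DC)
    ("cpu_core",     120,  "dual-core lockstep",    0.99),
    ("bus_fabric",    40,  "parity on transfers",   0.90),
    ("sram_ctrl",     80,  "ECC",                   0.99),
    ("gpio_block",    15,  "none (non-critical)",   0.00),
]

total_fit = sum(fit for _, fit, _, _ in fmeda_rows)
# Residual FIT: failures the safety mechanism is *not* expected to catch.
residual_fit = sum(fit * (1.0 - dc) for _, fit, _, dc in fmeda_rows)
overall_dc = 1.0 - residual_fit / total_fit

for block, fit, mech, dc in fmeda_rows:
    print(f"{block:12s} FIT={fit:5.1f}  DC={dc:4.0%}  residual={fit*(1-dc):6.2f}  ({mech})")
print(f"Total FIT    : {total_fit:.1f}")
print(f"Residual FIT : {residual_fit:.2f}")
print(f"Overall DC   : {overall_dc:.1%}")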

Functional safety mechanisms are by now quite well-known. Among these are the dual-core lock-step (DCLS) methods I mentioned above – run two CPUs in lock-step to detect potential discrepancies. Triple modular redundancy is another common technique – triplicate logic with a voting mechanism to pick the majority vote, or even just duplicate (as in DCLS) as a method to detect and warn of errors. Logic BIST is becoming very popular to test random logic, as is ECC for checking around memories. Also in support of duplication/triplication methods, it is becoming important to floorplan carefully. Systematic or transient faults (manufacturing defects or neutron-induced ionization for example) can equally impact adjacent replicates, defeating the diagnostic objective; mitigation requires ensuring these are reasonably separated in the floorplan.
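
As a toy illustration of the difference between correction and detection, the sketch below models the bit-level behavior of a TMR majority vote and a DCLS comparison on integer words. It is a conceptual model only, not RTL, and the values are arbitrary.

# Sketch of the voting/comparison logic behind two common safety mechanisms.
# The 32-bit words stand in for replicated outputs.

def tmr_vote(a: int, b: int, c: int) -> int:
    """Triple modular redundancy: per-bit majority of three replicas."""
    return (a & b) | (b & c) | (a & c)

def dcls_mismatch(a: int, b: int) -> bool:
    """Dual-core lockstep: duplication can only detect, not correct."""
    return a != b

good = 0xDEADBEEF
flipped = good ^ (1 << 7)                    # single-bit upset in one replica
assert tmr_vote(good, good, flipped) == good  # TMR masks the error
assert dcls_mismatch(good, flipped)           # DCLS flags it for a safety response
print("TMR corrected the upset; DCLS detected it")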

Cadence functional safety analysis flow

The verification part of Apurva’s objectives is where you most commonly think of design tools, particularly the fault simulation-centric aspect. Cadence has been in the fault-sim business for a long, long time (as I remember, Gateway – the originator of Verilog – started in fault sim before they took off in logic sim). Fault sim is used in safety verification to determine if injected faults, corresponding to systematic or transient errors, are corrected or detected by mitigation logic. Therefore the goal of a safety verification flow, such as the functional safety flow from Cadence, is to inject faults in critical areas (carefully selected/filtered to minimize wasted effort), run fault simulation to report those errors, then roll up results to report diagnostic coverage in whatever mechanism is suitable for the Tier-1/OEM consumers of the device.
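
To make the roll-up step concrete, here is a hypothetical summary of a fault-injection campaign; the fault classes and counts are invented and do not reflect the Cadence flow's actual reporting format.

# Hypothetical roll-up of a fault-injection campaign: each injected fault
# ends up in one of a few classes, and diagnostic coverage is the fraction
# of dangerous faults that the safety mechanism detected or corrected.
from collections import Counter

# Fault classifications below are illustrative, not a specific tool's output.
campaign = Counter({
    "detected_by_mechanism": 930,   # flagged by lockstep/ECC/etc.
    "corrected":              40,   # masked (e.g. by TMR) before any effect
    "no_functional_effect":  210,   # fault never propagated to an output
    "undetected_dangerous":   20,   # reached an output with no alarm
})

dangerous = (campaign["detected_by_mechanism"] + campaign["corrected"]
             + campaign["undetected_dangerous"])
dc = (campaign["detected_by_mechanism"] + campaign["corrected"]) / dangerous
print(f"Injected faults     : {sum(campaign.values())}")
print(f"Diagnostic coverage : {dc:.1%}")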

So when you next step into a 3-wheeler (or indeed any other vehicle) in Delhi, remember the importance of safety verification. In that very challenging traffic, autonomy will ultimately make your journey through the city less harrowing and very probably faster as more vehicles become autonomous (thanks to laminar versus turbulent traffic flow). But that can only happen if the autonomy is functionally safe. Making that happen in Delhi traffic will likely set a new benchmark for ultimate safety. You can learn more about the Cadence functional safety solution HERE.


Mentor’s Approach to Automotive Electrical Design
by Daniel Payne on 04-23-2018 at 12:00 pm

Most of us continue to drive cars and for me there’s always been a fascination with all things electrical that go into the actual design of a car. I’ve done typical maintenance tasks on my cars over the years like changing the battery, installing a new radio, replacing bulbs, changing a fuse, swapping out dashboard lights, and even putting in new power window assemblies. The automotive engineers who do the actual design work face a lot of unique challenges because of the rapid electrification of modern cars and the demand to provide passengers with GPS, WiFi and Bluetooth connectivity. Would you believe that we could soon see 50% of our car cost coming from the vehicle electrical systems?


Source: IHS Markit

In the consumer electronics market we expect new smart phones every year or so, which in turn places demands on what we expect our automobiles to provide us. It’s clear that auto makers have to become more nimble in order to address the growing trends of connectivity, autonomous vehicles and electrification. One promising path is to combine new software tools and services to meet these challenges.

On the autonomous vehicle front, experts at Toyota say it will take 14.2 billion miles of testing to reach SAE Level 5 safety standards. That would take far too long to do physically, so it makes sense to add some virtual verification to the mix as well.


Source: Toyota

I spoke by phone with two automotive experts at Mentor to get an update on the design tools and services that are helping automotive systems designers: Scott Majdecki, who is part of Capital support and consulting services, and Andrew Macleod, Director of automotive marketing. The Capital tool is built for both electrical and wire-harness design.

Q: What challenges do you see for automotive systems designs?

Scott Majdecki – Customers bought tools and then struggled to get new designs into production because they needed help piloting the use, so we came up with a structured deployment called PROVEN: a cycle that evaluates the current design approach through a pilot project. For wire harnesses there is an issue with legacy data. The Capital tool has online, on-demand training, or instructor-led training for the basics, and then continued on-demand learning as needed. Vehicle wiring architecture is the modern approach versus manual wiring.

Q: What automotive design trends do you see?

Andy – There are so many new issues, like: connectivity, sensors, autonomous driving, multiple voltages in the vehicle, complexity, zero defects. Virtual design trends are emerging where we do high-level design first, simulate the mechanical, then simulate the electrical. It’s all about Time To Revenue.

Q: Where is virtual design at for automotive?

Scott – Mentor has a virtual testing environment for vehicles that includes pedestrians, weather, sensors and verification platforms.

Services
Companies can get to market more quickly by adopting Mentor Automotive Services for many things, such as: pilot, production rollout, legacy data migration, PLM integration, support and training services. Here’s a quick look at the services delivery model:

Simulation and Modeling
Platform design includes many parts like sensors, batteries, motor drivers, power electronics, AUTOSAR software and ECUs. Here are six types of simulation that can be done by modeling sensors as part of a system:

The Big Picture
There are four mega-trends in automotive systems design today, and those companies that excel at meeting these trends will grow in market share despite turbulent market conditions:

  • Connectivity – connecting car, driver and the external world
  • Autonomous – self-driving and driver-assisted systems
  • Electrification – EV (Electric Vehicles), hybrids and supporting technology
  • Architecture – both EE architecture and system implementation

Adding new software tools along with automotive-specific design services will certainly decrease the inherent risks for new designs, shorten design cycles, and help ensure that demanding safety standards are met. The team at Mentor has been serving the automotive market for years in both tools and services, so it makes sense to look into their approach to see how it could benefit your project teams.

White Paper
To read the full six-page white paper, follow this link and fill out a brief request form.



Will the Rise of Digital Media Forgery be the End of Trust?
by Matthew Rosenquist on 04-22-2018 at 12:00 pm

Technology is reaching the point where it can create fake video and audio content nearly in real time from handheld devices like smartphones.

In the near future, you will be able to Facetime someone and have a real-time conversation with more than just bunny ears or some other cartoon overlay. Imagine appearing as a person of your choosing, living or deceased, with a forged voice and facial expressions to match.

It could be very entertaining for innocent use. I can only imagine the number of fake Elvis calls. Conversely, it could be a devastating tool for those with malicious intent: a call to accounting from the ‘boss’ demanding a check be immediately cut to an overseas company as part of a CEO fraud, or your manager calling to get your password for access to sensitive files needed for an urgent customer meeting.

Will digital media forgery undermine trust, amplify ‘fake’ news, be leveraged for massive fraud, and shake the pillars of digital evidence? Will there be a trust crisis?

Tools are being developed to identify fakes, but like all cybersecurity endeavors it is a constant race between the forgers who strive for realism and those attempting to detect counterfeits. Giorgio Patrini has some interesting thoughts on the matter in his blog Commoditization of AI, digital forgery and the end of trust: how we can fix it. I recommend reading it.

Although I don’t share the same concerns as the author, I do think we will see two advancements which will lead to a shift in expectations.

Technical Advancements
1. The fidelity of fake voice and video will increase to the point that humans will not be able to discern the difference between authentic and fake. We are getting much closer. The algorithms are making leaps forward at an accelerating pace to forge the interactive identity of someone else.


2. The ability to create such fakes in real-time will allow complete interaction between a masquerading attacker and the victims. If holding a conversation becomes possible across broadly available devices, like smartphones, then we would have an effective tool on a massive scale for potential misuse.

Three-dimensional facial models can be created with just a few pictures of someone. Advanced technologies are overlaying digital faces, replacing those of the people in videos. These clips, dubbed “Deep Fakes”, are cropping up, face-swapping famous people into less-than-flattering videos. Recent research is showing how AI systems can mimic voices with just a small amount of sampling. Facial expressions can be aligned with audio, combining for a more seamless experience. Quality can be superb for major motion pictures, where this is painstakingly accomplished in post-production. But what if this can be done on everyone’s smartphone at a quality sufficient to fool victims?

Expectations Shift
Continuation along this trajectory of these two technical capabilities will result in a loss of confidence for voice/video conversations. As people learn not to trust what they see and hear, they will require other means of assurance. This is a natural response and a good adaptation. In situations where it is truly important to validate who you are conversing with, it will require additional authentication steps. Various options will span across technical, process, behavioral, or a combination thereof to provide multiple factors of verification, similar to how account logins can use 2-factor authentication.

As those methods become commonplace and a barrier to attackers, then systems and techniques will be developed to undermine those controls as well. The race never ends. Successful attacks lead to a loss in confidence, which results in a response to institute more controls to restore trust and the game begins anew.

Trust is always in jeopardy, both in the real and digital worlds. Finding ways to verify and authenticate people is part of the expected reaction to situations where confidence is undermined. Impersonation has been around since before antiquity. Society will evolve to these new digital trust challenges with better tools and processes, but the question remains: how fast?

Interested in more? Follow me on your favorite social sites for insights and what is going on in cybersecurity: LinkedIn, Twitter (@Matt_Rosenquist), YouTube, InfoSecurity Strategy blog, Medium, and Steemit


Maybe it is time to #DeleteWhatsApp
by Vivek Wadhwa on 04-22-2018 at 7:00 am

WhatsApp differentiates itself from Facebook by touting its privacy and end-to-end encryption. “Some of your most personal moments are shared with WhatsApp”, it says, so “your messages, photos, videos, voice messages, documents, and calls are secured from falling into the wrong hands”. A WhatsApp founder expressed outrage at Facebook’s policies by tweeting “It is time. #deletefacebook”.

But WhatsApp may need to look into the mirror. Its members may not be aware that when using WhatsApp’s “group chat” feature, they are also susceptible to data harvesting and profiling. What’s worse is that WhatsApp makes available mobile-phone numbers, which can be used to accurately identify and locate group members.

WhatsApp groups are designed to enable discussions between family and friends. Businesses also use them to provide information and support. The originators of groups can add contacts from their phones or create links enabling anyone to opt-in. These groups, which can be found through web searches, discuss topics as diverse as agriculture, politics, pornography, sports, and technology.

Researchers in Europe demonstrated that any tech-savvy person can obtain treasure troves of data from WhatsApp groups by using nothing more than an old Samsung smartphone running scripts and off-the-shelf applications.

Kiran Garimella, of École Polytechnique Fédérale de Lausanne, in Switzerland sent me a draft of a paper he coauthored with Gareth Tyson, of Queen Mary University, U.K. titled “WhatsApp, doc? A first look at WhatsApp public group data”. It details how they were able to obtain data from nearly half a million messages exchanged between 45,754 WhatsApp users in 178 public groups over a six-month period, including their mobile numbers and the images, videos, and web links that they had shared. The groups had titles such as “funny”, “love vs. life”, “XXX”, “nude”, and “box office movies”, as well as the names of political parties and sports teams.

The researchers obtained lists of public WhatsApp groups through web searches and used a browser automation tool to join a few of the roughly 2000 groups they found—a process requiring little human intervention and easily applicable to a larger set of groups. Their smartphone began to receive large streams of messages, which WhatsApp stored in a local database. The data are encrypted, but the cipher key is stored inside the RAM of the mobile device itself. This allowed the researchers to decrypt the data using a technique developed by Indian researchers L.P. Gudipaty and K.Y. Jhala. It was no harder than using a key hidden atop a door to enter a home.

The researchers’ goal was to determine how WhatsApp could be used for social-science research. They plan to make their dataset and tools publicly available after they anonymize the data. Their intentions are good, but their paper illustrates how easily marketers, hackers, and governments can take advantage of the WhatsApp platform.

Indeed, The New York Times recently published a story on the Chinese Government’s detention of human-rights activist Zhang Guanghong after monitoring a WhatsApp group of Guanghong’s friends, with whom he had shared an article that criticized China’s president. The Times speculated that the Government had hacked his phone or had a spy in his group chat; but gathering such information is easy for anyone with a group hyperlink.

This is not the only fly in the WhatsApp ointment that this year has revealed. Wired reported that researchers from Ruhr-University Bochum, in Germany, found a series of flaws in encrypted messaging applications that enable anyone who controls a WhatsApp server to “effortlessly insert new people into an otherwise private group, even without the permission of the administrator who ostensibly controls access to that conversation”. Gaining access to a computer server requires sophisticated hacking skills or the type of access that only governments can gain. But as Wired wrote, “the premise of so-called end-to-end encryption has always been that even a compromised server shouldn’t expose secrets”.

Researcher Paul Rösler reportedly said: “The confidentiality of the group is broken as soon as the uninvited member can obtain all the new messages and read them… If I hear there’s end-to-end encryption for both groups and two-party communications, that means adding of new members should be protected against. And if not, the value of encryption is very little”.

WhatsApp also announced in 2016 that it would be sharing user data, including phone numbers, with Facebook. In an exchange of emails, the company told me that it does not track location within a country and does not share contacts or messages, which are encrypted, with Facebook. But it did confirm that it is sharing users’ phone numbers, device identifiers, operating-system information, control choices, and usage information with the “Facebook family of companies”. That leaves open the question as to whether Facebook could then track those users in greater detail even if WhatsApp doesn’t.

Facebook and its “family of companies” are being much too casual about privacy, as we have seen from the Cambridge Analytica revelations, harming freedom and democracy. It is time to hold them all accountable for the bad design of their products and the massive breaches of our privacy that they enable.

For more, you will want to read my forthcoming book, Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain–and How to Fight Back


EUV Continues Roll Out With Lumpy Quarters Ahead
by Robert Maire on 04-20-2018 at 12:00 pm

ASML put up good results with revenues of €2.285B versus street of €2.22B and EPS of €1.26 versus street of €1.17. Guide is for €2.55B versus street of €2.46B but EPS of €1.16 versus street EPS of €1.35 on lower gross margins, slipping from 48% to 43%.

A couple of EUV systems have slipped out. This is not surprising given the hugely complex systems and surrounding environment. We would expect to see more slips going forward as customer acceptance and ability to ship may be impacted by many unpredictable variables.

The obvious issue is that the very high cost of EUV systems can make for lumpy quarters if a couple of systems shift around as we are seeing. We think this shifting is not likely to get better and could potentially get worse as customers may delay shipments or installs as they work the bugs out of their EUV process flow.

The gross margin may be more of a concern as investors could become worried that costs are getting away from the company as tools slip or problems arise that add to costs. Part of the issue can simply be product mix of lower margin EUV versus more DUV as experienced in Q1 but we think we need to watch for increasing costs associated with problems.

ASML is not the second domino after Lam
Investors have cause for concern after Lam’s meltdown yesterday and today now followed by ASML’s less than stellar guide as well. While both stock problems are due to guidance issues they are not tied to the same root cause.

In Lam’s case, the concern is about a roll over in memory spending and in ASML’s case it is more related to the lumpy roll out of EUV and associated margin variation and revenue instability. The only commonality is that business is at levels that are so high that it’s difficult for things to get much better from here, which means there is one direction of travel.

Everyone’s in line for EUV & lining up for high NA
ASML said they will no longer talk about backlog. The company has a very solid backlog as no one wants to be shut out of EUV, or left behind, when the industry makes the transition. However, by the same token, no one wants to have a bunch of unproductive, very expensive EUV tools sitting around while bugs are being worked out of the process.

This is a very difficult balancing act and will contribute to pushes and pulls in EUV shipments. Though ASML surely wants certainty in ship dates and revenue, customers may want to game the system a bit more. With the announcement of 3 orders for high NA systems we can be sure that means Intel, Samsung & TSMC are the first 3 in line.

Big memory exposure
Memory sales jumped to 74% of business in March, up from 53%, which paralleled Lam’s jump to 84% memory business. This obviously helped out with DUV’s higher margins. As we mentioned yesterday, this also re-focuses investor attention on less memory-centric tool makers such as KLAC, NANO or NVMI, which could experience a dip much as Lam or ASML did; but that may not be the case, as yield management tools sell on a different cycle from wafer fab tools such as ASML’s and Lam’s.

Multiple E Beam
ASML also announced a significant step in its E Beam program with the successful test of a 9-beam parallel system. Though this is a very long way off from a highly parallel system of many more beams, it is nonetheless a positive step that bears watching.

KLA missed E Beam when it canceled its program; the engineer involved went on to found the E Beam effort at Hermes, which was highly successful and is now inside ASML on its way to multi-beam. KLA will have to double down on the actinic inspection it has revived, as ASML is not slowing down and continues to push all things EUV as hard as possible.

The stocks
Both ASML and LRCX are fighting to stay above their $200 price points. We think this is a critical support level that both stocks have continued to dance around. We think they will continue to be weak as there aren’t any catalysts to go out and buy the shares. We wouldn’t abandon ship and dump our position; we simply have no strong incentive to put more money to work in these names as we are fighting a negative tape and negative sentiment for the group. The secular story for both ASML and LRCX remains strong but the near term news will likely stop forward progress.


The Intention View: Disruptive Innovation for Analog Design
by Daniel Nenni on 04-20-2018 at 7:00 am

Intento Design builds responsive analog EDA. The ID-Xplore tool is used for analog design acceleration and technology porting at the functional level, helping companies move analog IP quickly between technology nodes and across business units. The Intention view is a simple, elegant, and powerful concept that gives the speed of digital design to the analog designer for the first time. Intento Design receives a lot of questions on how the Intention view was conceived and how it works.

The following is a Q&A discussion with Dr. Caitlin Brandon of Intento Design:

How did Intento Design create the Intention view? Well, Analog EDA has proven very difficult for many reasons but mostly because the approach used by analog designers was not considered. From their training, analog designers are much more pen-and-paper than digital designers, and these processes are counterintuitive to automate.

For instance, the first training an analog designer receives is usually the DC bias of an amplifier. The DC bias and sizing are calculated with pen-and-paper, transistor size data is entered in the schematic by hand and an AC performance simulation is started with a SPICE simulator. Performance results from the hand-calculated DC bias calculation are now seen for the first time. Clearly, using a pen-and-paper approach, getting to the performance results is slow.
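
As a concrete (and deliberately simplified) example of that pen-and-paper step, the sketch below sizes a single transistor from a chosen bias current and overdrive voltage using the long-channel square-law model; the process parameters are illustrative placeholders, not a real PDK.

# Back-of-the-envelope version of the pen-and-paper DC bias step described
# above, using the long-channel square-law MOSFET model. Parameter values
# and targets are illustrative only.

UN_COX = 200e-6     # electron mobility * Cox, A/V^2 (assumed)
L      = 0.5e-6     # chosen channel length, m
ID     = 100e-6     # target drain current, A
VOV    = 0.20       # chosen overdrive voltage (Vgs - Vth), V

# Square law: Id = 0.5 * un*Cox * (W/L) * Vov^2  ->  solve for W
W = 2 * ID * L / (UN_COX * VOV**2)
gm = 2 * ID / VOV                      # transconductance at this bias
print(f"W  = {W*1e6:.2f} um (L = {L*1e6:.2f} um)")
print(f"gm = {gm*1e3:.3f} mS")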

But, here’s the interesting thing, before starting an AC analysis, the SPICE simulator first calculates the DC operating point using a DC solver. Likewise, a TRAN analysis also requires the starting DC operating point calculation, with energy stored in components. For Periodic AC, a more complex analysis necessary for modern mixed-signal analog systems, multiple DC operating points are calculated using – you guessed it – the DC solver. Spot the trend? Yes – SPICE simulation relies on the DC solver.

There it is – if a designer could directly choose their optimal DC operating point, the AC, TRAN and other performances would be met in an almost straightforward manner. The importance of DC analysis has been underestimated in analog design automation. And the EDA industry has failed to provide a tool for design of the analog circuit DC bias – until now.

So, the Intention view is used for DC bias and transistor size calculations? Yes, ID-Xplore uses the testbench directly for performance analysis, and the innovation of the Intention view provides the ability to quickly calculate transistor size data.

In creating the Intention view, it became really clear that the DC bias itself is not a simple act to follow. During calculation of a DC bias, the analog designer uses substantial explicit/trained knowledge and implicit/gained knowledge. Their trained knowledge obviously involves applied circuit analysis, such as Kirchhoff laws and known circuit topologies, whereas their gained knowledge may involve knowing by experience where signal frequency content will be lost due to the physical implementation. The designer understands – at a glance of the schematic – how analog sub-blocks work together to achieve block-level performance. To calculate a good DC operating point, the analog designer translates sub-block DC bias parameters into a first order model of some performance objectives and then calculates preliminary transistor sizing. It is this unique combination of advanced trained knowledge and advanced gained knowledge that makes analog circuit design challenging.

So – if it isn’t broke, why fix it?
Exactly. The analog designer knows how to describe their DC bias. What they really need is an automation tool to help them explore a range of bias and sizing in a technology PDK. Intento Design responded to this analog automation gap with the ID-Xplore tool and the Intention view.

What is the Intention view?
The Intention view is a text-based description of the analog circuit DC bias, written in a way which is very similar to the paper-and-pen version. The Intention view itself is technology independent, and an exploration can be done in any PDK. Electrical and physical parameters described in the DC bias, such as channel inversion, node voltage, branch current, transistor length – the list goes on – can be explored in just minutes to hours. With many varying parameters, the potential number of DC bias points can be quite large, often in the 100’s of millions, which obviously requires ID-Xplore automation to handle.

So, as a way of thinking about it, the Intention view is old-school – just like the pen-and-paper approach – and the ID-Xplore tool is new-school, advanced, automation enabling large scale exploration of the designer intentions (the Intention view).

But what happens to the performance specifications? As a plugin tool, the testbench performance specifications are evaluated at full SPICE accuracy for each individual DC bias point. Transistor sizing, previously handcrafted into transistor parameter fields in traditional design, is now fully automated. The ID-Xplore tool uses the OpenAccess database to back-annotate the designer selected DC bias and transistor sizing.

Does the Intention view play well with others – for collaboration? Yes, because collaboration and individual member experience are among the strongest factors impacting analog design team performance, the Intention view and ID-Xplore enable both knowledge transfer and training. Engineers share Intention views directly when they share the schematic database. For training, the ID-Xplore tool is useful to understand circuit sensitivity and design impacts at the performance level of the PDK. Exploration can uncover trends or verify hard limits of performance, illustrating where to concentrate design effort or when it is necessary to implement an entirely different schematic topology.

In short, the Intention view was created to allow designers to design, document and share a vision for the analog circuit performance as they see it – transistor by transistor.

Is Intention-based exploration faster than size-based optimizers?
Yes. Analog designers are plenty smart. At Intento Design, we like to think we are plenty smart too – for designing EDA tools. Here’s why.

Size-based optimizers use the SPICE DC solver on the whole circuit – even when only minor incremental or local size changes are required, resulting in substantial excess calculation. In addition, being size-based, the exploration requires many steps and can sometimes produce non-realizable results.

For ID-Xplore, we took a different approach. Using graph-theory, we analyzed the nodes and edges of the DC solver matrix and created a structural approach. While graph-theory itself can be very complex, the simple fact is that the number of SPICE calculations is much, much lower. In addition, variation of electrical parameters, rather than size, ensures that an exploration stays within a designer-validated region.

Analog designers appreciate right away the graph-based structural approach. Structural, of course, refers to the fact that the arrangement of the transistors in the circuit is taken into account while calculating size and bias. For instance, take a differential pair with a varying tail bias current. Assuming the input gate-voltage is fixed, the differential pair source-voltage varies with the current. Consequentially, changes can occur in the tail current transistor drain-voltage and the differential pair bulk-source voltage. Graph theory allows ID-Xplore sizing operations to accurately transmit branch-level, or even circuit-wide, information so local transistor sizing is correct-by-construction, taking into account both gross changes (current) as well as local effects, such as threshold voltage variation. When you think about it, this is exactly what the analog design engineer does with pen-and-paper.
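
To make that differential-pair example concrete, here is a small sketch: it sweeps the tail current and propagates the effect to the pair's source node (which is also the tail transistor's drain) using the same square-law model as above. All parameter values are invented for illustration and are not from ID-Xplore.

# Conceptual illustration of the differential-pair example: sweep the tail
# current and propagate its effect to the pair's source node and hence the
# tail transistor's drain voltage. Square-law model, illustrative parameters.
UN_COX, W_over_L, VTH = 200e-6, 20.0, 0.45   # assumed process/sizing values
V_GATE = 0.9                                  # fixed input common-mode, V

def vgs_for(id_amps: float) -> float:
    """Invert the square law to get Vgs for a given drain current."""
    vov = (2 * id_amps / (UN_COX * W_over_L)) ** 0.5
    return VTH + vov

for i_tail_uA in (50, 100, 200):
    id_each = i_tail_uA * 1e-6 / 2        # each pair device carries half the tail current
    v_source = V_GATE - vgs_for(id_each)  # pair source node = tail transistor drain
    print(f"Itail={i_tail_uA:3d} uA -> Vgs={vgs_for(id_each):.3f} V, "
          f"source/tail-drain node = {v_source:.3f} V")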

So, the pen-and-paper approach of the analog designer really is best? Yes, that’s true; compared with a size-based optimizer, which is a brute force approach, the pen-and-paper approach is already more intelligent. The Intention view, combined with the exploration capabilities and data analytics of ID-Xplore, is designed to mirror quite closely the applied training of the analog designer. This means corporate investment in analog design team knowledge is not lost – but accelerated.

How does the Intention view enable technology porting? This is simple to understand. The Intention view is a parameterized description of the DC bias. And, while default values may be set inside the Intention view, it’s really inside the ID-Xplore tool that values are assigned for exploration in any given PDK. To move from one technology to another, the ID-Xplore tool is simply pointed at another technology PDK and the default parameters are adjusted for the exploration.

What tool is used to create the Intention view? The Intention view is created inside the Constraint Editor™ of Cadence. To do this, a transistor or group of transistors is selected, and then the Intento Design pull-down menu in the Constraint Manager tab is selected, which opens the constraint entry field. The constraint data fields take parameterized electrical descriptions of the DC bias parameters, such as bias current and overdrive voltage. Once complete, the Intention view is exported to the ID-Xplore tool for design exploration using testbenches already set up in ADE-XL or ADE Assembler.


Figure 1 Creation of the Intention view inside the Constraint Editor of Cadence

Can you show an example of an ID-Xplore Intention view exploration? Yes, the following image shows design curve results in ID-Xplore. Each curve shows specifications that resulted from a unique DC bias point; the design curves together show the range of performances for the number of design points simulated. Selecting a specific solution shows the electrical values in the design point, the performances, and transistor size data. In this way the ID-Xplore tool really displays DC bias vs. performance, and size data is directly available for back-annotation.


Figure 2 ID-Xplore showing results of exploration of Intention view

How fast is Intention-based exploration using ID-Xplore?
Fast. Moving into 2018, Intento Design has worked on enhanced data display and faster operations to produce an almost 200x speed improvement under the hood of ID-Xplore. With multi-core parallel partitioning and advanced graph analysis, exploration on the DC bias of a multi-stage, fully-differential 75-transistor CMOS amplifier now takes only a few minutes.


Figure 3 Design acceleration using ID-Xplore

Minutes?
Well, the creation of the Intention view itself can take an hour or more, because this is really the analog hand-crafting aspect of the tool, but the exploration takes only minutes – yes. And, once created, the Intention view stays attached to the schematic to enable more exploration or technology porting. We believe ID-Xplore using the Intention view is an elegant, powerful and disruptive analog EDA – putting the speed of digital design into the analog designer’s hands for the first time.



Data Breach Laws 0-to-50 States in 16 Years
by Matthew Rosenquist on 04-19-2018 at 12:00 pm

It has taken the U.S. 16 years to enact data breach laws in each state. California led the way, with the first, in 2002 to protect its citizens. Last in line was Alabama, which just signed its law in March 2018. There is no overarching consistent data breach law at the federal level. It is all handled independently by each state. This causes some confusion as there are different standards and requirements. Businesses must understand and conform to each, in addition to all the international privacy laws.

Over the past decade, privacy compliance has become a massive bureaucratic beast, requiring policies, lawyers, audits, and oversight to meet a sometimes vague and complex regulatory landscape that is often changing. A legion of privacy professionals now exists throughout the world.

All for Good Reason

The world of technology leapt forward beyond the limits of paper records, which were difficult to duplicate, share, and transmit. We have successfully created a world where digital information can easily be created and disseminated across the planet in the blink of an eye. This has led to the desire to gather more data on people and their behaviors. Their financial status, social influence, purchasing preferences, political viewpoints, and many other facets are valuable to influencers and product vendors.

Innovation adoption moved too fast and mistakes were made. Companies that develop products and services were far too quick to begin gathering such valuable knowledge nuggets about their customers. Consumers and governments were lax or greatly delayed in establishing proper controls to protect people’s data. End users were blasé about what they shared and who could obtain it, and chose to remain ignorant of how it could be used to their detriment. It all seemed harmless, until it wasn’t.

Unscrupulous yet profitable data sharing crept into the mix. Criminals realized that a windfall of nearly unprotected data was just waiting for them to scoop up. The results began to turn the mindset of society. Data was valuable, even in the wrong hands.

People were being manipulated and treated with unfair bias, based upon private data that was now in the open. Personal financial data and healthcare records were the first major issues. Fraudsters who obtained a few select pieces of information could cause an economic tornado for victims, opening credit lines and loans, making fraudulent purchases, and even filing for fake tax refunds. Harvesting login credentials and passwords opened systems and services to manipulation and hacking. Even subtle data collection, such as web browsing habits, searches, and product purchases, was used to create profiles that marketeers could wield to improve sales. Recently, social media connections have been used to manipulate the attention economy to sway viewers’ political and social opinions. It is a free-for-all, fueled by personal data.

Rules to Play Nice

As late as they are arriving, it has become apparent that regulations are needed to establish guard-rails that will begin to force boundaries of data gathering, handling, and protection to stem the hemorrhaging losses.

It has been a long sixteen years to get a fundamental data breach law on the books in every state. The first privacy laws in the U.S. are primarily focused on breach notification. That is only the first step. Like Europe, we must also address the collection, protection, fair use, and ability for subjects to correct and control their data. The upcoming EU General Data Protection Regulation (GDPR) is the latest regulation that unifies privacy rules across the European Union. The U.S. is far more fragmented and less comprehensive.

Enforcement is also required. Stiff penalties help encourage compliance and can take many forms. Regulatory fines, litigation, and customer loyalty are all plausible forces to shift protections toward the users and away from other self-serving entities. In the U.S. the damages for non-compliance can vary but are considered minimal. The GDPR, however, can penalize a company up to 4% of its global revenue, which establishes a new high-water mark. Overall, no one carrot or stick will be a quick fix, but progress, maturity, and stability are needed.

This is a race. We must move faster, with greater purpose, and better foresight in cooperation with businesses, consumers, and legislatures if we are to limit damages while enabling the technology everyone wants in their lives.

Interested in more? Follow me on your favorite social sites for insights and what is going on in cybersecurity:

LinkedIn, Twitter (@Matt_Rosenquist), YouTube, Information Security Strategy blog, Medium, and Steemit


Meltdown, Spectre and Formal
by Bernard Murphy on 04-19-2018 at 7:00 am

Once again Oski delivered in their most recent Decoding Formal session, kicking off with a talk on the infamous Meltdown and Spectre bugs and possible relevance of formal methods in finding these and related problems. So far I haven’t invested much effort in understanding these beyond a hand-waving “cache and speculative execution” sort of view so I found the talk educational (given that I’m a non-processor type); I hope you will find my even more condensed summary an acceptable intro to the more detailed video (which you can find HERE).


The speaker was Mike Hamburg of Rambus who, along with other teams such as Google Project Zero, was involved in the research on these vulnerabilities (summarized in the image above). I’ll only consider the Meltdown part of his talk, starting with a simple example he used. Consider this line of code:

result = foo[bar[i] * 256];

In the simple world, to execute this, you look up a value referenced by bar, multiply by 256 and look up what that references in the array foo. In even a modest OS with a distinction between kernel mode and user mode, a user-mode process will error on a bounds check if it tries to access kernel-only memory. This is part of how you implement secure domains in memory. Only trusted processes can access privileged regions. Particularly for virtual machines running in a cloud, you expect these types of walls between processes. I can’t look at your stuff and you can’t look at mine.

But we want to run really fast, so the simple world-view isn’t quite right. Instead, processors work on many instructions per clock cycle, allowing for 100 or more operations to be simultaneously in flight, including instructions ahead of the current instruction. These speculative executions work with whatever values the processor reasonably assumes they may have – with data values already in cache, or triggering a fetch if a needed value is not yet in cache, or making a prediction about whether a branch may be taken, or a variety of other guesses. When the current program counter catches up, if the guess was correct, the result is ready; if not, the processor has to unwind that stage and re-execute. Despite the misses and consequent rework, overall this works well enough that most programs run much faster.

The problem, as Mike put it, is that the unwinding is architectural but not micro-architectural. When a speculatively-executed instruction turns out to have been wrong, the processor winds back the appropriate instructions to re-execute. But other stuff that happened during the speculative execution doesn’t get wound back. Data was fetched into cache and branch predictors were updated. Who cares? It doesn’t affect the results after all. Cache fetches in these cases even prove beneficial apparently – more often than not they save time later on.

This is where Meltdown happens but in a fairly subtle way, through cache-timing side-channel attacks. Daniel Bernstein showed in 2005 that it is possible to extract secret data for the AES encryption algorithm through such attacks, simply by measuring encryption times with a precision timer for a series of known plaintext samples and running a simple analysis on those values. In a system with cache these times vary, which is what ultimately leaks information; you run enough data through the system and you can reconstruct the key.
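
The following toy simulation captures the idea (and only the idea) of such a cache-timing probe: one "cache line" is left warm by a secret-dependent access, and timing the probe array afterwards reveals which one. It models no real CPU and is not exploit code.

# Toy model of the cache-timing side channel described above: a speculative,
# secret-dependent access leaves one "cache line" warm, and timing the probe
# array afterwards reveals which one. Purely conceptual.
import random

CACHE_HIT_NS, CACHE_MISS_NS = 10, 100          # illustrative latencies
secret_byte = 0x2A                             # value the attacker should not see

# "Speculative" access: foo[bar[i] * 256] touches the line indexed by the secret.
cached_lines = {secret_byte}

def probe(line: int) -> int:
    """Return a noisy access time: fast if the line is cached, slow otherwise."""
    base = CACHE_HIT_NS if line in cached_lines else CACHE_MISS_NS
    return base + random.randint(0, 5)

# Attacker times every line of the probe array and picks the fastest one.
timings = {line: probe(line) for line in range(256)}
recovered = min(timings, key=timings.get)
print(f"recovered byte = {recovered:#04x} (secret was {secret_byte:#04x})")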

In case you hoped this might be an isolated problem for AES, these kinds of attacks are more broadly applicable and not just for getting encryption keys. Mike made the point that without mitigation, a Meltdown-enabled attack can effectively read all user memory. A defense, especially in the cloud, is to use separate (memory) page tables per process, which may not be a big deal on modern server-class CPUs but can carry a 30% performance penalty on older CPUs.

Another approach would be to make all memory accesses take the same time, at least within certain domains, or disallow speculation without bounds checks on critical operations or … In fact given the complexity of modern processor architectures, it’s not easy to forestall or even anticipate all possible ways an attack might be launched. In Mike’s view formal can play a role in detecting potential issues, though he thinks this would be limited to small cores today. So not yet server-class cores, but I’m sure the big processor guys are working hard on the problem.

This would start by writing a contract for covert channels – what should and should not be possible. Mike feels this can’t be a blanket attempt to make attacks impossible – maybe that could only happen if we forbid speculation. But we could, on small machines, define contracts to bound general execution versus execution around privileged/secret operations and/or characterize those cases where such guarantees cannot be given. Then a careful crypto-programmer, for example, could write code in such a way that it would not be susceptible, or at least less so, to this class of attack.

Mike wrapped up with some general observations. Absent banning speculative execution, we’re likely to need careful analysis of covert channels. Perhaps we need to rely on slower but more resilient secure code (allowing for memory access checks even in speculation – I think I heard that AMD already does this). We also need to plan more for secure enclaves inaccessible by any form of external code, privileged or not. It was unclear in his mind whether TrustZone or similar systems could rise to this need today (maybe they can, but that’s not yet known). Certainly it seems more and more desirable to run crypto in a separate core or (if still run in software) supported by dedicated instructions hardened against side-channel attacks. I suspect there will be a lot of interest in further advancing proofs of resilience to such attacks. In formal, at least, this won’t be available anytime soon as an app – this is going to take serious hands-on property development and checking.


RDC – A Cousin To CDC
by Alex Tan on 04-18-2018 at 12:00 pm

In a post-silicon bringup, it is customary to bring the design into a known state prior to applying further testing sequences. This is achieved through a Power-on-Reset (POR) or similar reset strategy which translates to initializing all the storage elements to a known state.

During design implementation, varying degrees of constraining may be applied to the reset signal. For example, a designer may impose multicycle path (MCP) constraints in order to avoid unneeded timing optimization on the reset logic (although checking for slew violations is still necessary). In this article we will discuss reset mechanisms and RDC (Reset Domain Crossings).

Just like the notion of hard- or soft-reboot in system bringup, we could first categorize this initialization step into hard/soft-reset, as captured in figure 1.

In synchronous designs, asynchronous reset de-assertion causes metastability issues and unpredictable values in the memory elements. This increases the risk of not having a stable design initialization. The snapshot in figure 2 illustrates the issue, in which the reset signal de-asserts during the active clock edge, causing metastability as well as randomly initialized register values. To avoid this non-determinism, synchronization at reset de-assertion is needed.

Synchronous and Asynchronous Reset
Let’s probe into the flip-flop element which sits at the center of this phenomenon. In the standard cell library, this storage element or register may come in two flavors, i.e., with or without reset. On the other hand, in the design RTL code, registers may be pre-instantiated or left to be inferred during logic synthesis, depending on whether the logic designer would like to impose control on the type of registers used. If inferred, logic synthesis will also infer the reset implementation and select registers from the library with the corresponding set/reset configuration.

In FPGA designs, a slice may contain a cluster of registers sharing a set of control signals such as clock, enable and reset. The frequent endorsement of non-resettable registers in FPGA design stems from better device utilization in both Shift Register LUTs (SRL) and Block RAM (BRAM), although care should be given to initializing registers to known values during functional simulation, as registers with undefined “X” states are very common occurrences here.


Comparing two types of reset implementation, each comes with advantages and disadvantages.

Unlike ASICs, FPGA designs implement a Power-on-Reset function. It initiates loading of the bitstream and configures the LUTs. The bitstream contains the initial values for every register and RAM bit in the device. Registers are initialized throughout the configuration process and the Global Set Reset (GSR) signal keeps the device in non-operational mode until the end of the configuration stage.

Reset Synchronization Techniques
Since the reset signal is external to the device and asynchronous, it needs to be synchronized. The conversion and synchronization of the external reset to an internal reset signal can be achieved through multi-stage registers. A minimum of two clock cycles is required to ensure the minimum reset pulse width is met. However, depending on the type of registers used (i.e., non-resettable), the reset synchronizer could require as many as ten clock cycles. It is recommended that any reset de-assertion be done only when the clock is stable and actively running. For example, in some subsystems containing finite state machines or counters, all registers must come out of reset on the same clock edge to prevent illegal state transitions.
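
A cycle-level sketch of the classic two-flop reset synchronizer may help: reset asserts asynchronously, but de-assertion is released only after two clock edges, so every register sees it on a clean, synchronous boundary. This is a behavioral model for illustration, not RTL.

# Cycle-level sketch of a two-flop reset synchronizer (active-high reset).

def reset_synchronizer(async_reset_trace):
    """Yield the synchronized reset seen inside the clock domain."""
    ff1 = ff2 = 1                          # both stages start in reset
    for async_rst in async_reset_trace:    # one entry per rising clock edge
        if async_rst:
            ff1 = ff2 = 1                  # asynchronous assertion, immediate
        else:
            ff1, ff2 = 0, ff1              # de-assertion ripples through two flops
        yield ff2

# External reset held for 3 cycles, then released.
external = [1, 1, 1, 0, 0, 0, 0, 0]
print(list(reset_synchronizer(external)))  # -> [1, 1, 1, 1, 0, 0, 0, 0]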

Similar to the solution for Clock Domain Crossing (CDC), an NDFF synchronizer can also be utilized as a synchronizer across two clock domains. Since each domain has its own minimum pulse-width requirement, a pulse stretcher can be inserted prior to the synchronizer to ascertain that the minimum pulse width is met. In FPGAs, glitch prevention may be warranted due to the non-resettable registers being used. Initializing these registers to the same value as the external asynchronous reset signal should avoid possible reset glitches.


FPGA resources such as SRL and BRAM contain non-resettable registers and may introduce non-determinism, as the GSR net can release different storage elements in different clock cycles. This in turn triggers a chain reaction causing some registers to “wake up” one or more clock cycles earlier than the others. Coupled with any sequential loop-back condition, this may corrupt the initialization values and lead the design into an unpredictable state.

The selection of non-resettable registers also accommodates synthesis optimization techniques commonly seen in Intel’s devices, such as register retiming, pipelining or other register-related netlist modifications. Register-specific optimizations are done only in the absence of an asynchronous reset. For a list of design practices that could help eliminate non-determinism, refer to this.

In designs with multiple reset signals targeting different sections of the system, RDC can occur. These signals introduce asynchronous reset assertion events in each reset domain, which may lead to metastability and unpredictable design initialization (see figure 3a).
The reset operation frequency also increases susceptibility to the RDC effect; on the other hand, a proper reset ordering sequence should minimize its occurrences. Linting tools such as Aldec’s ALINT-PRO help designers identify RDC and other aspects of reset domains. They help generate design assertions to confirm that proper reset sequences are done in the design.

RDC synchronization methods include isolating the receiving domain register from the source domain register. An enable such as “iso_en” is asserted prior to the assertion of “rst1”. The receiving register holds its value while the asynchronously reset FF’s data changes, as seen in figure 3b. Through the linting step, such isolation cells used in RDC prevention can be identified, and verification code can be generated to ensure the correct operation of these “iso_en” signals relative to the reset assertions. For a discussion of other techniques, refer to this.
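
The behavior of that isolation scheme can be sketched at the cycle level as below; the signal names mirror the description above, and the model is purely illustrative rather than an RTL implementation.

# Cycle-level sketch of RDC isolation: "iso_en" is asserted before "rst1",
# so the receiving-domain register holds its value while the source domain resets.

def receiving_register(q_init, src_data_trace, iso_en_trace):
    """Each clock edge: hold q when isolated, otherwise capture the source data."""
    q = q_init
    for src_data, iso_en in zip(src_data_trace, iso_en_trace):
        if not iso_en:
            q = src_data
        yield q

src_data = [5, 5, 0, 0, 0, 7]   # source register is async-reset to 0 mid-trace
iso_en   = [0, 1, 1, 1, 0, 0]   # isolation raised before rst1 hits, dropped after
print(list(receiving_register(0, src_data, iso_en)))   # -> [5, 5, 5, 5, 0, 7]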

Despite fewer occurrences compared with CDC, RDC has been getting more attention, especially for heterogeneous designs with complex reset strategies and segregated regions needing frequent reset sequences. Preventing and fixing such conditions can be addressed through the use of both linting tools and subsequent functional verification.

For info on Aldec’s ALINT-PRO™, please check here.
For Aldec’s white paper on RDC, please find it here.