10 Areas of Change in Cybersecurity for 2020
by Matthew Rosenquist on 05-17-2020 at 10:00 am

Cybersecurity in 2020 will be evolutionary but not revolutionary. Although there is always change and churn, most of the foundational drivers remain relatively stable. Attacks over the next 12 months will largely persist along lines already known, just taken up a notch, leading to a steady escalation between attackers and defenders. The growth of devices, users, and data continues to expand the playing field, while weaknesses in people's behavior remain the greatest risk factor for compromise.

Here are some of the key areas to keep an eye on.

2020 Cybersecurity Predictions

  • Internet-of-Things (IoT): IoT continues to expand with insecure devices, services, and interfaces. Another 4 billion IoT devices will come online in 2020. Devices being hacked and insecure data being compromised are the two primary threats. Some progress will be made with the retirement of default or no passwords, but IoT devices will still be far from secure. At the same time, such devices will be called upon to handle even more sensitive data. Overall, the risks increase as devices become more autonomous and are employed in sensitive industries like healthcare, critical infrastructure, transportation, security, and remote-work applications.
  • Cybersecurity Workforce: The demand for cybersecurity professionals continues to rise and outpace the available talent pool. By the end of 2020, there will likely be over 3 million unfilled positions. Such a global shortage leaves a dangerous range of organizations under-resourced to protect assets, services, data, and people. Leadership and technical roles will be in the highest demand. Ironically, entry-level placement will be difficult for those without experience, and frustration will fester for many entering the field. The training gap remains for 2020 and beyond, but academic institutions will continue to move, albeit slowly, to close the deficiencies in preparing the next generation of cybersecurity professionals.
  • Critical Infrastructure (CI): The war targeting critical infrastructure will heat up while remaining largely stealthy. Nation-states will jockey for access to the systems of potential adversaries. Defenders will actively pursue detection and eviction, but never achieve a high level of confidence. It is a chess game where the winner retains a foothold that could be used in the future as part of a devastating attack, to send political messages, or to fuel disruption. On a positive note, no major critical infrastructure attacks will occur in 2020, at least on purpose. Accidents do sometimes happen at this level of gamesmanship. Vital sectors including government, communications, transportation, logistics, energy, national industries, and even healthcare are all potential targets for compromise. This is part of the long game that countries play against one another.
  • Cybercrime: The number of cybercriminals and attacks will grow significantly, victimizing more people and incurring losses that may approach $6 trillion by the end of the year. At the top, the organized and funded crews will continue to expand and orchestrate top-tier attacks as well as massive fraud at an ever-growing scale. At the bottom of the cybercrime pyramid, swells of novice criminals will join the ranks to help with basic labor-intensive duties. Financial hardship, desperation, and a lack of other options will draw new internet users from economically struggling geographies to venture into cybercrime. They are lured into activities such as botnet/malware distribution, money and reshipping mule duties, ransomware-as-a-service (RaaS) victim on-boarding, social engineering data harvesting, human authentication verification, amplification of investment scams, and propagation of retail fraud to make money. As a result, the global online community will suffer from an increase in ransomware, denial-of-service, online harassment, data breaches, financial fraud schemes, and cryptojacking. The severity will drive up the overall losses due to cybercrime. The elite digital syndicates will target specific organizations for big scores with Business Email Compromise (BEC), financial transaction tampering, and data accessibility ransoms in the millions. The largest single attacks of 2020 will likely reach into the hundreds of millions in losses.
  • Passwords/Authentication: Multi-Factor Authentication (MFA) and Two-Factor Authentication (2FA) will remain largely ignored, despite the massive fleecing of accounts. Throughout the year, consumers will feel much more pain from highly automated credential-stuffing capabilities that are coupled with exploitation features for account hacking, extortion demands, and theft. Small and medium businesses (SMB) will feel the greatest pain and will struggle to find a balance between the risks, costs, and usability friction.
  • Privacy: Privacy compliance will be expensive, convoluted, and political. Expectations of customers will increase for companies to keep their data private. Credit monitoring will not be enough to appease the masses. Regulatory authorities around the globe will begin greater prosecution of offenders. The news will highlight more lawsuits, massive regulatory penalties, greater customer abandonment, and executives losing their jobs because of poor choices in protecting private data or not satisfying regulations.
  • Artificial Intelligence (AI): AI attacks and defenses will rise to a new level. Attacks will be more customized and scale to target large pools of potential victims. Defenses will lag, but also begin finding optimal ways to detect and block these types of attacks. Implementation of AI tools by the attackers and defenders is still in the early phases of what will be a very long and drawn out arms race.
  • Malware:  Vulnerability discovery, exploit creation, and development of malicious software will accelerate. It will also expand from the server, PC, and smartphone domains to include many more types of devices and services. Technical exploitation techniques get more sophisticated, but social engineering does not. It simply doesn’t need to. Humans continue to be the weakest link in the ecosystem and remain the primary means for practical compromises.
  • Zero-Trust: “Zero-Trust” will remain a marketing buzzword for most of the year. Basic standards and more narrow accepted concepts begin to emerge around Zero-Trust security. By the end of 2020, there will still not be a complete consensus, standards, or frameworks. As leaders emerge, customers will begin to fall into certain camps. Results will continue to vary for this premium capability. Expect various re-branding and renaming to ensue as the term begins to become stale and loses favor with marketing types because of a lack of competitive differentiation.
  • 5G cybersecurity risks: The security fears around 5G reached their pinnacle in 2019. There has been a lot of hype, but the real risks won't actually manifest in 2020. Yes, 5G allows for greater speed, lower latency, and more connection density, but that plays for both sides. Risk organizations realize it is just the natural evolution of the battlefield, not a super-weapon. People will briefly wonder what all the fuss was about, as they enjoy a better experience. Security pundits will shift gears to focus on the next sexy potential emerging threat that could boost their budgets. Pity they aren't focusing on the human behavioral weaknesses that represent a much greater problem.

The aggregation of these factors will contribute to a thriving cybercrime industry that will show no mercy in 2020. Tools for both attackers and defenders get better. The size and complexity of our digital world will increase significantly, creating scalability issues for security while opening new opportunities for threats.

The biggest overall concern for 2020 will be that significantly more data will be in peril. Vast amounts of data will be created and potentially exposed from significantly increasing numbers of devices, services, and users. Nearly 400 thousand new internet citizens will join the connected digital world, with the largest percentage from economically struggling countries. Businesses and governments will continue to gather more information than needed and aggregate it in ways that consumers did not expect. Security will remain weak, with protections lacking for data in-use, in-transit, and at rest.

Although 2020 predictions may sound extreme, this is the normal progression for cybersecurity. It should draw a mild yawn from security professionals who are familiar with maneuvering these troubled waters every day. The best of them will remain vigilant, keeping pressure on the tactics, techniques, and procedures of attackers and driving demand for better and more coordinated cybersecurity throughout the year.


COVID-19: A Pandemic Made for Tesla
by Roger C. Lanctot on 05-17-2020 at 6:00 am

With Tesla Motors' CEO Elon Musk spewing accusations of civil rights violations and fascism in the face of restrictions forcing him to keep his Fremont, California factory closed, it is tempting to assume that Tesla is suffering through the shutdown with the rest of us. Don't kid yourself. The COVID-19 pandemic is a crisis tailor-made for Tesla and one upon which Musk is even now capitalizing.

Musk’s Tesla is perfectly positioned for this pandemic. While parking lots are filling up with unsold cars and dealers are champing at the bit to be allowed to sell cars, Tesla Motors continues to deliver cars directly to consumers. Musk even went so far on the earnings call as to tout his progress toward a touchless vehicle purchasing proposition allowing a vehicle acquisition to occur within five minutes from a mobile phone app.

Tesla earnings recording: https://ir.tesla.com/events-and-presentations

Cars.com reports that dealers across the U.S. are struggling with 50 different state-level regulatory approaches to stay-at-home orders, with some allowing vehicle repairs but no sales, others allowing only online sales, and many actually allowing vehicle sales from retail stores. There's just one problem: consumer surveys show that potential buyers by and large don't want to visit dealerships at this time.

Cars.com reporting: https://www.cars.com/articles/coronavirus-and-cars-can-i-buy-a-car-or-have-one-repaired-in-my-state-420549/

For their part, dealers have been scrambling to enable online sales – wrangling with local restrictions and restrictive franchise laws intended to protect traditional automobile retailing from online sales that have now become a barrier to that very thing.

While dealers and governors and regulators wrangle, Musk is laughing all the way to the bank. On last week’s earnings call he joked that buying a car these days is like a visit to the dentist – only worse.

SOURCE: Images of the jam-packed parking lots at the Port of Los Angeles on April 24th, with the Jupiter Spirit waiting with its cargo of cars in the harbor. Satellite images courtesy of MAXAR.

From one of his multiple homes in Los Angeles, one can imagine Musk LOL-ing at the Jupiter Spirit, a car carrier loaded with 2,000 Nissans, idled in the harbor for weeks waiting to offload its cargo while personnel onshore scrambled to make room for the incoming vehicles. Needless to say, Tesla Motors doesn't have those kinds of troubles. Those are the kinds of troubles reserved for the makers of gas-fueled automobiles sold by instantly irrelevant dealers.

Musk may put on a good show for the analysts and stockholders over California limiting his production plans, competitors may even cluck their schadenfreude-ish tongues his way, but it is clear that Tesla’s moment has arrived. An electric powertrain for a vehicle directly delivered to customers is a business model from the future with which today’s automobile industry is unable to compete.

Not content to humiliate competing car makers and disintermediate an entire distribution channel with millions of workers, Musk says he is adding operating-room-grade in-vehicle air filtration to his cars. (Ford Motor Company manufactured personal protective equipment and General Motors made ventilators to combat COVID-19. Tesla created its own ventilator from Model S and Model 3 parts – but never delivered.)

While the legacy auto industry spins its wheels and struggles to drop the clutch on massive structural impediments to its own progress, Tesla is waiting for the flag to drop on the final stage of its industry takeover. Forecasters are talking about post-COVID-19 declines of 20%-25% in vehicle output globally, while Tesla is notching its growth rate expectations down to 40% from 50%.

Tesla can restart at the flip of a switch. The competition needs a week to ramp up supplier factories before revving production back up. But that revved up production will be shipping into a massive back-up of unsold vehicles – including cars returned from ailing rental car companies.

To make matters worse, Tesla will instantly begin shipping cars directly to consumers while dealers struggle to reassure customers that it's okay to visit newly reopened showrooms while simultaneously offering online sales, many of them for the first time.

Meanwhile, the pandemic impasse gives Musk yet another soapbox to proclaim his defense of the common man against the ravages of government regulators and the legacy auto industry (from this week’s earnings call):

“So, the extension of the shelter-in-place or, frankly, I would call it, forcibly imprisoning people in their homes against all their constitutional rights…Tesla will weather the storm (but) there are many small companies that will not. And … everything people have worked for their whole life is going to get — is being destroyed in real time. And we’re going to have many suppliers — or have many suppliers that are having super hard times.

“I think the people are going to be very angry about this … They should be allowed to stay in their house, and they should not be compelled to leave. But to say that they cannot leave their house, and they will be arrested if they do, this is fascist. This is not democratic. This is not freedom. Give people back their goddamn freedom.”

Needless to say, no competing auto maker CEO is capable of speaking this freely. It’s pretty potent yet apolitical stuff. Just raw meat for Tesla fans, Tesla owners, and those that want to own a Tesla. One can almost imagine the masses in driving gloves and bearing pitchforks and torches marching on the municipal offices in Fremont shouting: “Free Elon!”

It is high noon in the automotive industry. COVID-19 has brought the future to the industry’s doorstep. It’s now or never for every legacy auto maker and dealer to step up their game, step up their product, step up their incentives, step up their outreach, step up their service, and get in the game. Nothing will ever be the same. We are all now on COVID-19 time.


WEBINAR: Moving UVM Verification Up To The Next Level
by Daniel Nenni on 05-15-2020 at 9:00 am

Tom Fitzpatrick, a Strategic Verification Architect at Mentor, a Siemens Business, has worked on IEEE and Accellera standards including Verilog (IEEE 1364), SystemVerilog (IEEE 1800), and UVM (IEEE 1800.2), and is Vice Chair of the Portable Stimulus working group. So when I heard that he was doing a webinar on how PSS can be used to create better stimulus for a UVM environment, I previewed it right away to improve my understanding. The webinar helped me better understand the important and powerful relationship between PSS and UVM and also included a few details about how Mentor is making PSS technology available “under the covers” for UVM users in their inFact CX product. For those interested, the webinar will be hosted on Tuesday, May 26th from 10am-11am PDT and you can register here. I’ve included what I believe to be some stand-out points below.

The biennial verification survey conducted by Wilson Research Group lists the biggest verification challenge for both ASIC and FPGA users as the ability to create sufficient tests to verify the design and achieve coverage closure.

The Portable Test and Stimulus Standard (PSS) provides a common verification language across IP blocks, subsystems, and full systems. Even as your SoC goes through multiple generations, you can reuse verification intent. Finally, you can use a single specification across simulation, emulation, and FPGA prototyping, saving much time compared to separate approaches.

The PSS specification states, “The goal is to allow stimulus and tests, including coverage and results checking, to be specified at a high level of abstraction, suitable for tools to interpret and create scenarios and generate implementation in a variety of languages and tool environments, with consistent behavior across multiple implementations.”

So, UVM is just one possible target environment for Portable Stimulus.

A typical UVM verification flow starts with a sequence item and a set of constraints. Random simulation then sends transactions through the agent to the DUT while covergroups measure coverage metrics, but we often find that our coverage goals are not met. At this point, we can write new constraints to target additional coverage, but ultimately we need to write directed tests to reach the last 5-10% of coverage.

This is the result of constrained-random simulation treating the coverage specification as passive. In a typical constrained-random testbench, you define your critical states (green circles), and then constrained-random tests hit those states in unpredictable ways. On the downside, constrained-random will often repeat states or entirely miss coverage points. With this passive coverage approach we can still see lots of uncovered states, including states that may indicate a bug.

Instead of relying on procedural tests and hoping you’ll hit your coverage goals, PSS is a declarative language that lets you define scenarios at an abstract level and lets your tool, such as Questa inFact, generate different target-specific implementations of the abstract scenarios in either SystemVerilog (including UVM) or C. Because the PSS scenarios are fully declarative, the tools can analyze the scenarios and generate tests that actively target your coverage goals.

Thus, a tool like Questa inFact can generate the minimum number of tests required to reach your coverage goals, and then generate all other legal scenarios that will fully exercise the remaining states, including the one with the bug.
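As a toy illustration of the difference (my own simplified model, not how Questa inFact or any PSS tool actually generates tests), treat the coverage space as a set of equally likely bins: purely random stimulus has to stumble onto each bin by chance, while a coverage-aware generator can simply enumerate whatever is still uncovered.

```python
# Toy model: random vs. coverage-targeted stimulus over N coverage bins.
# Illustrative only -- not a model of Questa inFact, PSS, or any real tool.
import random

N_BINS = 200  # hypothetical number of coverage bins


def random_tests_to_full_coverage(n_bins: int, seed: int = 1) -> int:
    """Count how many uniformly random tests it takes to hit every bin at least once."""
    rng = random.Random(seed)
    covered = set()
    tests = 0
    while len(covered) < n_bins:
        covered.add(rng.randrange(n_bins))  # each test lands in a random bin
        tests += 1
    return tests


def targeted_tests_to_full_coverage(n_bins: int) -> int:
    """A coverage-aware generator can enumerate each uncovered bin exactly once."""
    return n_bins


print("random stimulus:  ", random_tests_to_full_coverage(N_BINS), "tests")
print("targeted stimulus:", targeted_tests_to_full_coverage(N_BINS), "tests")
# Random stimulus typically needs several times more tests (roughly N*ln(N)),
# repeating bins it has already covered while the last few stay uncovered.
```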

When all is said and done, the PSS description is transformed into a UVM virtual sequence that calls lower-level UVM sequences in the right order to implement the abstract scenario you described.

You can then use the UVM factory to simply swap the PSS-generated sequences into your existing UVM environment without having to change any of your UVM code.

Summary
PSS allows you to look at verification from a higher level, because it’s declarative. This does require a change in thinking for most UVM users. Mentor’s Questa inFact automates a lot of the analysis to hide most of the PSS details and extracts the important information from your existing UVM code to build a coverage-targeted test for you. As you can see below, using PSS with a tool like Questa inFact (green line), compared to constrained-random (red), lets you reach your test coverage goals much more efficiently, giving you more time to explore additional scenarios outside of your coverage scope, with the promise of a 2-3X improvement in regression efficiency.

Plan to sign up and attend this webinar online on May 26th, which also includes a Q&A.


WEBINAR: Transitioning from Live to Virtual Events
by Daniel Nenni on 05-15-2020 at 6:00 am

The foundation of SemiWiki.com has always been to transition live semiconductor-related events to an easy-to-digest digital format via a worldwide online semiconductor community. SemiWiki is staffed by working semiconductor professionals who transform live events, press releases, whitepapers, webinars, and other collateral into easy-to-read blogs for the semiconductor ecosystem.

Since going online in 2011, SemiWiki has published more than 7,000 blogs (in collaboration with our sponsors) that have garnered more than 42,000,000 views. As a result, SemiWiki has attracted more than 3,400,000 users in total and is the #1 semiconductor ecosystem portal around the world.

I have worked with dozens of marketing teams through SemiWiki over the last ten years, and the one that stands out is the former eSilicon team of Mike Gianfagna and Sally Slemons. When it comes to quality of results, Mike and Sally are the best of the best, which is why I have asked them to join me in this webinar.

REGISTER HERE

The 2020 pandemic has accelerated the move to digital content so the multi-billion dollar question is: How do we thrive during this transition?

In this webinar you will learn:

  • What tools and resources are available
  • How to repurpose your live event budget for maximum impact
  • How to create great content
  • How to find new ways to get your content in front of customers
  • How to maximize your reach
  • How to fully monetize this process

For smaller companies, this unexpected change in the marketing landscape can actually be an opportunity. In the digital world the playing field is much more level. You can get your message out in a way that you couldn’t at a live event, operating from a ten-by-ten booth across the aisle from your 500-pound-gorilla competitor in a 50-by-50 booth with an espresso machine next to their 30-seat theater.

Join us for a lively discussion about how to market your company in this new, virtual environment. You may be surprised at the options available to you.

MODERATOR:
Daniel Nenni is the Founder of SemiWiki.com, the Open Forum for Semiconductor Professionals. Daniel is an internationally recognized semiconductor ecosystem expert, public speaker, author and publisher, and was a professional blogger even before blogging was a profession.

PRESENTERS:
Mike Gianfagna is a principal at Gforce Marketing, an independent marketing consulting company. He is also a staff blogger for SemiWiki.com. Previously, Mike was a semiconductor and EDA executive. He offers demonstrated achievements in ecommerce, cloud migration, product launch, company branding and multi-tier communication strategies.

Sally Slemons is an independent marketing consultant at Noise! Marketing Communications. She brings 25 years of semiconductor industry marcom experience to help companies ideate, orchestrate, and execute integrated, results-oriented B2B marketing programs. She is skilled at making small technology companies look big and big technology companies look brilliant.

REGISTER HERE


High Speed SerDes Design and Simulation Webinar Replay from Mentor
by Tom Simon on 05-14-2020 at 10:00 am

Over the years, SerDes (serializer/deserializer) based connections have proliferated into just about every interface within and among computing systems. Years ago, parallel interfaces were the most common method of moving data, but issues of signal integrity, synchronization, and power simply became too much for the required data rates. One by one, old parallel links have been updated to modern serial connections. Remember parallel printer cables, ATA & IDE, or PCI connectors, to name a few? These have all been updated to serial equivalents, and even on-chip connections between blocks have adopted network-on-chip architectures that rely on high speed serial links for moving packetized data.

The new age of USB, PCIe, NoC, and ever faster memory, network, and device interfaces has pushed SerDes designs into challenging high-speed realms. A single serial link carrying what used to be carried by a set of parallel wires needs to run that much faster. Add packetization, error correction, and encoding, and the speed requirements for SerDes become substantial.

All of these factors have made SerDes design critical for every market, including networking, IoT, automotive, servers, etc. One of the biggest challenges is verification with simulation. Modern SerDes have a mixture of digital and analog, thus making it impractical to run analog and/or digital simulation independently. Mentor, a Siemens Business, has recorded a webinar that breaks down the issues and potential solutions for SerDes simulation. In the video titled “Addressing Analog Mixed-Signal Verification Challenges of High Speed SerDes”, Scott Guyton, Solution Architect Manager in the Mentor AMS BU discusses all the facets of this issue.

Scott begins by summarizing the need for widely deployed high speed SerDes throughout electronic products. As mentioned above, all the major markets need increasing rates of data transfer and processing power. Some of the design challenges for these systems are clock and data recovery, dealing with high dB loss and crosstalk, low power operation, and support for multiple current and legacy protocols. Scott talks about how these system level challenges translate into SerDes circuit design challenges. High data rates call for stringent jitter and phase noise requirements. High channel loss necessitates equalization architectures and increased design complexity.

SerDes are not immune to issues found in advanced nodes. Long gone are the days when an interface SerDes could stay on an older process while the core moved to a new node for performance or capacity. SerDes designs also have to wrestle with clock domain crossings, programmable parameters, and complex data paths. All of this must be done while maintaining tight design margins.

Mentor has assembled a comprehensive simulation solution that addresses the digital, analog and mixed signal domains needed for SerDes design. Scott reviews the requirements in the webinar. He divides them up into Performance, Accuracy, Capacity and Ease of Use (PACE). Analog simulation is used for transient, transient noise and RF. Mixed signal needs to maintain SPICE accuracy and allow for sufficient cycles of digital or channel model to validate the design. Digital simulation covers corners and Monte Carlo to assure yield and help with design optimization.

Mentor offers their Symphony Mixed Signal Platform, which is powered by AFS and can integrate with a wide range of industry standard HDL simulators. Symphony lets designers switch between analog, digital, and behavioral models to trade off performance and accuracy as needed. With AFS, runtimes are dramatically shortened, allowing more simulations in a shorter period of time.

Scott closes the webinar with a set of case studies showing how their customers took advantage of the Mentor simulation platform. The first case study includes information on simulation accuracy versus measured silicon at 7nm for a SerDes intended for automotive and IoT applications. The second case study is a specially designed SerDes for use in a GPU interface PHY. The customer was able to run a mixed-signal simulation at high accuracy with fast runtimes using Symphony. The third case study was a SerDes for 5G and automotive. Symphony solved the customer’s convergence issues while improving accuracy and runtime. The last case study covers variation-aware verification for level shifters in a large design. The design had hundreds of level shifters, and the customer needed to verify that they all would work over all PVT cases. Mentor’s Solido PVTMC made this possible with only 2,713 simulations instead of the 9.7M required for brute force.

The webinar is filled with much more information than can be provided here. If you want to view the entire presentation, it can be found on the Mentor website along with the supporting material cited in the webinar.

While I was writing this article, Mentor provided me with a list of educational resources that may prove useful:

Mentor, together with Siemens Digital Industries Software, is offering special resources to help you make the best of this challenging time, including Free 30-Day On-Demand Training and a free 12-month license of PADS Pro Student Edition.

If you’re interested in Mentor’s other webinar and virtual seminar offerings, check out:


The Problem with Reset Domain Crossings
by Bernard Murphy on 05-14-2020 at 6:00 am

Reset, like everything else in big SoC designs, has become incredibly complex, for all sorts of reasons. Long, long ago reset was something you just did once, when you turned the power on. Turn on, then hold reset for some amount of time until everything is in a known starting state, and off you go. Nice and simple.

Then we found we had to handle multiple clock domains – for the CPU and for the PCI, USB, SPI and other I/Os – and you couldn’t just run the same asynchronous reset into each of these, because you create metastability problems when it de-asserts, essentially the same kind of metastability you can get in clock domain crossings.

(Then you get into questions of synchronous or asynchronous resets, or asynchronous assert and synchronous de-assert, a topic which always seems to provoke debates of near-religious fervor among the reset cognoscenti. I’ll leave it to them to battle that out.)

Resets became fragmented not just to deal with clock domains but also to manage more refined reset needs. Now reset isn’t just the big red button to reset the whole darn thing but also allows for selective reset. Maybe I just want to reset this block because it’s misbehaving, and I want to get it back on track. This is a technique that is really blossoming in ASIL-D compliant designs where a safety island regularly monitors status of sub-functions (e.g. AI accelerators) and will isolate and force a reset of misbehaving functions.

Resets may also need to be sequenced on bring-up (not a lot of value in resetting other logic until the CPU cluster has booted). Then I want to manage reset in a controlled and orderly way to get everything to a reasonable start state.

Then there’s the interaction of reset and power management. For isolation between blocks in different power domains, if the isolation control signal is generated by a device in a different reset domain than the block on the downstream side of that isolation, then you have a reset domain crossing and potential problems.

All of which underlines that the good old days of a reset being one wire, with a fanout all over the chip, are long behind us. Now reset is another bundle of complex control logic, ultimately driving smaller traditional fanout trees in their own respective domains. And crossing between those domains must be proven to be safe. Simulation is a tough way to do that – static analysis is the more common approach to ensure as complete coverage as possible.
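To make the idea of static RDC analysis a bit more concrete, here is a deliberately simplified sketch (my own toy abstraction, not how VC SpyGlass RDC or any commercial tool is implemented): annotate each register with its reset domain and flag any fan-out path that crosses into a different reset domain without some form of protection.

```python
# Toy reset-domain-crossing (RDC) check on a hand-written netlist abstraction.
# Purely illustrative -- real static RDC tools work on RTL plus SDC/UPF intent.
from dataclasses import dataclass, field


@dataclass
class Reg:
    name: str
    reset_domain: str            # which reset can asynchronously clear this register
    fanout: list = field(default_factory=list)  # names of registers it drives
    rdc_protected: bool = False  # e.g. data gated/isolated while the source reset asserts


def find_rdc_violations(regs: dict) -> list:
    """Flag paths whose source and destination sit in different reset domains with
    nothing stopping the source's reset assertion from corrupting the destination."""
    violations = []
    for src in regs.values():
        for dst_name in src.fanout:
            dst = regs[dst_name]
            if src.reset_domain != dst.reset_domain and not dst.rdc_protected:
                violations.append((src.name, dst.name))
    return violations


# Hypothetical design: an accelerator reset by 'rst_blk' feeding logic reset only at power-on.
netlist = {
    "accel_status": Reg("accel_status", "rst_blk", fanout=["safety_monitor"]),
    "safety_monitor": Reg("safety_monitor", "rst_por"),
    "accel_data": Reg("accel_data", "rst_blk", fanout=["accel_fifo"]),
    "accel_fifo": Reg("accel_fifo", "rst_por", rdc_protected=True),  # isolated during block reset
}

for src, dst in find_rdc_violations(netlist):
    print(f"RDC violation: {src} ({netlist[src].reset_domain}) -> {dst} ({netlist[dst].reset_domain})")
```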

Synopsys recently released a white paper on VC SpyGlass RDC with their views on the origins of potential RDC problems and on how to check for them. They particularly mention the ability of VC SpyGlass RDC to do this analysis together with UPF, which I would think would be a must-have to ensure an RDC-clean design in virtually any modern SoC. VC SpyGlass RDC also natively reads design SDC, another must-have in ensuring clock and other definitions are accurate. All of this works with Verdi, as do other functions in VC SpyGlass, so you can debug RDC problems in a very familiar environment.

You can learn more about VC SpyGlass RDC HERE.

Also Read

What’s New in CDC Analysis?

SpyGlass Gets its VC

Prevent and Eliminate IR Drop and Power Integrity Issues Using RedHawk Analysis Fusion


The Uncertain Phase Shifts of EUV Masks
by Fred Chen on 05-13-2020 at 10:00 am

EUV (Extreme UltraViolet) lithography has received attention within the semiconductor industry since its development began in 1997 with the formation of the EUV LLC [1], and more recently since the start of the 7nm node, where its limited use by Samsung and TSMC has been touted as a key advantage [2, 3]. As with any critical technology, the devil is in the details.

While much has been written about the stochastic aspects of EUV [4-8], which are troublesome in terms of defects, or the infrastructure aspects, including pellicles [9] and hydrogen cleaning [10], the imaging aspects have often been taken for granted. Less frequently, the details of image formation in the resist are covered. The EUV-generated image is actually produced by chemical reactions triggered by electrons released by the EUV radiation [11, 12]; these electrons will have traveled a random number of nanometers before finally fixing the reaction location. However, even the optical image projection is a key departure from mainstream DUV (Deep UltraViolet) systems, which have been in use for over two decades.

EUV is absorbed by all materials, so it is not feasible to make lenses for EUV. One mm of glass or even a thin silicon wafer would absorb all the EUV light, for example. So instead, EUV light can only be reflected using multilayers, and even here it is partially absorbed, roughly 30% per multilayer mirror. A fundamental consequence of relying on reflections for imaging is that the light path and its axis must be folded to avoid obstructions. The circuit layer pattern is projected from a mask onto a wafer. On the mask, the features are 4x larger than the target size on the wafer, and the mask is illuminated with a range of angles centered around 6 degrees with respect to the normal to the surface (Figure 1).

The amount of light reflected from the mask surface is itself a function of the angle as well as the wavelength [13]. EUV light is commonly represented as a 13.5 nm wavelength, but it is actually a wide band of wavelengths. A longer wavelength has a higher reflectance at smaller angles with respect to the normal, while a shorter wavelength has a higher reflectance at angles further away from the normal [13, 14]. Even though the illumination distribution is centered, or balanced, about the optical axis (which is 6 degrees with respect to the normal), the angular dependence means that, for a given wavelength, most of the light will be reflected toward one side. This produces a deviation from telecentricity, so that the printed pattern will tend to shift when out of focus.

Figure 1. Illumination configuration for the EUV mask. The circuit layer pattern is defined in an absorber. When the absorber is laid out on a fixed pitch, as in a grating, the light is reflected along specific directions, each labeled as a diffraction order. '0' marks the direction for a blank pattern, i.e., pure specular reflection.

What is not so commonly mentioned, though, is that the EUV mask itself contributes additional anomalies to the imaging. It is a combination of two effects [15]. The EUV mask can be modeled as a patterned layer covering the reflective multilayer, consisting of “bright” and “dark” areas. The “bright” areas have the multilayer surface exposed and available for reflection, while the “dark” areas are covered by an absorber consisting (at least in part) of tantalum. The “dark” areas on the EUV mask still reflect light, since the (~60 nm thick) absorber still transmits light through to the multilayer underneath. After reflecting back, ~3% of the light is allowed to proceed to the rest of the optical system. This light, however, is shifted in phase (~150 degrees) with respect to the light that did not pass through any absorber. So instead of a black-and-white pattern, it’s more like an oil pattern on glass. It should also be borne in mind that this “dark” phase shift is also inversely proportional to the wavelength.

Figure 2 shows the impact of the phase shift on the image position. The 1:1 line-space image is assumed to be defined by just the 0th and 1st orders. For a 180 degree phase shift, there is no change of position, but for phase shifts that depart from 180 degrees, toward 90 degrees, the image begins to shift. Beyond 90 degrees, it starts shifting back, until it reaches the original position at 0 degrees.

Figure 2. The phase of the 3% dark space affects the CD as well as the position of the bright dense line image. The shift is zero for 0 and 180 degrees and is maximized at 90 degrees. For a 20 nm line on a 40 nm pitch, the shift can be in the 1-2 nm range. Going from 180 degrees to 0, the image peak intensity and width also grow.
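For the curious, the trend in Figure 2 can be reproduced with a crude two-beam interference model using only the numbers quoted above (3% absorber transmission at a given phase, 1:1 lines on a 40 nm pitch, 0th and 1st orders only). This is a back-of-the-envelope sketch of my own, not a rigorous mask-3D simulation.

```python
# Two-beam (0th + 1st order) image of 1:1 lines from an EUV mask whose "dark" areas
# still transmit ~3% intensity at a phase shift phi relative to the "bright" areas.
# Back-of-the-envelope sketch of the Figure 2 trend; a real calculation would include
# more orders, 3D mask effects, defocus, and the illumination spectrum.
import numpy as np

PITCH_NM = 40.0        # 20 nm lines and spaces
DARK_INTENSITY = 0.03  # intensity transmitted through the absorber


def peak_shift_nm(phase_deg: float) -> float:
    """Shift of the bright-line peak relative to the 0/180-degree (no-shift) cases."""
    t = np.sqrt(DARK_INTENSITY) * np.exp(1j * np.radians(phase_deg))
    a0 = 0.5 * (1 + t)    # 0th-order (average) amplitude of the mask pattern
    a1 = (1 - t) / np.pi  # 1st-order amplitude for a 1:1 duty cycle
    # With only these two beams, I(x) = |a0 + a1*exp(i*2*pi*x/pitch)|^2 peaks where
    # the linear phase term cancels the phase difference between a0 and a1.
    return PITCH_NM * (np.angle(a0) - np.angle(a1)) / (2 * np.pi)


for phi in (180, 150, 120, 90, 45, 0):
    print(f"absorber phase {phi:3d} deg -> peak shift {peak_shift_nm(phi):+.2f} nm")
# The shift vanishes at 0 and 180 degrees, peaks around 90 degrees (~2 nm here),
# and is roughly 1 nm at the ~150-degree phase of a conventional Ta-based absorber.
```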

The phase shift effect cannot be sufficiently compensated by optical proximity correction (OPC). With the 150 degree phase shift and 3% reflected through absorber, 20 nm linewidths on 40 nm and 80 nm pitches are compared in Figure 3, along with OPC by sizing alone as well as with sizing + subresolution assist features (SRAFs) [16].

Figure 3. With unbalanced illumination, a 20 nm dense line image (40 nm pitch) is shifted from its presumed position by 2-3 nm with the dark space transmitting 3% at a phase of 150 degrees. At 2X pitch, the shift is even larger, and not even OPC can correct it to match the dense line case. The vertical dashed lines mark the peak centers for the 40 nm pitch and 80 nm pitch (sizing + assist features) cases.

The 80 nm pitch is formed with the 0th, 1st, and 2nd diffraction orders. While the 0th and 2nd orders can correlate with the 0th and 1st orders of the 40 nm pitch, the 1st order of the 80 nm pitch will always present an unremovable difference between the two pitches, so the optimization cannot be completed, even with the use of subresolution assist features. Moreover, when the phase is not 180 or 0 degrees, sizing the feature cannot improve the contrast, i.e., the sharpness of the edge slope. While the assist feature helps mitigate the shift to some degree, it does so by suppressing the 1st order, which also reduces the peak intensity and worsens the contrast. In practice, the imaging situation would not be as dire as in Figure 3, because there is partial balancing around the optical axis, just not complete balance.

On the other hand, the image shift gets worse when defocus is considered, and in fact, the best focus positions for different pitches can span a wide range [15, 17, 18]. Moreover, for the different wavelengths from the EUV source, each wavelength has a different phase shift independent of the others. As a result, overlay is a concern, when the spec is on the order of 1.5 nm [19]. ASML has also alluded to this vulnerability of EUV [20].

The underlying phase shift issue is fundamentally part of the EUV mask itself, so the solution requires a redefinition of the EUV mask. More specifically, the phase shift could be made zero by picking an absorber whose index of refraction has a real component of 1, while at the same time maintaining a sufficiently high absorption coefficient [15, 21, 22]. The most promising candidates have been based on nickel, including nickel-aluminum alloy or nickel nanoparticles embedded in tantalum nitride [21-23], although these materials have been hard to process [23]. Other candidates considered include telluride-based materials, which are less stable chemically, and ruthenium-based materials, which deviate more from the desired optical properties [21, 24]. Challenges linger in being able to pattern these fairly exotic materials [24].

References

[1] https://www.latimes.com/archives/la-xpm-1997-sep-11-fi-31072-story.html

[2] https://news.samsung.com/global/samsung-electronics-starts-production-of-euv-based-7nm-lpp-process

[3] https://www.anandtech.com/show/13445/tsmc-first-7nm-euv-chips-taped-out-5nm-risk-in-q2

[4] P. De Bisschop and E. Hendrickx, “Stochastic Effects in EUV Lithography,” Proc. of SPIE 10583, 105831K (2018).

[5] M. Neisser et al., “Understanding EUV Shot Noise: Comparing Theory and Requirements to Experimental Evidence,” J. Photopoly. Sci. Tech. 26, 617 (2013).

[6] https://www.linkedin.com/pulse/euvs-stochastic-valley-death-frederick-chen/

[7] https://www.linkedin.com/pulse/stochastic-variation-euv-source-illumination-frederick-chen/

[8] https://www.linkedin.com/pulse/photon-shot-noise-impact-line-end-placement-frederick-chen

[9] O. Romanets et al., “Progress in imaging performance with EUV pellicles.” Proc SPIE 11177, 111770Z (2019).

[10] https://www.linde-gas.com/en/images/SST%20-%20March%202018%20-%20EUV%20Lithography%20Adds%20to%20Increasing%20Hydrogen%20Demand%20at%20Leading-edge%20Fabs_tcm17-477308.pdf

[11] H. Fukuda, “Localized and cascading secondary electron generation as causes of stochastic defects in extreme ultraviolet projection lithography,” J. Micro/Nanolith. MEMS MOEMS 18(1), 013503 (2019).

[12] I. Bespalov et al., “Key Role of Very Low Energy Electrons in Tin-Based Molecular Resists for Extreme Ultraviolet Nanolithography,” ACS Appl. Mater. Interfaces 12, 9881 (2020).

[13] https://www.linkedin.com/pulse/very-different-wavelengths-euv-lithography-frederick-chen

[14] N. Davydova et al., “EUVL mask performance and optimization,” Proc. SPIE 8352, 835208 (2012).

[15] M. Burkhardt and A. Raghunathan, “Best focus shift mechanism for thick masks,” Proc. SPIE 9422, 94220X (2015).

[16] J. G. Garofalo et al., “Automated layout of mask assist-features for realizing 0.5 k1 ASIC lithography,” Proc. SPIE 2440, 302 (1995).

[17] A. Erdmann et al., “Mask-induced best-focus shifts in deep ultraviolet and extreme ultraviolet lithography,” J. Micro/Nanolith. MEMS MOEMS vol. 15(2), 021205 (2016).

[18] A. Erdmann et al., “Characterization and mitigation of 3D mask effects in extreme ultraviolet lithography,” Adv. Opt. Tech. vol. 6(3-4), 187 (2017).

[19] https://www.asml.com/en/products/euv-lithography-systems/twinscan-nxe3400c

[20] https://www.fool.com/earnings/call-transcripts/2020/04/15/asml-holding-nv-asml-q1-2020-earnings-call-transcr.aspx

[21] A. Erdmann et al., “Attenuated PSM for EUV: Can they mitigate 3D mask effects?,” Proc. SPIE 10583, 1058312 (2018).

[22] V. Luong et al., “Ni-Al Alloys as Alternative EUV Mask Absorber,” Appl. Sci. 8, 521 (2018).

[23] D. Hay et al.,”Thin Absorber EUV Photomask Based on Mixed Ni and TaN Material,” Proc. SPIE 9984, 99840G (2016).

[24] V. Philipsen et al., “Novel EUV mask absorber evaluation in support of next-generation EUV imaging,” BACUS News, October 2019.


SEMI Takes the Jim Hogan and Simon Butler Conversation Virtual
by Mike Gianfagna on 05-13-2020 at 10:00 am

As I originally reported a few weeks ago, the Jim Hogan fireside chat with Methodics’ CEO and founder Simon Butler was moved to a virtual event on May 1. The event was produced by the Electronic System Design (ESD) Alliance, a SEMI Strategic Technology Community, and moderated by Bob Smith, the ESD Alliance’s executive director. I am happy to say that the magic of a Jim Hogan fireside chat translates quite well to a virtual setting. The event was full of good information about the ESD Alliance, the story of Methodics, and a brief but compelling history of the chip design universe that got us here.

If you missed the event, don’t despair.  A replay link is coming, but first a little bit about what was discussed.

Bob Smith kicked it off with some background about the ESD Alliance. This organization supports the $10B+ design ecosystem that, in turn, supports the $2T+ electronics industry. Those numbers are not a misprint; a small industry can have a major, global impact. I’ll provide one great slide Bob used that shows all the places that the ESD Alliance can help – the green circles, below.

Bob then passed the floor to Jim and Simon. Thanks to the magic of cloud-based video conferencing, we were treated to a live interaction between these gentlemen as Jim skillfully maneuvered through the events in chip design that brought Simon and Methodics to the place they are today.

The story began with Simon designing DSPs for Fujitsu in the UK and ended with Methodics creating the new category of IP Lifecycle Management. I won’t take you through the whole story – you really need to see it for yourself. There are a few people mentioned along the way who are genuine anchor tenants of the story. I’ll mention just three of them here…

After a few years in the UK, Simon came to the US and joined a company called High Level Design Systems (HLDS). That company was subsequently acquired by Cadence in the mid-1990s, while Jim Hogan was working there. A fellow named Charlie Janac recruited Simon to HLDS and Jim to Cadence. Anyone who follows EDA or IP will have run across Charlie’s name more than once for sure. He’s the first anchor tenant of the story.

After a short stint as a core comp AE at Cadence, Simon joined a company working on MIPS microprocessor cores (SandCraft). After that, Simon started a consulting business that integrated tools into the Cadence flow. A lot of that work was based on the SKILL programming language, one that’s still around today. SKILL should probably be an anchor tenant as well.

One of the projects for Simon’s growing consulting business was the development of a design data management capability for NetLogic Microsystems.  Along the way, Simon realized the “service ware” he was building for NetLogic could actually be productized and sold to a broader customer base. I’ll inject a bit of my own history here. The foundation of one of my prior employers, Atrenta, followed this same path. SpyGlass was originally a custom service product that ultimately became a household name in EDA.

Productizing a service offering only works if the customer buying the service cooperates, however. Here is where we meet the second anchor tenant of the story, Dimitrios Dimitrelis, the VP of engineering at NetLogic. Dimitrios had the foresight to allow Simon to spin out and productize the design data management capability that he built for NetLogic. And so, the foundation of Methodics was born.

After Methodics had released the initial version of Percipient (known as “ProjectIC” at the time) and had deployed it at various customers, it became apparent that a new architecture would be needed to handle the scale of the larger enterprise customers that stood to benefit most from this new breed of platform, IP Lifecycle Management (IPLM). At this point, Peter Theunis was hired from Yahoo. Peter is the third anchor tenant of the story. As the new CTO, Peter’s background in systems scalability helped move the Percipient platform to the next level.

This is clearly not the end of the story, but I’ll stop here. There’s a lot more history, insight and strategy to be learned about Methodics and how they created the IPLM category. When you view the replay, you’ll meet more anchor tenants and also experience a spirited live video Q&A with Jim and Simon. Access the webinar replay here to get the whole story. And if you’d like to learn more about membership in the Electronic System Design Alliance, you can reach out to Bob Smith at bsmith@semi.org.

Also Read:

Project-centric Design Process, or IP-centric

UPDATE: Everybody Loves a Winner

Avoiding Fines for Semiconductor IP Leakage


Cadence – Redefining EDA Through Computational Software
by Mike Gianfagna on 05-12-2020 at 10:00 am

Based on what I’m seeing, I believe Cadence is looking at the world a bit differently these days. I first reported on their approach to machine learning for EDA in March, and then there was their white paper about Intelligent System Design in April. It’s now May, and Cadence is shaking things up again with a new white paper entitled simply Computational Software. You can get your copy of this new Cadence white paper here.

This new perspective from Cadence looks at EDA in a different way. Rather than tools and flows, it examines algorithmic complexity from a system design perspective. The subject of computational software and how to optimize it isn’t new. Anyone familiar with resolution enhancement and mask making will know what I mean. This field is called computational lithography, and early work began in the 1980s. The problem is simple to state—how do you accurately print a ~7nm feature with a light source whose wavelength is gargantuan by comparison (193 nanometers)?

Doing this isn’t easy. One needs to predict the printing distortion and then pre-distort the shape, so it comes out looking like you intended it to. The computation associated with this kind of thing explodes very quickly. Extreme UV lithography (light source wavelength = 13.5 nm) has tamed the problem some but has created a series of new challenges. This is a much longer discussion—I’ll stop here. You get the idea of what computational software is.
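As a toy illustration of the predict-and-pre-distort idea (a deliberately crude 1-D sketch of my own, not an actual computational lithography algorithm), model printing as a Gaussian blur plus a resist threshold, then bias the mask until the printed result matches the intent:

```python
# Toy 1-D "computational lithography" sketch: model printing as a Gaussian blur plus a
# resist threshold, then pre-distort (bias) the mask edges until the printed line comes
# out at the intended width. Real OPC/ILT uses rigorous optical and resist models; this
# only illustrates the predict-and-pre-distort idea.
import numpy as np

GRID = 200                # 1-D layout grid
FEATURE = slice(96, 102)  # drawn feature: 6 grid units wide
SIGMA = 4.0               # blur comparable to the feature size


def print_image(mask: np.ndarray) -> np.ndarray:
    """Crude imaging model: Gaussian blur followed by a resist threshold at 0.5."""
    x = np.arange(-3 * int(SIGMA), 3 * int(SIGMA) + 1)
    kernel = np.exp(-0.5 * (x / SIGMA) ** 2)
    kernel /= kernel.sum()
    aerial = np.convolve(mask, kernel, mode="same")
    return (aerial > 0.5).astype(float)


target = np.zeros(GRID)
target[FEATURE] = 1.0
target_width = int(target.sum())

drawn_width = int(print_image(target).sum())  # what prints if the mask equals the layout

# Pre-distortion: add a partial-transmission bias just outside each edge and increase it
# until the predicted printed width matches the drawn target width.
bias = 0.0
mask = target.copy()
while int(print_image(mask).sum()) < target_width and bias < 1.0:
    bias += 0.05
    mask = target.copy()
    mask[FEATURE.start - 1] = bias  # widen the left edge a little
    mask[FEATURE.stop] = bias       # widen the right edge a little

print(f"target width {target_width}, prints as {drawn_width} without correction,")
print(f"as {int(print_image(mask).sum())} with an edge bias of {bias:.2f}")
```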

Back to the Cadence white paper. The perspective offered provides a refreshing look at chip design, one that looks beyond the chip to the system it is part of. Cadence points out that many system companies now design the entire stack for their product—chips, packages, PCBs and software. Getting all this right requires a holistic approach to analysis and optimization across all these design disciplines. What is happening is a convergence of traditional EDA (e.g., IC, package, PCB) with system design considerations (e.g., software algorithms and their many and sometimes subtle interactions with the physical hardware). Artificial intelligence and machine learning are part of this as well to deal with exploding data volumes and analysis requirements.

This is the backdrop for the new Cadence white paper and a view of what EDA will look like going forward. The white paper examines the details of several representative computational problems. Algorithmic optimization, acceleration through massive hardware deployment and taming complexity through abstraction are all discussed.

You should definitely download this white paper and take a look for yourself.

To further whet your appetite, I’ll leave you with three key innovations that make this era of computational software different according to Cadence:

  • Integration and co-mingling of previously independent design, analysis, and implementation to achieve optimal results,
  • Partitioning and scaling of computation to thousands of CPU cores and servers, and
  • The introduction of machine learning to improve and harness design heuristics for system optimization.

The piece ends with the statement that Cadence is a computational software company, and that’s a fresh look at EDA.


3 Steps to a Security Plan
by Bernard Murphy on 05-12-2020 at 12:00 am

Assessing the security of a hardware design sometimes seems like a combination of the guy looking under a streetlight for his car keys, because that’s where the light is (We have this tool, let’s see what problems it can find) and a whack-a-mole response to the latest publicized vulnerabilities (Cache timing side channels? What do we have to mitigate that?). Well-intentioned, but at the end of it all, you’re left feeling that while you know what attacks you have defended against, you have no idea whether that represents the majority of likely attacks on your design, or the most important attacks, or just a small sampling of what could be possible.

The problem with most approaches to security analysis is that they’re bottom-up; you start with a list of possible attacks. But when you start with the details of an attack, you lose sight of the coverage that a comprehensive security plan should be giving you. You also lose sight of the relative importance and risks associated with an attack and the balance of investment you want to put into defenses. You want a basis to assess security overall and be able to trade off staff, investment, schedule, and risk to find the best balance you can.

The recent MITRE Common Weakness Enumeration should help, but that’s a list of weaknesses, not a methodology to identify the threats most relevant and the mitigations most applicable to your specific design. A better starting point is to take inspiration from the Security Development Lifecycle (SDL) approach developed by Microsoft, initially in 2004, to build security into applications and services from the ground up. Originally targeting the software development lifecycle, this approach has been adapted for hardware by Tortuga Logic and others in the chip industry. I talked to Nicole Fern (a Senior Hardware Security Engineer at Tortuga Logic) to understand the approach and the unique aspects of hardware which must be considered. It covers 3 top-down considerations: an asset inventory, threat modeling and lifecycle analysis.

1.  Asset Inventory

At the root of any SDL is the concept of what assets might be attacked. What should you consider? You might assume just an encryption key or two, maybe a subscriber ID, and that should be about it, right? Wrong. The keys are certainly important, but hardware attacks are getting more sophisticated; internal state is also important. In cryptography, random numbers and state in crypto algorithms are potential assets. Configuration and control register states are assets, for example memory protection region registers, watchdog timers, even program counters. Weights for machine learning, access control settings for the bus fabric and the bitstream for FPGA programming are all assets. Software for trusted execution, device drivers and the first stage boot loader – all assets. And of course, user data can be an important asset.

The key is that this process starts through the lens of the assets specific to your design and end target use case.  Assets are information in the design that you must protect.

2. Threat modeling

Threat modeling is a top-down approach to how those assets might be attacked. This is a process of identifying what you must protect, what threats might be possible, what the consequences of a successful attack would be and what resources the attacker might have.  Threats cover what is ironically known as the CIA triad – confidentiality (the information should not be leaked), integrity (the information should not be modified) and availability (the system remains responsive even when under attack). Consequences depend ultimately on the application but here can be bounded to disclosure of sensitive data, privilege escalation, data tampering, spoofing and so on. Attacker capabilities you may consider are remote attacks or physical access to the device. Finally, you will want some assessment of the likelihood and cost of a successful attack, against which you will be able to weigh the cost of protecting that asset.

3. Lifecycle Analysis

Why is this a lifecycle analysis? Because where and how assets are created, and therefore also where and how they can be attacked, is a lifecycle question. Is it hardwired into the chip logic, or generated by software and if so at runtime, or is it baked into the firmware? Or is it coded into the device by the maker during provisioning, perhaps over-the-air?

Can the asset be changed? Is it stored in volatile or non-volatile memory? Is the value transferred during execution? When should it be zeroized or destroyed? Should it persist across reset cycles or across context switches to different privilege levels? Is it externally accessible, even through highly privileged paths, after manufacturing?
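To make these three steps a little more concrete, here is a minimal, purely hypothetical sketch of how an asset inventory with threat-model and lifecycle attributes might be captured as data; the fields, example assets, and risk scoring are my own illustration, not Tortuga Logic's methodology.

```python
# Hypothetical sketch of an asset inventory with threat-model and lifecycle attributes,
# plus a crude risk score to help prioritize mitigations. The fields, example assets,
# and weights are illustrative only, not a product methodology.
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    cia: tuple        # which of (confidentiality, integrity, availability) matter
    created_at: str   # lifecycle: "hardwired", "provisioning", "boot", "runtime"
    storage: str      # "fuse", "volatile", "non-volatile"
    attacker: str     # "remote" or "physical" access assumed
    impact: int       # 1 (low) .. 5 (catastrophic) consequence of compromise
    likelihood: int   # 1 (hard to reach) .. 5 (easily reachable)

    def risk(self) -> int:
        return self.impact * self.likelihood


inventory = [
    Asset("AES key", ("confidentiality",), "provisioning", "fuse", "physical", 5, 2),
    Asset("First-stage boot loader", ("integrity",), "hardwired", "non-volatile", "remote", 5, 3),
    Asset("Memory protection registers", ("integrity", "availability"), "boot", "volatile", "remote", 4, 3),
    Asset("ML model weights", ("confidentiality", "integrity"), "runtime", "non-volatile", "remote", 3, 3),
]

# Rank assets by risk so the security plan spends effort where it matters most.
for asset in sorted(inventory, key=lambda a: a.risk(), reverse=True):
    print(f"{asset.risk():2d}  {asset.name:30s} created at {asset.created_at}, "
          f"protect: {', '.join(asset.cia)}")
```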

Implementing the Plan

Answering these questions in a comprehensive analysis will help you build a systematic SDL for hardware. Think of it like a top-level functional coverage plan. Based on the security requirements produced from threat modeling, you can then start to build tests to check your actual coverage, and mitigation techniques per that plan. Security is a game of risk versus spend. A systematic security plan requires resources to implement but in the end is more efficient and effective at mitigating risk than ad-hoc approaches. Of course, it would be nice if you could code your plan into executable threat models, from which you can then accumulate threat model coverage assessment during the course of your normal functional testing. For that you should talk to Tortuga Logic.