
Podcast EP32: Improving the Security of Hardware Designs

by Daniel Nenni on 08-06-2021 at 10:00 am

Dan is joined by Dr. Alric Altoff, senior hardware security engineer at Tortuga Logic. The security risks associated with speculative execution are discussed along with scalable methods to address these risks.

Dr. Alric Altoff has over 15 years of experience in hardware security, hardware/software co-design, and applied statistics. At Tortuga Logic, Alric works with customers to improve the security of their hardware designs. His recent research includes algorithmic techniques, engineering methods, and business processes to improve and quantify hardware security assurance. Prior to joining Tortuga Logic, he was a senior scientist in Leidos’ electronic warfare division.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Extreme Optics Innovation with Ansys SPEOS, Powered by NVIDIA GPUs

by Mathieu Reigneau on 08-05-2021 at 6:00 am


Optical engineers rely on Ansys SPEOS to deliver extreme product innovation — and NVIDIA dramatically accelerates the development cycle

Product autonomy, artificial intelligence, the Internet of Things (IoT) and other challenging trends are placing new demands on optical engineers in the automotive, aerospace and general lighting segments. Driverless cars, remotely controlled drones, heads-up displays and other futuristic product designs rely heavily on optics. In general lighting applications, optics engineers are focusing on new applications for LED, increased levels of optical energy efficiency, better lighting appearance and smart functionality.

As they work to develop next-generation sensors, controls and other innovations that meet urgent market needs, optical engineers can’t afford to invest time in extensive physical testing and prototypes. Instead, they rely on industry-leading SPEOS simulation software from Ansys, which allows them to design lighting solutions and verify their performance in a low-cost, risk-free virtual design space. Via SPEOS, optical engineers ask what-if questions and test their out-of-the-box ideas without the time and costs incurred in physical prototype builds.

With SPEOS, optics product developers can customize their designs for critical performance parameters such as visibility, legibility, reflection and light propagation at an early stage, saving time and costs. SPEOS offers Live Preview, a unique feature that depicts the proposed lighting design under real-world conditions. Optical engineers easily and iteratively adjust the design in this simulated environment until it meets their expectations.

Ansys and NVIDIA: Combined Industry Leadership

SPEOS is the gold standard for optical simulation, delivering incredibly accurate results ― but these simulations involve large and complex numerical computations. Optical phenomena such as glare, reflection and fog are physically and mathematically complicated, making them traditionally time-consuming to replicate and solve. Optical simulations also rely on very detailed graphics that accurately capture the unique physical properties of light.

Through a collaboration with NVIDIA, Ansys SPEOS users can now achieve a 2x to 3x acceleration in their optical simulation run times, compared to previous generations of GPUs. Using NVIDIA Ampere architecture GPUs, optical engineers can access the processing speed and power of a high-performance computing (HPC) cluster, right on their desktops. Complex optics problems that used to take weeks to solve can now be solved in mere hours or even minutes.

Because NVIDIA RTX technology is extremely compatible with the requirements of simulation — for example, by supporting accurate ray tracing and parallel processing ― Ansys SPEOS users have access to new computing resources and methods that greatly surpass their traditional ways of working. They can meet customer demands for extreme development speed and optics innovation, while also maintaining a high degree of confidence in their designs. They protect profit margins by working more efficiently, amplifying human resources, and minimizing the need for physical tests and prototypes.

NVIDIA provides advanced tools to maximize the potential of its GPUs. Adapting SPEOS Live Preview to NVIDIA OptiX 7.1 libraries has significantly boosted simulation performance, without compromising accuracy, while also future-proofing the solution so that it remains fully optimized on the new generation of NVIDIA Ampere GPUs. Every bit of silicon is used, and the new NVIDIA RTX A6000 delivers twice the simulation speed of its predecessor.

A Case in Point: Solving Complex Problems Quickly

As one example, SPEOS and NVIDIA technology make it possible for optical engineers to quickly and accurately solve highly complex problems related to designing automotive interiors. As car interiors include more optics components, engineers must carefully balance comfort and aesthetics with uncompromising vehicle safety, while also considering cost control.

Communication and infotainment optics, including advanced heads-up displays, must be readable in all lighting situations, whether daytime or nighttime. Ghosting, reflection and other optical phenomena must be understood and managed. Modern “mood-lighting” systems that distinguish today’s luxury cars must be simultaneously designed for perceived quality, aesthetics, safety and mass production costs. The engineering challenge doesn’t end with the optics themselves; for instance, engineers must also consider that certain dashboard materials might cause a veiling reflection in the interior of the windshield, interfering with the driver’s sightlines.

All of these aspects require a true-to-life simulation that covers energy propagation, spectral light definition and polarization, so that engineers can confidently validate their design choices. SPEOS is designed to manage all these complex design considerations with just a single click — and NVIDIA delivers the solution in seconds, allowing engineers to leverage the full power of SPEOS to explore new optical solutions and experiment with innovative designs.

Illuminating the Possibilities

Today, optical engineers are working on truly groundbreaking innovations with the potential to impact millions of people in their everyday lives. Ansys SPEOS is critical to making these innovations possible. For example, optical simulations via SPEOS are ensuring that new intelligent automotive headlamps autonomously change their lighting properties to maximize safety under every weather condition, and that sensors on autonomous drones are properly interpreting light phenomena such as glare.

Benchmark Result Extract

By enabling a fast, inexpensive and accurate live preview of real-world performance, SPEOS is bringing these and other advanced optics solutions to life by correctly predicting their performance at an early stage.

As Ansys continues to advance its state-of-the-art capabilities, computing demands grow larger and larger. Fortunately, NVIDIA’s processing speed and power, along with its support for graphical accuracy, help Ansys users work quickly and confidently as they re-imagine many optical solutions and their applications. Together, Ansys and NVIDIA are illuminating incredible possibilities for the world’s optics development teams.

Also Read

Ansys Multiphysics Platform

There’s No Such Thing as Ground (But Perhaps There’s a Bob) Minimize Your Ports

Bouncing off the Walls – How Real-Time Radar is Accelerating the Development of Autonomous Vehicles


Stochastic Effects from Photon Distribution Entropy in High-k1 EUV Lithography

by Fred Chen on 08-04-2021 at 10:00 am


Recent advances in EUV lithography have largely focused on “low-k1” imaging, i.e., features with pitches less than the wavelength divided by the numerical aperture (k1<0.5). With a nominal wavelength of 13.5 nm and a numerical aperture of 0.33, this means sub-40 nm pitches. It is naturally expected that larger pitches would be trivially easier to image. However, a closer look shows the situation to be very far from trivial.

Figure 1. (Left) pupil map for 56 nm x 98 nm unit cell. (Right) pupil map for 40 nm x 70 nm cell. Wavelength=13.5 nm, NA=0.33. Each color represents a different interference pattern from a different combination of diffracted plane waves from the mask pattern.

Figure 1 indicates that an image with a larger pitch (56 nm x, 98 nm y) would consist of a larger combination of diffraction orders than that of the smaller pitch (40 nm x, 70 nm y). The entropy of this combination (proportional to the natural logarithm of the number of possible combinations) is higher. The lower k1 (smaller pitch) case on the right comprises interference patterns of two or three diffracted plane waves at most, with much lower entropy. On the other hand, the higher k1 case comprises interference patterns of no less than four diffracted plane waves.
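As a rough illustration of this entropy comparison, one can count the diffraction orders that land inside the pupil for each unit cell and take the natural logarithm of the count, per the definition above. The sketch below is illustrative only and assumes simple on-axis illumination; the off-axis source points shown in Figure 1 shift which orders are captured, so the counts are indicative rather than exact.

```python
import math

def orders_in_pupil(px_nm, py_nm, wavelength_nm=13.5, na=0.33):
    """Count diffraction orders (m, n) whose pupil-plane position
    (m*lambda/px, n*lambda/py) lies inside the NA circle.
    On-axis illumination is assumed, which is a simplification."""
    fx = wavelength_nm / px_nm   # x-order spacing in pupil coordinates
    fy = wavelength_nm / py_nm   # y-order spacing
    count = 0
    m_max = int(na / fx) + 1
    n_max = int(na / fy) + 1
    for m in range(-m_max, m_max + 1):
        for n in range(-n_max, n_max + 1):
            if math.hypot(m * fx, n * fy) <= na:
                count += 1
    return count

for px, py in [(56, 98), (40, 70)]:
    n = orders_in_pupil(px, py)
    print(f"{px} x {py} nm cell: {n} captured orders, ln(N) = {math.log(n):.2f}")
```

Even in this crude model, the larger 56 nm x 98 nm cell captures several times more orders than the 40 nm x 70 nm cell, consistent with the higher-entropy behavior described above.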

Figure 2. Different illumination source points produce vastly different images for the higher k1 case. The hexapole actually produces patterns which differ slightly aside from orientation.

Figure 2 shows that different illuminations can produce very different images, not resembling the pattern on the mask or the target pattern. A combination of such images, resulting from a combination of illuminations, can produce something closer to the target, a hexagonal array of circular spots (Figure 3). However, the dose applied to the photoresist layer is necessarily divided among the different component images, leading to relatively larger noise for each component, due to the splitting of the photon number among the different components. Fewer photons per component image leads to enhanced Poisson shot noise [1].
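The shot-noise penalty from dose splitting follows directly from Poisson statistics: the relative noise of a photon count N is 1/sqrt(N), so dividing the dose among k component images raises each component's relative noise by sqrt(k). A quick numerical check (the photon counts here are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

total_photons = 4000          # hypothetical photons delivered to one pixel
for n_components in (1, 4):   # e.g. dose split among 4 illumination poles
    per_component = total_photons / n_components
    # Draw many Poisson realizations of the per-component count.
    samples = rng.poisson(per_component, size=100_000)
    rel_noise = samples.std() / samples.mean()
    print(f"{n_components} component(s): relative noise = {rel_noise:.4f}, "
          f"theory 1/sqrt(N) = {1 / np.sqrt(per_component):.4f}")
```

Splitting the same dose four ways doubles the relative noise of each component image, which is the mechanism behind the enhanced stochastic effects discussed here.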

Figure 3. The combination of illumination sources in Figure 2 (left) is necessary to produce an image close to the target (right).

The enhanced noise is evident in a number of ways. First, it appears in the randomly varying areas of the ~300 nm^2 (4-5%) and ~200 nm^2 (>9%) contacts printed within the 56 nm x 98 nm cell (Figure 4).

Figure 4. Stochastic trends of contact area for ~300 nm^2 contacts (left) and ~200 nm^2 contacts (right) for two different degrees of smoothing (3 x 3 pixel average and 5 x 5 pixel average). A nominal photon density of 40/nm^2 is assumed.

While edge photon counts of 80 per 1.4 nm pixel allow some containment of the shot noise, the smaller contact suffers more significant impact from the relative noise, as expected.

Noise also appears as a 1.4 nm (one pixel) swing in X-position/Y-position error (Figure 5) as well as a 1-2 pixel error in the Y CD, due to the smoother gradient for the larger Y-pitch (Figure 6). Note that the pixel discretization of the represented feature edge prevents accurate consideration of these CDs as being the “diameter.”

Figure 5. Stochastic trend of X/Y overlay for the 300 nm^2 contact (5×5 pixel averaging used).

Figure 6. Stochastic trend of X/Y CD for the 300 nm^2 contact (5×5 pixel averaging used).

Thus, even for larger k1 pitches, it is necessary to be choosy about the illumination, including stochastic consequences of photon division due to the higher entropy. This is highly relevant for EUV lithography utilizing doses comparable to the photon density used here (~59 mJ/cm2).
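The quoted dose is consistent with the assumed photon density: at 13.5 nm each EUV photon carries about 92 eV, so 40 photons/nm^2 corresponds to roughly 59 mJ/cm^2. A quick sanity check:

```python
# Convert the assumed EUV photon density to an exposure dose.
PLANCK = 6.62607015e-34      # J*s
LIGHT_SPEED = 2.99792458e8   # m/s
wavelength_m = 13.5e-9

photon_energy = PLANCK * LIGHT_SPEED / wavelength_m   # ~1.47e-17 J (~92 eV)
photons_per_cm2 = 40 * 1e14                           # 40/nm^2; 1 cm^2 = 1e14 nm^2
dose_mJ_per_cm2 = photons_per_cm2 * photon_energy * 1e3
print(f"dose = {dose_mJ_per_cm2:.1f} mJ/cm^2")        # ~58.9 mJ/cm^2
```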

Reference:

[1] https://semiwiki.com/lithography/287526-the-stochastic-impact-of-defocus-in-euv-lithography/

This article originally appeared in LinkedIn Pulse: Stochastic Effects from Photon Distribution Entropy in High-k1 EUV Lithography

Related Lithography Posts


Security Hot Again in the Venture World

by Bernard Murphy on 08-04-2021 at 6:00 am


From a lowly perspective in the hardware world, VCs can seem like magpies: overly fascinated with bright, shiny ideas of questionable value, with stratospheric valuations and quick exits that have the rest of us scratching our heads. What difference is this going to make beyond minting a few new billionaires? Why not, say, have security be hot? We could all get behind that.

Sometimes VCs do get excited about ideas we can understand, and serious technologies get serious funding. This is now happening in security ventures, per an article in the New York Times. $12.2B has been pushed into security startups so far this year, with average valuations now over $500M. Ransomware attacks are driving this heightened visibility, according to Gartner among others.

Opportunities for hardware ventures?

These new ventures are all software-based as far as I can tell. They emphasize new methods to counter attacks in the cloud and to provide stronger identity verification. Might there be opportunities here for hardware ventures to contribute?

According to a SemiEngineering roundup, Kneron, as part of a funding round, acquired Vatics for surveillance and security camera operation. Armis raised $125M for IoT security, a software-based platform tied into hardware endpoints. Again from SemiEngineering, Axiado raised $20M. Thinner pickings, but I wouldn’t doubt there will be trickle-down from the heavy-hitters in the cloud to edge applications needing to provide security support.

Where is early-stage semi funding going?

Always tricky to find definitive references but here’s one source. A lot of $$ going into mid-stage rounds for autonomy, nearly $200M for an AI hardware venture in China, then it drops off quite rapidly. Multiple ventures in EVs, sensors and batteries and a sprinkling in AI and memories. Even a used fab equipment reseller, though that apparently was a strategic investment. Judging by the success Silicon Catalyst is enjoying, I have no doubt there’s plenty of money to be had.


Upcoming Virtual Event: Designing a Time Interleaved ADC for 5G V2X Automotive Applications

by Kalar Rajendiran on 08-03-2021 at 10:00 am


Over the last decade or so, the automotive industry has been rapidly adopting and deploying innovative and revolutionary technologies in automobiles. One such revolution is the autonomous vehicle technology. While this technology is not fully mature yet, some components of this technology are. Many late model automobiles already offer Advanced Driver Assist Systems (ADAS) which enhance the safety of drivers and vehicles. ADAS use information that they gather from sensors such as radar and cameras mounted in the vehicles. After processing the information, they provide actionable guidance to the driver or take automatic action to prevent accidents.

While ADAS is a revolutionary first step in the pursuit of automobile road safety, a key missing piece for achieving fully autonomous vehicles is the lack of a communications system to interconnect vehicles and traffic system infrastructure. The industry has been working on such a vehicular communication system and has named it Vehicle to Everything (V2X). The primary purpose of a V2X system is to improve road safety, enhance road traffic efficiency and bring energy savings to automobiles.

A complete V2X communication system has four aspects to it, namely, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P) and vehicle-to-network (V2N) communications. Such a system may be called a fully connected system, with vehicles being able to communicate with other vehicles, traffic system infrastructure, pedestrians and cloud data centers. Currently, there are a couple of competing standards under consideration for implementing a V2X system. Whether it is the IEEE 802.11p or the Cellular V2X standard, the implementation will operate in the analog/mixed-signal domain.

A V2X system will rely on the speed, precision, accuracy and reliability of the integrated circuits (ICs) used to implement that system. As a result, IC designers must ensure that their designs are fail-operational, have low defect rates, and operate reliably over a long period of time.

With the above objective, Synopsys and Global Foundries are jointly sponsoring a virtual event to promote state-of-the-art analog design practices for automotive circuits. It’s a 2-day educational virtual event that will be led by two engineering professors from Wayne State University and is scheduled for Aug 30 and Aug 31, 2021.

Mohammed Ismail, Chair & Professor, Electrical and Computer Engineering, Wayne State University

Mohammad Alhawari, Assistant Professor, Electrical and Computer Engineering, Wayne State University

About the Event

The virtual event is structured as a lecture and lab program. It is not a tools training or marketing presentation. The first day will begin with introductions to 5G in automotive applications, Global Foundries’ 22nm FDSOI technology and Time-Interleaved SAR ADCs for multiband V2X applications. The professors will then lead participants through designing and verifying a state-of-the-art 5G V2X design using the 22nm FDSOI process technology.

Why Attend?

    • Learn about the design and verification of an ADC for 5G V2X (Vehicle-to-everything) application, using the Global Foundries (GF) 22 nm FDSOI technology
    • Participate in hands-on lab sessions to learn about specific challenges related to ADC design

– designing the track-and-hold and comparator circuits

– handling effects of timing skew on Time-Interleaved (TI) ADCs

– accounting for effects of post-layout parasitics as well as aging and statistical variation
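To see why timing skew matters for time-interleaved ADCs (the second lab topic above), note that a fixed sampling-time error in one sub-ADC modulates the input and produces image spurs near k*fs/M ± fin. The sketch below is a simple numpy model with hypothetical parameter values, not material from the event:

```python
import numpy as np

fs = 1.0e9        # aggregate sample rate (hypothetical)
M = 4             # number of interleaved sub-ADCs
fin = 123.0e6     # input tone frequency
n = 4096

# Ideal sample instants, then a 2 ps timing skew on one sub-ADC.
t = np.arange(n) / fs
skew = np.zeros(M)
skew[1] = 2e-12
x = np.sin(2 * np.pi * fin * (t + skew[np.arange(n) % M]))

# Windowed spectrum, normalized to the carrier.
spectrum = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(n))) + 1e-12)
spectrum -= spectrum.max()
freqs = np.fft.rfftfreq(n, 1 / fs)

# Skew creates an image spur near fs/M - fin = 127 MHz.
spur_bin = int(np.argmin(np.abs(freqs - (fs / M - fin))))
spur_level = spectrum[spur_bin - 3:spur_bin + 4].max()
print(f"interleaving spur near {freqs[spur_bin] / 1e6:.0f} MHz: "
      f"{spur_level:.1f} dBc")
```

Even a 2 ps skew leaves a clearly visible spur tens of dB above the numerical floor, which is why skew calibration is a central TI ADC design challenge.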

Who Should Attend?

This virtual experience is designed for:

    • Analog Design Engineers
    • Academic Luminaries
    • University Students

Registration Link: You can register for the virtual event here.

Also Read:

Optimize RTL and Software with Fast Power Verification Results for Billion-Gate Designs

Driving PPA Optimization Across the Cubic Space of 3D IC Silicon Stacks

Die-to-Die Connections Crucial for SOCs built with Chiplets


An FPGA-Based Solution for a Graph Neural Network (GNN) Accelerator

by Kalar Rajendiran on 08-03-2021 at 6:00 am


Earlier this year, Achronix made a product announcement about shipping the industry’s highest performance Speedster7t FPGA devices. The press release included a lot of details about the architecture and features of the device and how that family of devices is well suited to satisfy the demands of the artificial intelligence (AI) era. Emerging applications of the AI era rely on data-intensive compute capability and zero latency to make real-time decisions.

An earlier blog went into detail highlighting the many benefits of using Speedster7t FPGA devices. That blog gave some insights into how the Speedster7t family of FPGAs offers a way to solve long-standing, chronic semiconductor chip problems. It explained how the lines between the computing, communications and consumer market segments have faded, giving rise to a number of smaller market segments; how the requirements for each market segment are primarily driven by the use cases the chips are deployed for; and how the Speedster7t devices offer the best attributes of processor, ASIC, ASSP and traditional FPGA technologies.

With the markets moving toward an AI driven, edge-centric, fast-changing, data-accelerated product space with short life cycles, the stage is set for innovative efficient solutions to fill the demand. This blog covers the salient points garnered from a whitepaper that presents a Speedster7t-based solution for a Graph Neural Network (GNN) accelerator.

Machine Learning Algorithms and Data Complexity

Applications such as image classification, speech recognition and natural language processing involve operations on Euclidean data with a certain size, dimension and orderly arrangement. “Euclidean data” is data that can be modeled in n-dimensional linear space. Traditional machine learning (ML) algorithms work fine for these applications but not for many other applications that deal in non-Euclidean data such as graphs. Non-Euclidean data is complex as it contains not only the data but also the dependencies between the data elements. Social networks, protein molecular structures, and e-commerce platform customer data are examples of non-Euclidean data.

In order to handle this increase in data complexity, new graph-based machine learning algorithms, or graph neural network (GNN) models, are emerging at a fast rate from academia and industry alike.

GraphSAGE Algorithm

GraphSAGE is an algorithm proposed by Stanford University as a way to arrive at a GNN data acceleration solution. The algorithm involves three main steps. The first is sampling of adjacent nodes in a graph; to limit complexity, this step is generally limited to sampling only two layers deep. The second is aggregation of feature information from the adjacent nodes. The third is predicting the target node’s label.
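The three steps can be sketched in a few lines of numpy. This is a toy illustration only: the graph, features and weights below are made up, and a real GraphSAGE model uses learned weight matrices and nonlinearities at each layer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical graph: adjacency list plus per-node feature vectors.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
feats = rng.normal(size=(5, 8))   # 5 nodes, 8 features each

def sample_neighbors(node, k=2):
    """Step 1: sample up to k adjacent nodes."""
    nbrs = adj[node]
    return list(rng.choice(nbrs, size=min(k, len(nbrs)), replace=False))

def aggregate(node, depth=2, k=2):
    """Step 2: recursively mean-aggregate features from sampled
    neighbors, limited to two layers as described above."""
    if depth == 0:
        return feats[node]
    nbr_feats = [aggregate(n, depth - 1, k) for n in sample_neighbors(node, k)]
    return np.concatenate([feats[node], np.mean(nbr_feats, axis=0)])

# Step 3: predict the target node's label with a (random) linear layer.
h = aggregate(3)                  # depth-2 embedding of node 3
W = rng.normal(size=(h.size, 2))  # 2 hypothetical classes
label = int(np.argmax(h @ W))
print("embedding size:", h.size, "predicted class:", label)
```

Even this toy version makes the hardware implications visible: the work is dominated by irregular memory lookups (sampling) and dense matrix arithmetic (aggregation and prediction), the two operation classes contrasted below.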

Mathematical Model of GraphSAGE Algorithm (Source: http://snap.stanford.edu/graphsage)

As can be seen from the mathematical model, the algorithm involves a large number of matrix calculations and memory access operations. An x86 architecture-based implementation will be very inefficient in terms of performance and power consumption. A GPU may improve the performance per watt metric compared to a CPU implementation but the solution will still fall short on the performance level needed for real-time calculations of a graph.

A better GNN data acceleration solution is needed to execute real-time applications that operate on non-Euclidean data. The solution should support highly concurrent, real-time computing, offer huge memory capacity and bandwidth, and be scalable.

GNN Accelerator Design Challenges

Research has shed light on the characteristics of the aggregation and merge operations involved in executing the GNN algorithm; refer to the table below. It can be seen that the two types of operations have completely different requirements.

Comparison of Aggregation and Merge operations in the GNN algorithm (Source: https://arxiv.org/abs/1908.10834)

FPGA Design Scheme of GNN Accelerator

Based on the differences in requirements for performing the aggregation and merging operations, it makes sense to design two different hardware structures in the GNN core of the accelerator design to handle these respective operations.

The extensive set of features included in the Speedster7t1500 FPGA makes it easy to overcome the challenges faced in implementing GNN accelerator solutions. As indicated earlier, the Achronix Speedster7t family of high-performance FPGAs is optimized to eliminate performance bottlenecks found in solutions based on CPUs, GPUs, ASICs, ASSPs and even traditional FPGAs. For the full set of features and details of the architecture, refer to the product page at Speedster7t-fpgas. The following table gives a high-level mapping of how the Speedster7t1500 meets the GNN design challenges.

Summary

The whitepaper explains how the unique features provided by the Achronix Speedster7t AC7t1500 FPGA devices lend themselves to creating a highly scalable GNN acceleration solution that can deliver excellent performance. For all the details covered in the whitepaper, you can download it here. For more details about the Speedster7t FPGA family, go to the product page at speedster7t-fpgas.


Mobileye’s Uncut Gem

by Roger C. Lanctot on 08-02-2021 at 10:00 am


Last week, Mobileye released a 40-minute, unedited video of the company’s camera-only Supervision automated driving system in action on the chaotic streets of New York City. The video release was part of an announcement that Mobileye is expanding its autonomous vehicle testing to include New York City (where it is the only permitted AV operator in the state), with two systems, or paths to market: the camera-only solution and a radar- and lidar-enhanced offering.

Unedited Mobileye Autonomous Vehicle Ride in New York: https://www.youtube.com/watch?v=50NPqEla0CQ

Mobileye described the initiative as part of a broader effort targeting the “robotaxi” segment encompassing planned deployments in Tel Aviv (with Volkswagen), Paris, New York, Munich, Tokyo, Shanghai, and elsewhere with auto maker partners Ford, Nio Motors, and Geely, among others. Speaking at the event, Mobileye CEO Amnon Shashua made a distinction between robotaxi deployments and related system configurations and “consumer” AV systems, such as that offered by Tesla Motors.

In fact, Shashua painted a picture of Tesla as the lone auto maker offering a semi-autonomous driving system vs. all other AV developers working toward robotaxi solutions. This is the dichotomy defining the future of automotive transportation in the estimation of Mobileye. (It is a view at odds with the emerging conventional wisdom that the commercial long haul trucking industry will be first to benefit from autonomous tech.)

He further noted that the robotaxi proposition is encompassed by the consumer AV vision. This is manifest in and validated by Tesla’s announced intention to enable a robotaxi capability in the not-too-distant future. In contrast, the robotaxi concept does not encompass the consumer AV vision, which is defined by vehicle ownership.

Shashua is proud of Mobileye as the only commercially available camera-based autonomous vehicle hardware-software stack solution suitable for both robotaxi and consumer deployment. As if adding a cherry on top of the icing on the cake, Mobileye also owns Moovit which is intended to ultimately serve as the app-based front end to the full mobility-as-a-service (MaaS) offering.

Mobileye is in a unique position to bring its vision to life. The company is the dominant supplier of silicon and software for front-facing camera systems. Mobileye shipped 19.5M systems to auto makers in 2019 – nearly a quarter of the entire global market – and anticipates surpassing 100M Mobileye-equipped vehicles on the road by the end of 2021.

Mobileye parent, Intel, has AV-supporting microprocessor solutions and has a stake in mapping provider HERE. Mobileye is also building out a data collection platform in its Road Experience Management system for identifying road signs and hazards in real time.

Mobileye is demonstrating a Tesla-like consumer AV solution with Geely-owned Zeekr in Israel with a production vehicle due this year. For the purposes of the New York event, Mobileye appears singularly focused on the robotaxi proposition – a concept that has yet to prove its commercial viability.

At the heart of the robotaxi solution is a geographically limited area of operation – hence, Mobileye’s city-by-city deployment and testing. At the heart of the Tesla-led consumer AV concept are cost concerns (target: <$5K) and the ability to operate anywhere.

To demonstrate its camera-only solution Mobileye chose New York City, an operating environment better suited to a system additionally equipped with LiDAR and radar. It may be that Mobileye is constrained by the reality that each of its OEM partners – such as Nissan with its ProPilot 2.0 system – has its own vision for camera-only Level 2 (semi-autonomous driver supervised) hands-free highway operation a la GM Super Cruise.

It is notable that the Intel Mobileye event occurred the week before Tesla’s earnings report and the same week that Tesla announced plans for a $199/month self-driving subscription service. Coincidentally, Revel simultaneously announced New York approval for its launch of a Tesla-based taxi service in New York.

Shashua’s portrayal of the company’s progress toward a global robotaxi offering, scaling and deploying on a city-by-city basis, belies the reality that Mobileye offers the only commercially available autonomous vehicle solution ready for production. By demonstrating that system’s camera-only offering in action with the unedited video, Mobileye is serving notice to its broad customer base that it has a market-ready, road-worthy solution proven on some of the meanest streets in the world.

Shashua talks a good game about thousands of robotaxis and scalability, but the ultimate objective is to deliver a production ready mass market solution suitable for immediate implementation. This was especially important for Mobileye to communicate ahead of the end-of-week announcement of Magna’s acquisition of Veoneer – the leading rival to Mobileye.

Skeptics reviewing the Mobileye unedited video saw it as an effort to demonstrate the system’s ability to outperform Tesla’s Autopilot. While the system looked superior to Tesla Autopilot, the video suggested other limitations. Most auto makers are probably more interested in hands-free highway operation than AV operation in an urban setting. In that respect the demonstration was something of a science experiment – as if Mobileye was simply telling the industry: “Look what we can do.” For the time being, Mobileye is the only robotaxi in town – even if it is only in testing mode.


Prototypical II PDF is now available!

by Daniel Nenni on 08-02-2021 at 6:00 am


Our latest book has finally been published! A PDF version of “Prototypical II – The Practice of FPGA Prototyping for SoC Design” is now available in the SemiWiki book section. The first book “Prototypical – The Emergence of FPGA Prototyping for SoC Design” was published in 2016 and a lot has happened since then so it was high time for an update.

In this book, we look at the history of FPGA-based prototyping and the leading providers – S2C, Synopsys, Cadence, and Mentor. Initially, we look at how the need for co-verification evolved with chip complexity, where FPGAs got their start in verification, and why ASIC design benefits from prototyping technology.

My co-author this time is Steve Walters, a good friend from back in the Virage Logic days. Steve was an early employee of Quickturn, a pioneer in prototyping and emulation, which was later acquired by Cadence, so Steve knows where the emulation bodies are buried, absolutely.

Steve and I updated the first section and completely rewrote the second half. Here is the table of contents and a clip from the beginning of the book. For the greater good of the semiconductor industry! Comments are welcome, enjoy!

Part I – Evolution of Design Verification Techniques
The Art of the “Start”
A Few Thousand Transistors
Microprocessors and ASICs
The Birth of Programmable Logic
Pre-Silicon Becomes a Thing
Positioning: The Battle for Your Mind
First Pentium Emulation
Enabling Exploration and Integration

Part II – FPGA Prototyping for Different Design Stages
Design Exploration
IP Development
Hardware Verification
System Validation
Software Development
Compatibility Testing

The Art of the “Start”
The semiconductor industry revolves around the “start.” Chip design starts lead to more EDA tool purchases, more wafer starts, and eventually to more product shipments. Product roadmaps develop to extend shipments by integrating new features, improving performance, reducing power, and reducing area – higher levels of functional integration and what is referred to as “improved PPA.” Successful products lead to additional capital expenditures, stimulating more chip designs and more wafer starts. If all goes well, and there are many things that can go wrong between the MRD and the market, this cycle continues. And in keeping with good capitalist intentions, this frenetic cycle drives increased design complexity and design productivity to feed the global appetite for economic growth.

Chip designs have mutated from relatively simple to vastly complex and expensive, and the silicon technology to fabricate chips has advanced through rapid innovation, from silicon feature sizes measured in tens of microns to feature sizes measured in nanometers. Functions once visualized as ones and zeroes in a table must now comprehend the execution of powerful operating systems, application software, massive amounts of data, and heretofore incomprehensibly minuscule latencies. Continued semiconductor industry growth depends on delivering ever more complex chip designs, co-verified with specialized system software – in less time and with relatively fewer mistakes.

New chip wafer fabs now cost billions of dollars, with production capacities in the tens of thousands of wafers per month. In May of 2020, TSMC announced that it would build a new wafer fab in Arizona; the total project spending for the planned 5-nm wafer fab, including capital expenditures, is expected to be approximately $12B from 2021 to 2029, and the fab is expected to have the capacity to produce 20,000 wafers per month. [1]

One malevolent block of logic within a chip design can cause very expensive wafers to become scrap. If a flaw manages to escape, only showing itself at a critical moment in the hands of a customer, it can set off a public relations storm calling into question a firm’s hard-earned reputation as a chip supplier.

Chip design verification is like quality: it asymptotically approaches perfection but never quite achieves 100%. It may be expressed as a high percentage less than 100%, but close enough to relegate fault escapes to the category of “outlier” – hopefully of minimal consequence. Only through real-world use in the hands of many customers will every combination of stimuli be applied to every chip pin, and every response be known. So, chip designers do their best to use the latest cocktail of verification techniques and tools, and EDA companies continually innovate new verification tools, design flows, and pre-verified silicon IP in a valiant effort to reach the elusive goal of chip design verification perfection.

The stakes are very high today for advanced silicon nodes, where mask sets can cost tens of millions of dollars, and chip project delays that slip new product roll-outs can cost millions of dollars more in marketing costs. With the stakes so high for large, sophisticated chips, no prudent leader would dare neglect investing in semiconductor process quality. Foundries such as GlobalFoundries, Intel, Powerchip, Samsung, SMIC, TSMC, UMC, and others have designed their entire businesses around producing high-quality silicon in volume at competitive costs for their customers.

So, chip design teams struggle to contain verification costs and adhere to schedules. The 2020 Wilson Report found that only about 32 percent of today’s chip design projects can achieve first silicon success, and 68 percent of IC/ASIC projects were behind schedule. [2] A prevailing attitude is that the composite best efforts of skilled designers using advanced EDA design tools should result in a good outcome. Reusing known-good blocks, from a previous design or from a reliable IP source, is a long-standing engineering best practice for reducing risk and speeding up the design cycle. Any team that has experienced a chip design “stop” or “delay” knows the agony of uncertainty and fear that accompanies these experiences. Many stories exist of an insidious error slipping through design verification undetected and putting a chip design, a job, and sometimes an entire company, at risk. The price of hardware and software verification escapes can dwarf all other product investments, and ultimately diminish a hard-earned industry leadership reputation.

Enter FPGA-based prototyping for chip design verification. A robust verification plan employs proven tests for IP blocks and tests the fully integrated design running actual software (co-verification) – which is beyond the reach of software simulation tools alone. Hardware emulation tools are highly capable and faster than software simulation, but they are expensive and often out of reach for many design teams. FPGA-based prototyping tools are scalable, cost-effective for almost any design, offer capable debug visibility, and are well suited to hardware/software co-verification.

Also Read:

StarFive Surpasses Development Goal with the Prodigy Rapid Prototyping System from S2C

CEO Interview: Toshio Nakama of S2C EDA

S2C Raises the Bar for High Capacity, High-Performance FPGA Prototyping


KLA – Chip process control outgrowing fabrication tools as capacity needs grow

by Robert Maire on 08-01-2021 at 10:00 am


-KLA dominates process control like ASML dominates litho
-Industry in “panic mode” over capacity drives process control
-Like others, KLA tools are in high demand & tight supply
-Balance of 2021 “filled out” – now booking for 2022

Solid numbers and very solid guide for a better second half
KLAC reported revenues of $1.93B with non-GAAP EPS of $4.43 on gross margins of 62%. This very handily beat analyst revenue expectations and the EPS estimate of $3.99.

More importantly KLA is looking for a stronger second half of the year with September revenue guidance of $1.92B-$2.12B ($2.02B midpoint) and EPS of $4.01-$4.89 ($4.45 midpoint) versus current analyst expectations of $1.92B and EPS of $4.13.
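The guidance midpoints quoted above are simply the midpoints of the stated ranges; a quick sanity check of the arithmetic (the figures below are the guidance numbers from the quarter, used for illustration):

```python
# Verify that the quoted guidance midpoints are the midpoints of the ranges,
# i.e. midpoint = (low + high) / 2.
def midpoint(low: float, high: float) -> float:
    return (low + high) / 2

# September-quarter revenue guidance of $1.92B-$2.12B and EPS of $4.01-$4.89.
rev_mid = midpoint(1.92, 2.12)
eps_mid = midpoint(4.01, 4.89)

print(f"Revenue midpoint: ${rev_mid:.2f}B")  # $2.02B
print(f"EPS midpoint: ${eps_mid:.2f}")       # $4.45
```

Both midpoints match the figures quoted in the guidance.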

This is obviously better than Lam’s guide last night, which left some investors wanting. It would appear that KLA has a bit more confidence and a bit more growth in the second half.

Logic/Foundry remains in stronger demand than memory
Historically, KLA has seen more business out of foundry/logic customers than from memory, and right now foundry/logic demand is off the charts while memory is in reasonable balance; shortages are almost exclusively in foundry/logic.

This obviously plays to KLA’s traditional strength and the higher need for process control in foundry/logic.

KLA stated on the call that they expect to outgrow the industry average in 2021 and they are likely correct. If this were a memory driven cycle it might have been different but the demand is in KLA’s wheelhouse.

Add to this the myriad of technology and process changes, and the need for new Chinese customers to learn how to build chips, and you get strong demand for process control beyond just the shortage-induced demand. China represented about a third of demand and hasn’t seen any softening.

EUV still paying dividends for KLA
The aftereffects of the adoption of EUV continue to ring the registers for KLA. The industry still needs a lot of help, and KLA gets a lot of business associated with this, especially on the wafer inspection side with “print & check” type applications.

The only significant hole in KLA’s otherwise strong armor remains its lack of actinic inspection. KLA was embarrassed by a previously unknown but now very well-known and rich little competitor called Lasertec, which remains the only supplier of actinic reticle inspection in the industry. KLA still seems far from having a competing tool and didn’t have a good, crisp answer about it on the call.

As Intel ramps its EUV in earnest, as it recently laid out in its Intel Accelerated presentation, KLA will likely lose more reticle inspection market share to Lasertec, as Intel has been Lasertec’s major sponsor. Lasertec came close to equaling KLA last year in KLA’s own wheelhouse of reticle inspection.

“Filled out” for the balance of the year
KLA, like ASML, has always had the luxury of not being a “turns” business like the dep and etch guys. Litho and process control tools tend to have long lead times, and those lead times have obviously gotten even longer, as the company said on the call that 2021 was “filled out” and most orders being taken were for 2022 delivery.

This in essence means that KLA has several quarters of backlog on the books and can virtually “dial in” whatever revenue and profit numbers it wants. While there may be some headwinds on parts and sub-systems, they are not nearly as bad as lenses for EUV litho tools, but they still require work to prevent shortages.

The fact that customers have to wait until 2022 for tools ordered today probably adds to both the panic and the sense of urgency in the market, which only adds to the demand; in essence, a bit of a self-fulfilling prophecy.

Financials remain great
KLA remains the ATM of the industry with great gross margins and terrific financial performance. More dividends and stock buybacks continue the shareholder cash returns that are so attractive.

This remains very well managed and tops in the industry.

The Stock
Like many other stocks in the semiconductor space, KLA is not cheap. If we had to pick a couple of names to own in the space, KLA and ASML would likely be there, as both are very dominant, monopolies or near-monopolies.

KLA had a very strong beat and raise which is a good near term indicator for the stock. The balance of the year is already in the bag so the downside risk is quite low and near zero for what is historically a cyclical industry.

In short there was nothing to complain about and a lot to enjoy from the June quarter report which should make shareholders happy.

We would continue to be an owner and might try to add if it didn’t run up too much.

Also Read:

LAM – Surfing the Spending Tsunami in Semiconductors – Trailing Edge Terrific

ASML- A Semiconductor Market Leader-Strong Demand Across all Products/Markets

GloFo inside Intel? Foundry Foothold and Fixerupper- Good Synergies


LAM – Surfing the Spending Tsunami in Semiconductors – Trailing Edge Terrific

by Robert Maire on 08-01-2021 at 6:00 am


-$80B+ in WFE with strong back half in 2021
-Trailing edge strength adds to overall great demand
-Supply side headwinds require effort - Malaysia now open
-It just comes down to execution which Lam has done well

Nice beat and sandbagged guide
As Lam has done consistently for many quarters now, they beat numbers with revenues of $4.15B and EPS of $8.09 versus street estimates of $4.01B and $7.55 EPS. Guidance was for revenues of $4.3B +/- $250M and EPS of $8.10 +/- $0.50 versus the street currently at $4.07B and $7.77 in EPS.

While we think this is a fine, normal, “sandbagged” guidance range, some investors in the after-market took it as a sign of weakness since it was flattish off of the just-reported numbers. We think it’s nothing more than Lam’s standard guide setup that it can easily beat.

On track for $80B+ in WFE spend for the year
It should come as no surprise to anyone following the semiconductor industry that we will see over $80B in spending on tools this year.

For Lam, it was also no surprise that China leads the way at 37%, followed by Korea at 30%, then Taiwan at 13%, Japan at 9% and the US a distant fifth in business at a paltry 5%.

So far, it appears that any restrictions on sales to China are having exactly zero impact on Lam’s large business there.

It’s also very clear that even with the expected $52B in “Chips for America” spending by the government, the US will likely remain very far behind in overall spend.

Trailing Technology Terrific
We have been suggesting that business for older tools and 8-inch wafers is very strong, as much of the shortage in chip capacity is in those trailing edge fabs.

It’s not 12-inch, 7nm fabs that make chips for cars; it’s 20-year-old, 8-inch, 90nm and 65nm fabs that do the bulk of the work.

This trailing edge business is just icing on the cake of already booming leading edge technology and represents almost an annuity business as customers just buy more of what they already have in the fab with little competition and no room to negotiate.

Memory remains in pretty good balance
Memory, particularly NAND, which is key for Lam at 49% of revenues, remains in a healthy supply/demand balance, with NAND supply slightly behind demand and DRAM supply and demand in rough parity.

This all makes for a very good pricing environment, which in turn supports continued capex in memory.

It certainly doesn’t appear that memory will get out of whack any time soon, and likely not before the end of the year, if then.

New process steps and technology helps grow capital intensity
In addition to the strong market demand trends and chip shortages we see growth on top of that due to new process steps either won, invented, or added to Lam’s offerings. The Vector “anti-stress” tool or the new dry resist tools are good examples of opportunities that did not exist a short time ago.
Lam has also done a good job of share gains in some key spaces as well.

Supply Side Stretched
As we discussed with ASML, where the supply problems are even worse, Lam has its share of challenges getting enough parts to keep building and servicing tools.

With the thousands of parts that go into a typical tool even a simple, cheap, part can keep a tool from shipping.

This is a headwind but so far it has not impacted Lam in any negative way.

The Stock
We certainly like Lam’s positioning, but the price of the stock has perfection and then some baked in. Multiples are high and so are expectations, which leads to downdrafts such as we saw in the after-market from investors less than happy about less-than-stellar guidance.

It’s tough to chase the valuation of the stock, but it’s also tough not to own it. While it’s too early to take profit, it’s also hard to put new money to work at an already heated valuation.

We would likely stand pat with a current position and trade around the potential ups and downs of news flow in the space. We would be very sensitive and aware of any news about the shortage situation slowing or the memory balance changing.

Also Read:

ASML- A Semiconductor Market Leader-Strong Demand Across all Products/Markets

GloFo inside Intel? Foundry Foothold and Fixerupper- Good Synergies

Chips for America Act – Funding Failures & Foreigners or Saving Semiconductors?