Model-Based Design Courses for Students
by Bernard Murphy on 01-31-2023 at 6:00 am

Amid the tumult of SoC design advances and accompanying verification and implementation demands, it can be easy to forget that all this activity is preceded by architecture design. At the architecture stage the usual SoC verification infrastructure is far too cumbersome for quick-turnaround modeling, and such platforms also tend to be weak on system-wide insight. Think about modeling an automotive Ethernet network to study tradeoffs between zonal and other system architectures. Synopsys Platform Architect is one possible solution, though it is still centered mostly on SoC designers rather than system designers. MATLAB/Simulink offers a system-wide view, but you have to build your own model libraries.

Mirabilis VisualSim Architect offers a model-based design (MBD) system with ready-to-use libraries for popular standards and components in electronic design. They have now added a cloud-based subset of this system plus collateral to universities as a live, actionable training course. Called “Semiconductor and Embedded Systems Architecture Labs” (SEAL), the course provides hands-on training in system design to supplement MBD/MBSE courses.

Mirabilis VisualSim and MBD

Deepak Shankar (founder of Mirabilis) makes the point that for a university or training center to develop its own training platform, it must procure and maintain prototypes and tool platforms and build training material and lab tutorials. This is extremely time-consuming and expensive, and the result quickly drifts out of date.

VisualSim is a self-contained system plus model library requiring no integration with external hardware, tools or libraries. Even more important, the full product is in active use today for production architecture design across an A-list group of semiconductor, systems, mil-aero, space and automotive companies that expect accuracy and currency in the model library. As one recent example, the library contains a model for UCIe, the new standard for coherent communication between chiplets.

Hardware models support a variety of abstractions, from SysML down to cycle-accurate, and analog (with linear/differential-equation solvers) as well as digital functionality. Similarly, software can evolve from a task-graph model to more fully elaborated code.
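
To make the task-graph idea concrete, here is a minimal sketch of how an early software model can estimate end-to-end latency. The task names, execution times and communication delays are invented for illustration, and this is plain Python rather than VisualSim syntax.

```python
from collections import defaultdict
from functools import lru_cache

# Toy task-graph latency model in the spirit of an early-stage MBD software
# model. Task names, execution times and communication delays are invented.
exec_us = {"sense": 50, "filter": 120, "fuse": 200, "plan": 300, "actuate": 40}
edges = [("sense", "filter", 10), ("filter", "fuse", 15),
         ("sense", "fuse", 10), ("fuse", "plan", 20), ("plan", "actuate", 5)]

preds = defaultdict(list)
for src, dst, comm_us in edges:
    preds[dst].append((src, comm_us))

@lru_cache(maxsize=None)
def finish_us(task):
    """Critical-path finish time of a task: worst predecessor path plus its own execution."""
    start = max((finish_us(p) + c for p, c in preds[task]), default=0)
    return start + exec_us[task]

print("estimated end-to-end latency:", finish_us("actuate"), "us")  # 760 us here
```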

The SEAL Program

The lab is offered on the VisualSim Cloud Graphical Simulation Platform, together with training collateral in the form of questions and answer keys. The initial release covers 67 standards and 85 applications. Major applications supported by SEAL include AI, SoC, ADAS, Radars, SDR, IoT, Data Center, Communication, Power, HPC, multi-core, cache coherency, memory, Signal/Image/Audio Processing and Cyber Physical Systems. Major standards supported are UCIe, PCIe6.0, Gigabit Ethernet, AMBA AXI, TSN, CAN-XL, AFDX, ARINC653, DDR5 and processors from ARM, RISC-V, Power and x86.

Examples of labs and questions posed include:

  • What is the throughput degradation of a multi-die UCIe-based SoC versus an AXI-based SoC?
  • How do autonomous driving timing deadlines change between multiple ECUs and a single HPC ECU?
  • How much power is consumed in different orbits by a multi-role satellite?
  • Which wired communication technology is more suitable for a flight avionics system – PCIe or Ethernet?
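
As a flavor of how the first question above might be framed at the very earliest stage, here is a back-of-envelope sketch. Every number in it is an assumption chosen purely for illustration, not a UCIe or AXI specification; the actual labs use calibrated VisualSim models rather than anything this crude.

```python
# Compare effective throughput of an on-die interconnect path with a path
# that crosses a die-to-die link. All parameters are illustrative assumptions.

def effective_bw(raw_gb_per_s, protocol_efficiency, utilization):
    """Sustained payload bandwidth for one path, in GB/s."""
    return raw_gb_per_s * protocol_efficiency * utilization

on_die    = effective_bw(raw_gb_per_s=256, protocol_efficiency=0.90, utilization=0.70)
cross_die = effective_bw(raw_gb_per_s=256, protocol_efficiency=0.80, utilization=0.65)

degradation_pct = 100 * (1 - cross_die / on_die)
print(f"on-die ~{on_die:.0f} GB/s, cross-die ~{cross_die:.0f} GB/s, "
      f"degradation ~{degradation_pct:.0f}%")
```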

Coursework can be graded by university teaching or training staff. Alternatively, Mirabilis is willing to provide certification at two levels. A basic level offers a Certificate of Completion for a student who works through a module and completes the Assessment Questions. More comprehensive options include a Professional Certificate for a student who successfully completes 6 modules, or a Mini Masters in Semiconductor and Embedded Systems for a student who completes 20 modules.

What’s Next?

An MBD system of this type obviously needs some sophisticated underlying technology to manage the multiple types of simulation needed and the stitching required between different modeling styles and abstractions, but the practical strength of the system clearly rests on the strength of the library. Deepak tells me their commercial business splits evenly between semiconductor and systems clients, all doing architecture simulation. Working with both types of client keeps their model library tuned to the latest needs.

Semiconductor clients are constantly optimizing or up-revving SoC architectures. Systems clients are doing the same for more distributed system architectures – an automotive network, an O-RAN system, an avionics system, a multi-role satellite system. Which makes me wonder. We all know that system companies are now more heavily involved in SoC design, in support of their distributed systems. Some form of MBD must be the first step in that flow. A platform with models well-tuned (though not limited) to the SoC world might be interesting to such architects, I would think.

You can learn more about the SEAL program HERE.

Also Read:

CEO Interview: Deepak Shankar of Mirabilis Design

Architecture Exploration with Mirabilis Design

Rethinking the System Design Process


Counter-Measures for Voltage Side-Channel Attacks
by Daniel Payne on 01-30-2023 at 2:00 pm

Nearly every week I read in the popular press another story of a major company being hacked: Twitter, Slack, LastPass, GitHub, Uber, Medibank, Microsoft, American Airlines. Less reported, yet still important, are hardware-oriented hacking attempts at the board level that target a specific chip using voltage Side-Channel Attacks (SCA). To delve deeper into this topic I read a white paper from Agile Analog, which provides IP to detect when a voltage side-channel attack is happening so that the SoC logic can take appropriate security counter-measures.

Approach

Agile Analog has created a rather crafty IP block that plays the role of a security sensor by measuring critical parameters like voltage, clock and temperature. Here’s the block diagram of the agileGLITCH monitor, comprising several components:

[Figure: agileGLITCH monitor block diagram]

The Bandgap component provides a stable voltage reference and operates across a wide voltage span to support glitch monitoring. Accuracy can optionally be increased through production trimming.

Each reference selector provides a configurable input voltage to the programmable comparators, allowing you to adjust the glitch detection thresholds. You would adjust the thresholds, for example, if your core uses Dynamic Voltage and Frequency Scaling (DVFS).

There are two programmable comparators, one for positive voltage glitches, and the other for negative glitch detection. You get to configure the thresholds for glitch detection, and the level-shifters enable the IOs to use the core supply.

The logic following each comparator provides enable control based on the digital inputs, latches momentary events on the comparator outputs, disables the outputs during test, and applies 3-way majority voting to the latched outputs.
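
To make that flow concrete, here is a behavioral sketch of the detection scheme described above: thresholds around a nominal rail, latched comparator outputs and 3-way majority voting. It is purely illustrative; the structure, thresholds and sample values are my assumptions, not Agile Analog’s implementation.

```python
# Behavioral sketch of a supply-glitch detector: programmable high/low
# thresholds, latched comparator outputs and 3-way majority voting.
# Illustrative only; thresholds and the example trace are invented.

def majority(a, b, c):
    """3-way majority vote on latched flags."""
    return (a + b + c) >= 2

def detect_glitches(samples_v, v_nominal=0.8, pos_margin=0.08, neg_margin=0.08):
    """Return latched positive/negative glitch flags for a supply waveform."""
    pos_latch = [False] * 3   # redundant latches; they would differ only under a fault
    neg_latch = [False] * 3
    for v in samples_v:
        for i in range(3):
            if v > v_nominal + pos_margin:
                pos_latch[i] = True       # momentary overshoot is latched
            if v < v_nominal - neg_margin:
                neg_latch[i] = True       # momentary undershoot is latched
    return majority(*pos_latch), majority(*neg_latch)

# A nominal 0.8 V rail with one short negative glitch injected.
trace = [0.80, 0.81, 0.79, 0.62, 0.80, 0.80]
pos, neg = detect_glitches(trace)
print("positive glitch:", pos, "| negative glitch:", neg)
```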

Not shown in the block diagram is an optional ADC component to measure the supply level, useful for tracking lifetime issues or measuring performance degradation.

Use Cases

Consider an IoT security device like a wireless door lock for a home, where a malicious person gains access to the lock and uses voltage SCA to enter the device’s debug mode and read all of the authorized keys for the lock. With agileGLITCH embedded, the IoT device detects and records the voltage glitch, then alerts the cloud system of an attack, noting the date and time.

A security camera has been compromised using voltage SCA to get around the boot-signing sequence, allowing attackers to reflash it with hacked firmware. This kind of exploit lets the hacker view the video and audio stream, violating privacy and setting up a blackmail scenario. Using the agileGLITCH counter-measure, the camera system detects voltage glitch events, prevents any unknown code from being flashed, and could even report to the consumer that the device was compromised before they purchased it.

An automotive supply regulator tests OK at the factory; however, over time, under high-load conditions, the voltage degrades and eventually fails. The agileGLITCH sensor is a key component of a system that could measure voltage degradation over time (using an ADC and digital data monitor) and report back to the automotive vendor so that they can issue a recall to repair or replace the supply regulator. The trend is to provide such automotive fixes remotely, over the air.

A hacker wants to remove Digital Rights Management (DRM) from a satellite system, installing a voltage glitcher on the HDMI controller supply to force the HDMI output into a non-HDCP-validated state. Counter-measures in agileGLITCH detect the voltage glitching, safeguarding the HDMI controller from tampering.

Summary

Hacking is happening every day, all around the world, and the exploits continue to grow in complexity and penetration. Voltage SCA is a hacking technique used when bad actors have physical access to the electronics; they use supply glitching to put the system into a vulnerable state, an approach that only works if there are no built-in counter-measures. With a capability like agileGLITCH embedded inside an electronic device, these voltage SCA attempts can be identified and thwarted before any unwanted changes are made. An ounce of prevention is worth a pound of cure, and that applies to SCA mitigation.

To download and read the entire white paper, visit the Agile Analog site and complete a short registration process.



Achronix on Platform Selection for AI at the Edge
by Bernard Murphy on 01-30-2023 at 10:00 am

Colin Alexander (Director of product marketing at Achronix) released a webinar recently on this topic. At only 20 minutes the webinar is an easy watch and a useful update on data traffic and implementation options. Downloads are still dominated by video (over 50% for Facebook), which now depends heavily on caching at or close to the edge. Which of these applies depends on your definition of “edge”: the IoT world sees itself as the edge, while the cloud and infrastructure world apparently sees the last compute node in the infrastructure, before those leaf devices, as the edge. Potato, potahto. In any event, the infrastructure view of the edge is where you will find video caching, serving the most popular downloads as efficiently and quickly as possible.

Compute options at the edge (and in the cloud)

Colin talks initially about the infrastructure edge, where some horsepower is required for compute and AI. He presents the standard options: CPU, GPU, ASIC or FPGA. A CPU-based solution has the greatest flexibility because your solution will be entirely software-based. For the same reason, it will also generally be the slowest, most power-hungry and longest-latency option (for the round trip to leaf nodes, I assume). GPUs are somewhat better on performance and power, with a bit less flexibility than CPUs. An ASIC (custom hardware) will be fastest, lowest power and lowest latency, though in concept least flexible (all the smarts are in hardware which can’t be changed).

He presents the FPGA (or embedded FPGA/eFPGA) as a good compromise between these extremes: better on performance, power and latency than a CPU or GPU, somewhere between a CPU and a GPU on flexibility, and much more flexible than an ASIC because an FPGA can be reprogrammed. Which all makes sense to me as far as it goes, though I think the story should have been completed by adding DSPs to the platform lineup. These can have AI-specific hardware advantages (vectorization, MAC arrays, etc.) which benefit performance, power and latency while retaining software flexibility. The other important consideration is cost. This is always a sensitive topic of course, but AI-capable CPUs, GPUs and FPGA devices can be pricey, a concern for the bill of materials of an edge node.

Colin’s argument makes most sense to me at the edge for eFPGA embedded in a larger SoC. In a cloud application, constraints are different. A smart network interface card is probably not as price sensitive and there may be a performance advantage in an FPGA-based solution versus a software-based solution.

Supporting AI applications at the compute edge through an eFPGA looks like an option worth investigating further. Further out towards leaf nodes is fuzzier for me. A logistics tracker or a soil moisture sensor for sure won’t host significant compute, but what about a voice-activated TV remote? Or a smart microwave? Both need AI but neither needs a lot of horsepower. The microwave has wired power, but a TV remote or remote smart speaker runs on batteries. It would be interesting to know the eFPGA tradeoffs here.

eFPGA capabilities for AI

Per the datasheet, Speedster7t offers fully fracturable integer MACs, flexible floating point, native support for bfloat and efficient matrix multiplications. I couldn’t find any data on TOPS or TOPS/Watt. I’m sure that depends on implementation, but examples would be useful. Even at the edge, some applications are very performance sensitive – smart surveillance and forward-facing object detection in cars for example. It would be interesting to know where eFPGA might fit in such applications.
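
For readers who want to form their own rough numbers, here is the kind of back-of-envelope estimate involved. The MAC count and clock below are placeholders rather than Speedster7t datasheet values; the point is only how such an estimate is built (two operations per MAC per cycle).

```python
# Back-of-envelope peak-TOPS estimate for a MAC-array fabric. The MAC count
# and clock are illustrative assumptions, not datasheet values.

def peak_tops(num_macs, clock_ghz, ops_per_mac=2):
    return num_macs * clock_ghz * ops_per_mac / 1e3   # 1e3 converts GOPS to TOPS

print(f"~{peak_tops(num_macs=40_000, clock_ghz=0.75):.0f} TOPS peak (illustrative)")
```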

Thought-provoking webinar. You can watch it HERE.

Also Read:

WEBINAR: FPGAs for Real-Time Machine Learning Inference

WEBINAR The Rise of the SmartNIC

A clear VectorPath when AI inference models are uncertain


Taming Physical Closure Below 16nm
by Bernard Murphy on 01-30-2023 at 6:00 am

Atiq Raza, well known in the semiconductor industry, has observed that “there will be no simple chips below 16nm”. By which he meant that only complex and therefore high-value SoCs justify the costs of deep-submicron design. Reaching closure on PPA goals is getting harder for such designs, especially now at 7nm and 5nm. Place-and-route technologies and teams are not the problem – they are as capable as ever. The problem lies in increasingly strong coupling between architectural and logic design and physical implementation. Design/physical coupling at the block level is well understood and has been addressed through physical synthesis. However, below 16nm it is quite possible to design valid SoC architectures that are increasingly difficult to place and route, causing project delays or even SoC project cancellations due to missed market windows.

Why did this get so hard?

Physical implementation is ultimately an optimization problem: finding a placement of interconnect components and connections between blocks in the floorplan that delivers an optimum in performance and area, while also conforming to a set of constraints and meeting target specs within a reasonable schedule. The first goal is always achievable if you are prepared to compromise on what you mean by “optimum”. The second depends heavily on where optimization starts and how much time each new iteration consumes in finding an improved outcome. Start too far away from a point that will deliver the required specs, or take too long to iterate toward that point, and the product will have problems.

This was always the case, but SoC integrations in advanced processes are getting much bigger. Hundreds of blocks and tens of thousands of connections expand the size of the optimization space. More clock and power domains add more dimensions, and constraints. Safety requirements add logic and more constraints, directly affecting implementation. Coherent networks add yet more constraints since large latencies drag down guaranteed performance across coherent domains. In this expanding, many-dimensional and complex constrained optimization space with unpredictable contours, it’s not surprising that closure is becoming harder to find.
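
A toy cost model illustrates why this matters. Even the crude sketch below couples wirelength, timing and congestion, and every added domain or safety requirement would add another penalty term and more dimensions to search. The weights and limits are arbitrary, chosen only to show the shape of the problem, not how any real placer scores a design.

```python
# Toy placement cost: wirelength plus penalties for timing-critical long wires
# and crowded channels. Weights and limits are arbitrary illustrations.

def placement_cost(positions, nets, max_wire_mm=3.0, w_timing=10.0, w_congest=2.0):
    """positions: block -> (x_mm, y_mm); nets: list of (blockA, blockB) pairs."""
    total_wire = timing_violations = 0.0
    channel_load = {}
    for a, b in nets:
        (xa, ya), (xb, yb) = positions[a], positions[b]
        wire = abs(xa - xb) + abs(ya - yb)          # Manhattan length, mm
        total_wire += wire
        if wire > max_wire_mm:                      # too long to close timing
            timing_violations += wire - max_wire_mm
        channel = (round(xa), round(ya))            # crude congestion bucket
        channel_load[channel] = channel_load.get(channel, 0) + 1
    congestion = sum(max(0, load - 4) for load in channel_load.values())
    return total_wire + w_timing * timing_violations + w_congest * congestion

blocks = {"cpu": (0, 0), "noc": (2, 1), "ddr": (5, 1)}
print(placement_cost(blocks, nets=[("cpu", "noc"), ("noc", "ddr")]))
```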

A much lower risk approach would start place and route at a point reasonably close to a good solution, without depending on long iteration cycles between design and implementation.

Physically aware NoC design

The integration interconnect in an SoC is at the heart of this problem. Long wires create long delays which defeat timing closure. Many wires running through common channels create congestion, which forces chip area to expand to relieve it. Crossbar interconnects, with their intrinsically congested connectivity, were replaced long ago by network-on-chip (NoC) interconnects for just this reason. NoC interconnects use network topologies which can more easily manage congestion, threading network placement and routing through channels and white space in a floorplan.

But still the topology of the NoC (or multiple NoCs in a large design) must meet timing goals; the NoC design must be physically aware. All those added constraints and dimensions mentioned earlier further amplify this challenge.

NoC design starts as a logical objective: connect all IP communication ports as defined by the product functional specification while assuring a target quality of service and meeting power, safety and security goals. Now it is apparent that we must add a component of physical awareness to these logical objectives: estimation of timing between IP endpoints and of congestion, based on a floorplan, in the early stages of RTL development, refined in later stages as the floorplan becomes more accurate.

With such a capability, a NoC designer could run multiple trials very quickly, re-partitioning the design as needed, to deliver a good starting point for the place-and-route team. That team would then work their magic to fully optimize the implementation, confident that the optimum they are searching for is reasonably close to that starting point and that they will not need to send the design back for restructuring and re-synthesis.
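
To illustrate the kind of early estimate involved, here is a minimal sketch that flags NoC links likely to need pipeline stages given rough floorplan coordinates. The coordinates, wire reach per nanosecond and clock are assumptions chosen purely for illustration, not Arteris internals.

```python
# Flag NoC links that probably need pipeline registers, from rough floorplan
# coordinates. All numbers are illustrative assumptions.

FLOORPLAN_MM = {            # rough centroid of each NoC endpoint, in mm
    "cpu_cluster": (1.0, 1.0),
    "ddr_ctrl":    (9.0, 1.5),
    "gpu":         (2.0, 8.0),
}
LINKS = [("cpu_cluster", "ddr_ctrl"), ("cpu_cluster", "gpu")]

MM_PER_NS = 1.5             # assumed routed-wire reach per ns at this node
CLOCK_GHZ = 1.5

for a, b in LINKS:
    (xa, ya), (xb, yb) = FLOORPLAN_MM[a], FLOORPLAN_MM[b]
    dist_mm = abs(xa - xb) + abs(ya - yb)           # Manhattan estimate
    cycles = (dist_mm / MM_PER_NS) * CLOCK_GHZ      # wire delay in clock cycles
    stages = max(0, int(cycles))                    # registers needed to close timing
    print(f"{a} -> {b}: ~{dist_mm:.1f} mm, ~{cycles:.1f} cycles, "
          f"suggest {stages} pipeline stage(s)")
```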

Additional opportunities

Physically aware NoC design could offer additional advantages. By incorporating floorplan information in the design stage, a NoC designer can build a better NoC. Understanding latencies, placements and channel usage while still building the NoC RTL, they may realize opportunities to use a different topology. Perhaps they can use narrower or longer connections on latency-insensitive paths, avoiding congestion without expanding area.

Ultimately, physical awareness might suggest changes to the floorplan which may deliver an even better implementation than originally considered.

Takeaway

Charlie Janac, CEO at Arteris, stressed this point in a recent SemiWiki podcast:

Physical awareness is helpful for back-end physical layout teams to understand the intent of the front-end architecture and RTL development teams.  Having a starting point that has been validated for latency and timing violations can significantly accelerate physical design and improve SoC project outcomes.  This is particularly important in scenarios where the architecture is being done by one company and the layout is being done by another. Such cases often arise between system houses such as automotive OEMs and their semiconductor design partners. Physical awareness is beneficial all around. It’s a win-win for all involved.

Commercial interconnect providers need to step up to make their NoC IP physically aware out of the box. This is becoming a minimum requirement for NoC design in advanced technologies. You might want to give Arteris a call, to understand how they are thinking about this need.

Also Read:

Arteris IP Acquires Semifore!

Arm and Arteris Partner on Automotive

Coherency in Heterogeneous Designs


Podcast EP141: The Role of Synopsys High-Speed SerDes for Future Ethernet Applications
by Daniel Nenni on 01-27-2023 at 10:00 am

Dan is joined by Priyank Shukla, Staff Product Manager for the Synopsys High-Speed SerDes IP portfolio. He has broad experience in analog and mixed-signal design with a strong focus on high-performance compute, mobile and automotive SoCs, and he holds a US patent on low-power RTC design.

Dan explores the use of high-speed SerDes with Priyank. Applications that enable high-speed Ethernet for data center and 5G systems are discussed. The performance, latency and power requirements for these systems are quite demanding. How Synopsys advanced SerDes IP is used to address these challenges is also discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CTO Interview: John R. Cary of Tech-X Corporation
by Daniel Nenni on 01-27-2023 at 6:00 am

John R. Cary is professor of physics at the University of Colorado at Boulder and CTO of Tech-X Corporation. He received his PhD from the University of California, Berkeley, in Plasma Physics.  Prof. Cary worked at Los Alamos National Laboratory and the Institute for Fusion Studies at the University of Texas, Austin, prior to joining the faculty at the University of Colorado. At the University of Colorado, Professor Cary has served as department chair, center director, and faculty mentor.

In 1994, he co-founded Tech-X Corporation, which concentrates on computational applications for a wide variety of science and engineering applications.  Prof. Cary has researched multiple areas related to beam and plasma physics and the electromagnetics of structures.  He is a fellow of the American Physics Society, past chair of its Division of Plasma Physics, the 2015 recipient of the John Dawson Prize for Numerical Simulation of Plasmas, the 2016 recipient of the NPSS Charles K. Birdsall Award for Contributions to Computational Nuclear and Plasma Sciences, and the recipient of the 2019 IEEE Nuclear and Plasma Sciences Section Particle Accelerator Science and Technology Award.

What is the Tech-X backstory? 
The folks here at Tech-X have been working in high-performance computing, specifically as it relates to physical simulation, since the early ’90s. Distributed-memory parallelism, where a calculation is split effectively over many separate computers, was in its infancy. Tech-X was bringing the power of parallelism to plasma computations. Specifically, we excelled at computations of plasma acceleration of electrons to high energies by the wake fields generated in plasmas by incident laser pulses. This work supported experiments at multiple national laboratories, fulfilling their needs for very large simulations. Following these successes, Tech-X branched out over many areas of plasma physics, including magnetic fusion. We further broadened our capabilities to include electromagnetics of structures, such as cavities, antennas, and photonics.

In the process, Tech-X built an experienced cadre of high-performance computing experts. These experts constructed a software stack for efficient computational scaling, which means that the computation does not bog down when performed on a large number of processors. This software, VSim, is licensed for use on our customers’ own hardware. In addition, Tech-X engages in consulting projects and partnerships staffed by its 30 employees and multiple long-term consultants.

More recently Tech-X has devoted increasing effort to democratizing High-Performance Computing (HPC), by building out an easy-to-use Graphical User Interface. Known as Composer, it allows users to build and run simulations as well as analyze and visualize the results.  Composer abstracts the process of job submission on HPC clusters so that to the user it is just like working on a desktop.  Tech-X is also developing a cloud strategy, so expect more announcements later this year.

What areas are you targeting for future growth?
Our mission is to provide specific capabilities in two areas. We currently provide VSimPlasma software and consulting services for the modeling of plasmas in semiconductor chambers. We are also in the early phases of productizing software for modeling of nano-photonics for photonic integrated circuits (PICs). Both of these applications present unique challenges because the feature sizes of interest are small compared to the overall system size, which makes them computationally intensive. This arises because the range of feature scales is large, requiring fine resolution over a large region. For example, in semiconductor chambers there are small features at the wafer surface, but even if the wafer is uniform the plasma forms sheaths, which represent drops in the electric potential at the edge of the wafer. These sheaths are much smaller than the size of the chamber.

In nano-photonics, the PIC components being designed are typically measured in microns – but manufacturing causes roughness in the sidewalls that is much smaller, on the order of nanometers. In either of these applications the grid must be very fine to resolve these small features and provide accurate results, and it must also span a large region, leading to the requirement for many billions, or even trillions, of cells. This is where Tech-X software excels.
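
The arithmetic behind “billions or even trillions of cells” is easy to sketch: a uniform grid must resolve the smallest feature everywhere. The sizes below are rough illustrative figures, not numbers from any specific Tech-X problem.

```python
# Cell count for a uniform 3D grid that resolves a small feature everywhere.
# All sizes are illustrative assumptions.

domain_um   = 100.0      # edge of the 3D region of interest, in microns
feature_nm  = 5.0        # smallest feature (e.g. sidewall roughness) to resolve
cells_per_feature = 4    # grid points needed across that feature

cell_nm = feature_nm / cells_per_feature
cells_per_edge = domain_um * 1e3 / cell_nm
total_cells = cells_per_edge ** 3
print(f"{total_cells:.2e} cells on a uniform grid")   # ~5e14 with these numbers
```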

 What makes VSimPlasma software unique?
Plasma chambers involve many different spatial scales, from the scale of the chamber itself down to the details of the plasma close to the wafer. The larger scales have traditionally been modeled with fluid codes. However, to compute the details of the plasma sheath (and consequently the distribution of particles hitting the wafer, which determines, for example, whether one can etch narrow channels sufficiently deep), one must use a particle-in-cell (PIC) method, as provided by VSimPlasma from Tech-X. For such problems VSimPlasma is the leader due to its extensive physics, including its capability to handle large sets of collisions, its many electromagnetic and electrostatic field solvers, and its multiple algorithms for particle-field interactions. VSim also has the ability to model particle-surface interactions, including the generation of secondary particles and the reactions of particles on the surface. These are crucial for accurately modeling plasma discharges. In semiconductor etching, deep vias require the ions to hit the wafer at a near-vertical angle. VSim models that critical distribution extremely well, and we continue to refine our code with each release with feedback from our customers in the semiconductor industry.
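
The core PIC loop is simple to sketch: deposit particle charge onto a grid, solve for the field, interpolate it back to the particles, and push them. The following is a grossly simplified 1D electrostatic illustration with invented parameters, not VSimPlasma code; a production code adds collisions, surface chemistry, electromagnetic solvers and 3D grids.

```python
import numpy as np

# Minimal 1D electrostatic particle-in-cell step. All parameters are
# illustrative, not chamber values.

ncells, length = 128, 0.1                 # grid cells, domain size [m]
dx = length / ncells
eps0, q_e, q_m = 8.854e-12, -1.602e-19, -1.759e11

rng = np.random.default_rng(0)
npart, weight, dt = 10_000, 1e10, 1e-10   # macro-particles, weight, time step [s]
x = rng.uniform(0, length, npart)         # electron positions [m]
v = rng.normal(0, 1e5, npart)             # electron velocities [m/s]

def pic_step(x, v):
    idx = np.clip((x / dx).astype(int), 0, ncells - 1)
    frac = x / dx - idx
    # 1) deposit charge to grid nodes (linear weighting), with a uniform
    #    neutralizing background so the net charge is zero
    rho = np.zeros(ncells + 1)
    np.add.at(rho, idx, 1 - frac)
    np.add.at(rho, idx + 1, frac)
    rho *= q_e * weight / dx
    rho -= rho.mean()
    # 2) solve Poisson's equation for phi (a few Jacobi sweeps suffice here)
    phi = np.zeros(ncells + 1)
    for _ in range(200):
        phi[1:-1] = 0.5 * (phi[:-2] + phi[2:] + dx * dx * rho[1:-1] / eps0)
    efield = -np.gradient(phi, dx)
    # 3) gather the field to particle positions and push the particles
    e_at_p = (1 - frac) * efield[idx] + frac * efield[idx + 1]
    v = v + q_m * e_at_p * dt
    x = np.mod(x + v * dt, length)        # periodic wrap, to keep the sketch short
    return x, v

for _ in range(100):
    x, v = pic_step(x, v)
print("rms electron velocity after 100 steps:", np.sqrt(np.mean(v**2)))
```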

Another way VSim is unique in plasma modeling is how it fits into commercial workflows. It has an easy-to-use interface and integrates with CAD. VSim further allows the development of analyzer plugins so that the user can analyze both the fields and the particles within the plasma.

What keeps your customers up at night?
As everyone knows, moving to smaller critical dimensions is making the problems harder and driving up capex, which causes all kinds of business problems.  There are too many variables in advanced plasma processing to optimize with a pure experimental approach.  Semiconductor companies are augmenting prototyping with simulation. Plasma etch is a difficult area involving many variables, including geometries of the etch, wafer and chamber, the plasma energy and chemistry in the chamber, and the wafer surface and etch profile. Our semiconductor customers’ interests are to reduce time and cost by reducing experimental iterations when tackling an advanced process etching recipe. The ROI from use of simulation is measured in reduced time to production, development cost and machine utilization.

How do customers engage with you?
There are several ways our customers engage with us including directly phoning or emailing our sales team or requesting an evaluation license through our website.  An application engineer (AE) will then contact the customer to determine how our software might best fit their needs.  The AE sets up the download and walks the customer through the software.  Several of our customers have independently set up simulations using the software on their own.  VSim comes with a rich set of examples for modeling of plasmas, vacuum electronics devices, and electromagnetics for antennas and cavities.  In addition, we provide various levels of consulting services, ranging from an AE setting up your problem and guiding you to the solution, to an AE completely solving your problem, including data analysis, and then providing the direct result.

What is next for Tech-X?
We have a number of skunk-works projects under way that will bring exciting new capabilities to plasma and photonics modeling.  We are looking at GPU and cloud computing with the aim of making computations fast to reduce development time, the number of fabrication cycles and the need for capital expenditures.  We expect to be able to have improved capabilities for modeling the latest plasma etch reactors, which will be unique in the industry.  We have an upcoming webinar on proving our current capabilities, and will soon have a series of webinars that demonstrate our latest features and plans.

Webinar: Learn More About VSim 12.0
Built on the powerful Vorpal physics engine that researchers and engineers have used for over 20 years, VSim 12 offers new reaction capabilities, user examples, and numerous improvements designed to increase user efficiency.

Also Read:

Understanding Sheath Behavior Key to Plasma Etch


ASML – Powering through weakness – Almost untouchable – Lead times exceed downturn
by Robert Maire on 01-26-2023 at 10:00 am

-Demand far exceeds supply & much longer than any downturn
-Full speed ahead-$40B in solid backlog provides great comfort
-ASP increase shows strength- China is a non-issue
-In a completely different league than other equipment makers

Reports a good beat & Guide

Revenues were Euro6.4B, with system sales making up Euro4.7B of that. EPS was Euro4.6 per share. All beat expectations. 18 EUV systems were shipped and 13 systems recognized. Most importantly, order intake was Euro6.3B, of which EUV was Euro3.4B. In essence, ASML’s book-to-bill ratio remains very strong at better than 1.3.

ASML has a huge, multi-year backlog of Euro40.4B, which keeps them very warm at night. Reassuringly, the backlog continues to build.

Backlog timeframe well exceeds any possible downturn length

With Euro40.4B in backlog and continuing strong orders, ASML has a multi-year backlog. The bottom line is that customers never get off the order queue and the queue keeps growing in length.

Customers understand the long-term growth model of semiconductors and are clearly ignoring a short-term weakness, whether it’s 6 months, a year, or more. ASML will ride out any expected weak period.

Other equipment makers, who compete for business with quick lead times, are not so fortunate. They will revert to a “turns” business and see orders fall off, as customers can easily get out of the order queue and get back on when the industry picks up again.

ASP increases demonstrate strength

ASML mentioned that its EUV ASPs are increasing from 160M to 165-170M, which further indicates the level of strength that being a virtual monopoly brings. ASML is the only EUV game in town and can price to market. DUV pricing has also increased. Both increases are based on productivity parameters.

We highly doubt that other semiconductor equipment segments are able to push through price increases in the face of falling orders, even with increased performance, which they usually give away for free.

This is one of the keys that separates ASML from others in the semi equipment market and puts them in a league of their own. ASML is looking at an up 2023 while others are talking about WFE being down 20%.

This also implies that if lithography spend is actually up in 2023, non-litho spend is down more than 20%, further separating ASML from other semi equipment makers.

Full speed ahead with high NA and production capacity increases

ASML has been under a lot of pressure to increase production and has spent a huge amount of both money and effort with suppliers, most notably Zeiss, to reach an expected 60 EUV and 375 DUV systems in 2023.
ASML will continue to spend, as the job is not over and they need more capacity. Another major expense is the high-NA product, which is seeing a large development spend in advance of any revenue.

This all suggests that ASML’s results might be even better without the “headwinds” of the additional spend they currently carry. Clearly the spend is relatively minor: with a Euro7.4B cash balance and strong earnings, they are very comfortably awash in cash.

Results will still vary as to mix and lumpiness

Given the high ASP of systems and the differential between DUV and EUV ASPs, we expect quarterly lumpiness depending upon what is shipped in which quarter and where customer near-term demand goes. ASML is expecting a slightly weak Q1, which appears to be due primarily to mix and normal lumpiness; we are not in the least concerned.

China remains a non-issue as semiconductors are a global zero sum game

We have repeated many times that the semiconductor industry is a zero-sum game. That is, chip demand remains the same regardless of where the chips are made. If chips are not made in China (due to the embargo), they will be made elsewhere by others, and those others will need the same litho tools that China would have otherwise bought. The only impact is that China is kept out of the leading edge that other countries have access to.

ASML will still sell the same number of EUV tools, just shipping them to other places. Although politically sensitive and much talked about, the actual impact on ASML is near zero.

ASML remains above the near term fray maintaining focus on long term

Management, while certainly cautious about near-term issues, is rightly more focused on the long-term issues of capacity and technology. This 5-to-10-year focus is very appropriate given the business they are in. We saw that the lead time for EUV was decades, as ASML struggled through advances but was rewarded in the long term for its dedication to the cause of technology. Building capacity is a long-term and costly struggle, as is technology, and ASML is investing for the future.

The stocks

We continue to view ASML’s valuation as well above the rest of the semi equipment makers, in a league of their own. They are also unique in that their view is of an up year versus everyone else’s expectation of a down year.

Although ASML talked about a potential recovery of the industry in H2 2023, we are a bit more cautious given the depth of this downturn being one of the worst we have seen in a long time. But none of this matters to ASML given their horizon.

We would remain an owner/buyer of ASML stock but would remain light on the rest of the group, especially LRCX and AMAT, given their shorter-term equipment model in the face of widespread weakness coupled with China issues, a double whammy that ASML does not face.

As with the lenses and focal lengths that ASML is well acquainted with, being focused on the long term means the short term is out of focus and less relevant to them…

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants and investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Where there’s Smoke there’s Fire: UCTT ICHR LRCX AMAT KLAC Memory

Samsung Ugly as Expected Profits off 69% Winning a Game of CAPEX Chicken

Micron Ugly Free Fall Continues as Downcycle Shapes Come into Focus


Designing a ColdADC ASIC For Detecting Neutrinos
by Kalar Rajendiran on 01-26-2023 at 6:00 am

Cliosoft recently hosted a webinar where Carl Grace, a scientist from Lawrence Berkeley National Laboratory (LBNL), talked about a cutting-edge project for detecting neutrinos: the Deep Underground Neutrino Experiment (DUNE). Many of us know what a neutron is, but what is a neutrino? Before we get to that, here is some background on Cliosoft and some insights into LBNL.

Cliosoft: The company has been serving the semiconductor industry for more than 25 years. Its product offerings fall into three main categories: hardware design data management, IP reuse, and highlighting differences between two designs directly on a schematic or layout. The relevance of Cliosoft to the DUNE project is directly tied to Cliosoft’s Hardware Design Data Management tool suite. This tool suite empowers multi-site design teams to efficiently collaborate on complex hardware designs, and the DUNE project is quite a complex one with demanding requirements. The project involves collaboration among the LBNL, Fermilab and Brookhaven national laboratories.

LBNL: Many of us have heard of LBNL but may not be aware of its expertise, excellence and diversity. With 3,500 employees, 1,000 students and 1,750 visiting researchers, it is much larger than many would imagine. With this much brain power directed at the physical sciences, computing, biosciences, earth and energy sciences, and material and nanotechnologies, it is the most diverse US National Laboratory. It offers the following user facilities for researchers to tap into: the Advanced Light Source, the National Energy Research Scientific Computing Center, the Energy Sciences Network, the Joint Genome Institute, and the Molecular Foundry, including the National Center for Electron Microscopy. The Lab has 14 Nobel prizes to its credit, the most recent in chemistry to Prof. Jennifer Doudna (co-discoverer of CRISPR gene editing).

Whether one is in particle physics and/or semiconductors, there is something of interest and value in this webinar. To watch this on-demand webinar, go here.

What is a Neutrino and why study them?

Neutrinos are fundamental particles with very low mass that travel close to the speed of light and interact only through gravity and the weak nuclear force. They could help answer questions such as: why is there matter in the universe, do isolated protons decay, and how can we witness the birth of a black hole?

How to detect Neutrinos?

Neutrinos travel at almost the speed of light and can pass through 60 light years of water on average before interacting with any matter, which makes them very difficult to detect. The solution is the DUNE detector, the largest cryogenic particle detector ever made, which can detect neutrinos from an intense neutrino beam directed toward it from 800 miles away. A tight neutrino beam is created when protons from an accelerator at Fermilab hit a target, producing particles that decay into neutrinos. This beam is sent over 800 miles through solid underground rock and earth to a detector sitting one mile under the ground. This setup prevents cosmic rays from having any impact on the experiment. The detector itself is an extremely large tank filled with liquid Argon. Liquid Argon, being very dense, provides a lot of targets for the neutrinos to potentially hit. Being chemically inert, Argon does not cause any chemical reactions that would disturb the experiment and pollute the collected data.


When a neutrino interacts

When a neutrino interacts with an atom of Argon, the atom is ionized. The freed electrons form an electric charge that travels through the liquid Argon in the tank. The tank is placed under an enormous electric field that drifts this charge onto planes of wires. When the charge reaches those wires, it induces very small currents that can then be recorded. Reading out and digitizing these tiny currents is a key part of the experiment, and a key function of the detection electronics is analog-to-digital conversion (ADC). Immersing the detector electronics in liquid Argon greatly reduces the cabling capacitance, allowing lower achievable noise, and serves as an enabling technology for the DUNE project.

Cold ADC Requirements

  • 2 MS/s sampling rate per channel, 16 channels, 12-bit resolution
  • Sub-LSB noise performance
  • 30-year reliability in a cold environment (-184°C)
  • Operation at both room temperature and cryogenic temperature for testing purposes

Readily available off-the-shelf ADCs cannot meet the above requirements. Custom ADCs need to be built and integrated into ASICs implementing the detection electronics.
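
Some quick arithmetic on those requirements shows the scale of the readout. This is raw sample data only, before any framing, zero-suppression or compression the real DAQ chain would apply.

```python
# Raw data rate implied by the ColdADC requirements and planned deployment.

channels_per_asic = 16
sample_rate_hz    = 2e6      # 2 MS/s per channel
bits_per_sample   = 12
num_asics         = 40_000   # planned deployment at the DUNE Far Detector

per_asic_bps = channels_per_asic * sample_rate_hz * bits_per_sample
total_bps    = per_asic_bps * num_asics
print(f"per ColdADC: {per_asic_bps/1e6:.0f} Mbit/s raw")
print(f"whole detector: {total_bps/1e12:.1f} Tbit/s raw")
```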

Collaboration among teams from the three labs

A small team from each of LBNL, Fermilab and Brookhaven National Laboratory collaborated to design the detection electronics for the DUNE project. With different pieces of the required design IP developed by geographically separated teams, the Cliosoft data management solution enabled automated, design-aware, surgical data synchronization. It allowed fine-grained access controls for each participating national lab and provided network storage optimization at each participating site.

Summary

The three-lab team has successfully developed the ColdADC ASICs to instrument the neutrino detector. Approximately 40,000 ColdADC ASICs will be deployed at the DUNE Far Detector complex, immersed in liquid Argon. Each ColdADC will read out 16 channels, for a total of 640,000 wire channels. The detector electronics can be operated over a 250°C range and have achieved better noise performance than the commercial ADC solution used in the Short Baseline Neutrino Detector (SBND) experiment. The DUNE experiment will be conducted over a 30-year period.

Also Read:

Design to Layout Collaboration Mixed Signal

Webinar: Beyond the Basics of IP-based Digital Design Management

Agile SoC Design: How to Achieve a Practical Workflow


10 Impactful Technologies in 2023 and Beyond
by Ahmed Banafa on 01-25-2023 at 10:00 am

There are many exciting technologies expected to shape the future. The following are some that will impact our lives at different levels and depths over the coming 5 years:

Generative AI, also known as generative artificial intelligence, is a type of #AI that is designed to generate new content or data based on a set of input parameters or a sample dataset. This is in contrast to traditional AI, which is designed to analyze and interpret existing data.

There are several different types of generative AI, including generative models, which use machine learning algorithms to learn the underlying patterns and structures in a dataset, and then generate new data based on those patterns; and generative adversarial networks (GANs), which are a type of machine learning model that consists of two neural networks that work together to generate new data.
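
To make the adversarial idea concrete, here is a minimal GAN sketch in PyTorch: a tiny generator learns to match a 1D Gaussian “real” dataset while a discriminator tries to tell the two apart. It is purely illustrative, with invented network sizes and data, not a production model.

```python
import torch
import torch.nn as nn

# Minimal GAN: generator maps noise to samples, discriminator scores real vs fake.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))
    # discriminator step: label real as 1, generated as 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator step: try to make the discriminator label its output as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean ~", G(torch.randn(1000, 8)).mean().item())  # should drift toward 3
```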

Generative AI has a wide range of potential applications, including image and video generation, music composition, and natural language processing. It has the potential to revolutionize many industries, including media and entertainment, advertising, and healthcare.

A voice user interface (VUI) is a type of user interface that allows people to interact with devices, applications, or services using voice commands. VUIs are becoming increasingly popular due to their ease of use and the increasing capabilities of natural language processing (NLP) technology, which enables devices to understand and respond to human speech.

VUIs are used in a variety of applications, including smart speakers, virtual assistants, and home automation systems. They allow users to perform tasks or access information simply by speaking to the device, without the need for manual input or navigation.

#VUIs can be designed to understand a wide range of commands and queries, and can be used to control various functions and features, such as setting reminders, playing music, or turning on the lights. They can also be used to provide information and answer questions, such as providing weather updates or answering queries about a particular topic.

Edge computing is a distributed computing paradigm that brings computing and data storage closer to the devices or users that need it, rather than relying on a central server or cloud-based infrastructure.

In edge computing, data is processed and analyzed at the edge of the network, where it is generated or collected, rather than being sent back to a central location for processing. This can help to reduce latency, improve performance, and increase the scalability of systems that require real-time processing or decision-making.

Edge computing is used in a variety of applications, including the Internet of Things (IoT), where it allows devices to process and analyze data locally, rather than sending it over the network to a central server. It is also used in applications that require low latency, such as video streaming and virtual reality, as well as in industrial and military applications where a central server may not be available.

5G networks use a range of technologies and frequencies to provide coverage, including millimeter wave bands, which are high-frequency bands that can provide very fast data speeds, but have limited range. They also use lower-frequency bands, which can provide wider coverage but lower data speeds.

#5G networks are expected to offer data speeds that are much faster than previous generations of mobile networks, with some experts predicting speeds of up to 10 gigabits per second. They are also expected to offer lower latency, or the time it takes for a signal to be transmitted and received, which is important for applications that require real-time responses, such as video streaming and online gaming.

5G technology is still in the early stages of deployment, and it is expected to roll out gradually over the coming years. It is likely to be used in a variety of applications, including mobile devices, IoT devices, and a wide range of other applications that require fast, reliable connectivity.

A Digital Twin is a virtual representation of a physical object or system. It is created by using data and sensors to monitor the performance and characteristics of the physical object or system, and using this data to create a digital model that reflects the current state and behavior of the physical object or system.

Digital twins can be used in a variety of applications, including manufacturing, healthcare, and transportation. In manufacturing, for example, a digital twin can be used to simulate the performance of a production line or equipment, allowing manufacturers to optimize their operations and identify potential issues before they occur. In healthcare, digital twins can be used to model the body or specific organs, allowing doctors to better understand the patient’s condition and plan treatment.

Digital twins are created using a combination of sensors, data analytics, and machine learning techniques. They can be used to visualize and analyze the behavior of the physical object or system, and can be used to optimize performance, identify issues, and make decisions about how to improve the physical object or system.

Quantum Computers are different from classical computers, which use bits to store and process information. Quantum computers can perform certain types of calculations much faster than classical computers, and are able to solve certain problems that are beyond the capabilities of classical computers.

One of the key benefits of quantum computers is their ability to perform calculations that involve a large number of variables simultaneously. This makes them particularly well-suited for tasks such as optimization, machine learning, and data analysis. They are also able to perform certain types of encryption and decryption much more quickly than classical computers.

Quantum computing is still in the early stages of development, and there are many challenges to overcome before it becomes a practical technology. However, it has the potential to revolutionize a wide range of industries, and is likely to play an increasingly important role in the future.

A Chat Bot is a type of software that is designed to engage in conversation with human users through a chat interface. Chat bots are typically used to provide information, answer questions, or perform tasks for users. They can be accessed through a variety of platforms, including messaging apps, websites, and social media.

There are several different types of chat bots, including rule-based chat bots, which are designed to respond to specific commands or queries; and artificial intelligence (AI)-powered chat bots, which use natural language processing (NLP) to understand and respond to more complex or open-ended queries, for example #ChatGPT.

Chat bots are commonly used in customer service, where they can handle routine inquiries and help customers resolve simple issues without the need for human intervention. They are also used in marketing, where they can help businesses to connect with customers and provide information about products and services.

XR is a term that is used to refer to a range of technologies that enable immersive experiences, including virtual reality (VR), augmented reality (AR), and mixed reality (MR).

Virtual reality (VR) is a technology that allows users to experience a simulated environment as if they were physically present in that environment. VR is typically experienced through the use of a headset, which allows users to see and hear the virtual environment, and sometimes also to interact with it using handheld controllers or other input devices.

Augmented reality (AR) is a technology that allows users to see virtual elements superimposed on their view of the real world. #AR is often experienced through the use of a smartphone or other device with a camera, which captures the user’s surroundings and displays virtual elements on top of the real-world view.

Mixed reality (MR) is a technology that combines elements of both VR and AR, allowing users to interact with virtual elements in the real world. #MR typically requires the use of specialized hardware, such as a headset with a built-in camera, which captures the user’s surroundings and allows virtual elements to be placed within the real-world environment.

Distributed ledger technology (DLT) is a type of database that is distributed across a network of computers, rather than being stored in a central location. It allows multiple parties to share and update a single, tamper-evident record of transactions or other data, without the need for a central authority to oversee the process.

One of the most well-known examples of #DLT is the blockchain, which is a decentralized, distributed ledger that is used to record and verify transactions in a secure and transparent manner. Other examples of DLT include distributed databases, peer-to-peer networks, and consensus-based systems.
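
The tamper-evident property is easy to illustrate: each block commits to the hash of the previous block, so altering any earlier record invalidates every later hash. The following minimal hash-chain sketch is illustrative only; real DLTs add consensus, digital signatures and peer-to-peer replication.

```python
import hashlib, json, time

# Minimal hash chain: each block's hash covers its data and the previous hash.

def make_block(data, prev_hash):
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    for prev, block in zip(chain, chain[1:]):
        body = {k: block[k] for k in ("time", "data", "prev")}
        ok = (block["prev"] == prev["hash"] and
              block["hash"] == hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest())
        if not ok:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"from": "alice", "to": "bob", "amount": 5}, chain[-1]["hash"]))
chain.append(make_block({"from": "bob", "to": "carol", "amount": 2}, chain[-1]["hash"]))
print("valid:", verify(chain))
chain[1]["data"]["amount"] = 500          # tamper with an earlier record
print("valid after tampering:", verify(chain))
```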

DLT has a wide range of potential applications, including financial transactions, supply chain management, and identity verification. It is also being explored for use in the development of new products and services, such as smart contracts and decentralized applications (dApps).

The Internet of Things (IoT) is a network of connected devices that are able to communicate and exchange data with each other. These devices can range from simple sensors and actuators to more complex devices such as smart thermostats and appliances.

The #IoT is made possible by the widespread availability of broadband internet, as well as the development of low-cost sensors and other technologies that enable devices to be connected to the internet. These devices are often equipped with sensors that allow them to gather data about their environment or their own operation, and are able to communicate this data to other devices or systems.

The IoT has the potential to transform many aspects of our lives, including how we live and work. For example, smart home systems that use IoT technology can allow users to control and monitor their home appliances and systems remotely, and can provide alerts and notifications about potential issues. The IoT is also expected to play a significant role in the development of smart cities, which are urban environments that use technology to improve the quality of life for residents.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing


Also Read:

CES 2023 and all things cycling

9 Trends of IoT in 2023

Microchips in Humans: Consumer-Friendly App, or New Frontier in Surveillance?


Effective Writing and ChatGPT. The SEMI Test
by Bernard Murphy on 01-25-2023 at 6:00 am

ChatGPT is a hot topic, leading a few of my colleagues to ask me, as a writer, what I think of the technology. I write content for tech companies and most of my contacts freely confess that they, or more often their experts, struggle with writing. If a tool could do that job for them, they would be happy and I would have to find a different hobby. Overlooking the usual hype around anything AI, I have seen a few examples of ChatGPT rewrites which I found impressive. Since I can’t get onto the site to test it myself (overload?), I must base what follows on those few samples.

As a promoter of AI, I can’t credibly argue that my expertise should be beyond AI’s reach. Instead, I spent some time thinking about where it might help and where it probably would not be as helpful. This I condensed into four objectives I consider important in effective writing: style, expertise, message, and impact (SEMI, conveniently 😊). Think of these as layers which progressively build an impression for a reader, rather than sequential components.

SEMI

Style: Inexperienced writers commonly spend too much time here, suggesting a possible advantage for novices. Write a first pass in your own style, then run it through the tool. ChatGPT will output reasonable-length sentences and paragraphs in a conversational style, probably easier to read than your first pass. I haven’t been able to check if it supports conference paper style (3rd person, passive voice, etc.). The technology seems like it could offer a real advantage to anyone agonizing over their awkward prose or endlessly circling around the right way to phrase a particular sentence. That said, I advise reading the output carefully and correcting as you see fit.

Expertise: AI isn’t magical. ChatGPT is trained over huge amounts of data but almost certainly not huge amounts in your specialized niche. It can provide well-written general prose embedding your technical keywords or phrases, but your readers are looking for expert substance or better yet novel expert substance, not prose decoration around tech/buzz keywords. Only you can provide that depth, through examples and analysis. ChatGPT can still help with style after you have written this substance.

Message: Your target readers are looking for articles with a point. What is the main idea you want to convey? Implicitly perhaps “buy my product”, but raw commercials have a small audience. The message should be a useful and informative review of a general opportunity, need or constraint in the market you serve. Something readers will find valuable whether or not they want to follow up. The message should shape the whole article, from opening to closing paragraph. I very much doubt that ChatGPT can do this for you unless that message is already written into the input text.

Impact: What should I remember the day after or a week after I have read your article? We don’t remember lists. Your article should build around one easily remembered message. We also don’t remember “more of the same” pitches. We remember novelty, a new idea or twist which stands out from an undifferentiated background of “me too” claims from others. Novelty can be in the message, in the expert examples you present, or in a (product independent) claim of the characteristics of a superior solution. You should also consider that your article leaves an impression about yourself and about your company, as a source to be trusted. Or otherwise.

One last note. Readers develop impressions in SEMI order. I don’t approach writing in this order. I first think about the message. For expertise, I specialize in a relatively narrow range of technologies, and I talk to client experts before I write to provide me with strong and current examples. Style is something I have developed over the years, though I will certainly experiment with ChatGPT when the site again becomes available. Finally, lasting impact starts with the message. I finish the first draft then move onto something else for at least a day. Coming back later gives me time to mull over and consider improvements to better meet each of the SEMI objectives.

I’d be interested to hear about your ChatGPT experiments 😊

Also Read:

All-In-One Edge Surveillance Gains Traction

2022 Retrospective. Innovation in Verification

Formal Datapath Verification for ML Accelerators