
Lifecycle Management, FuSa, Reliability and More for Automotive Electronics
by Bernard Murphy on 04-22-2024 at 6:00 am


Synopsys recently hosted an information-rich webinar, modestly titled “Improving Quality, FuSa, Reliability, and Security in Automotive Semiconductors”. I think they undersold the event; this was really about managing all of those things through the lifecycle of a car, in line with auto OEMs’ strategies for the future of the car. The standout message for me was total lifecycle management, from initial semiconductor architecture and design through end of life. I heartily recommend watching this webinar.

Heinz Wagensonner on an OEM perspective

Heinz is Manager of the Audi Progressive Semiconductor Program. He opened with a reminder of how an auto OEM sees the electronics future: advanced driving support, immersive experience, and rethinking how to monetize added-value options. One interesting set of stats is around mission profiles, measured in hours of active operation over the car lifetime. For a traditional ICE car this has been 8,000 hours (about 1.5 hours per day over a 15-year life). For an EV the mission profile extends to 55,000 hours: perhaps providing power to the house at night, and during the day charging or operating while supporting more functions than in earlier models. Heinz sees future EV profiles running to 130k hours, supporting multiple always-on functions such as face ID to enter and start the car, always-on networks for OTA updates, and security to guard against threats.
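As a quick sanity check on those mission-profile numbers (my own back-of-the-envelope arithmetic, not Audi’s model):

```python
# Rough sanity check of the quoted mission-profile hours.

def lifetime_hours(hours_per_day: float, years: float) -> float:
    """Active operating hours accumulated over the vehicle lifetime."""
    return hours_per_day * 365 * years

# Traditional ICE car: ~1.5 h/day over a 15-year life -> ~8,000 hours
ice_hours = lifetime_hours(1.5, 15)            # ~8,200 hours

# A 55,000-hour EV profile over the same 15 years implies ~10 h/day
ev_hours_per_day = 55_000 / (365 * 15)         # ~10 h/day

# A 130k-hour always-on profile approaches round-the-clock operation
future_hours_per_day = 130_000 / (365 * 15)    # ~23.7 h/day
```

The jump from roughly 1.5 to 10 to nearly 24 active hours per day is what makes the reliability challenge so different from the ICE era.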

Today, advanced systems build on advanced processes (TSMC is already offering a 3nm early automotive development kit), very capable but with minimal track record in reliability (automakers used to require a 5-year minimum). Domain-specific devices with complex mission profiles compound the lack of track record. Mission profiles, advanced processes and advanced designs together point to a potential crisis for OEMs; an NHTSA report cites nearly 5 million ADAS-related recalls in 2022. At $1,000 per car, this is already a very expensive problem.

While mitigating the problem starts with strong design, Heinz also stresses in-service monitoring and compensation as an important part of the solution. On-chip sensors are central to these techniques. Such sensors play a role in preventive maintenance, perhaps warning the driver of an anticipated problem calling for a near term service visit. Or an imminent problem demanding the vehicle be switched to a safe state and the driver take immediate action (pull over to the side of the road).

Those features can prevent or mitigate a hazard in use before it happens. But what happens when the car is taken in for service? Heinz elaborated on the highly complex and apparently quite brittle path to diagnose a root cause, from initial service down through the value chain. As an example, 80% of ECUs assumed to be a problem root cause (and then replaced) prove on more detailed analysis not to have been the source of the problem! Yet following all diagnostic steps from initial service to a Tier 2 (semiconductor supplier) can take 30 days, even when the root cause can be isolated. This overhead is unsustainable for managing warranty costs, the potential for a recall, or worse.

He sees the path forward as a combination of on-chip sensor data and learning from prior problems through AI, combined in Signature Failure Analysis (SFA). Accumulating learned experience will lead to high-confidence fixes which can be applied cost-effectively during a service call, and can also provide effective and accurate feedback to Tier 1 and Tier 2 suppliers. Some signatures may not map to a known problem and will still need to follow the long diagnostic path. However, once resolved, they too can be added to the training database.
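Conceptually, SFA matches an observed sensor signature against a database of previously diagnosed failures. A minimal sketch of that matching idea, with hypothetical data and a simple distance metric standing in for a trained ML model:

```python
import math

# Hypothetical database of previously diagnosed failure signatures:
# each maps a known root cause to an on-chip sensor reading vector.
KNOWN_SIGNATURES = {
    "power-grid IR drop": [0.9, 0.1, 0.4],
    "thermal runaway":    [0.2, 0.95, 0.3],
    "interconnect aging": [0.3, 0.2, 0.85],
}

def classify_signature(reading, threshold=0.5):
    """Return the closest known root cause, or None if nothing matches
    well enough (in which case the long diagnostic path still applies)."""
    best_cause, best_dist = None, float("inf")
    for cause, sig in KNOWN_SIGNATURES.items():
        dist = math.dist(reading, sig)  # Euclidean distance
        if dist < best_dist:
            best_cause, best_dist = cause, dist
    return best_cause if best_dist < threshold else None
```

Readings that match nothing return None and must follow the long diagnostic path; once resolved, the new signature can be added to the database, mirroring the feedback loop Heinz describes.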

Alessandra Nardi on an EDA perspective

Alessandra is a Distinguished Architect in the Systems Solution group at Synopsys and, IMHO, a guru in automotive; every Alessandra talk I have attended has given me a better understanding of automotive system design and directions, with little to no product marketing. Her talk called for a holistic view of lifecycle challenges, starting with design and then running through ramp, production, and in-field monitoring.

In-design optimizations for PPA and robustness are already well understood, though they still suggest opportunity for further advances. Here she highlighted the need for improved modeling of uncertainty, through refined sensitivity analyses of variations based on different factors (voltage, temperature, etc.) rather than blanket margins. Data gathered during ramp and production analyses, through in-chip monitors placed during design, will drive this learning. In turn that can drive yield and reliability optimizations and improvements to PPA, safety and other metrics. The same monitors can capture data during in-field analysis, feeding information back to the supply chain to drive additional optimizations while also enabling real-time tuning through techniques like adaptive voltage scaling.
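Adaptive voltage scaling is easy to picture as a feedback loop between a timing monitor and the supply rail. A highly simplified sketch (the function, thresholds and step size are all my own illustrative choices, not Synopsys’ implementation):

```python
def adjust_vdd(vdd: float, slack_ps: float,
               target_ps: float = 50.0, step: float = 0.005) -> float:
    """One step of a simplified adaptive voltage scaling loop:
    if monitored timing slack is comfortably above target, shave
    supply voltage to save power; if it drops below target, raise it."""
    if slack_ps > target_ps * 1.5:
        return vdd - step   # plenty of margin: reduce power
    if slack_ps < target_ps:
        return vdd + step   # margin eroding (aging, temperature): guard-band up
    return vdd              # within band: hold steady
```

In a real chip this loop runs continuously against on-chip monitor readings, which is exactly why the monitors must be planned in during design.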

The central component of in-chip monitor methods is a machine-learning system, gathering mission feedback from monitors to learn sensitivities/signatures for trends or outlier behaviors. In ramp or production these may suggest a need for silicon revision or process fine-tuning. Similarly, an ML model can support in-field diagnoses and tuning.
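As a toy illustration of the outlier-detection piece (a z-score stand-in for the trained ML models Alessandra describes; the names and threshold are my own):

```python
from statistics import mean, stdev

def flag_outliers(samples, z_threshold=3.0):
    """Flag monitor readings that deviate strongly from the fleet trend.
    A real system would use trained models per sensor and mission
    profile; this z-score sketch just illustrates the idea."""
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if sigma and abs(x - mu) / sigma > z_threshold]
```

An outlier flagged in production might trigger process fine-tuning; the same flag in the field might trigger a preventive-maintenance warning.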

Alessandra hinted that such lifetime optimization systems are not only important for automotive markets. Everything she and Heinz talked about is likely also important in aerospace and defense, industrial, medical and infrastructure markets though with different thresholds across the various metrics discussed in this webinar. I would imagine that even sustainability may play an increasing role, at least in product lifetimes and power consumption.

Fascinating discussion. Again, you can access the webinar HERE.

Also Read:

Early SoC Dynamic Power Analysis Needs Hardware Emulation

Synopsys Design IP for Modern SoCs and Multi-Die Systems

Synopsys Presents AI-Fueled Innovation at SNUG 2024


Podcast EP219: How Synopsys Addresses Debug and Coverage Closure Challenges with Robert Ruiz
by Daniel Nenni on 04-19-2024 at 10:00 am

Dan is joined by Robert Ruiz, product management director responsible for strategy and business growth of several verification products at Synopsys. Robert has held various marketing and technical positions for leading functional verification and test automation products at companies including Synopsys, Novas Software, and Viewlogic Systems. He has more than 30 years of experience in advanced EDA technologies and methodologies and spent several years designing ASICs.

Robert talks with Dan about the rising verification challenges of debug and coverage closure for advanced designs. The time spent on these activities is rising, with data suggesting debug and coverage closure can occupy 75% of the verification cycle.

Robert describes several approaches from Synopsys that can provide a 10X – 60X improvement in productivity for these activities. New software tools, methodologies and the application of AI are all discussed along with an overview of the new UI for Verdi and how it impacts the process.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


ASML- Soft revenues & Orders – But…China 49% – Memory Improving
by Robert Maire on 04-19-2024 at 8:00 am

Fully assembled TWINSCAN EXE 5000

ASML- better EPS but weaker revenues- 2024 recovery on track
China jumps 10% to 49%- Memory looking better @59% of orders
Order lumpiness increases with ASP- EUV will be up-DUV down
“Passing Bottom” of what has been a long down cycle

Weak revenues & orders but OK EPS

Reported revenue was €5.3B and EPS was €3.11, versus expectations of €5.41B and EPS of €2.81.

Guidance was for revenues of between €5.7B and €6.2B, versus the street expectation of €6.49B.

While reported revenues were less than expected, it’s obvious that the Q2 outlook was more of a concern, coming in significantly below what the street expected.

A Lumpy business gets lumpier as ASPs increase

With High NA EUV systems costing many times that of an ArF tool, it should be no surprise that EUV and High NA EUV systems ordered or delivered in different quarters will cause significant variation in revenues and guidance. This is obviously exacerbated by the highly cyclical nature of the industry and fickle customers that can turn spending on or off very quickly.

In 2023 we saw some huge order numbers, way above expectations.

It would likely be better for investors to look at averaging orders and revenue over a longer time period.

At the end of the day, the need for lithography systems is increasing along with the average selling price.

We have covered ASML since working on its IPO in 1995 (almost 30 years!) and when we look back over the long term trend line of revenue, the story is quite amazing and not likely changing much going forward…

Memory will be up in 2024 and Logic will be down

There have been significant logic orders over the past year or more with very little memory business, as memory had significant excess capacity. Going into 2024 we will see memory orders picking up as the memory industry continues to recover, while logic goes through a digestion period for all the equipment previously ordered and delivered.

Memory bookings jumped from 47% of orders to 59% in the quarter while logic dropped from 53% to 41%.

We have already heard from several memory makers that their overall Capex will start to recover in 2024. We would caution investors that while memory is getting better we still have strong supply and pricing is still a bit flakey.

High bandwidth memory will be a very bright spot, but investors still need to remember it’s only 5% of the overall memory market, although growing very quickly.

China is up to 49% in revenues but down in actual amount

At face value, 49% of revenue from China seems concerning, but we would point out that this represents a smaller actual dollar amount than China’s peak business last year. China has increased as a percentage because the rest of the world has slowed more. The more interesting thing we would point out is that while China was 49%, the US was almost a rounding error at 6%, which continues to show how the US is being outspent by China by a huge margin. This is not something new but a long-term ongoing issue. It will be difficult for the US to catch up while spending such a paltry amount.

2024 is second half weighted

Given the long lead times of equipment and production planning, ASML’s 2024 will be back-end loaded. Overall, 2024 still looks like it will have similar business levels to 2023.

Essentially what we have is a U-shaped curve, with the end of 2023/beginning of 2024 being the bottom point of the somewhat symmetrical curve. While 2023 was logic dominated, 2024 will be more memory dominated.

EUV will be up while DUV will be down in 2024

It should be no surprise that EUV will be up in 2024 as it is becoming the mainstay of lithography in the semiconductor industry.

Much as “G Line” and “I Line” lithography have become relics of the past that most current industry analysts have never heard of, so will DUV fade into history as EUV takes over.

We would point out that system cost grows roughly exponentially as wavelength shrinks: compare the cost of G Line to I Line, DUV to EUV, and finally High NA.

We wonder if a “Hyper NA” system could crack a Billion dollars?

Congratulations to Peter Wennink…Mr EUV

Peter Wennink, the CEO of ASML, will retire after 10 years at the helm of the company. In our view he will clearly be most remembered for navigating the company through the transition to EUV, which was quite difficult and quite treacherous, with many ups and downs. The final product is nothing short of amazing.

While Martin van den Brink was the technology visionary, Peter Wennink made it actually happen and turned ASML into the number one semiconductor equipment company in the world and the technology leader driving the industry, creating many billions of dollars in value.

The Stock

Investors will be disappointed with the weaker than expected revenues and the weaker outlook.

The stock looks to be down around 6-7%, which we view as a bit of an overreaction and an opportunity for investors with a longer-term view past the lumpiness.

We remain positive on the stock and the story overall which has not changed.

We don’t see as much impact on other companies in the semiconductor space, as ASML is a significantly different company with much longer lead times.

We expect most semiconductor equipment stocks to be down in sympathy with ASML, but we would remind investors that we have been saying for quite some time that the stocks had gotten way ahead of themselves, with valuations reflecting a recovery that had already happened and was quite strong.

The reality is that we are at the beginning of a recovery that may not be as strong as expected and may take a while. As pointed out by ASML, we are just now passing the bottom of what we view as a “U”-shaped downcycle and expect 2024 to be somewhat of a mirror of 2023, not a significantly up year overall, and stocks have to get in line with that thought.

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

ASML moving to U.S.- Nvidia to change name to AISi & acquire PSI Quantum

SPIE Let there be Light! High NA Kickoff! Samsung Slows? “Rapid” Decline?

AMAT – Flattish QTR Flattish Guide – Improving 2024 – Memory and Logic up, ICAPs Down


Semi Market Decreased by 8% in 2023… When Design IP Sales Grew by 6%!
by Eric Esteve on 04-19-2024 at 6:00 am

Top10 Table 2023

Design IP revenues reached $7.04B in 2023, with a disparity between license (growing by 14%) and royalty (declining by 6%), and between the main categories. Processor (CPU, DSP, GPU & ISP) grew slightly, by 3.4%, while Physical (SRAM Memory Compiler, Flash Memory Compiler, Library and I/O, AMS, Wireless Interface) slightly declined (-1.4%) and Digital (System, Security and Misc. Digital) grew slightly, by 4%. Clearly, wired Interface is still driving Design IP growth, up 16% to reach almost $2 billion in 2023 (after growth in the 20% range during 2022, 2021 and 2020). IPnest released the “Design IP Report” in April 2024, ranking IP vendors by category and by nature, license and royalty.

The main trend shaking Design IP in 2023 is clear in the Top 10: IP vendors targeting consumer applications, like smartphones, show decreasing revenues, while those offering interface IP products and targeting HPC and AI are growing. There is one exception, and not the least: ARM is moderately growing, at 5%. In fact, ARM compensated for declining royalty revenues due to smartphone weakness with a remarkable performance in license revenues, growing by 28.6%. After years spent chasing the phantom IoT market, ARM realized in 2023 that the real source of growth (and profit) was in HPC and AI, and to a certain extent Automotive, to eventually improve its positioning and portfolio. Also to be noticed is a strong increase at the end of 2023 in “revenues from related parties”, which we can translate as “ARM China”…

If we look at #2, #3 and #4 (Synopsys, Cadence and Alphawave), the last is 100% focused on interconnect IP while for Synopsys it’s 71%, and both are growing strongly (by 23% and 17% respectively) while Cadence’s growth is moderate at 9%. Ceva and Rambus are both declining; in both cases they have re-engineered their portfolios, stopping support for a product line (Ceva) or selling one, as Rambus did with its Interface PHY IP to Cadence.

I have written for more than 10 years on SemiWiki about the importance of PHY IP (“Don’t mess with SerDes”), and you could challenge my position. But remember that I have always said that supporting an advanced, competitive SerDes product line requires a large and talented engineering team, unlike digital controller IP, where only one excellent architect is needed, supported by young engineers.

In other words, supporting PHY IP requires a substantial investment and the ability to support multiple foundries and technology nodes, which is the key condition for high return. Rambus took the decision to refocus on pure digital (controllers for interface protocols) and security.

Synopsys, Alphawave and Cadence’s growth confirms again in 2023 the importance of the wired interface IP category, aligned with data-centric applications: hyperscalers, datacenters, networking and AI.

Looking at the 2016-2023 IP market evolution brings interesting information about the main trends. The global IP market has grown by 106%, while the Top 3 vendors have seen unequal growth: #1 ARM grew by 78%, while #2 Synopsys grew by 245% and Cadence (#3) by 231%. Market share information is even more significant: ARM moved from 48.1% in 2016 to 41.8% in 2023, while Synopsys enjoyed growth from 13.1% to 22% and Cadence passed from 3.4% to 5.6%.

This can be synthesized by comparing 2016-to-2023 CAGR:

  • ARM CAGR 8.6%
  • Synopsys CAGR 19.4%
  • Cadence CAGR 18.7%

Meanwhile, the global IP market has seen a 2016-to-2023 CAGR of 10.8%.

The key takeaway is that the Design IP market has enjoyed a 10.8% CAGR over 2016-2023! Zooming in on the categories (Processor, wired Interface, Physical, Digital), the 2017-to-2023 market share evolution clearly shows the interface category growing (18% to 28%) at the expense of processors (CPU, DSP, GPU), which declined from 58% to 47%, while Physical and Digital are almost stable.
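The CAGR figures follow directly from the cumulative growth numbers; a quick check:

```python
def cagr(total_growth_pct: float, years: int) -> float:
    """Compound annual growth rate (in %) implied by cumulative
    growth over a number of years."""
    return ((1 + total_growth_pct / 100) ** (1 / years) - 1) * 100

# 2016 to 2023 spans seven growth years
cagr(78, 7)    # ARM:      ~8.6%
cagr(245, 7)   # Synopsys: ~19.4%
cagr(231, 7)   # Cadence:  ~18.7%
cagr(106, 7)   # Market:   ~10.8%
```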


IPnest has also calculated IP vendors ranking by License IP revenues:

Synopsys is the clear #1 by IP license revenues with 32% market share in 2023, while ARM is #2 with 29%. Alphawave, created in 2017, is now ranked #4, just behind Cadence, showing how essential high-performance SerDes IP is for modern data-centric applications, and how it can allow building a performant interconnect IP portfolio supporting growth from zero to over $200 million in six years… Reminder: “Don’t mess with SerDes!”

With 6% YoY growth in 2023, when the semiconductor market declined by 8%, the Design IP industry is simply confirming how incredibly healthy this niche is within the semiconductor market; the 2016-to-2023 CAGR of 10.8% is a good metric!

Eric Esteve from IPnest

To buy this report, or just to discuss IP, contact Eric Esteve (eric.esteve@ip-nest.com)

Also Read:

Semidynamics Shakes Up Embedded World 2024 with All-In-One AI IP to Power Nextgen AI Chips

Silicon Catalyst partners with Arm to launch the Arm Flexible Access for Startups Contest!

Synopsys Design IP for Modern SoCs and Multi-Die Systems


ECO Demo Update from Easy-Logic
by Daniel Payne on 04-18-2024 at 10:00 am

EasylogicECO Design Flow

I first met Jimmy Chen from Easy-Logic at #60DAC and wrote about their Engineering Change Order (ECO) tool in August 2023. Recently we had a Zoom call so that I could see a live demo of their EDA tool in action. Allen Guo, the AE Manager for Easy-Logic gave me an overview presentation of the company and some history to provide a bit of context.

The company was started 10 years ago in Hong Kong by a professor and his students; they even won an ICCAD competition for an ECO test case in China, a nice way to get noticed. Their approach addresses making an ECO in four different places:

  • Functional logic changes
  • Low power changes
  • Scan chain changes
  • Metal connection changes

The challenge is to make an ECO with the smallest impact in a design flow to save both time and money. With the EasylogicECO tool you can expect to see the smallest patch size with minimum user effort, getting results in hours not days. Here’s the flow for using their tool.

The tool compares two RTL netlists for differences, finds the modules with differences, and only modifies what is needed. Reading the entire design and looking only for what has changed enables EasylogicECO to be smarter than other ECO approaches, and there’s even formal checking of modules to ensure equivalence.
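The touch-only-what-changed idea can be sketched as a diff over module bodies (a hypothetical helper, not Easy-Logic’s actual algorithm, which also includes formal equivalence checking):

```python
def changed_modules(original: dict, revised: dict) -> set:
    """Given {module_name: source_text} maps for the original and
    revised RTL, return the set of modules an ECO needs to touch:
    modules whose body changed, plus any added or removed."""
    names = original.keys() | revised.keys()
    return {n for n in names if original.get(n) != revised.get(n)}

patch = changed_modules(
    {"alu": "module alu; ... v1", "fifo": "module fifo; ..."},
    {"alu": "module alu; ... v2", "fifo": "module fifo; ..."},
)
# Only 'alu' differs, so only 'alu' needs an ECO patch.
```

Everything outside the changed set stays untouched, which is what keeps the patch size small.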

When making a Metal ECO there are lots of DFM and DRC rules to comply with, and EasylogicECO maintains logic levels in order to keep timing delays in place. There are estimates to account for wire effects on delays, and the tool must pinpoint available spare cells to close timing and close routing. Users can run parallel ECO trials, then choose the best result. In the example below, versions 2 and 4 are the better choices, with the smallest patch sizes and smallest gate-count changes.

I asked about the training time for an engineer to learn and become proficient at using EasylogicECO, and was surprised to hear that it only takes 30-40 minutes. Another question I had was about competition with other ECO tools, and they showed me a slide with multiple test cases that compared the patch size, where smaller is always better.

A smaller patch size greatly helps a project team minimize the layers that need to be changed in metal, directly impacting the cost of mask rework. Each metal layer can cost in the millions for advanced nodes, so it’s important to change the minimum number of metal layers.

With other ECO tools a team has to add more spare resources to enable metal ECOs, which in turn causes a larger die size and higher silicon costs.

Demo

EasylogicECO is a batch tool run at the command line in a Unix environment. The first step is to generate script templates, then go to the scripts folder and decide which ECO script to run, and there are Readme files to explain the syntax and usage. Running each script will prompt the user for input files, like: Original RTL, revised RTL, module name, etc.

The demo test case took about one minute, running on a laptop computer. The script prompted for Verilog file names, module top, LEF file, DEF file, spare module name for metal ECO, spare cell naming and spare instance names. It then created scripts ready for logic synthesis and back-end tools like Innovus and ICC2.

Summary

All SoC projects experience last-minute changes which are threats to taping out on time and within budget. Finding bugs in silicon that require another spin will be expensive, so anything that can make this process go faster and cost less is welcome. If your ECO process is taking weeks or months, then it’s high time to consider a newer approach to save valuable time and money.

Consider an evaluation of EasylogicECO and compare their approach with your previous methods to find out how much quicker an ECO can be done. Their ECO flow works with Cadence and Synopsys tools, so there’s no need for a CAD team to integrate anything as you can get patch results in just hours. Stay tuned for an upcoming webinar and if you’re attending #61DAC in June, then stop by their booth to get all your questions answered in person.

Related Blogs


Cadence Debuts Dynamic Duo III with a Basket of Goodies
by Bernard Murphy on 04-18-2024 at 6:00 am


I am a fan of product releases which bundle together multiple high-value advances. That approach reduces the frequency of releases (no bad thing) in exchange for more to offer per release, better proven through solid partner validation. The Dynamic Duo III release falls in this class, offering improvements in performance, capacity, and solution support across this matched set of hardware-assisted verification engines (Palladium for emulation and Protium for prototyping).

Capacity and performance advances

It’s a very worn marketing cliché but still true that design sizes keep growing, hence the tools to support verification must continue to grow with them. The new-generation Palladium Z3 and Protium X3 systems have increased total supported capacity to 48 billion usable gates, and each offers a 50% boost in performance. The Palladium platform is based on a new generation of the Cadence custom emulation processor, and the Protium platform is based on the recently released AMD VP1902 device.

Compile times have improved dramatically on large designs through a new modular compiler, delivering near constant compile times independent of design size. For Palladium this maxes out at 8 hours per compile, making 1-2 verification turns per day a reality in early-stage system verification runs. Protium compile times have also dropped, to under 24 hours, speeding prototyping turns in late-stage hardware/firmware validation. Naturally the signature tight coupling between platforms continues with Z3 and X3, allowing for example a run exhibiting a bug in X3 prototyping to be flipped over to Z3 emulation for detailed debug drill-down.

Both platforms continue to deliver form factor and power optimization suitable to enterprise resources, allowing for maximum utilization whether verifying IP, subsystem/chiplet, or full system scale while packing as many jobs as will fit into available resource given job sizes. Both are also available as cloud-based resources.

Since Nvidia has been a long-time fan, I have to believe hardware development for LLMs is among leading drivers motivating these improvements.

Solution apps

Bigger and faster are always important but what really caught my attention in this release are the apps. First, Cadence have spun a new power estimation/analysis app (DPA 3.0), claiming 95% accuracy compared to implementation-level static power analysis (the pre-silicon power signoff of record). Not a new capability of course but sounds like it is much improved and of course running on a platform which can run very big designs with serious use-cases, always important when teasing out power bugs in big systems.

The 4-state emulation app is particularly interesting. Samsung presented a paper at DVCon this year on how they use this capability (currently unique to Palladium apparently) for low power debug. As an example, when switching power states, there are numerous opportunities for bugs to arise around incorrectly enabled isolation logic. X-propagation tests are a good way to catch such bugs but classic X-prop verification using simulation or formal is limited to relatively small design and test sizes. Emulation has the necessary capacity and speed but has historically only supported 0/1 modeling. Now Palladium Z3 also supports 0/1/X/Z as an option, making X-prop testing a very real option on big designs and tests. Samsung were able to show 100X performance improvement in this analysis over a simulation-based equivalent.
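A toy example of why the fourth state matters (my own sketch in ordinary code, not how Palladium implements 4-state modeling):

```python
# Toy 4-state AND gate: illustrates how 0/1/X/Z modeling surfaces
# isolation bugs that plain 0/1 emulation silently hides.
def and4(a: str, b: str) -> str:
    """4-state AND: '0' dominates; X or Z on an input propagates as X."""
    if a == "0" or b == "0":
        return "0"
    if a == "1" and b == "1":
        return "1"
    return "X"

# An isolation cell gating 'data' with enable 'iso_en': if iso_en is
# left uninitialized (X) across a power-state switch, the X propagates
# and the bug is visible in 4-state modeling...
assert and4("1", "X") == "X"

# ...whereas a 2-state model must resolve X to 0 or 1, masking the bug.
def and2(a: str, b: str) -> str:
    return "1" if a == "1" and b == "1" else "0"
assert and2("1", "X") == "0"  # silently resolved, bug hidden
```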

In mixed signal emulation, ADI presented an award-winning poster at the same DVCon on their use of the new Palladium app for digital mixed signal (DMS). I believe DMS emulation will become a must-have for 5G, 6G and beyond, to verify correctness between RF and digital stages as software-dependent coupling between stages increases. ADI say their testing shows the methodology is ready for production use, with some limitations and workarounds. Not surprising when forging a new frontier.

The Palladium safety app brings fault simulation to emulation – now we can talk about fault emulation 😀. Michael Young (Sr. Product Management Group Director, Cadence) tells me that speedup versus heavily parallelized software-based fault sim is typically 10-100X. He adds that a common use model is to do most of the relatively short sims using the software platform and to port longer analyses (1 hour or more) to the emulator. The Xcelium safety app and the Palladium share the same fault campaign model so switching between platforms should be simple.

Good fundamentals and good new features in this release. You can read more HERE.

Also Read:

Fault Simulation for AI Safety. Innovation in Verification

Challenge and Response Automotive Keynote at DVCon

Automotive Electronics Trends are Shaping System Design Constraints


CEO Interview: Khaled Maalej, VSORA Founder and CEO
by Daniel Nenni on 04-17-2024 at 10:00 am

Khaled Maalej

Khaled Maalej is founder and CEO of VSORA, a France-based provider of high-performance silicon chips for generative AI and L4/L5 autonomous driving (AD) applications. Before founding VSORA in 2015, Maalej was CTO at DiBcom, a fabless semiconductor company that designed chipsets for low-power mobile TV and radio reception, acquired by Parrot. He graduated from Ecole Polytechnique & Ecole Nationale Superieure des Telecommunications in Paris.

Tell us about your company.
Drawing on more than a decade of expertise in chip architecture, initially refined for DSP applications in radio communications, VSORA envisioned a processor architecture aimed at delivering exceptional performance with superior efficiency. In today’s computing landscape, while leading processors boast significant computing power, they falter in efficiency, particularly as software workloads expand.

We were successful and caught the attention of The Linley Group (now TechInsights). In 2021, our AD1028 architecture clinched the prestigious 2020 Linley Group Analysts’ Choice Awards for Best IP processor.

Over the past two years, we fine-tuned our foundational architecture and created an on-the-fly scalable and reprogrammable computing core. It can perform AI and general-purpose computing or other functionality to target two pivotal and demanding domains through two distinct families of devices. The Tyr family comprises three scalable devices designed to execute the perception and motion planning tasks in L4 (highly automated) and L5 (fully automated) autonomous driving (AD) controllers. The Jotunn family features two scalable devices tailored to meet the demanding generative AI (GenAI) applications.

Save for actual silicon, we have simulated our processors at different abstraction levels, all the way into FPGAs via Amazon AWS. Across the board, the results showcase unparalleled processing power (6 petaflops), computing efficiency (50% on GPT-4), minimal latency, restricted energy consumption (40 watts per petaflop), and a small silicon footprint.

What problems are you solving?
About a decade ago, Marc Andreessen authored an article titled “Why Software Is Eating the World.” Today, we might assert that software is eating the hardware. The relentless pursuit of higher processing power by applications such as autonomous driving and generative AI remains unquenchable. While CPUs, GPUs and FPGAs strive to bridge the gap, they fall short of meeting the demands of cutting-edge applications.

What’s needed is a revolutionary architecture capable of delivering multiple petaflops with efficiencies surpassing 50%, while consuming less than 50 watts per petaflop, boasting minimal latencies, and selling at competitive pricing.

That is the challenge that VSORA aims to tackle head-on.

What was the most exciting high point of 2023 for your company?
2023 marked a turning point for VSORA as we achieved a significant milestone. Out of 648 applicants, we were chosen as one of 47 startups to benefit from the 2023 European Innovation Council (EIC) Accelerator Program. This annual event represents a beacon of innovation within the entrepreneurial ecosystem. The selection validates our vision and rewards our efforts with a combination of grants and equity investments to fuel our growth.

What was the biggest challenge your company faced in 2023?
Our goal is to tape out our architecture in silicon. This endeavor requires a substantial investment of up to $50M. In 2023, apart from securing the EIC grant and equity investment, we worked with several VC firms, investment funds, and banks, and we are optimistic that our efforts will yield fruitful results in 2024.

What do you think the biggest growth area for 2024 will be, and why?
The exponential success of Nvidia underscores the unstoppable ascent of GenAI. Nvidia dominates the learning phase of AI applications executed in large data centers around the world. However, GPUs prove inefficient for edge inference. To mitigate this inefficiency when running GPT-4, extensive arrays of GPUs must be deployed, resulting in exorbitant energy consumption and substantial latency. This setup not only entails significant acquisition costs but also proves expensive to maintain and operate.

Another promising area for growth lies in AD. Over the past three to four years, the push to implement level 4 and 5 AD controllers has somewhat lost intensity, primarily due to the absence of viable solutions in the market. We anticipate a resurgence of momentum in 2024, fueled by a better understanding of the requisite specifications and the emergence of advanced digital processing capabilities.

How does your company address this growth?
In advanced algorithms like transformers, relying solely on pure AI instructions is no longer adequate. Consider the PointPillars algorithm, which incorporates both pure AI functions and DSP functions within its code, or Mask R-CNN, which mixes general-processor instructions and pure AI functions. At VSORA, we integrate MAC and ALU functions within our compute cores and transfer data through a high-bandwidth, on-chip memory system via a proprietary scheme engineered to overcome the “memory wall.”

Moreover, we enable layer-by-layer, any-bit floating-point quantization and support on-the-fly sparsity in both weights and data. The approach frees developers from dealing with code details by automatically determining the optimal configuration for each task.
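To make the idea of per-layer quantization with induced sparsity concrete, here is a minimal Python sketch. The function names, bit widths, and per-layer settings are hypothetical illustrations of the general technique, not VSORA’s actual scheme (which chooses these configurations automatically):

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric fake-quantization to a given bit width (illustrative)."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return x
    return np.round(x / scale) * scale

def prune(x, threshold):
    """Zero out small magnitudes to induce sparsity (illustrative)."""
    return np.where(np.abs(x) < threshold, 0.0, x)

# Hypothetical per-layer configuration: (bit width, pruning threshold).
layers = {"conv1": (8, 0.0), "attn": (6, 0.01), "head": (16, 0.0)}
weights = {name: np.random.randn(4, 4) for name in layers}

for name, (bits, thr) in layers.items():
    w = prune(quantize(weights[name], bits), thr)
    sparsity = float(np.mean(w == 0))
    print(name, bits, f"sparsity={sparsity:.2f}")
```

The point is that each layer can carry its own precision and sparsity budget; in a production flow that choice is driven by an accuracy target rather than hand-picked constants.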

The tangible results of these innovations are evidenced in the specifications for Jotunn.

What new features/technology are you working on?
We believe our hardware architecture is robust and performant. We are now focusing on enhancing our software capabilities.

Our newly developed software offers a distinct advantage over competitors. Unlike solutions based on CUDA-like, low-level programming languages, where developers must specify the loops for a matrix multiplication, VSORA operates at the algorithmic level (Matlab-like, TensorFlow-like, C++), avoiding low-level programming and optimization that may demand significant vendor attention. The VSORA software environment shields users from these lower-level intricacies, enabling them to focus solely on the algorithms.
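The distinction can be illustrated in plain Python/NumPy. The one-line expression states *what* to compute at the algorithmic level, while the explicit triple loop spells out *how*, the style of code a CUDA-like approach pushes onto the developer. This is a generic illustration, not VSORA’s actual toolchain:

```python
import numpy as np

A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(12, dtype=float).reshape(3, 4)

# Algorithmic level: state the intent in one expression.
C_high = A @ B

# Loop level: spell out index arithmetic, as low-level kernels require.
C_low = np.zeros((2, 4))
for i in range(2):
    for j in range(4):
        for k in range(3):
            C_low[i, j] += A[i, k] * B[k, j]

assert np.allclose(C_high, C_low)  # both forms compute the same result
```

A compiler that accepts the first form is free to choose tiling, scheduling, and data movement itself, which is where the efficiency claims come from.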

As for algorithm validation, the VSORA development environment encompasses a suite of simulation tools to verify code at the high level, at the transaction-level-modeling (TLM) and register-transfer-level (RTL) models, as well as on AWS FPGAs.

How do customers engage with your company?
First contact might be through our website (VSORA.COM), and I encourage readers to visit it. We can always be reached via email at info@vsora.com.

Also Read:

CEO Interview: Larry Zu of Sarcina Technology

CEO Interview: Michael Sanie of Endura Technologies

CEO Interview: Vincent Bligny of Aniah


Podcast EP218: How Dassault Systèmes is Helping to Create the Workforce of the Future with Bill DeVries

Podcast EP218: How Dassault Systèmes is Helping to Create the Workforce of the Future with Bill DeVries
by Daniel Nenni on 04-17-2024 at 8:00 am

Dan is joined by Bill DeVries, Vice President of Industry Transformation and Customer Success at Dassault Systèmes. Bill is responsible for revenue growth and driving the use of the 3DEXPERIENCE platform. Additionally, Bill is the Senior Director of Academic and Education in North America, where he leads the 3DEXPERIENCE EDU Sales and Workforce of the Future efforts by working closely with prominent Universities, Colleges and technical institutions.

Dan explores some of the ways Dassault Systèmes is impacting workforce development with Bill. During this broad discussion, Bill describes some of the partnerships Dassault Systèmes has with entities such as Purdue University and Lam Research. Using a technology called virtual twin, a complete design and manufacturing environment can be created virtually to facilitate the development of new skills in both design and semiconductor fabrication. The technology is also quite useful for commercial customers who would like to optimize workflows.

Bill discusses the CHIPS Act and how this work will help to develop the significant number of new skills required to staff the new facilities that are planned. Bill also describes how expanded ecosystem collaboration will help to create the workforce of the future.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Soitec Delivers the Foundation for Next-Generation Interconnects

Soitec Delivers the Foundation for Next-Generation Interconnects
by Mike Gianfagna on 04-17-2024 at 6:00 am

Soitec Delivers the Foundation for Next Generation Interconnects

Soitec is a unique company that is at the center of major changes in our industry. Technology megatrends are fueling massive demand for semiconductors and this has increased the adoption of engineered substrates. As a global leader in the development of engineered substrates, Soitec is a company to watch. While this technology finds use across many areas of semiconductor application, photonics is a particularly important area that is enabled by Soitec and its engineered substrates. The company recently published a very informative white paper on the topic. A link is coming so you can get your own copy. First, let’s explore a bit about the company and its strategy to see how Soitec delivers the foundation for next-generation interconnects.

About Soitec

With the density, performance, and power-efficiency demands of advanced semiconductors, silicon in its purest form often falls short of delivering on all the requirements. Adding other materials to the silicon can enhance its capabilities, but adding an epitaxial layer of new material to silicon can be both difficult and unpredictable. Soitec has developed a process to deliver engineered substrates that addresses these challenges, opening new opportunities for innovation. You can learn about some of the things Soitec is doing here.

Thanks to the increasing adoption of engineered substrates, the company expects its addressable market to grow by 3X between 2022 and 2030. The breadth of Soitec’s impact is illustrated in the figure below.

Soitec’s impact

About Photonics

As with many trends, AI/ML is a main driver for photonics adoption. The current infrastructure for these applications is bandwidth and distance limited. A move to optical interconnect is on the horizon that will open new possibilities. The figure below summarizes these trends.

AI Enablement as a Network Solution

To address these opportunities, Soitec has a roadmap that is summarized below.

Smart Photonics SOI Roadmap

With this backdrop, I’ll provide a summary of the new white paper.

About the White Paper

The new white paper is appropriately titled, Has Silicon Photonics Finally Found Its Killer Application? The piece explains how engineered silicon substrates provide the foundation for the cutting-edge photonics engines that data centers will need to usher in the era of artificial intelligence.

The piece discusses the onset of artificial intelligence and machine learning, which leverage large language models (LLMs) for both training and inference. These models exhibit super-exponential growth in parameter counts. As a result, inter-data-center and especially intra-data-center traffic has exploded, driving the need for high-speed optical pluggable transceivers. These devices are currently transitioning from 100 Gbps to 400 Gbps; shipments of 800-Gbps devices began in 2023, and 1.6-Tbps pluggables are already available for pre-sampling.

The piece goes on to explain that optical transceivers must address three key requirements: high speed, low power, and minimized cost. Regarding power, server clusters in a data center now deliver power densities between 50 and 100 kW to meet new AI requirements, and the share of AI workloads in a data center is expected to more than double between 2023 and 2028. How these trends impact power consumption is illustrated in the table below.

Data Center Power Consumption Trends

This means there is a significant need for lower-power, higher-speed optical transceivers as data volume grows, which is driving pluggable form factors to evolve. The piece points out that the digital signal processing (DSP) chip inside pluggable transceivers is one of the main sources of power consumption. This has led to exploration of novel transceiver designs, such as linear-drive pluggable optics (LPOs), half-retimed linear optics (HALOs), and co-packaged optics (CPOs), that use advanced device design and photonics-electronics co-integration. This would enable future pluggables to operate in direct-drive, without a stand-alone, dedicated DSP component.

The figure below illustrates this evolution.

Evolution of Optical Interconnect

The white paper then discusses the changes on the horizon to optimize the power, performance, and cost of AI architectures, with a focus on transceiver design. As shown earlier, silicon photonics will play a major role in these changes, and silicon-on-insulator technology has unique properties to address the demanding requirements of silicon photonics.

Soitec’s engineered substrates that address these requirements are presented in detail. There is a lot of great information here – you should get a copy of this white paper. A link is coming.

The Executive Viewpoint

René Jonker

René Jonker was recently named SVP & GM of the Smart Devices Division at Soitec, overseeing imagers, FD-SOI for IoT applications, and silicon photonics. I had the opportunity to speak with René recently to get his view of the trends in silicon photonics and Soitec’s position in this growing market.

René began by discussing the mega-trends that are creating the disruption we are currently seeing. He cited the growth in size and scale of data centers and the associated increased demand for bandwidth as important drivers. He also mentioned the power consumption that comes along with these changes; this was a big topic at the recent OFC Conference in San Francisco.

He commented that electrical interconnects will still have a place – primarily in server backplanes where the technology can deliver cost-effective performance. From a system perspective, he felt that photonics and optical interconnects are really the only technology to address the previously mentioned demands and manage power consumption at the same time. René mentioned the discussion of 1.6T and 3.2T deployments at OFC; the world of interconnects is clearly changing in performance and implementation approach as these levels are simply not possible in the electrical domain.

René then discussed implementation approaches for optical interconnect. He pointed out that silicon photonics is a main focus today, primarily because of the familiarity the entire supply chain has with silicon devices. He explained that as system demands increase, modified substrates play a key role to unlock next-generation performance to deliver on key parameters such as insertion loss. He went on to explain that the uniformity of these substrates is critical to deliver high yielding, high performance devices. The surface smoothness and robustness of the substrate are also critical. These are areas where Soitec has a very strong position.

He explained that Soitec’s product roadmap is delivering advanced capabilities for both 200mm and 300mm wafers (see roadmap diagram above). We then talked about the drivers behind all the bandwidth requirements; simply put, AI/ML is the main driver, for both inference and training. René discussed co-packaged optics as a way to bring the networking layer closer to the processor to reduce loss and power and increase bandwidth. With regard to new materials, he mentioned thin-film lithium niobate as one promising approach; there are others.

We concluded our discussion by observing that Soitec is at the epicenter of trends like new substrates and co-packaged optics, thanks to its engineered substrate technology and experience. I summarized the company’s position as “right place, right time”.

As a final question, I asked René when we could see substantial changes in optical interconnect deployments begin to take hold. He was quick to point out he didn’t have a crystal ball, but he felt 2027/2028 would be an exciting time. This is right around the corner.

To Learn More

It appears that silicon photonics will have a major impact on many new systems going forward. Soitec is a key player in this emerging market, and the recent white paper will give you great insight into the relevant trends and opportunities. I highly recommend getting a copy. The white paper is part of the March edition of Photonics Spectra magazine. You can get your copy here. The white paper begins on page 38. And that’s how Soitec delivers the foundation for next-generation interconnects.


Electrical Rule Checking and Exhaustive Classification of Errors

Electrical Rule Checking and Exhaustive Classification of Errors
by Daniel Payne on 04-16-2024 at 10:00 am

Aniah tool flow min

The goal of SoC design teams is to tape out their project and receive working silicon on the first try, without discovering any bugs in silicon. Achieving this lofty goal requires all types of specialized checking and verification during the design phase: checks at the system level, RTL level, gate level, transistor level, and physical layout level. One newer EDA company is Aniah, whose focus is checking the correctness of IC designs at the transistor level through Electrical Rule Checking (ERC), employing formal methods and smart clustering of errors.

During ERC a formal tool can mistakenly report “false positives”, false errors that shouldn’t have been reported. Real design errors that go undetected are called “false negatives”; the ideal formal tool has zero false negatives and a minimum of false positives. The Aniah formal ERC tool is called OneCheck, and I’ve just read their white paper to get up to speed on how it works.

Aniah OneCheck can be run at several points in an IC design flow to verify both analog and digital circuitry:

Aniah Tool Flow

Some common design flaws caught by formal checkers include:

  • Missing level shifters
  • Floating gates
  • High-impedance states
  • Floating bulk
  • Diode leakage
  • Electrical overstress
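As a generic illustration of what such a static check looks like, a floating-gate check can be phrased as a simple reachability question over the netlist: does every MOS gate net trace back to some driver? The toy netlist model below is hypothetical and for illustration only, not Aniah’s implementation:

```python
# Toy ERC check: flag MOS devices whose gate net has no driver.
# The netlist representation here is a simplified illustration.
transistors = [
    {"name": "M1", "gate": "n1", "drain": "out", "source": "VDD"},
    {"name": "M2", "gate": "n2", "drain": "out", "source": "GND"},
]
driven_nets = {"VDD", "GND", "n1"}  # supplies plus nets driven by some output

def floating_gates(devices, driven):
    """Return the names of devices whose gate net is undriven (a classic ERC violation)."""
    return [d["name"] for d in devices if d["gate"] not in driven]

print(floating_gates(transistors, driven_nets))  # ['M2'] — net n2 has no driver
```

A real formal checker reasons over all power states rather than a single static net list, which is exactly where the false-error classes discussed next come from.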
False Errors

There are four typical classes of false errors that can fool an ERC tool; the following examples illustrate the challenges.

1. Topology Specific

The following circuit has two power domains, VDD and Vin, and a level shifter is expected between them. Here the false error flags transistors M2 and M3 because their gates are connected to net A and net 1, which are powered by Vin, not VDD. Transistors M0 and M1 actually control the “1” level.

False Error: Missing Level Shifter

2. Analog Path

A differential amplifier has devices M1 and M2 that are biased to act as an amplifier with current provided by M3, yet a false error reports an analog path issue.

False Error – analog path

3. Impossible Path Logically

An inverter formed by M1 and M2 is driven by a lower-range signal. When net 3 is ‘1’, M2 pulls output net 2 down to ‘0’, but a false error reports a logic path through M3 and M1.

False Error – Impossible path

4. Missing supply in setup

When a ring oscillator circuit requires a regulated supply value of 1.2V, but the regulator has a supply value of 2.5V, then a false error can be reported for electrical overstress.

False Error – Missing supply in setup

OneCheck

The good news is that OneCheck from Aniah has a smart clustering root-cause analysis methodology to handle these four types of false errors. This formal circuit checker doesn’t use any vectors: all circuit states are verified in a single run, including every power state of each circuit. Commercial circuits on both mature and leading-edge nodes have been run through OneCheck, demonstrating its reliability.

Your circuit design team can start using OneCheck as soon as the first schematic netlists are entered, even before any simulations have been run. Actual run times are quite fast, typically just a few seconds on mixed-signal designs with over 10 million transistors and more than 10,000 different power scenarios.

1. Topology Specific
OneCheck detects topology-related false errors like missing level shifters by performing pseudo-electrical analysis to model voltages and currents.

2. Analog Path
With Aniah OneCheck a user can identify and filter false errors with any current or voltage reference net.

3. Impossible path logically
The OneCheck tool finds all tree-like paths used by analog multiplexors, and the user can reject thousands of false errors quickly.

4. Missing supply in setup
All errors corresponding to a missing supply are clustered together, so users can easily update the power supply setup.

Summary

Finding circuit bugs before manufacturing is the preferred way to ensure first-silicon success, so ERC is another tool for chip design teams to use. Other ERC tools report far too many false errors, which has limited their acceptance in the design community. Aniah has delivered new formal technology to combat this issue of false errors for ERC.

Why not give OneCheck a try on some of your biggest IC designs? The evaluation process is free and easy.

Read the full 11-page White Paper from Aniah online.

Related Blogs