Aniah and Electrical Rule Checking (ERC) #61DAC
by Daniel Payne on 08-06-2024 at 10:00 am

Visiting a new EDA vendor at #61DAC is always a treat, because much innovation comes from start-up companies rather than the established big four EDA companies. I met with Vincent Bligny, Founder and CEO of Aniah, on Wednesday in their booth to hear about what they are doing differently in EDA. Mr. Bligny has a background working at STMicroelectronics and started Aniah in 2019 to focus on Electrical Rule Checking (ERC). Their tool is called OneCheck, and it operates on transistor-level netlists in formats like CDL, then reports issues such as domain crossings, floating gates, electrical overstress, diode leakage, conditional HiZ nodes, ESD, and missing level shifters between voltage domains.

Aniah at #61DAC

The OneCheck tool has a UI that enables an IC designer to load a netlist and get analysis results quickly. I heard that the Calibre PERC tool required 2 days to load and analyze a customer design, but OneCheck was much quicker, taking only a few minutes. Vincent showed me a live demo running on a laptop where the design, a camera sensor chip, had 19 million transistors. OneCheck detected all the different power regions, then completed all analysis in under 5 seconds. Initial results reported 2,257 missing level shifters in the netlist, and the errors can be clustered by priority and then by root cause. In general, there are four categories of errors:

  • CMOS logic
  • Mixed-signal topologies
  • Complex propagation paths
  • False errors

The root cause of the 2,257 errors was a CMOS inverter, under a specific power scenario, so clicking on this error in the GUI automatically created a schematic showing the propagation path. Engineers can continue to navigate forward or backward through the auto-generated schematic to understand the context of each issue reported. Vincent then cross-probed with Cadence Virtuoso Schematic Editor to better see the issue identified.

The OneCheck GUI can be used in Standard or Advanced mode, where Advanced mode allows one to cluster errors by property, like voltage, and then filter by cell name. Users can make all circuit properties visible, making it easier to pinpoint and fix each issue found. There are many pre-packaged checks for all types of designs on processes from 0.25um BCD to 3nm CMOS, and engineers can develop and code their own unique checks to identify trouble topologies. Most checks take about 5-10 lines of code, and support for writing your own checks as Python functions is coming soon; a conceptual sketch of what such a check might look like follows.
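To make the "5-10 lines of code" claim concrete, here is a minimal conceptual sketch in plain Python of the kind of topology rule such a check encodes. This is not Aniah's actual check API, and every name in it is hypothetical; it simply searches a transistor-level netlist for floating gates.

# Hypothetical illustration only -- not Aniah's OneCheck API.
# Flag any MOS device whose gate net has no driver (a floating gate).
def floating_gate_check(transistors, driven_nets):
    violations = []
    for device in transistors:                 # each device lists its gate net
        if device["gate"] not in driven_nets:  # gate net is never driven anywhere
            violations.append((device["name"], device["gate"]))
    return violations

# Tiny example netlist: M2's gate net 'n_float' has no driver.
transistors = [{"name": "M1", "gate": "in_a"}, {"name": "M2", "gate": "n_float"}]
driven_nets = {"in_a", "out"}
for name, net in floating_gate_check(transistors, driven_nets):
    print(f"Floating gate: device {name}, gate net {net} has no driver")

A production check would of course work on the tool's own netlist database and handle supply nets, tie cells and power states, but the rule itself stays just a few lines.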

Analyzing all electrical errors for IP blocks, sub-systems and SoCs at the top level really requires an automated approach, because you cannot have circuit designers manually inspect netlists to enforce best practices. If your electrical rule checking tool produces too many false errors, then you're wasting valuable time. The secret sauce with Aniah is the use of formal technology during analysis of the transistor-level netlists, thus reducing false errors and speeding the identification of root causes.

Aniah has attended several events recently: CadenceLIVE Taiwan, DVCon and DATE 2024. You can learn more at their next webinar, September 26th, 9:00AM – 10:00AM PDT, hosted by SemiWiki. Distribution partners include Saphirus in the US, Kaviaz Technology in Taiwan, Lomicro in China, and Micon Global in Europe and Israel.

Summary

My engineering background is transistor-level circuit design, so viewing what Aniah has done with OneCheck was quite encouraging. Quickly identifying circuit design issues with a minimum of false errors by using formal techniques looks very promising and improves first-silicon success. The speed of OneCheck running on a laptop was also impressive, especially since most EDA vendors don't even run their tools live at DAC anymore.

Related Blogs


Writing Better Code More Quickly with an IDE and Linting
by Tom Anderson on 08-06-2024 at 6:00 am

As a technical marketing consultant, I always enjoy the chance to talk to hands-on users of my clients’ electronic design automation (EDA) tools to “see how the sausage is made” on actual projects. Cristian Amitroaie, CEO of AMIQ EDA, recently connected me with Verification and Infrastructure Manager Dan Cohen and Verification Engineers Xiyu Wang and Nick Sarkis from Lightmatter. I had the pleasure of speaking with them about their experiences using AMIQ EDA products.

Can you please tell us a bit about your company and your projects?
We specialize in developing advanced computing solutions using photonic technology. We have projects focusing on data transport and optical processors that leverage the unique properties of light to achieve high-performance computing with significantly lower power consumption.

So this involves designing some big chips?
That’s for sure! At the heart of our solution are huge chips with a great deal of both digital and analog content. They present an interesting verification challenge with some unique features, so using the best tools helps us move quickly with a fairly small team.

What led you to look to AMIQ EDA for help with your verification?
Lightmatter: Several members of our team used DVT Eclipse IDE and Verissimo SystemVerilog Linter at our previous companies, so we planned to use them here as well. Verissimo is the only tool that can effectively lint UVM code. We use the DVT editor extensively, and it's tightly linked with the linter. All of our team now uses the DVT IDE for VS Code, although some of us were Eclipse-based in the past.

How do DVT IDE and Verissimo fit into your process?
We encourage everyone to write their code in the IDE and to run lint periodically. In addition, we automatically run Verissimo whenever an engineer pushes new or changed code into our Git repository. We don’t accept code into the build of our verification environment until it passes the lint checks. We don’t ask for a manual code review until after a clean lint run. We don’t want to waste our time finding typos, style differences, or well-known language pitfalls when Verissimo can find these.
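(As an illustrative aside, not Lightmatter's or AMIQ's actual setup: this kind of pre-merge lint gate often boils down to a small hook script. The sketch below uses a placeholder lint command, since Verissimo's real invocation isn't shown here.)

#!/usr/bin/env python3
# Hypothetical pre-push Git hook (saved as .git/hooks/pre-push, marked executable).
# The lint command below is a placeholder wrapper, not AMIQ's actual CLI.
import subprocess
import sys

LINT_CMD = ["./scripts/run_lint.sh"]   # placeholder script the team would provide

def main() -> int:
    result = subprocess.run(LINT_CMD)  # run the linter on the code being pushed
    if result.returncode != 0:
        print("Lint failed -- fix the reported issues before pushing.", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())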

How have the engineers responded to this process?
Every member of the verification team (ten and growing!) has embraced the IDE and linter. Our observation is that engineers don’t mind having style rules enforced by a machine. Having a single set of rules that everyone must follow saves a lot of time and effort. This is especially true since we hire engineers from many different companies; the AMIQ tools help us quickly align them to a common coding style. This also keeps the humans focused on finding high level issues in the code, rather than focusing on style.

Do the hardware designers also use the tools?
Our more forward-thinking designers do, and we encourage them all to try the tools. Not linting parts of the RTL code is missing opportunities to catch design bugs early. Personally, I can’t imagine writing SystemVerilog with only the simple checks available in Vim/vi or Emacs.

Can you describe some of the benefits provided by the tools?
Verissimo has caught some tricky variable bit width issues where a macro hid that only the lowest order bit was being compared. Both DVT IDE and Verissimo also save a lot of time. In the past, we found that when we wrote or edited code we spent too much time in a compile-debug-fix loop until it was clean. This happened hundreds, maybe thousands, of times on every project. With the ability to find and fix errors interactively within the IDE, that loop is significantly shorter. We get great team style alignment, with more efficient code reviews. So clearly we save many weeks of effort.

Are there any additional features you would like in the tools?
There are some specific things we’ve asked AMIQ to add, such as the ability to use relative path names in compiler waiver include statements. We’ve also asked them to increase their support for the Verilog-A language, which we use in the testbench for the analog and mixed signal (AMS) portions of our design. In addition, we’d like more flexibility in applying style rules and guidelines.

What are your plans going forward?
We’ve used DVT IDE and Verissimo on every project at Lightmatter, and that’s not going to change. We know that AMIQ constantly adds new lint rules, and we need to review these and decide which ones we want to enable. We have found that some rules, such as those for whitespace and length restrictions, are only marginally useful and take too much time to resolve. We also intend to investigate some new Verissimo auto-correct features and the Specador documentation generator when we have time.

How was your experience working with AMIQ EDA?
They’ve been great to work with. We push our EDA vendors hard, and the folks at AMIQ have been amazing, always responsive.

Is there anything else you’d like to add?
We are surprised that so many engineers in the industry are still using crusty old editors like Vim or Emacs when there is such a productivity gain from using a modern editor with DVT. It seems that some engineers are more likely to change their company, spouse, or country than their editor. We’re not about to give relationship advice, but it’s definitely time to revisit the editor you are using if you haven’t looked at DVT IDE. Verissimo goes hand in hand with the IDE by automating code reviews and allowing you to focus on the important issues.

Engineers join us because they are excited to work on new technology. We need a great working environment to keep them engaged. We have a state of the art setup with a lot of automation and the freedom to deploy the resources they need. If you’re an engineer looking to leave behind a fossilized environment and a team that still uses tools from the 90s, then please look at our job postings!

Thank you for your time and your insights.
Thank you for the opportunity to share a bit about our flows and company.

Also Read:

AMIQ EDA Integrated Development Environment #61DAC

AMIQ EDA at the 2024 Design Automation Conference

Handling Preprocessed Files in a Hardware IDE


3D IC Design Ecosystem Panel at #61DAC
by Daniel Payne on 08-05-2024 at 10:00 am

At #61DAC our very own Daniel Nenni from SemiWiki moderated an informative panel discussion on the topic of 3D IC Design Ecosystem. Panelists included: Deepak Kulkarni – AMD, Lalitha Immaneni – Intel Foundry, Trupti Deshpande – Qualcomm, Rob Aitken – CHIPS, Puneet Gupta – UCLA, Dragomir Milojevic – imec. Each panelist had a brief opening statement, then Daniel guided them through a series of questions, so I’ll paraphrase what I learned.

Deepak Kulkarni, AMD – there are big challenges in AI and data centers caused by power consumption: it takes megawatts over 30 days to train a model with a trillion parameters, and projections point to 100MW to train 100 trillion parameters.

For 3.5D packaging the motivation is improved power efficiency, where 3D Hybrid bonding has the densest and most power-efficient chiplet interconnect. Using 2.5D helps package HBM and compute together, and the goal is better system-level efficiency.

Power Efficiency. Source: AMD

 

Lalitha Immaneni, Intel Foundry – We're taking a systems approach to integrating 3D IC products and architected the first chiplets at Intel; we want to move to a CAD-agnostic tool flow. To improve our architecture, we need System Technology Co-Optimization (STCO), allowing all of the silicon-package-board trade-offs, so this is a multi-disciplinary task. We are bringing together key partners in industry and academia to collaborate, then we will pick the best point tools with data flowing through them, and we need a digital twin to help optimize our goals.

System approach to 3D IC, Source: Intel

Trupti Deshpande, Qualcomm – How do I co-optimize and shift left? Through early analysis; we want to use the best tools and stay EDA vendor-agnostic to tackle this multi-physics challenge.

Co-design and co-optimization challenges, Source: Qualcomm

Rob Aitken, CHIPS – recently joined, coming from Synopsys. 3D stacked die is inevitable; just look at the analogy to cities, which face challenges similar to ICs. The transportation bottleneck at Moscone is the escalators. 3D stacked die has similar challenges: vertical and thermal issues, new EDA requirements, and rising bandwidth requirements, so how do we solve all these challenges simultaneously?

Puneet Gupta, UC Los Angeles – Where are the system bottlenecks, hardware or software?

Software improvements can sometimes be larger than hardware improvements. We expect chiplets to be the next IP approach, and the chiplets must be large enough to be practical. Right now, a chiplet needs to be about 100mm2 to be economically feasible, so not tiny sizes.

Cost vs Chiplet Size, Source: UCLA

Dragomir Milojevic, IMEC – The CMOS scaling roadmap is slowing down, so multi-layered ICs are the future; let's call it CMOS 2.0, where STCO is the new challenge.

3D-IC Cross-section, Source: imec

Q&A

Q: How do you make the connections between 3D IC layers?

Dragomir – It’s with layer-to-layer wires, not monolithic wires.

Q: What is the motivation for Chiplets? AMD and Intel have had Chiplets for 7 years. For 3DIC what is the best method?

Rob – It's the pressure from AI accelerators that drives these approaches, and monolithic chips are reticle-size limited. 3D stacking of memory on logic is the easiest place to start. The 3D pressure is relentless, as the reward is so great.

Q: Dan – when did Arm start stacking?

Rob – In the mid-2000s there was a project started at Arm, but soon killed by the CTO.

Dragomir – The early benefits were not cost effective for stacking ICs.

Deepak – The motivations are economic and also the 2X performance goals; no other solutions are out there.

Lalitha – We’ve been using traditional organic packages, then HBM, and now new substrates, as HPC and AI segments require new approaches.

Puneet – 3D helps you to get smaller areas for more cost-effective designs.

Trupti – Mobile requires a good ROI to be pursued as an approach.

Rob – Airplanes require a retrofit to fit new equipment into the existing cabinet space, so less cost constrained approaches are welcomed.

Q: Dan – TSMC entered packaging 15 years ago, and Intel has also opened their packaging to customers. How will this work out?

Rob – The new packaging technology advances inside of IDM companies and the foundries.

Lalitha – There’s no limit to bridges added for EMIB by Intel Foundry, and it’s a huge differentiator.

Q: Dan – How will heat be dealt with?

Dragomir – We’ve done lots of experiments on multi-die stacking, and it’s not as bad as you may think. The speed of circuits dictates both power and thermal, so slowing speeds achieves thermal requirements.

Lalitha – We need to work with material science, heat spreaders, and new thermal cooling technology. There must be an architecture for co-optimization, where we find hot spots in each tile layer, then keep the hot spots apart, requiring a thermal-aware tool flow.

Rob – Through architecting, planning and designing; then monitor thermals while the chip is running and tune the voltage to keep them within limits.

Puneet – Even photonic circuits are quite sensitive to thermal coupling.

Q: What does an AI accelerator designer need to think about for 3D IC?

Trupti – They need to look at the entire system, not just the pieces, so the power per chiplet, then identify bottlenecks, and even considering mechanical aspects.

Rob – We’re in the early stages of 3D IC design, so we’re not so sure, eventually we will settle on a methodology, say in 10 years.

Lalitha – The boundaries between IC, package and board will merge with co-optimization. Tiles share the same package and substrate, and adding more tiles will add warpage. Silicon to package to platform all need to be co-designed. Better planning makes this process OK.

Deepak – What do I want with my AI accelerator? I want to double compute and double memory every two years to meet my requirements. Bringing compute and memory closer together helps to keep me within the power budget. Networking used to be 10% of data center power, but now it's grown to 20%.

Q: Dan – How is power delivery from board to substrate to stack?

Puneet – Yes, power delivery is challenging, and requires backside power delivery in the stack. For thermal reasons, I want my highest power die to be at the top of stack. For delivery, I want highest power to be at the bottom of the stack.

Deepak – The total power delivered to a data center is our goal for reductions. Power at the data center is fixed, so how to get efficient enough through 3D stacking is the challenge.

Dragomir – Backside power delivery is required.

Q: Dan – What is backside power delivery?

Rob – Transistors with metal layers on top have to reach the lower layers. So, backside power comes from the other direction, the bottom of the die.

Dragomir – By putting PDN on the backside it then frees up the top side for interconnect.

Q: Dan – Where is EDA at now to support 3DIC?

Lalitha – The 3D design complexity is growing, so EDA vendors have responded, and we need early estimates from EDA planning tools even when few initial details are available. We want a lightweight STCO tool for early estimates, and then we want to be able to choose tools from multiple vendors in an agnostic flow.

Q: Dan – The backside power has worse thermal issues for signals on topside routing. How should that be dealt with?

Rob – Yes, there are unintended consequences to backside power. My question is how does it work for EDA vendors to interoperate for 3DIC design flows?

Lalitha – It does work to be EDA agnostic.

Q: Dan – EDA tool companies have full-breadth flows, so how do they make their tools interoperate with competitors?

Deepak – The trend is more package-oriented EDA tools, so we still have to piece together an EDA flow for 3D IC.

Puneet – Don’t break an IP block into multiple layers.

Q: Dan – Every stage needs to change for 3DIC in your EDA tool flows, starting at design entry stages. Are the packages going to segregate into hot and cool regions? What about liquid cooling inside the package?

Deepak – Yes, cooling is an active research area.

Rob – The Aurora machine has power entering one side and a fire hose for cooling on the other side, so yes, liquid cooling makes sense.

Q: Dan – Will there be 200nm pitch used in packages?

Dragomir – Yes, dense interconnects between package layers are coming.

Puneet – The dimensions of pitches in packages are not changing that rapidly.

Rob – A 200nm pitch creates challenges for alignment, etc. Existing 3D stacks use memory bandwidth in small areas to maintain high active density and high yield.

Deepak – 3D stacks of more than two dies are limited by TSV interconnects. Two dies face to face are easier to accomplish.

Q: Dan – what about AI and ML trends, are they affecting your jobs today? Any co-pilot tools?

Trupti – We’re not seen many AI/ML tools for our jobs, but maybe a design of experiments would help us out on placement options.

Dragomir – We’re not using AI in STCO today yet for exploration.

Q: Dan – haven’t we been using ML in simulations?

Deepak – Yes, some AI/ML is used during the design phase, but not sign-off stages much yet.

Lalitha – AI/ML can help out in 3D IC testing.

Rob – There are EDA tools with AI/ML now, like for test.

Audience – Siemens has AI/ML in test tools now.

Rob – 10 years ago for image detection problems, the 1’s and 0’s in a test looked like an image, so AI can work for reducing that problem. Make your EDA problem look like something that AI/ML can solve.

Audience – Are my AI/ML results valid and accurate?

Rob – In the 80s we had Expert systems, but they weren’t AI. Is ML really AI?

Puneet – Where is the low-hanging fruit for generative AI? EDA tool documentation is quite poor, so why not use generative AI for documentation, like Copilot.

Q: Dan – What is the future of SoC designs?

Dragomir – On chip optical is one direction, having multiple layers.

Puneet – My optics colleagues say that Terabits/second on a single lane is coming.

Rob – Photonics is an obvious approach for longer distances, while shorter distances are going to use vertical metal connections.

Trupti – It’s the ROI that drives our choices, so academia needs to invent something optical that is cheap and reliable, then we will use it.

Lalitha – Optics still has a way to go.

Deepak – Sooner or later photonics will be used, but the costs are the main issue, pJ/bit is the driver.

Q: Dan – co-optimization and design exploration, what can you do well today? What do you really want to do soon?

Trupti – A lightweight system is wanted for multi-physics exploration, showing how thermal, power and warpage are co-optimized. Optimization across four or five areas at once is what's really wanted.

Lalitha – We work with Cadence, Siemens and Synopsys on co-optimization for 3DIC, so it is progressing.

Rob – DTCO is working well today, as everyone is talking together, and the decisions per domain are now answered across domains.

Q: Dan – what is the role of standards today, like UCIe?

Lalitha – New standards are critical, and we should learn from the motherboard vendors. Chiplets across different process nodes are not standardized, it’s all manual. The utopia of mixing and matching chiplets will help, so the UCIe spec has all the details in it for a new standard.

Deepak – Standards are critical for chiplets in UCIe. The shortcoming is that it's more than die-to-die, so getting signals out of the chiplet is important too.

Rob – Standards come from committees and take too much time, or there are de facto standards. Most success is through de facto standards. Existence proofs are much better than committees.

Lalitha – Let’s come up with a test case to help drive standards.

Puneet – New standards need to have open access for free sharing of information, like OpenAccess being free to use.

Q: Dan – What about 3D IC yield?

Dragomir – In research we have come up with ideas and solutions, but not a focus on the cost of 3DIC. Some cost models are built for 3DIC, taking into account yields.

Puneet – Yield and cost are tied closely to test, and the known good die problems, so it all depends on how each layer is tested, but it’s still an open question.

Q: Dan – What are the drivers for the semiconductor industry today? Is it AI?

Dragomir – Smart phones have been drivers for a while. Many different segments are driving, like medical.

Puneet – AI is a big driver for advanced packaging today.

Rob – AI is driving semiconductor per TSMC (50% HPC and Mobile, both AI-driven).

Trupti – It’s mostly AI and mobile as the semi drivers, along with IoT, driving advanced packaging.

Deepak – AI for data centers and the accelerator market.

Q: Dan – Any comment on timelines for these advancements?

Dragomir – Hybrid bonding and stacking are done technologies. Within one year we will see improvements in things like DTCO and STCO.

Puneet – 3DIC for low-cost is something we haven’t discussed yet, as we are very focused on the data center today.

Rob – What was leading edge 15 years ago? 45nm. Now, that’s low-end, a mature technology.

Lalitha – EMIB package design kits are available now, and the EDA vendors are using this now. Co-optics and glass substrates are coming. STCO is analysis paralysis for now.

Deepak – Chiplet growth is driving fast, so expect more advanced packaging coming.

Q: Dan – What about sustainability, carbon footprints, dumping old electronics, how will this change the lifecycle?

Dragomir – It used to be that every iPhone lasted 4-5 years, but children's phones last a shorter time, so we need to better educate consumers about recycling.

Puneet – 3DIC shrinks the footprint for recycling, but there’s no repairability.

Trupti – How about energy efficiency, like for a server farm? Look at energy efficiency goals.

Rob – Dumping heat into the atmosphere should be priced and taxed somehow.

Summary

It’s kind of rare that a group of panelists from the semiconductor industry, academia, design, government and research assemble together, but this group at DAC certainly took on the challenges of the emerging 3D IC design ecosystem. There were plenty of questions raised and answered about the current state of the 3D design environment, and the future directions being pursued.

Audience members were actively engaged and asked good questions, even the panelists raised their own questions to get feedback. I was typing at maximum speed to catch the gist of the conversations, and learned much from this gathering.

Related Blogs


Mitigating AI Data Bottlenecks with PCIe 7.0
by Kalar Rajendiran on 08-05-2024 at 6:00 am

During a recent LinkedIn webcast, Dr. Ian Cutress, Chief Analyst at More than Moore and Host at TechTechPotato, and Priyank Shukla, Principal Product Manager at Synopsys, shared their thoughts regarding the industry drivers, design considerations, and critical advancements in compute interconnects enabling data center scaling to support AI demand.

Understanding AI Data Bottlenecks

Artificial Intelligence (AI) has grown exponentially over the past decade, driving innovation across multiple industries. AI workloads, especially in deep learning and large-scale data processing, require vast amounts of data to be transferred between CPUs, GPUs, and other accelerators. These data transfers often become bottlenecks due to limited bandwidth and high latency in interconnect technologies. As AI models grow in size and complexity, the demand for higher data transfer rates and lower latency increases, making efficient data movement a critical factor for performance. The ever-increasing complexity and scale of AI models have led to significant data bottlenecks, hindering performance and efficiency.

Memory and Interconnect Technologies

To process data, we need to store it in memory. Consider a large dataset that must be stored in a chip, processed, and then stored again in memory. This necessitates a wider and faster signal chain or data path. Next-generation memories are utilized, such as HBM, which is on-chip or nearby, and DDR, which is in the same rack but not on the same package. Additionally, multiple chips, like GPUs or various accelerators, must communicate with each other, and that's where technologies like Peripheral Component Interconnect Express (PCIe) are essential.

Key Challenges in Addressing Data Bottlenecks

Whenever people talk about data bottlenecks, it directly highlights the need for faster interconnects. Current PCIe versions (such as PCIe 4.0, 5.0 and 6.0) do provide substantial bandwidth but fall short for future AI workloads that require even higher data throughput and lower latency. High latency in data transfer can significantly degrade the performance of AI applications, particularly those needing real-time processing. As AI systems scale, interconnect technologies must support a larger number of devices with minimal performance degradation.
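To put the generational jump in perspective, here is a rough back-of-the-envelope calculation of raw x16 link bandwidth using the published per-lane transfer rates (my own illustration, not from the webcast; it ignores encoding and protocol overhead):

# Approximate raw bandwidth per generation on a x16 link, per direction.
# Published per-lane rates: 4.0 = 16 GT/s, 5.0 = 32 GT/s, 6.0 = 64 GT/s, 7.0 = 128 GT/s.
rates_gt_per_s = {"PCIe 4.0": 16, "PCIe 5.0": 32, "PCIe 6.0": 64, "PCIe 7.0": 128}
lanes = 16
for gen, rate in rates_gt_per_s.items():
    gb_per_s = rate / 8 * lanes  # ~1 bit per transfer per lane, 8 bits per byte
    print(f"{gen}: ~{gb_per_s:.0f} GB/s per direction on a x{lanes} link")

PCIe 7.0 at 128 GT/s per lane works out to roughly 256 GB/s each way on a x16 link, about 512 GB/s bidirectional, doubling PCIe 6.0 and quadrupling PCIe 5.0.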

The Value of PCIe in AI and Machine Learning

An open standard such as PCIe fosters innovation as the whole ecosystem comes up with their best technologies. It allows for system-level optimization, addressing power efficiency and other challenges. The industry makes better decisions with open standards, which is crucial as data center power consumption is becoming a growing concern. The future involves integrating photonics directly into the chip, leveraging optical channels for faster and more efficient data movement. The industry is adopting optics, and PCIe will continue to evolve with these advancements. The standard’s predictable path and continuous innovation ensure it remains relevant and beneficial.

PCIe 7.0 and Practical Implications for AI

PCIe 7.0, the latest iteration of the PCIe standard, offers significant improvements in bandwidth, latency, power efficiency and security. This evolution, facilitated by collaborative industry efforts and open standards, enables better system-level optimization and addresses challenges such as data center power consumption. AI training processes, particularly for large neural networks, require substantial computational power and data movement. PCIe 7.0’s increased bandwidth and reduced latency can significantly speed up the training process by ensuring that data is readily available for computation. Similarly, inference tasks, which often require real-time processing, will benefit from the quick data transfers and low latencies facilitated by PCIe 7.0. The future of PCIe also involves integrating photonics for faster and more efficient data movement, ensuring its continued relevance and benefit to AI advancements.

Summary

PCIe 7.0 represents a significant advancement in addressing the data bottlenecks that impact AI performance. By providing higher bandwidth, lower latency, and improved efficiency, this interconnect standard helps ensure that AI systems can handle the increasing demands of data-intensive applications. As organizations continue to push the boundaries of AI, PCIe 7.0 will play a vital role in enabling faster, more efficient data processing and supporting the next generation of AI innovations.

In June 2024, Synopsys announced the industry’s first complete PCIe 7.0 IP solution. You can also refer to this related SemiWiki post and for more details, visit the Synopsys PCIe 7.0 product page.

Also Read:

The Immensity of Software Development the Challenges of Debugging (Part 1 of 4)

LIVE WEBINAR Maximizing SoC Energy Efficiency: The Role of Realistic Workloads and Massively Parallel Power Analysis

Synopsys’ Strategic Advancement with PCIe 7.0: Early Access and Complete Solution for AI and Data Center Infrastructure


Intel’s Death Spiral Took Another Turn
by Claus Aasholm on 08-04-2024 at 8:00 am

Does this justify the widespread Intel bashing?
The latest Intel earnings release was another sharp and deeper turn into the company’s death spiral. On the surface, it is just a whole load of bad news, and the web has been vibrating with Intel bashing since the release.

So what are the facts?
From a revenue perspective, Intel was inside the guidance, but the $12.8B was less than the midpoint guidance of $13B.

The Gross Margin was a miss. Intel delivered 35.4% versus the guidance of 40.2%, resulting in a Gross Profit of $4.5B versus the midpoint guidance of $5.2B. While $700M less gross profit is significant, the context is that Intel is in a pickle; Intel has clearly stated that 2024 is not the recovery year.

An internal move of product between factories impacted the result negatively but will bring long term benefits – a sign Intel is now ready to make tough decisions.

David Zinsner, Intel’s CFO, commented on the miss:

“Second-quarter results were impacted by gross margin headwinds from the accelerated ramp of our AI PC product, higher than typical charges related to non-core businesses and the impact from unused capacity.”

The next two quarters will show minimal revenue growth and further profitability challenges.

I will let people respond the way they want, but from my perspective, this miss does not change anything about Intel’s current situation. The company was in a death spiral before and is still in a death spiral.

Intel's Death Spiral

Intel’s current situation is not rooted in the company’s near-term strategy but rather due to historic decisions coming home to roost.

In late 2019, the long-term revenue growth stopped, while Intel continued to deliver rich results that kept everybody satisfied while milking the cow. In 2022, the cow got sick and was struggling to provide the necessary nutrition to Intel.

Intel is a manufacturing company that needs significant profits to fund the Capital Investments required to maintain and expand its manufacturing assets.

Q3 and Q4-24 Forecast

From a manufacturing technology perspective, Intel chose the wrong path, believing it could maintain its technology leadership without using the Extreme UV (EUV) lithography tools that other companies, most notably TSMC, had decided to use.

Even though the IFS revenue increased, the external IFS revenue is still embryonic. Intel disclosed that the external deal funnel is meaningful at $15B, although this is hard to resolve regarding revenue.

Combining the Internal and External IFS revenue still leaves a distance to the 500-pound gorilla in the Foundry.

The Intel IFS challenge is still massive

From a product perspective, Intel has been blessed by owning the largest cow in the industry. The client business supplies CPUs for the important PC segment. The drawback is that the client division has been far too powerful, dominating the other divisions.

This is a classical corporate problem: corporations are locked into a single category despite having the resources to diversify into other market areas.

Intel has often tried to diversify into other areas but has failed. The company completely missed or failed to address the transformation of the Data Center market from CPUs to GPUs.

The Client Computing Group’s share of business has been reasonably stable over time, which also means that the Intel boat rises and falls with the PC market plus the CPU share of the data centre market.

The situation is bad but, in reality, unchanged from last quarter (within a few hundred M$). So, is the market’s response out of touch with reality?

Intel is taking action

Pat Gelsinger, in an internal memo, noted that headcount and results had drifted away from each other and it was necessary to take action.

“Our costs are too high, our margins are too low,” he wrote, and explained why the workforce reduction is necessary. He noted that Intel's annual revenue in 2020 was about $24 billion higher than last year's, yet the workforce is now 10% larger than it was then.

The missed result and a visit from the head cutting experts McKinsey and friends have likely accelerated what was already brewing in the corner offices. It is time for difficult decisions:

Intel will reduce headcount by 15% during the next 6 quarters, which will significantly reduce Operational Costs. Capital Expenditures will also be reduced, which likely means delays to some of the investment activities. Fixed sales costs will be reduced by $1B, and dividends will be suspended for now.

The Chart below shows our analysis of the Revenue and Operating Profits per employee with the projected effect of the headcount reduction program. We assume revenue growth is in line with the market, which certainly can be challenged both upwards and downwards.

The headcount cut is expected to bring Intel to an OpEx of about $20B in 2024, which suggests a 32% decrease from Q4-23 to Q4-24, as seen below.

This is significantly steeper than the 15% headcount cut would suggest, which indicates it is not the average-wage employee who is getting fired.

There are other elements to OpEx besides pay, but it is safe to assume that pay is the major element. Not all pay is included in OpEx. The pay associated with the manufacturing of the products is included in the Cost of Goods Sold.

This discrepancy suggests that there is something else at play. It could be that the firing is higher in the ranks or people with longer tenure as both groups have higher than average pay. It could also be that Intel expects more people to leave than those fired. This could be done through a voluntary program that runs parallel with the firing process. Lastly, there could be serious pay cuts happening. When you are in a firing season, you could be very motivated to accept a pay cut.

Also, the decline in OpEx is happening a lot faster than is typical during headcount reductions. It takes time for people to leave the payroll. Again, it seems like some deal-making is going on here to get people off the payroll faster, potentially in the form of stock-related severance packages.

If Intel can pull it off without too much impact on revenue, the company will be back in the black and able to contribute to the long-term capital expenses that Intel is committed to.

The reduction in Capital Expenses

In the conference call, Intel revealed that the CapEx budget would be cut by 20% in 2024 to $26B in 2024 and $21.5B in 2025. The new budget was presented interestingly as a Gross and Net CapEx budget.

As Intel’s profitability started the dance around the zero line beginning in 2022, the company was unable to finance the CapEx budget of the IDM 2.0 strategy through retained earnings and had to find alternate ways of funding the plan.

Banks would want a premium interest rate because of Intel's risky situation, and diluting the stock while it was deflated would make a comeback much more difficult. It was time to get creative.

Intel's Masterstroke

The first part of Intel’s plan was likely political. It would be surprising if Intel were not the cornerstone of the 2022 Chips Act of the Biden Administration. The relationship between Intel and the administration was very visible, and Intel probably had an excellent idea of what they would get out of the Chips Act early.

A prerequisite for getting funding for a project was that it was new and on US soil, but the political relationship was important. Recently, AMAT was denied funding, which could be for several reasons, but it coincided with an investigation into back-door sales of semiconductor tools to China without export licenses.

In short, Intel’s IDM 2.0 plan was perfectly aligned with the US administrations and like-minded governments in Europe.

The second part of Intel’s plan was to attract equity investors without diluting the company’s value. Enter the SCIP: Semiconductor Co-Investment Program.

The SCIP program provides financial flexibility and strategic funding to support Intel’s manufacturing and expansion plans. It involves strategic partnerships with financial firms to co-invest in Intel’s semiconductor manufacturing facilities. This helps Intel manage its capital expenditures and maintain a strong balance sheet while expanding its production capabilities.

Intel has made two SCIP deals:

Fab 34 in Leixlip, Ireland, with Apollo, and the expansion of the Ocotillo Campus in Arizona with Brookfield. Both deals involve a 51/49% ownership structure in Intel's favour.

These deals, combined with the Chips Act funding and other subsidies, are a complete master stroke for which Intel and Pat Gelsinger have not received sufficient credit. In a situation where Intel lacked funding, and it would be expensive to borrow, the SCIP rabbit was pulled out of the hat:

The illustration above shows the outcome of the $53B investment in the two fabs. For an investment of approximately one-third of the total needed for the two fabs, Intel managed to get 51% ownership and 100% control. That can only be seen as a very impressive way of getting out of the corner Intel was in.

The bill for these deals will be the sharing of the manufacturing margin with the two financial partners – this is a small price compared to failing the manufacturing strategy.

Intel is still in a Death Spiral, but management has shown the ability to make tough decisions and has found creative ways to finance the incredibly expensive fabs. This is very important as there will be more difficult decisions ahead.

A key element will be Intel's ability to execute the company's product strategy in the AI PC market and in both the CPU and GPU parts of the Datacenter market. Next week's post will be a business overview of the Datacenter market.

Also Read:

TSMC’s Business Update and Launch of a New Strategy

Has ASML Reached the Great Wall of China

Will Semiconductor earnings live up to the Investor hype?


LRCX Good but not good enough results, AMAT Epic failure and Slow Steady Recovery
by Robert Maire on 08-04-2024 at 6:00 am

  • Lam reported good numbers with slightly soft guide
  • Investors are figuring out the up cycle will be slower than thought
  • Looks like AMAT won’t get CHIPS act money for Epic facility
  • Steady improvement but stocks still ahead of themselves-Correction?
Lam reports good numbers (as usual) but guide not good enough

Lam reported a good quarter with a standard “beat” (as expected) but the guide was a bit on the weak side. The stock is trading off as investors are clearly disappointed with the softer outlook.

This should not be a big surprise as we have been warning that this down cycle recovery will be slower than prior up cycles as there is still a lot of weakness in areas and not all segments are recovering.

We have mentioned in almost everything we write that while AI & HBM are nothing short of fantastic, HBM is a single digit percentage of the overall memory market.

AI takes a big slug of bleeding edge capacity but the ramp of leading edge is limited. Trailing edge is not so great, and the rest of memory (non-HBM) is also "muted", especially DRAM.

It's clear that, as we have been saying, the stocks have been ahead of themselves, as the ramp of the upcycle off the bottom will not be as steep an incline as in previous cyclical recoveries.

The word “modest” used way too many times in the call

Management was clearly sending a "calm down" message to investors as they overused the word "modest" in too many circumstances. The word "soft" was also used a few times.

So while things continue to get steadily better, it's not in a hurry, and 2024 looks OK but far from the "bounce back" that investors have seen in past cycles.

Margin "headwinds" coming soon due to customer mix. Read that as fewer Chinese "suckers" paying inflated prices

As China's percentage of revenues comes down, so will the high margins associated with that business, which will cause margins to fall faster than expected.

We mentioned this issue in our previous note and we were somewhat surprised to hear a company admit to “margin headwinds” due to customer mix coming in future quarters.

This will be an outsized impact on profitability as China may account for over 50% of profitability due to these outsized margins.

This will add to the slowness of the profitability recovery. Revenues will likely recover faster than profitability.

AMAT won’t get CHIPS Act money for EPIC facility

We have been highly critical of Applied Materials asking for $4B in CHIPS Act funding with one hand while on the other hand shipping jobs out of the US to Singapore as they are building a record breaking facility there.

We have viewed it as super hypocritical.

In addition, Applied seemingly put a gun to its own head back in April, threatening "suicide" by not building the Epic facility if it didn't get CHIPS Act money.

It looks like the government called their suicide bluff and Applied won’t be getting CHIPS Act funding for Epic.

Link to AMAT CHIPS Act denial for Epic

Applied has more than enough cash to build the Epic center, without government help, and it is clearly a false threat to not build it if they don’t get the money as they would only be shooting themselves in the foot.

ASML is doubling down on R&D in Eindhoven, and AMAT needs to increase R&D to keep up with the rest of the industry, with ASML already passing Applied.

Applied is not the only company shipping semiconductor jobs out of the US, as Lam announced on their call their pride in achieving 5,000 chambers shipped out of Malaysia. But we haven't heard of Lam asking for $4B from the CHIPS Act either…

The Stocks

We continue to view the stocks as being ahead of themselves. Investors assumed we would be off to the races in a normal “bounce back” cyclical recovery, but that’s not happening….

It's going to be a slow and steady recovery with both positives (AI & HBM) and negatives (falling China related margins)….

The balance of 2024 looks OK but not great; more hopes are being pinned on 2025, but it's far from certain. Companies are not willing to go out on a limb on 2025 just yet.

There will be some rationalization between the stock prices, multiples, expectations and reality and we have clearly seen the first leg of that rationalization.

The future of the industry is certainly very bright and the key questions are both how bright that light is and how far away it is….and it may be a bit further away than investors had assumed…..

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

About Semiwatch

Semiconductor Advisors provides this subscription based research newsletter, Semiwatch, about the semiconductor and semiconductor equipment industries. We also provide custom research and expert consulting services for both investors and industry participants on a wide range of topics from financial to technology and tactical to strategic projects. Please contact us for these services as well as for a subscription to Semiwatch

Visit Our Website

Also Read:

The China Syndrome- The Meltdown Starts- Trump Trounces Taiwan- Chips Clipped

SEMICON West- Jubilant huge crowds- HBM & AI everywhere – CHIPS Act & IMEC

KLAC- Past bottom of cycle- up from here- early positive signs-packaging upside


Podcast EP238: Intel Capital’s Focus on Improving the Future of Semiconductors with Jennifer Ard
by Daniel Nenni on 08-02-2024 at 10:00 am

Dan is joined by Jennifer Ard, Intel Capital Managing Director and Head of Investment Operations. In her role, Jen is responsible for managing Intel Capital’s investment-related operations. Additionally, she is primarily focused on investing in silicon-related companies and has been involved in multiple deals including Intel’s $3.2B investment in ASML.

Dan explores the breadth and impact Intel Capital is having on the semiconductor industry with Jen. She explains the broad focus of the venture capital arm of Intel that has existed for over 30 years. Areas of development include cloud, devices, frontier and silicon, which is the main focus for Jen. In the silicon area, Intel Capital invests in tools for the fab, materials, AI software for the fab and EDA. Recent focus areas include sustainability and environmental improvements. The organization is quite aggressive, having led over 75% of the deals they’ve participated in.

The breadth and scope of this organization is substantial. Intel’s entire resource pool is leveraged to improve the future of semiconductors.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Pim Donkers of ARMA Instruments
by Daniel Nenni on 08-02-2024 at 6:00 am

Pim Donkers

Pim Donkers is the co-founder and Chief Executive Officer of Switzerland-based ARMA Instruments, a technology company which produces ultra-secure communication devices.

Pim is a technology serial entrepreneur operating internationally with several successful companies in his portfolio such as his recent IT outsourcing company Binarylabs.

His interests are in geopolitics, technology, and psychology. His diverse background allows him to observe things differently and drive unconventional and non-linear solutions to market.

Tell us about your company

In 2017, when we looked at available secure communications technology, we saw products that were technologically incapable of dealing with advanced adversaries, and products challenged by the involvement of Nation State actors. We saw a need for a different kind of technology, and we knew it would take a different kind of company to build it. That’s why we founded ARMA Instruments.

Our company mission is to provide absolutely secure communications and data security, based on zero-trust principles, while being transparent in organizational infrastructure and technology. Our corporate tagline is "Trust No One". This philosophy has allowed us to create the most secure handheld personal communicator available.

ARMA products include an ultra-secure mobile personal communicator with several industry-first features for secure communications. The ARMA G1 MKII for example is classified as a “Dual Use Good” for both military and civilian use applications operating server-less over a cellular network. ARMA developed the entire system from the ground up, from the message application and the ARMA Linux operating system to the hardware electronics boards.

There are no commercial processors or third-party software in our products. Everything is proprietary ARMA technology. Our personal communicator has no ports of any kind. No microphone and no camera. Charging is done wirelessly. This architecture dramatically reduces attack surfaces and renders malware that exploits known OS weaknesses useless.

Digging a bit deeper, our patented Dynamic Identity, working with ARMA’s VirtualSim feature, prevents targeted cyberattacks and anti-personnel attacks by changing the device’s cellular network identity at non-predictable intervals. We say this creates an “automated burner phone”.

Data from communication sessions is stored only on the ARMA device, never in the cloud. The device can sense attempts to access this data either through physical means or electronic disruption such as side-channel attacks. In these cases, the device will execute a self-destruct sequence. These are just a few of the capabilities of the device. There are many other secure features deployed to adhere to the highest security levels in the industry.

What problems are you solving?

Mobile phones, including secure ones, essentially act as personal location beacons on the global cellular network. Eavesdropping on anyone has become remarkably easy. A high-profile example of this is Israel’s NSO Group Pegasus spyware that can be installed on iPhones and Android devices, allowing operators to extract messages, photos and emails, record calls and even secretly activate microphones and cameras.

As mentioned, our device runs a proprietary OS and we have no external ports, microphones or cameras.  Users are anonymous on the network by changing IMSI and IMEI numbers through our virtual SIM environment. Pegasus, and all other spyware of this type presents no threat to us.

Smartphone security weaknesses can create life-threatening situations. For example, with forward-deployed engineers in Ukraine, we've seen smartphones used as sophisticated homing devices for ground or airborne attacks, such as drones rigged with explosives or passive listening detonators. The decreasing costs and increasing availability of such technology make smartphone-assisted attacks a very real threat. This is why broadcasting your location on the network doesn't make sense. Our technology, Dynamic Identity, is patented in the US and helps mitigate these risks.

Additionally, any phone call, message, or media sent is typically stored on a server to ensure delivery if the recipient is offline. This data at rest outside user devices is subject to zero oversight, leaving it vulnerable to being stored indefinitely and decrypted in the future. As I mentioned, our server-less protocol ensures data at rest is only on user devices, with phone calls made directly from device to device. And this data is encrypted and protected with our self-destruct capability.

What application areas are your strongest?

Our technology secures communications universally, making its applications vast. We’ve seen significant interest from branches of governments, defense/military organizations, intelligence contractors, industrial markets, nuclear power facilities, emergency services, financial organizations, healthcare, and the high-tech industry to name a few. Interest is strong, and we are growing rapidly across the world.

What keeps your customers up at night?

Our customers are aware that modern technology often exploits their data. They understand the trade-off between convenience and the security of their intellectual legacy, knowing that no bulletproof solutions exist. Distrust and espionage are occurring throughout the world at all levels. This awareness keeps our customers vigilant and concerned. 

For example, government officials and corporate executives are primarily concerned about the security and confidentiality of their sensitive communications, fearing interception or espionage. Meanwhile, security personnel and field agents are often more focused on the evolving landscape of cyber threats and ensuring their data remains unaltered and trustworthy.

Overall, whether it’s about protecting privacy, adhering to legal regulations, or ensuring operational continuity during critical times, ARMA G1 customers share a common need for robust, reliable, and secure communication solutions to mitigate diverse concerns.

What does the competitive landscape look like, and how do you differentiate?

Many competitors focus solely on adding security layers to commercial software, overlooking that secure communication requires more than just software. This is partly because hardware development is unpredictable and time-consuming, making it less attractive to VCs. Those who claim to develop their own hardware often just repurpose existing mobile boards.

Purpose-built phones from companies like Airbus, Sectra, Thales, and Boeing use outdated technology due to the popularity of BYOD and the high costs and time involved in obtaining new certifications for innovations. We differentiate by offering genuinely innovative, purpose-built solutions. 

In addition to new and unique purpose-built hardware, ARMA provides differentiating technology with our Dynamic Identity VirtualSim environment, and server-less Infrastructure designed to comply with Top Secret classification levels.

What new features/technology are you working on?

ARMA will introduce its second-generation ARMA G1 MKII in Q3 this year, which is a secure text-only device. It will soon be followed by the enhanced G1 with secure voice capability.

There are many other ARMA products currently under development and they will be announced as we bring them to market over the next 12 months.

How do customers normally engage with your company?

At present, the best way to contact us is through our website here. You can also email us at sales@armainstruments.com. Soon, we will expand access to our technology through strategic partnerships and resellers with organizations that have a worldwide footprint.

Also Read:

CEO Interview: Orr Danon of Hailo

CEO Interview: David Heard of Infinera

CEO Interview: Dr. Matthew Putman of Nanotronics


Easy-Logic and Functional ECOs at #61DAC
by Daniel Payne on 08-01-2024 at 10:00 am

I first visited Easy-Logic at DAC in 2023, so it was time to meet them again at #61DAC in San Francisco to find out what’s new this year. Steven Chen, VP Sales for North America and Asia met with me in their booth for an update briefing. Steven has been with Easy-Logic for six years now and earned an MBA from Baruch College in New York. This was the fifth year that they exhibited at DAC.

A functional Engineering Change Order (ECO) is a way to modify the gate-level netlist post-synthesis with the least amount of disruption and effort, to minimize costs. Fixing a logic bug post-silicon can often be remedied with a Post-Layout ECO, where spare cells can be connected with updated metal layers, keeping mask costs low for a new silicon spin.
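As a rough mental model of why a metal-only ECO is so much cheaper, the toy Python sketch below (my own illustration, not EasylogicECO's actual flow or data model) maps a logic fix onto a pre-placed spare cell, so only the wiring between existing cells changes while the base layers stay untouched.

# Toy illustration of a metal-only ECO: route a fix through a free spare cell.
spare_cells = [
    {"name": "SPARE_INV_1", "type": "INV", "used": False},
    {"name": "SPARE_NAND_3", "type": "NAND2", "used": False},
]

def map_fix_to_spare(cell_type, in_nets, out_net):
    """Grab a free spare cell of the requested type and return the metal rewires."""
    for cell in spare_cells:
        if cell["type"] == cell_type and not cell["used"]:
            cell["used"] = True
            rewires = [(net, f"{cell['name']}.{pin}") for pin, net in in_nets.items()]
            rewires.append((f"{cell['name']}.Y", out_net))
            return rewires
    raise RuntimeError(f"No free spare {cell_type}; a base-layer respin would be needed")

# Example fix: insert an inverter on net 'enable' before it reaches 'ctrl_in'.
for src, dst in map_fix_to_spare("INV", {"A": "enable"}, "ctrl_in"):
    print(f"metal change: connect {src} -> {dst}")

A real ECO tool additionally proves the patched netlist matches the revised function and keeps the patch as small as possible, which is exactly where Easy-Logic's automation comes in.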

Their booth display showed an EDA flow and where their four ECO tools fit into the flow.

Easy-logic Exhibit at #61DAC

Something new in 2024 is the GTECH design flow from Easy-Logic, a way to simplify the functional ECO flow for ASIC designs. With GTECH the user can identify the ECO points, the gate-level circuits to be modified, more quickly than with the traditional RTL-to-RTL design flow. The EasylogicECO tool still produces the smallest ECO patch size by refining the ECO point, reducing design complexity and speeding the time required. The GTECH design approach fits into your existing EDA tool flows, making it quick to learn and use. There was a press release about GTECH in May 2024, and at DAC they were showing demonstrations of this capability.

Steven talked about how even a junior engineer can use the tool easily. When silicon comes back from the fab and isn’t working 100%, a full re-spin can require around 100 mask changes, while a metal ECO may need only 30-40, so it is much cheaper to implement. In the old days engineers made ECO changes manually, but that approach required too much engineering effort and was error prone; an automated approach dramatically improves the chance of success. Designers often give up on a manual metal ECO when more than about 50 spare cells are expected, because the flattened netlist makes signals too hard to trace. By producing a smaller patch with a quicker runtime, EasylogicECO helps designers increase their success rates on metal ECO projects. This approach is widely adopted by IC design houses in Asia, where changing fewer metal layers reduces cost and speeds product launch, and EasylogicECO plays an important role in driving it.

The EasylogicECO tool works across process nodes from planar CMOS to FinFET, including leading-edge processes like 3nm. Semiconductor Review APAC magazine recognized Easy-Logic as one of the top 10 EDA vendors in July 2023.

Summary

Easy-Logic was founded in 2013, is based in Hong Kong, and received its first order for ECO tools by 2018. By 2021 they had expanded with an R&D center in Shenzhen, adding new ECO products. Today they have four ECO tools and over 40 happy customers from around the globe. Adding the GTECH design flow this year makes their ECO tool even easier to use, so their momentum continues to grow in the marketplace. I look forward to watching their technology and influence expand.

Related Blogs



proteanTecs Introduces a Safety Monitoring Solution #61DAC

proteanTecs Introduces a Safety Monitoring Solution #61DAC
by Mike Gianfagna on 08-01-2024 at 6:00 am

DAC Roundup – proteanTecs Introduces a Safety Monitoring Solution

At #61DAC it was quite clear that semiconductors have “grown up”. The technology has taken its place in the world as a mission-critical enabler for a growing list of industries and applications. Reliability and stability become very important as this change progresses: an error or failure is somewhere between inconvenient and life-threatening. The field of automotive electronics is a great example of this metamorphosis. We’ve all heard about functional safety standards such as ISO 26262, but how do we make sure these demanding specs are always met? proteanTecs is a company offering a unique technology that provides a solution to these growing safety demands. During DAC 2024, you could see product demos showcasing automotive predictive and prescriptive maintenance. Read on to learn how proteanTecs introduces a safety monitoring solution.

Making the Roads Safer

proteanTecs defines a category on SemiWiki called Analytics. The company was founded with a mission to give electronics the ability to report on their own health and performance. It brings together a team of multidisciplinary experts in the fields of chip design, machine learning and analytics software with the goal of monitoring the health and performance of chips, from design to field. Its products include embedded monitoring IP and a sophisticated array of software, both on chip and in the cloud. All this technology works together to monitor the overall operating environment of the chip to ensure top performance and to spot or predict problems before they become showstoppers.

News from the Show Floor

At the show, I was fortunate to have the opportunity to meet with Uzi Baruch, Chief Strategy Officer and Noam Brousard, VP of Solutions Engineering. It was a memorable and far-reaching discussion of the contributions proteanTecs is making to facilitate continued scaling for the electronics industry.

One comes to expect polished presentations at a show like #61DAC and indeed that was part of the meeting. I was also treated to a very entertaining and informative video; a link is coming. But perhaps the most impressive part was the live demonstration of the company’s technology. This is a brave move for any company at a major trade show. The solid performance of the demo spoke volumes about the reliability of the technology. Let’s look at some of the details.

proteanTecs RTSM™ (Real Time Safety Monitoring) offers a new approach to safety monitoring for predictive and prescriptive maintenance of automotive electronics. The application monitors the timing margin of millions of real paths on the chip with very high coverage in real time, under real workloads, to alert the system before margins drop past the lowest point that still allows an error-free reaction. More details of this approach are shown in the figure below.

There are many aspects of system operation that must be monitored and analyzed to achieve the required balance between system reliability and performance. A combination of embedded sensors, sophisticated software, and AI makes it all work. The following list will give you a feeling for the completeness of the solution:

  • Monitor non-stop: Remains always-on and monitors in-mission mode
  • Assess issue severity: A performance index for risk severity grading
  • Detect logical failures: Monitor margins with critical protection threshold
  • Boost reaction time: Low latency of the warning signals
  • Prevent fatal errors: A prescriptive approach for avoiding failures
  • Customizable outputs: Configure multiple output interfaces to fine-tune desired dynamic adjustment

An example of this operation is shown in the figure below. RTSM outputs a Performance Index, as well as a notification targeting the device’s power/clock frequency management units. The Performance Index indicates how close the device is to failure (predictive). The warning notification helps adapt the voltage or frequency to head off the impending failure (prescriptive). As with any other request or input for dynamic power/clock frequency management, the RTSM output is customized to the specific system interfaces.
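To make the closed loop concrete, here is a minimal conceptual sketch in Python of how a performance index could drive a prescriptive voltage/frequency request. This is not proteanTecs code; the thresholds, function names, and interfaces are hypothetical, standing in for the on-chip monitors and the power/clock frequency management unit described above.

  import random

  # Hypothetical thresholds: the index runs from 0.0 (failure) to 1.0 (ample margin).
  WARNING_THRESHOLD = 0.3   # below this, request a gentle corrective adjustment
  CRITICAL_THRESHOLD = 0.1  # below this, react immediately to avoid a fault

  def read_performance_index() -> float:
      """Stand-in for the on-chip margin monitors; here the reading is simply simulated."""
      return random.uniform(0.0, 1.0)

  def request_dvfs_adjustment(voltage_step: float) -> None:
      """Stand-in for a notification to the power/clock frequency management unit."""
      print(f"Requesting supply adjustment of {voltage_step:+.2f} V")

  def monitoring_cycle() -> None:
      index = read_performance_index()                # predictive: how close to failure?
      if index < CRITICAL_THRESHOLD:
          request_dvfs_adjustment(voltage_step=0.05)  # prescriptive: act before the fault occurs
      elif index < WARNING_THRESHOLD:
          request_dvfs_adjustment(voltage_step=0.02)  # milder correction while margin is only degraded
      # otherwise margins are healthy; keep monitoring in mission mode

  if __name__ == "__main__":
      for _ in range(5):
          monitoring_cycle()

In a real deployment the reaction would of course run in hardware or firmware with low latency, but the sketch captures the predictive (measure margin) and prescriptive (adjust before failure) split described above.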

To Learn More

I have only scratched the surface of the capabilities offered by proteanTecs. If a closed-loop predictive and prescriptive system sounds like an important addition to your next design, you need to get to know these folks. You can start with that short, entertaining and informative video here.

There is also a comprehensive white paper available. Highlights of this piece include:

  • The limitations of conventional safety assurance techniques
  • RTSM’s algorithm-based Performance Index for assessing the issue severity
  • Why monitoring margins under real workloads is crucial for fault detection
  • The technology behind RTSM which allows it to monitor in mission-mode
  • The role of RTSM in introducing Predictive and Prescriptive Maintenance

You can get your copy of the white paper here. And that’s how proteanTecs introduces a safety monitoring solution at #61DAC.

Also Read:

proteanTecs at the 2024 Design Automation Conference

Managing Power at Datacenter Scale

proteanTecs Addresses Growing Power Consumption Challenge with New Power Reduction Solution