
CEO Interview: Dr. Robert Giterman of RAAAM Memory Technologies

by Daniel Nenni on 04-29-2022 at 6:00 am


Dr. Robert Giterman is Co-Founder and CEO of RAAAM Memory Technologies Ltd, and has over nine years of experience in the research and development of GCRAM technology, which is being commercialized by RAAAM. Dr. Giterman obtained his PhD from the Emerging Nanoscaled Circuits and Systems Labs Research Center at Bar-Ilan University. Following the completion of his PhD in 2018, he joined the Telecommunications Circuits Laboratory at the Ecole Polytechnique Federale de Lausanne, Switzerland, as a post-doctoral researcher. As part of his research, he has led the front-end and physical implementations of multiple ASICs and mentored numerous PhD theses and MSc projects in the field of VLSI embedded memories. Dr. Giterman has authored over 40 scientific papers and holds 10 patents.

First, please tell me about RAAAM?
RAAAM Memory Technologies Ltd. is an innovative embedded memory solutions provider that delivers the most cost-effective on-chip memory technology in the semiconductor industry. RAAAM’s silicon-proven Gain-Cell RAM (GCRAM) technology combines the density advantages of embedded DRAM with SRAM performance, without any modifications to the standard CMOS process available from multiple foundries.

RAAAM’s patented GCRAM technology can be used by semiconductor companies as a drop-in replacement for SRAM in their SoCs, allowing them to significantly reduce fabrication costs through a substantial die size reduction. Alternatively, increasing the on-chip memory capacity in the same die size enables a dramatic reduction in off-chip data movement to resolve the memory bottleneck. This increase in on-chip memory capacity will enable additional features that can drive industry growth for applications in the areas of AR/VR, Machine Learning (ML), Internet-of-Things (IoT), and Automotive.

What problem are you solving?
Important industry growth drivers, such as ML, IoT, Automotive and AR/VR, operate on ever-growing amounts of data that are typically stored off-chip in an external DRAM. Unfortunately, off-chip memory accesses are up to 1000x more costly in latency and power compared to on-chip data movement. This limits the bandwidth and power efficiency of modern systems. To reduce these off-chip data movements, almost all SoCs incorporate large amounts of on-chip embedded memory caches that are typically implemented with SRAM and often constitute over 50% of the silicon area. This memory bottleneck is further aggravated because SRAM scaling has become increasingly difficult in recent nodes, shrinking only at a rate of 20%-25% compared to almost 50% scaling for logic.

Can you tell us more about GCRAM technology?
GCRAM technology relies on a high-density bitcell that requires only 2-3 transistors (depending on whether area or performance is prioritized). This structure offers up to 2X area reduction over high-density 6T SRAM designs. The bitcell is composed of decoupled write and read ports, providing native two-ported operation, with a parasitic storage node capacitor holding the data. Unlike conventional 1T-1C eDRAM, GCRAM does not rely on delicate charge sharing to read the data. Instead, GCRAM uses an active read transistor that delivers an amplified bit-line current, offering low-latency, non-destructive readout without the need for large storage capacitors. As a result, GCRAM does not require any changes or additional costs to the standard CMOS fabrication process and scales with technology when properly designed.

While the concept of 2T/3T memory cells has been tried in the past, the shrinking parasitic storage capacitor and concerns about increasing leakage currents have so far discouraged its application beyond 65nm. RAAAM’s patented innovations comprise clever circuit design at both the memory bitcell and periphery levels, resulting in significantly reduced bitcell leakage and enhanced data retention times, as well as specialized refresh algorithms optimized for various applications, ensuring very high memory availability even under the most extreme operating conditions. In fact, we have demonstrated the successful scaling of GCRAM technology across process nodes of various foundries (e.g., TSMC, ST, Samsung, UMC), including recent silicon demonstrators in 28nm (Bulk and FD-SOI) and 16nm FinFET technologies implementing up to 1Mbit of GCRAM memory macros.

Can you share details about your team at RAAAM and what has been done to validate the GCRAM technology?
RAAAM’s founders, including Robert Giterman, Andreas Burg, Alexander Fish, Adam Teman and Danny Biran, bring over 100 combined years of semiconductor experience. In fact, RAAAM is built on a decade of world-leading research in the area of embedded memories, and GCRAM in particular. Our work on GCRAM technology has been demonstrated on 10 silicon prototypes fabricated at leading semiconductor foundries in a wide range of process nodes ranging from 16nm to 180nm, including bulk CMOS, FD-SOI and FinFET processes. Our work on GCRAM is documented in more than 30 peer-reviewed scientific publications in books, journals, and conference proceedings, and is protected by 10 patents.

Who is going to use RAAAM’s technology and what will they gain?
RAAAM’s GCRAM technology enables a significant chip fabrication cost reduction or highly improved performance, resolving the memory bottleneck for semiconductor companies in various application fields. Since GCRAM is directly compatible with any standard CMOS process and uses an SRAM-like interface, it can easily be integrated into existing SoC designs.

As an example of the potential system benefits, consider the Machine Learning accelerator domain and a 7nm AI processor integrating 900MB of SRAM on a single die. In this case, the SRAM area constitutes over 50% of the overall die size. Replacing SRAM with RAAAM’s GCRAM technology can provide a reduction of up to 25% of the overall die size, resulting in up to $35 savings per die.
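The arithmetic behind that estimate can be sketched as a back-of-envelope model. The die size, SRAM fraction, density gain and cost per mm² below are illustrative assumptions consistent with the figures quoted above, not RAAAM or foundry data:

```python
# Back-of-envelope sketch of the die-cost saving described above.
# Assumed inputs (hypothetical, for illustration only): an 800 mm^2 die
# with 50% SRAM, GCRAM at 2x density (freeing half the SRAM area), and
# an assumed 7nm wafer cost of ~$0.175 per mm^2 of die area.

def die_cost_saving(die_area_mm2, sram_fraction, area_saved_fraction, cost_per_mm2):
    """Return (area saved in mm^2, dollars saved per die) when denser
    GCRAM frees a fraction of the SRAM area."""
    sram_area = die_area_mm2 * sram_fraction
    saved_area = sram_area * area_saved_fraction   # 2x density -> 50% of SRAM area
    return saved_area, saved_area * cost_per_mm2

area, dollars = die_cost_saving(800.0, 0.50, 0.50, 0.175)
print(f"{area / 800.0:.0%} of the die freed, ~${dollars:.0f} saved per die")
```

With these assumed numbers the model reproduces the 25%-smaller-die, roughly $35-per-die figures quoted above; with different wafer costs or SRAM fractions the saving scales proportionally.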

Alternatively, for memory-bandwidth-limited systems, increasing the on-chip memory capacity can bring substantial performance and power improvements. In fact, the required DRAM bandwidth is often inversely proportional to the on-chip memory capacity. With off-chip memory accesses being up to 1000x more costly in power and latency compared to on-chip data movement, replacing SRAM with 2X more GCRAM capacity in the same area footprint significantly reduces the off-chip bandwidth requirements and enables RAAAM’s customers to gain a competitive advantage in the power consumption of their chips.
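The inverse-proportionality claim can be illustrated with a toy model. This is a simplifying assumption stated in the interview, not a measured law, and the baseline bandwidth number is hypothetical:

```python
# Toy model of the claim above: if required off-chip DRAM bandwidth
# scales roughly as 1/(on-chip capacity), then doubling the on-chip
# memory in the same footprint roughly halves off-chip traffic.
# `baseline_bw_gbs` is a made-up number for illustration.

def required_dram_bw(baseline_bw_gbs, capacity_ratio):
    """capacity_ratio = new on-chip capacity / old on-chip capacity."""
    if capacity_ratio <= 0:
        raise ValueError("capacity ratio must be positive")
    return baseline_bw_gbs / capacity_ratio

print(required_dram_bw(100.0, 2.0))  # 2x GCRAM capacity -> 50.0 GB/s
```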

What is RAAAM’s engagement model?
RAAAM follows an IP vendor licensing model. Semiconductor companies can license RAAAM’s GCRAM technology for a fee and production unit royalties. RAAAM implements the front-end memory controller and GCRAM-based hard memory macros according to the customer’s specifications and delivers a soft RTL wrapper (using a standard SRAM interface), which instantiates the GCRAM hard macros (GDS) and the soft refresh control (RTL). Additionally, the customer receives a characterization report of the hard memory macro and a behavioral model for system-level verification.

At present, RAAAM is working on the implementation and qualification of a GCRAM-based memory compiler, which will enable RAAAM’s customers to automatically generate the complete front- and back-end views of GCRAM IP and corresponding characterization reports according to their specifications.

Can you tell us about your recent achievements?
RAAAM has made very exciting progress recently. First, we have been evaluating the benefits of our technology for leading semiconductor companies, which has confirmed our projected substantial improvements in performance and cost over existing solutions based on SRAM. In fact, we have recently engaged with a very large semiconductor company on a long-term co-development project, and we continue running customer evaluations for various application fields and process nodes. We see growing interest in our technology across a variety of applications, both in very advanced process nodes (7nm and beyond) and in less advanced ones (16nm and higher). Finally, we are extremely pleased to have joined the Silicon Catalyst Incubator, allowing us to gain access to their comprehensive ecosystem of In-Kind Partners, Advisors, and Corporate VC and institutional investor network.

What is on the horizon for RAAAM?
Our product development roadmap includes full memory qualification in selected nodes of leading semiconductor foundries, based on customer demand. In addition, we have ongoing discussions with numerous foundries about further technology migration to their next-generation process nodes. Furthermore, we are looking to expand our embedded memory platform and introduce design flow automation based on our memory compiler development efforts. To this end, we are in the process of raising Seed funding to fully qualify our GCRAM technology and to accelerate our company’s overall business growth.

A preliminary GCRAM product brief is available upon request; please send an email to info@raaam-tech.com. Additional information can be found at https://raaam-tech.com/technology and https://www.linkedin.com/company/raaam.

Also read:

CEO Interview: Dr. Esko Mikkola of Alphacore

CEO Interview: Kelly Peng of Kura Technologies

CEO Interview: Aki Fujimura of D2S


Freemium Business Model Applied to Analog IC Layout Automation

by Daniel Payne on 04-28-2022 at 10:00 am


Freemium combines the two words “free” and “premium,” and many of us have enjoyed using freemium apps on our phones, tablets and desktop devices over the years. The concept is quite simple: you find an app that is useful, download the free version, mostly to see if it operates as advertised, and then decide if there’s enough promise to warrant buying the fully-featured version. But wait, is there actually any EDA vendor offering a freemium business model?

Yes, about a year ago, the UK-based company Pulsic introduced their Animate Preview tool to the EDA world as a free download. The only requirement is that you are using Cadence Virtuoso IC6.1.6, IC6.1.7 or IC6.1.8 software. I had a Zoom call with three Pulsic folks this month to better understand this freemium model:


  • Mark Williams, CEO
  • Mark Waller, Director of User Enablement
  • Otger Perich, Digital Marketing

Q: Why a freemium model?

A: The typical evaluation cycle for a new EDA tool is way too long. It often requires an NDA to be agreed, terms and conditions to be negotiated, and time and resources for a formal evaluation. It can take many weeks before potential customers can really start to get to know the product’s capabilities.

We wanted to find a way to shortcut this process and remove all of the barriers to entry. With the freemium model, any interested engineer can quickly and directly download a free version and get started in minutes instead of weeks.

To make the freemium model work, we made Animate easy to use, with a very simple UI that is easy to learn and operate.

Q: What does Animate Preview do?

A: Animate Preview works within the Cadence Virtuoso schematic editor, where a circuit designer can quickly see the automatically created initial layout of their analog cells in minutes. The designer can see the effect of their circuit design decisions in the layout and get accurate area estimates. The free version contains all the features of the premium product, so the user can do everything that can be done in the paid version, but can save only the design outline and IO pins.

The paid version is called Preview Plus, and with that version, you can save the automatically created initial layouts into OpenAccess. The saved layout includes all the detailed placement information and is a great starting point for creating the final analog block layout.

Animate Preview inside the schematic editor

Q: How long does it take to learn Animate Preview?

A: It’s fast: going from downloading the app to seeing the first circuit layout can take as little as 20 minutes, because getting started is a simple process of filling out a form and opening the link in an email. Anyone with a Cadence Virtuoso environment for schematics can use Animate Preview on their analog cells. We’re using a cloud-based license, so you don’t need to think about licensing.

Q: Does the Pulsic tool come with any design examples?

A: Yes, we ship with a Pulsic PDK with example designs in that technology, plus there’s a set of videos to get you started. It’s all designed to just run out of the box. In addition to the getting-started videos, there is a series of 22 two-minute tutorials.

Animate Preview runs in the background when you open a schematic in Virtuoso, which you use just like you normally would. The layouts appear automatically and are updated when circuit changes are made, all without the user needing to create any constraints. Just install and then see the auto-generated IC layouts based on schematics.

Q: What process technology is supported for analog IC layout generation?

A: Our focus has been to ensure that Animate creates great results for TSMC processes from 180nm down to 22nm. However, Animate will work with any planar process on OpenAccess with Cadence P-Cells. We have customers using Animate on many other processes from several fabs. We’re also starting to support some FD-SOI technology, but no FinFET yet.

Q: Is the generated IC layout always DRC clean?

A: Yes, the generated IC layout should be DRC clean, especially for TSMC processes. For other processes, if the rules are in the OA tech file, Animate will obey them. Most customers get good results out of the box, but if a user has any issues, they can contact Pulsic for support.

Animate Preview generated layout

Q: So, who is using Animate for analog IC cell layout automation?

A: One company that we can talk about is Silicon Labs, out of Austin, Texas, which was using an early version of the Animate technology back in 2019. “In our initial evaluation of Animate, we needed to achieve both efficiency and quality for our analog IC layouts, and Animate provided excellent results equal to using traditional approaches but in far less time,” said Stretch Young, Director of Layout at Silicon Labs. “Collaborating with Pulsic, we see opportunities to improve the quality of our layout, which will increase productivity and save design time.”

Q: How many downloads so far of Animate Preview from your web site?

A: About 360 engineers have downloaded Animate so far. About 100 of these downloaders have created IC layouts, and we’ve followed up with tens of engagements.

Q: What are some of the benefits of offering a freemium model for EDA tools?

A: With the freemium model, there is less pressure. We see that the users like the free download experience, and then we support them when they have follow-up questions. Users can see the benefits of analog automation within days without the hassle and pressure of the usual EDA sales process. Only if they like what they see and want to save the placement do they need to talk to us.

Launching a new product in COVID times was always going to be a challenge, but a big benefit for us was that we didn’t have to travel to do prospecting because it’s been all online evaluations. So we were able to reach the target audience much quicker.

Q: What types of IC design end markets are attracted to analog IC layout automation?

A: The IoT market has been the most significant sweet spot so far because of the need to get to market quickly and cheaply, and the ability to iterate quickly. Automotive and general analog IP providers also see great results from our tool.

Q: What are the limitations of Animate Preview as an EDA tool?

A: Animate Preview is designed for core analog cells. The tool is always on inside the Cadence Virtuoso Schematic Editor and continually updates as you change the schematic. So you just leave it on all the time, and it will warn you if it cannot apply the technology to a cell. A built-in circuit suitability check warns you when a circuit is not suitable for Animate, e.g., a hierarchy that is too large or a digital block. Animate Preview will automatically create a layout for analog blocks with up to 100 schematic symbols. With Preview Plus, the user can create a layout for larger analog blocks; it might take a few minutes instead of seconds to produce a result.

Q: Will your company be attending DAC in SFO this summer?

A: Yes, look for our booth, and there will be a theatre setup to show the benefits of analog IC layout automation.

Q: How does Animate Preview work, under the hood?

A: Animate is radically different from other IC layout automation because it has a PolyMorphic approach in a virtual space, producing optimal IC layouts. It really is a unique architecture. The polymorphic engine is patented, but we don’t talk about how it works.

Related Blogs


The Path Towards Automation of Analog Design

by Tom Simon on 04-28-2022 at 6:00 am


You may have noticed that I have been writing a lot more about analog design lately. This is no accident. Analog and custom blocks are increasingly important because of the critical role they play in enabling many classes of systems, such as automotive, networking, wireless, mobile, and cloud. Many of the SoCs needed for these markets are developed on advanced nodes, including FinFET. However, new design rules and other complexities at these advanced nodes are making analog design more difficult.

Synopsys has a presentation at this year’s CICC titled “Has the Time for Analog Design Automation Finally Come?”, authored by Dave Reed and Avina Verma, which offers a close examination of methods for accelerating and improving how analog design is done. I had a chance to talk with them recently about their presentation. Automation of analog design is a laudable goal but has proven elusive. In part, Dave and Avina attribute this to the fact that it is more difficult for analog designers to provide a concise set of constraints describing their design objectives. Asking analog engineers to create extensive text-based rules to drive the automation tools often results in just as much work as doing the physical design in the first place. They also say that tool designers need to ensure design tools match the way designers want to work.

Their point is that each stage of the design process has a preferred creation and editing method that any automation tools should accommodate. They believe that encouraging iterative design is better than asking for a big up-front investment to specify the results. There are several key goals for an automation flow. Faster layout should be possible with automated, correct device-level placement and routing. Design closure requires consideration of resistance, capacitance and electromigration issues. Designers want early insight into parasitics. Lastly, design reuse, if done right, can offer a huge productivity boost.

As one example of using graphical methods, they point to the way Custom Compiler uses a symbolic graphical palette to pre-define placement patterns for devices. Along with this, it provides a real-time display of the actual layout, visible at the same time. Visual feedback is provided with color-coded device visualizations and a centroid display. It also provides an easy way to add dummies and guard rings.

Device routing automatically connects large device arrays while ensuring matched R/C routes. Interconnect with user-controlled, hand-crafted quality is created with greater ease than with manual methods. Just as with placement, Custom Compiler provides a graphical palette of predefined routing patterns that designers can choose from. Users can drive the router by guiding it with their cursor on the layout. It comes with automatic connection cloning, pin tapping and via generation. There is also interactive DRC and obstruction feedback.

The key to moving from schematic to layout design closure is understanding layout parasitics quickly and accurately. Without this, rework effort can become considerable. Instead of having to wait until the design is LVS clean to run LPE and RC extraction, Custom Compiler’s schematic-driven layout (SDL) flow gives layout engineers parasitics throughout the layout process. Before nets are routed, estimates are used. As nets are incrementally hooked up, actual extracted parasitics are inserted for each one.

Early parasitics estimation for analog design

Even though the fully extracted design is not available until the end, enough information is available early in the process to help provide useful feedback. This is vastly preferable to waiting until the end of the layout process to get physical parasitic information. Synopsys has also been working on using machine learning to help improve prediction of parasitics for even better estimates earlier in the process.

I mentioned above that templates can be used to help drive placement. Dave and Avina talked about how existing designs can be mined to easily produce templates for device placement. Dave said that this is a favorite feature for a lot of users.

With the added complexity of advanced nodes, specifically with new complex design rules and the need to place or modify arrays of FinFET devices, automation of the analog layout process promises big gains in productivity and design quality. Dave and Avina argue that the time has finally come for the automation of analog designs. They understand that this will never be a “push the big red button” sort of thing but will instead be made up from numerous discrete capabilities that are easy for designers to integrate into their workflow.

More information is available through CICC in their educational session archives and also on the Synopsys web page for their custom design platform.

Also read:

Design to Layout Collaboration Mixed Signal

Synopsys Tutorial on Dependable System Design

Synopsys Announces FlexEDA for the Cloud!


Semiconductor CapEx Warning

by Bill Jewell on 04-27-2022 at 4:00 pm


Semiconductor makers are planning strong capital expenditure (CapEx) growth in 2022. According to IC Insights, 13 companies plan to increase CapEx in 2022 by over 40% from 2021. The largest CapEx for 2022 will be from TSMC at $42 billion, up 40%, and Intel at $27 billion, up 44%. IC Insights is forecasting total semiconductor industry CapEx at $190 billion in 2022, up 24% from $154 billion in 2021. 2021 CapEx was up 36% from $113 billion in 2020.

Could this large increase in CapEx lead to overcapacity and a downturn in the semiconductor market? Our analysis at Semiconductor Intelligence has identified points where significant increases in CapEx result in a downturn or significant slowdown in the semiconductor market in the following year or two. The chart below shows the annual change in semiconductor CapEx (green line on the left scale) and the annual change in the semiconductor market (blue line on the right scale). The CapEx data is from Gartner from 1984 to 2007 and from IC Insights from 2008 to 2022. The semiconductor market data is from WSTS. In the last 38 years, semiconductor CapEx growth has exceeded 56% six times (red “danger” line). In each of those six cases, semiconductor market growth has decelerated significantly (greater than 20 percentage points) in the following year. In three of the six cases the market declined the following year. In three of the years from 1984 through 2017, CapEx has exceeded 27% growth (yellow “warning” line) but been less than 56%. In each of these three years (1994, 2006 and 2017) the semiconductor market declined two years later.
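The historical rule of thumb described above amounts to a simple threshold classifier. A minimal sketch, with the two thresholds taken from the article and the labels mine:

```python
# Minimal encoding of the article's historical rule of thumb:
# CapEx growth above 56% ("danger") has preceded a >20-point market
# growth deceleration the following year; growth between 27% and 56%
# ("warning") has historically preceded a market decline two years later.

DANGER_PCT = 56.0
WARNING_PCT = 27.0

def capex_signal(capex_growth_pct):
    """Classify a year's semiconductor CapEx growth against the
    article's warning/danger lines."""
    if capex_growth_pct > DANGER_PCT:
        return "danger"
    if capex_growth_pct > WARNING_PCT:
        return "warning"
    return "normal"

print(capex_signal(36))  # 2021 CapEx growth -> warning
print(capex_signal(24))  # 2022 forecast -> normal
```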

2021 CapEx growth of 36% puts it above the warning line but below the danger line. IC Insights’ current forecast of 24% CapEx growth in 2022 is close to the warning line. Increases in 2022 CapEx plans could put growth over the 27% warning line, but it is very unlikely to approach the 56% danger line. So, are we in for a market downturn in 2023?

A few factors may come into play to avoid the overcapacity/downturn cycle this time. Previous large jumps in CapEx have resulted from semiconductor companies chasing fast-growing emerging markets. In 1984 it was PCs. In 2000 it was internet infrastructure. In 2010 it was smartphones. In each of these cases, the end market either declined the following year (PCs and internet infrastructure) or slowed (smartphones). In the current situation, semiconductor companies are trying to alleviate shortages, especially in the automotive market. Increasing semiconductor content in vehicles is driving demand for semiconductors. Automotive companies fell behind in semiconductor procurement when they cut production during the pandemic beginning in 2020. This time, the demand for automotive semiconductors is not likely to weaken anytime soon.

Another factor is that most of the current growth is coming from non-memory companies. In previous cycles, memory companies have been a major driver of CapEx growth. With DRAM and flash memory being primarily commodity products, they are more prone to over-supply and price declines in downturns. In 2021, memory companies grew CapEx 33%, similar to the 38% growth for non-memory companies. In 2022, memory companies are more cautious; we estimate 7% growth in CapEx. With this estimate, non-memory companies’ CapEx growth would be 36% in 2022. Most non-memory products are non-commodity, and the companies are more closely linked to their end-market customers.

CapEx growth should not be looked at in isolation. Absolute levels of CapEx relative to the semiconductor market give an indication of whether CapEx is too high. The graph below shows semiconductor CapEx as a percentage of the semiconductor market on an annual and five-year-average basis. Over the last 38 years, from 1984 to 2021, CapEx has averaged 23% of the semiconductor market. The five-year average ratio has ranged from 18% to 28%. The ratio has been on an uptrend for the last several years, with the five-year average reaching 27% in 2022 based on forecasts from IC Insights and WSTS. This data indicates the ratio may be close to a peak, pointing to lower CapEx in the near future.
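The five-year-average ratio discussed above is a straightforward trailing average. A sketch of the computation, with made-up yearly figures purely for illustration (the article's real series come from Gartner/IC Insights and WSTS):

```python
# Sketch of the CapEx-to-market ratio metric discussed above.
# The yearly figures below are hypothetical, chosen only to show
# the computation, not actual industry data.

def five_year_avg_ratio(capex_by_year, market_by_year):
    """Trailing five-year average of CapEx as a % of the semiconductor
    market. Uses fewer years if less than five are available."""
    ratios = [100.0 * c / m for c, m in zip(capex_by_year, market_by_year)]
    window = ratios[-5:]                 # at most the last five years
    return sum(window) / len(window)

capex = [96, 102, 113, 154, 190]        # hypothetical CapEx, $B per year
market = [410, 412, 440, 556, 640]      # hypothetical market size, $B
print(f"{five_year_avg_ratio(capex, market):.1f}%")
```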

Our conclusion is that the increase in CapEx should lead to caution, but not to panic. There is no indication of an end-demand bubble, such as with PCs and internet infrastructure. Most of the growth is driven by non-memory companies, which largely produce non-commodity products. But the CapEx growth in 2021 and 2022 should be of concern based on historical trends. Our current forecast for the semiconductor market is 15% growth in 2022 and 5% to 9% in 2023. At the low end, 5% growth in 2023 would be a 21-point drop from 26.2% growth in 2021. This would fit the model, with the 36% CapEx growth in 2021 above the 27% warning line and leading to an over-20-point growth-rate decline two years later in 2023.

Also read:

Electronics, COVID-19, and Ukraine

Semiconductor Growth Moderating

COVID Still Impacting Electronics


Podcast EP74: A Tour of the DAC Engineering Tracks with Dr. Ambar Sarkar

by Daniel Nenni on 04-27-2022 at 10:00 am

Dan is joined by Dr. Ambar Sarkar, a member of the Design Automation Conference (DAC) Executive Committee and platform architect at Nvidia. Ambar and Dan explore the new Engineering Tracks at DAC – their purpose and noteworthy content. Topics such as the cloud, global supply chain and silent hardware errors are discussed, along with details of the popular Poster Gladiator competition.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


ML-Based Coverage Refinement. Innovation in Verification

by Bernard Murphy on 04-27-2022 at 6:00 am


We’re always looking for ways to leverage machine-learning (ML) in coverage refinement. Here is an intriguing approach proposed by Google Research. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Learning Semantic Representations to Verify Hardware Designs. The paper was published at NeurIPS 2021. The authors are from Google and Google Research.

The research uses simulation data as training input to learn a representation for the currently covered subset of a circuit’s state transition graph (STG). At inference time, the method uses this representation to predict whether a newly defined test can meet new cover points, much faster than running the corresponding simulation. The architecture of the reported tool, Design2Vec, is based on a blend of Graph Neural Network (GNN) reasoning about the RTL control-dataflow graph (CDFG) structure and RNN reasoning about sequential evolution through the STG.

The paper positions Design2Vec as an augment to a constrained-random (CR) vector generation process. The method generates CR vectors as usual, then ranks these using a gradient ascent algorithm to maximize the probability of covering target cover points. The simulator then runs tests with highest predicted coverage.
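The predict-and-rank flow can be sketched as follows. This is a hypothetical illustration, not the authors' code: the names are mine, the scorer is a toy stub standing in for the trained Design2Vec model, and the paper's gradient-ascent refinement of test parameters is simplified here to plain ranking:

```python
# Hypothetical sketch of the predict-and-rank flow described above.
# `predict_coverage` is a stub for the trained model; in the paper,
# candidates are additionally refined by gradient ascent before ranking.

def predict_coverage(test, cover_points):
    """Stub for the learned model: P(test hits each cover point).
    Toy scoring function, deterministic within a single run."""
    return {cp: ((hash((test["seed"], cp)) % 100) / 100.0) for cp in cover_points}

def select_tests(candidates, cover_points, budget):
    """Rank constrained-random tests by predicted coverage; keep the
    top `budget` tests for (slow) RTL simulation."""
    def expected_hits(test):
        return sum(predict_coverage(test, cover_points).values())
    ranked = sorted(candidates, key=expected_hits, reverse=True)
    return ranked[:budget]   # only these are sent to the simulator

candidates = [{"seed": s} for s in range(20)]
chosen = select_tests(candidates, ["cp0", "cp1", "cp2"], budget=3)
print(len(chosen))  # 3
```

The point of the scheme is that the model's inference is orders of magnitude cheaper than simulation, so many candidates can be screened and only the most promising few simulated.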

The authors detail evaluations across a couple of RISC-V-based designs, as well as the Google TPU, and show compelling results in improving coverage over constrained-random methods alone.

Paul’s view

This is a great paper on a center-stage topic in commercial EDA today. The paper studies two very practical opportunities to use ML in mainstream digital verification: first, using ML as a rapid, low-cost way to predict the coverage a test will achieve; and second, using ML to automatically tune test parameters to maximize coverage.

On the first, the paper eloquently demonstrates that predicting coverage without understanding anything about the design (where the design is a black box) doesn’t work very well (50% accuracy across 3 testcases). However, if features derived from the design’s control-dataflow-graph (CDFG) are also fed into the predictor then it can work quite well (80-90% accuracy across the same testcases).

The way the CDFG is modeled in their neural network is very slick, building incrementally on other published work for modeling software program control flow in a neural network using a softmax function.

On the second opportunity, they compare their CDFG-based neural network with another tool that uses an entirely black-box algorithm based on Bayesian optimization. Here the results are less conclusive, showing data for only 1 testcase, and for this case showing only marginal benefit from the CDFG-based neural network over Bayesian optimization.

Stepping back for a moment, I believe there are huge opportunities to use ML to improve coverage and productivity in digital verification. We are investing heavily in this area at Cadence. I applaud the Google authors of this paper for investing and sharing their insights. Thank you!

Raúl’s view

The authors address the problem of coverage: hard-to-cover branches and generating tests to cover them. Their approach is to train a model to predict whether a cover point is activated by an input test vector. The CDFG architecture is captured by 4 different graph neural networks, of which an enhanced IPA-GNN (Instruction Pointer Attention Graph NN [6]) called RTL IPA-GNN performs marginally best.

Design2Vec is also used to generate tests for given cover points. The method uses the predicted probability in a gradient-based search to maximize detection probability. Generated tests are then run through an RTL simulator to get the actual coverage. Results compared to Vizier [18], a Google tool using Bayesian optimization, are not conclusively superior.
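The gradient-based search can be sketched in a few lines. This is a minimal illustration, not Design2Vec itself: the logistic surrogate and its weights below are stand-ins for the trained coverage predictor; only the ascent loop mirrors the method described.

```python
import numpy as np

# Stand-in for a trained coverage predictor: probability that a cover point is
# hit, as a logistic function of a continuous test-parameter vector x.
# Weights are fixed, illustrative values.
w = np.array([2.0, -1.0, 0.5])
b = -0.5

def predict(x):                    # predicted probability the cover point hits
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def grad(x):                       # analytic gradient of the logistic model
    p = predict(x)
    return p * (1.0 - p) * w

x = np.zeros(3)                    # test parameters, legal range [-1, 1]
for _ in range(200):
    x = np.clip(x + 0.1 * grad(x), -1.0, 1.0)   # gradient ascent, clipped

print(predict(x))                  # candidate test to confirm in RTL simulation
```

The candidate with the highest predicted probability is then run in actual RTL simulation, since the predictor's estimate is not ground truth.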

They ran coverage prediction experiments on two small RISC-V cores and a TPU. They look at points covered by 10-90% of random tests to exclude trivial cases. The authors compare results against three methods: statistical frequency of random patterns; a multi-layer perceptron (MLP) treating the design as a black box; and an MLP with node sequence embedding, allowing generalization across cover points.

Design2Vec beats the other approaches by 20% (against statistical frequency) to about 3% (against node sequence embedding). Notably, the MLP black-box approach does worse than statistical frequency for the TPU (the large design). In the words of the authors, “the MLP performs catastrophically poorly on the test data”. For me, the main insight is that embedding the design in the architecture is key to building a semantic representation.

The authors stress the “potential of deep learning to make a quantum leap in progress in the area of verification”. Their results back this up.

My view

If you download this paper, you may notice that it is missing some appendices. The appendices are useful though not essential for full understanding. You might find that this live presentation bridges the gap.

Also read:

Cadence and DesignCon – Workflows and SI/PI Analysis

Symbolic Trojan Detection. Innovation in Verification

Leveraging Virtual Platforms to Shift-Left Software Development and System Verification


Experts Talk: RISC-V CEO Calista Redmond and Maven Silicon CEO Sivakumar P R on RISC-V Open Era of Computing

by Daniel Nenni on 04-26-2022 at 10:00 am


India’s top VLSI training services company Maven Silicon, a RISC-V Global Training Partner, hosted an insightful discussion between industry experts Ms. Calista Redmond, CEO of RISC-V International, and Mr. Sivakumar P R, CEO of Maven Silicon, on the topic “RISC-V Open Era of Computing”.

By way of introduction, RISC-V is a free and open ISA, enabling processor, hardware, and software innovations through open collaboration.

Maven Silicon’s vision is to produce highly skilled VLSI engineers and help the global semiconductor industry reach skilled chip design experts. The global semiconductor industry is transforming faster than ever, enabling us to create powerful integrated chips that reinforce the next generation of advancing technologies like IoT, AI, Cloud, and 5G. So we need to produce skilled chip designers who can design more powerful and optimized processors of different kinds. This is how we aim to disrupt the semiconductor industry. Our vision aligns with RISC-V’s in disrupting the semiconductor industry.

We are delighted to introduce our Industry mavens who honored the discussion.

Ms. Calista Redmond
CEO, RISC-V International

Calista Redmond, CEO of RISC-V International, has more than 20 years of senior-level management and alliance experience, along with significant open source community experience. Throughout her career, she has developed strategic relationships with chip, hardware, and software providers, system integrators, business partners, clients, and developers.

Mr. Sivakumar P R
Founder and CEO, Maven Silicon

Sivakumar is the Founder and CEO of Maven Silicon. He is also the Founder and CEO of Aceic Design Technologies. A seasoned engineering professional, he has worked across electrical engineering, academics, and the semiconductor industry for more than two decades, and specializes in Verification IP, consulting services, and EDA flow development.

This profound ‘Expert Talk’ was hosted by Ms. Sweety Dharamdasani, Head of the Learning & Development Division at Maven Silicon, who is extremely passionate about upskilling young aspirants.

The discussion highlighted some incredible topics on RISC-V, and how it can be leveraged in redefining the VLSI curriculum.

Click here to watch the video

Sweety: I would like to understand from Calista the what and why of RISC-V. An introduction to RISC-V for our audience, please.

Calista: Along the hardware industry’s journey, RISC-V discovered the kind of collaboration software has long enjoyed, one that helps the entire industry form a foundation upon which they can still compete. Now there is a boom in customized processors, and so RISC-V has taken on the opportunity.

Below are the two reasons why RISC-V catapulted to being the most prolific open ISA that the microprocessor industry has ever seen:

[A] Disruptive technology

[B] Design Flexibility with unconstrained opportunities

Sweety: Why RISC-V? What were our reasons for collaborating with RISC-V?

Siva: RISC-V is an open ISA. It’s free, with no license constraints, but nothing comes for free when it comes to designing the chip. Still, RISC-V is special because of the freedom we engineers enjoy in designing the processor as we like. Also, we need specialized processors to build chips, as monolithic semiconductor scaling is failing. As RISC-V is an open and free ISA, it empowers us to create different kinds of specialized processors.

Why RISC-V for Maven: VLSI engineers need to understand how we build electronic systems like laptops and smartphones using chips and SoCs. Obviously, we need processors to build any chip or SoC. Without knowing the processor, VLSI engineers can’t deal with any sub-systems or chips. VLSI training has always been about training engineers on different languages, methodologies, and EDA point tools. So we introduced processors as part of our VLSI course curriculum to redefine VLSI training. As RISC-V is an open, simple, and modular ISA, it was our choice.


Sweety: So what is happening in the RISC-V space right now? If you would like to share some success stories or developments, that would be great.

Calista: The predictions say that the semiconductor IP market will go from 5.2 billion dollars in 2020 to 8.6 billion dollars in 2025. We are on course with the prediction that RISC-V will consume 28% of that market in IoT, 12% in industrial, and 10% in automotive. Many venture capitalists are investing billions of dollars in RISC-V companies. There are many opportunities here for those who are starting their own companies and many more success stories are coming out every day.

Sweety: What are our current plans with RISC-V?

Siva: We are doing many things creatively with RISC-V. We have included RISC-V in our VLSI course curriculum, and it is open to all new college graduates, engineers, and even corporate partners.

Since Jan 2022, we have trained more than 200 engineers in various domains (RTL design, verification, and DFT) for a global chipmaker. All of them used RISC-V and RISC-V SoCs as their projects and case studies to learn these different technologies. It works beautifully.

Since we became a RISC-V Global Training Partner, we have trained 1000+ engineers on RISC-V processor design and verification and introduced them to our semiconductor ecosystem.


Sweety: What are we looking at in terms of the future for RISC-V?

Calista: Being successful in microprocessors, or in any business, with only small incremental growth will not be enough. What we do in RISC-V is incredible. We have 12,000 developers engaged with RISC-V, and 60 different technical working groups underway, which is an incredible complement for education and knowledge-based organizations like Maven Silicon.

Sweety: What are our plans at Maven Silicon with regard to RISC-V? Any curriculum upgrades?

Siva: New applications will demand RV128. There will be new security challenges, but RISC-V will still emerge as an industry-standard open ISA for all kinds of specialized processors, replacing most of the current proprietary ISAs.

At Maven, we will be adding new topics like complex pipelining, floating-point core design, cache controllers, low-power modes, compilers, and debuggers to our existing RISC-V course curriculum. We are also looking forward to creating long-term master learning programs, like designing an SoC using RISC-V.


Sweety: It would be great if you could share a few tips for all our young VLSI aspirants who plan to build a career in the semiconductor industry.

Calista: Understand where your ability fits in the VLSI space. Connect with your mentors, colleagues, and peers, work shoulder to shoulder, and strengthen your network in your domain. When you work together, you learn faster and understand better. You can join RISC-V, select any of the 60 working groups we have, and learn about the topics underway in the various areas of the RISC-V domain.

Sweety: What are a few tips that you would like to share with young engineers?

Siva: One major piece of advice I would like to give to the next generation is this: do not choose a domain based on popularity; choose whatever you are interested in. Do not lose motivation when things do not fall into place; just work sincerely. Seek guidance from people who will help you grow. Learning is a continuous process: ask yourself questions and keep learning. Be part of a non-profit organization like RISC-V that contributes to the engineering community.


Sweety: What is your take on organizational culture, sensitivity, gender awareness, women in business, etc.?

Calista: It is important for us as women, and for all of us, to create environments and opportunities that cultivate the women around us. It is difficult to be in the spotlight, as it is more transparent, but it is important to take those steps. Find the passion that will drive you. It is important to work at a company that you believe in and to grow with it. Lift up the people around you. Shine the light on others to help cultivate their success, while cultivating your own.

Sweety: We know that Maven Silicon champions its people. Around 60% of our employees are women. How do you take care of your employees?

Siva: I would like to mention our Co-founder and Managing Director, Ms. Praveena G, who is all about people and processes. She is extremely composed, honest, and detail-oriented, whereas I look at the big picture and do business creatively. Along with our co-founder, there are many super-talented women who do amazing things at Maven and help us stay at the top of our game.

Organizational culture reflects the style of leadership. Our culture is based on our core values. We respect our customers, partners, and employees. We believe in ‘Lead without Title’.


We truly appreciate Ms. Calista Redmond and Mr. Sivakumar P R for sharing their experiences and so beautifully explaining the various topics, including RISC-V, tips for young aspirants, women’s empowerment, and organizational culture. We would also like to thank RISC-V International for this great opportunity to work with their open-source community and contribute to RISC-V learning as a RISC-V Global Training Partner.

Also read:

Verification IP vs Testbench

CEO Interview: Sivakumar P R of Maven Silicon

RISC-V is Building Momentum


Adding Random Secondary Electron Generation to Photon Shot Noise: Compounding EUV Stochastic Edge Roughness

by Fred Chen on 04-26-2022 at 6:00 am


The list of possible stochastic patterning issues for EUV lithography keeps growing longer: CD variation, edge roughness, placement error, defects [1]. The origins of stochastic behavior are now well-known. For a given EUV photon flux into the resist, only a limited fraction is absorbed. Since the absorption varies by less than 5% with dose [2], the absorbed photon number per unit area practically follows a Poisson distribution [3]. The Poisson distribution is much like a normal distribution whose standard deviation is the square root of the mean, yet truncated at zero (no negative values allowed). Prior work has already shown that the stochastic edge appearance is smoothed by resist blur [4]. The resist blur is taken to be a continuous function (e.g., Gaussian with sigma = 2 nm), but this does not take into account the actual random secondary electron generation yield [5] following EUV photon absorption. Ionized electrons do not need to ionize other electrons to release energy; they can also lose energy through plasmons and vibrational excitations [6]. In this article, we will explore electron number randomness as an extra stochastic factor in EUV lithography.

The electron yield per absorbed photon is estimated to be ~3 for organic chemically amplified resists [6], and ~8 for metal oxide resists [7]. Instead of being fixed numbers, these should be taken to be typical or average values; the actual number comes from a second Poisson distribution, distinct from that for photon absorption. Then the blur amplitude should naturally be scaled according to the actual electron number. Thus, secondary electrons effectively compound the stochastic behavior.
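A quick Monte Carlo illustrates how the second Poisson stage compounds the noise. The per-pixel photon mean below is an arbitrary illustrative value; the electron yield of 3 per absorbed photon matches the chemically amplified resist estimate above.

```python
import numpy as np

# Two-stage stochastics: Poisson photon absorption, then Poisson secondary
# electron generation per absorbed photon.
rng = np.random.default_rng(0)
n_trials = 200_000
mean_photons, mean_electrons = 10.0, 3.0   # illustrative per-pixel / per-photon means

# Stage 1: absorbed photons per pixel follow a Poisson distribution.
photons = rng.poisson(mean_photons, n_trials)
# Stage 2: the electrons from N photons are a single draw from Poisson(3N),
# since a sum of N independent Poisson(3) counts is Poisson(3N).
electrons = rng.poisson(mean_electrons * photons)

cv_photons = photons.std() / photons.mean()        # relative noise, photons only
cv_electrons = electrons.std() / electrons.mean()  # compounded relative noise
print(cv_photons, cv_electrons)
```

The relative noise (standard deviation/mean) of the electron count comes out larger than that of the photon count alone, which is exactly the compounding effect described.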

Edge deformation is a natural generalization of edge roughness, one of the known manifestations of stochastic behavior; the most obvious example is the deviation of a contact or via shape from circularity. For a ~20 nm feature size, Figure 1a shows the edge deformation when the electron yield per photon is fixed, whereas Figure 1b shows the same when the electron yield per photon follows a Poisson distribution with an average value of 3 electrons per photon. The resist blur is modeled as 4 successive Gaussian convolutions (sigma = 1 nm), giving an effective sigma of 2 nm.
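The effective blur follows from variances adding under convolution: four sigma = 1 nm Gaussians compose to sqrt(4 × 1²) = 2 nm. A quick numerical check (my own sketch, not the simulator used for the figures):

```python
import numpy as np

# Compose 4 successive sigma = 1 nm Gaussian convolutions on a 1 nm grid and
# measure the effective blur of the impulse response.
grid = np.arange(-8, 9)                      # 1 nm pixels, +/- 8 sigma support
kernel = np.exp(-grid**2 / 2.0)              # sigma = 1 nm Gaussian
kernel /= kernel.sum()

x = np.zeros(201)
x[100] = 1.0                                 # unit impulse
for _ in range(4):
    x = np.convolve(x, kernel, mode="same")

pos = np.arange(201) - 100
eff_sigma = np.sqrt((x * pos**2).sum() / x.sum())
print(eff_sigma)                             # ~2.0 nm effective blur sigma
```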

Figure 1a. 16 simulation runs with 2nm blur and fixed secondary electron yield. The assumed resist layer absorption is 18% and the dose 60 mJ/cm2. Grid pixel size is 1 nm x 1 nm.

Figure 1b. 16 simulation runs with 2nm blur and secondary electron yield following a Poisson distribution with mean=3 electrons per photon. The same conditions as in Figure 1a were assumed.

Even without calculating the individual via areas, the difference in appearance is already striking. Besides increasing the photon dose, increasing the electron yield per photon has also been suggested as a way to keep stochastic effects in check, by reducing the standard deviation/mean ratio. Even so, the electron number is constrained by the energy needed for ionization (~10 eV); an EUV photon has only enough energy for no more than 9 ionized electrons. A higher photon energy, i.e., a shorter wavelength, can raise the upper limit. However, increasing the electron number also increases the range of electron paths [8]. This increases blur, which is fundamentally detrimental to resolution [9].

References

[1] https://www.prnewswire.com/news-releases/new-stochastics-solution-from-fractilia-enables-semiconductor-euv-fabs-to-control-multi-billion-dollar-industry-yield-problem-301506120.html

[2] R. Fallica et al., “Dynamic absorption coefficients of chemically amplified resists and nonchemically amplified resists at extreme ultraviolet,” J. Micro/Nanolith. MEMS MOEMS 15, 033506 (2016).

[3] https://en.wikipedia.org/wiki/Poisson_distribution

[4] https://www.linkedin.com/pulse/euv-resist-absorption-impact-stochastic-defects-frederick-chen

[5] C. E. Huerta et al., “Secondary electron emission from textured surfaces,” J. Phys. D: Appl. Phys. 51, 145202 (2018).

[6] J. Torok et al., “Secondary Electrons in EUV Lithography,” J. Photopolym. Sci. & Tech. 26, 625 (2013).

[7] Z. Belete et al., “Stochastic simulation and calibration of organometallic photoresists for extreme ultraviolet lithography,” J. Micro/Nanopattern. Mater. Metrol. 20, 014801 (2021).

[8] https://stats.stackexchange.com/questions/230302/is-there-a-relation-between-sample-size-and-variable-range; http://euvlsymposium.lbl.gov/pdf/2007/RE-08-Gallatin.pdf.

[9] https://www.linkedin.com/pulse/blur-wavelength-determines-resolution-advanced-nodes-frederick-chen

This article first appeared in LinkedIn Pulse: Adding Random Secondary Electron Generation to Photon Shot Noise: Compounding EUV Stochastic Edge Roughness

Also read:

EUV Resist Absorption Impact on Stochastic Defects

Etch Pitch Doubling Requirement for Cut-Friendly Track Metal Layouts: Escaping Lithography Wavelength Dependence

Horizontal, Vertical, and Slanted Line Shadowing Across Slit in Low-NA and High-NA EUV Lithography Systems


A MasterClass in Signal Path Design with Samtec’s Scott McMorrow

by Mike Gianfagna on 04-25-2022 at 10:00 am


We all know signal integrity and power integrity are becoming more important for advanced design. Like package engineering, the obscure and highly technical art of SI/PI optimization has taken center stage in the design process. And the folks who command expertise in these areas have become the rock stars of the design team. I had an opportunity to speak with one of those rock stars recently. Scott McMorrow is Samtec's Strategic Technologist for the company's 224 Gbps R&D work. Scott has had a storied career in all manner of signal path design and optimization. What follows is essentially a MasterClass in signal path design. If you want your next system to work, this is important stuff. Enjoy.

Signal integrity and power integrity are disciplines that have been around for a while. For a long time, they were “fringe” activities – highly complex, hard-to-understand work done by rare experts. While the work is still quite complex, SI and PI are now mainstream, critical activities in almost all designs. What do you think drove this change? 

Simply, systems break when SI and PI are not considered.  In my consulting career prior to joining Samtec, a considerable number of customers requested my services in SI and PI because they had current or previous designs that had failed either in testing or at customer sites.  These sorts of things tend to sensitize managers and directors to the importance of deep SI and PI work.  What has now conspired against complacent design is the physics. 

At today’s data rates switches and AI processors are using extraordinary amounts of power, sometimes multiple kilowatts. There are systems that require over 1000 A of current at less than 1 V, and ICs that require 600 A at sub-µs rise times. This requires a power system capable of delivering mΩ and sub-mΩ impedance targets, which are difficult to engineer and measure. At these high switching currents, low frequency magnetic fields require careful management of component selection and via placement to minimize system noise and guarantee reliable operation.

As the speed and power requirements for silicon increase, the probability that previous “Known Good Methods” will work decreases. Approximations and assumptions developed for 10 Gbps or 28 Gbps interconnect may not be valid as we begin to reach the statistical limits of signal recovery. At 112 Gbps PAM4, with a risetime of approximately 10 ps (20%/80%), a signal bandwidth (BW) > 40 GHz (1.5 times Nyquist), and a bit time < 20 ps (< 10 ps for 224 Gbps PAM4), there is very little margin for noise. Crosstalk and power systems are the primary contributors that must be contained. These require system interconnect bandwidths of 50-90 GHz. For each performance step (56 Gbps PAM4 to 112 Gbps PAM4, for example), the bandwidth and noise in the system essentially double. This requires an SI engineer to accurately model and measure across a wider bandwidth. For example, Samtec SI engineers routinely model to 110 GHz and measure using 67 GHz and 110 GHz Vector Network Analyzers (VNAs).
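The bandwidth and timing figures quoted in that answer follow from quick arithmetic; this sketch uses only the numbers given above (for PAM4, the symbol time is what sets the "< 20 ps" figure):

```python
# Back-of-envelope PAM4 link figures.
def pam4_figures(bitrate_gbps):
    baud = bitrate_gbps / 2.0        # PAM4 carries 2 bits per symbol (Gbaud)
    nyquist = baud / 2.0             # Nyquist frequency in GHz
    bw = 1.5 * nyquist               # "1.5 times Nyquist" signal bandwidth
    symbol_ps = 1000.0 / baud        # symbol (unit) interval in ps
    return nyquist, bw, symbol_ps

for rate in (56, 112, 224):
    nyq, bw, t = pam4_figures(rate)
    print(f"{rate} Gbps PAM4: Nyquist {nyq:.0f} GHz, BW ~{bw:.0f} GHz, symbol {t:.1f} ps")
```

For 112 Gbps PAM4 this gives a 28 GHz Nyquist, a ~42 GHz bandwidth (the "> 40 GHz" above), and a ~17.9 ps symbol time; each doubling of the data rate doubles both.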

The term “signal path” has taken on new meaning in the face of the convergence of multiple technologies found in contemporary designs. Can you comment on that evolution? What does a signal path entail in advanced designs today? What role does convergence play, and what new pieces will be added going forward? 

Signal interconnect in the last 20 years has always been a combination of copper, optics, and even radio transmission. From a cost tradeoff perspective, copper is the least expensive for the short distances as seen in system electronics enclosures and racks.  Up until recent years, a full copper interconnect was possible up to 3 m spanning a full rack, with the transition to optics occurring at the Top of Rack (TOR) switch to extend down a data center rack.  Although fiber optics is significantly less expensive than copper cable, the cost associated with electrical to optical conversion in the optical module is much more expensive than direct attach copper cables.  But, as data rate increases, the “reach” of electrical cables is reducing.  At 112 Gbps PAM4 and 224 Gbps PAM4 the architecture of switch locations in a rack must change to keep interconnect losses within design targets of about -31 and -39 dB from silicon-to-silicon in the link. At 112 Gbps, data center architects may need to place the TOR switch in the middle (a Middle of Rack switch?) to keep direct attach copper cable lengths to 2 m.  At 224 Gbps PAM4, multiple switch boxes per rack may be needed to keep total cable length to 1 m to remain within the end-to-end loss budgets.
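As a rough illustration of how an end-to-end loss budget translates into copper reach: the 15 dB fixed loss (packages, PCB breakout, connectors) and 8 dB/m cable loss below are assumed values chosen only for illustration, not Samtec data; only the -31 dB budget and the 2 m figure come from the answer above.

```python
# Rough copper-cable reach estimate from an end-to-end loss budget.
# fixed_db and cable_db_per_m are illustrative assumptions.
def max_cable_m(budget_db, fixed_db, cable_db_per_m):
    return (budget_db - fixed_db) / cable_db_per_m

# -31 dB silicon-to-silicon budget at 112 Gbps PAM4, assuming 15 dB of fixed
# loss and 8 dB/m of cable loss at Nyquist:
print(max_cable_m(31.0, 15.0, 8.0))   # 2.0 m, consistent with the 2 m figure
```

The same arithmetic shows why a tighter budget, or the package losses mentioned later, force shorter cables or a rearchitected rack.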

At lower data rates, signals could be transmitted entirely on copper PCB interconnects until they reach the front panel module (QSFP, OSFP, etc.). However, to improve the loss budget, newer systems utilize Samtec Flyover® technology to reduce total loss. This is accomplished by using from 34 AWG to 30 AWG cable that has been engineered to work in the high temperature environment of modern electronics chassis.  Flyover technology extends copper’s usefulness to 112 Gbps PAM4 and 224 Gbps PAM4 operation. However, even this is a temporary measure.  Today we use Flyover technology from a PCB mounting location near to the silicon, but on the PCB.  However, at 224 Gbps PAM4 the losses in the silicon package copper traces accumulate to the point that one third of the system loss budget is accounted for simply in the package substrates of the transmitter and receiver, which conspires to reduce the total available external reach.


To fight “loss erosion” at 224 Gbps PAM4 several potential changes are posited by designers and architects:

  • Exit the silicon through optical fiber interconnect
    • This will be the “future”, but that future is a long way off, due to the complexity of designing silicon with mixed electrical and optical technology.
    • This future also requires full optical interconnect throughout the system, rack and data center, which is extremely expensive.
  • Move the electrical-to-optical conversion to a device mounted on the package, the so called Co-Packaged Optics (CPO).
    • This removes electrical transmission issues entirely for the external system, but greatly increases total cost, because of the need to fly optical for all external interconnects.
    • Placing an optical component on an IC package removes mixed silicon technology as a problem, since the optical device can be designed with the optimal process. However, the rugged environment on the package of what can approach a 600 W beast of a chip is daunting for many optical technologies.
  • Route signals off package via Flyover technology.
    • Flyover solutions are proven to reduce in-box interconnect losses and can be applied to packages.
    • This will work to achieve reliable 224 Gbps PAM4 channel operation, but it is proving hard to scale the connectors for attachment to the size needed for current packages.
    • As a result, package architectures are changing to provide more area for interconnect attachment.

Given the demands presented by form factor, density and performance, what are the considerations for materials involved in high-performance channels? Are there new materials and/or configurations on the horizon? Where does optical fit?

See above.  Materials will move to the lowest loss possible, but there is a bound set by the size of the copper conductors used.  Cable is lower loss than PCB trace simply because the conductor circumference is 2 – 3x larger than PCB traces. Inside the package designers will need to use materials that can withstand IR reflow during assembly along with operating temperatures from 85 – 120 °C near the die.  Many materials that were adequate for external or in-box usage are untenable for on-package use.

In terms of data rates, what will happen over the next five years? What will be a state-of-the-art data rate in five years, and how will we get there? 

This is a good question. Realistically, 56 Gbps PAM4 designs will be around for years to come, as 112 Gbps PAM4 designs are just prototyping. 224 Gbps PAM4 will be the next step in the data rate progression with a signal rise time of 5 ps and a BW > 80 GHz. Although test silicon is being built now, I suspect it will take three years for the early prototype systems to be revealed and five years for production to begin. By that time, we will be looking at how to either utilize higher order transmission encoding (PAM8, PAM16) or abandon copper totally and make the full transition to optical in about 10 years. This might be a good time for us copper interconnect specialists to retire.

There it is, a MasterClass in signal path design. I hope you found some useful nuggets. You can read more about Samtec here. 

Also read:

Passion for Innovation – an Interview with Samtec’s Keith Guetig

Webinar: The Backstory of PCIe 6.0 for HPC, From IP to Interconnect

Samtec, Otava and Avnet Team Up to Tame 5G Deployment Hurdles


Assembly Automation. Repair or Replace?

by Bernard Murphy on 04-25-2022 at 6:00 am


It is difficult to imagine an SoC development team not using some form of automation to assemble their SoCs; the sheer complexity of the assembly task for modern designs is already far beyond hand-crafted top-level RTL. An increasing number of groups have already opted for solutions based on the IP-XACT integration standard. Still, a significant percentage use their own in-house crafted solutions. The solution of choice for many has been spreadsheets and scripts: spreadsheets to capture aspect-wise information on instances and connections, and scripts to convert this bank of spreadsheets into full SoC RTL. Great solutions, but eventually we must ask a perennial question. When reviewing in-house assembly automation: repair or replace?

Teams rightly take great pride in their creations, which serve their purposes well. But like all in-house inventions, with time, these solutions age. Original developers move on to other projects or companies. Designs become larger and must be distributed to geographically diverse teams. Local know-how must be replaced by professional training and support. Capability expectations (internal and external) continue to rise – more automation, directly integrating the network-on-chip, supporting traceability. Inevitably the organization must ask, “Should we continue to repair and enhance our in-house software, with all the added overhead that implies, or should we replace it with a professionally supported product?”

Scalability

Other groups copy successful in-house implementations, which they then modify to their own needs. Maybe there’s a merger with a company which has its own automation. Organizationally, your automation quickly becomes fragmented, with little opportunity to share code, design data or know-how. No one is eager to switch to another in-house solution in preference to the automation they already know. The only way to break this deadlock is to consider a neutral, standards-based platform.

A common platform immediately solves problems of sharing data between teams; common platforms encourage shareable models. For training and support, let a professional supplier manage that headache. For continuous improvement against diverse requirements across many design teams, let the software product supplier manage and prioritize demand, and produce regular releases featuring fixes, performance improvements and enhancements.

Enhanced capabilities

There’s a widely held view in technology businesses that no one is going to switch to a new product purely for incremental improvements. Prospects will buy in only to must-have advantages that would be out of reach if they didn’t switch. One opportunity here is closer automation linkage between the endpoint IPs and the network-on-chip, to better manage coupling between changes in network interfaces, performance expectations, address offsets, and power management. Fully exploiting the potential benefits is a journey, but as a provider of both the integration and network-on-chip technologies, Arteris IP is already on this journey.

Another high-demand capability is re-partitioning designs, for emulation and prototyping, for floorplanning, power management and reuse. I’ve talked elsewhere about the pain of manual re-partitioning, which limits the options you can explore. You can automate this process with truly interactive what-if analysis, experimenting with new configurations in real time.

A more recent demand is for traceability support. In safety-critical systems and in embedded systems with close coupling between the system and the SoC, compliance between system requirements and implementation in silicon is mandatory. As requirements traceability automation in software development has become common, there is a growing expectation from OEMs and Tier 1s that similar support should be provided for hardware implementation. Accurate linking between requirements tools and SoC design semantics is a complex task, beyond the scope of most in-house development scripts. Arteris IP now offers this capability in its tool suite.

Legacy compatibility

All of this sounds interesting, but what about the sunk costs you have in all those spreadsheets and scripts? Will this solution only have value to you on completely new designs? Not at all – you can start with what you already have. The Arteris IP SoC & HSI Development platform can import CSV files with a little scripting support. It can also directly read IP and design RTL files, supported by intelligent name matching for connectivity, again perhaps with a little interactive help. Once you have set up scripts and mappings, you should be able to continue to use those legacy sources, which is critical for long-term maintenance.
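The scripting glue involved is typically modest. As an illustration, here is a sketch of reading a legacy connectivity spreadsheet exported to CSV; the column names are invented for this example, since the real columns are whatever your spreadsheets already use.

```python
import csv
import io

# Hypothetical legacy spreadsheet export: one bus-interface connection per row.
legacy = """src_inst,src_port,dst_inst,dst_port
cpu0,axi_m0,noc,s0
noc,m0,ddr_ctrl,axi_s0
"""

connections = list(csv.DictReader(io.StringIO(legacy)))
for c in connections:
    print(f"{c['src_inst']}.{c['src_port']} -> {c['dst_inst']}.{c['dst_port']}")
```

A small adapter like this is usually all it takes to map an in-house spreadsheet convention onto a platform's import format.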

Many of your legacy scripts will probably no longer be needed, especially those relating to netlist generation and consistency checking. Those facilities are provided natively in the SoC/HSI platform. Some scripts, for IO pin-muxing or power sequence control, for example, can initially be used as-is if the generator is sufficiently decoupled from the rest of the design. These scripts can also, if you wish, be redesigned to work under the SoC/HSI platform. You can build your scripts in Python using an API operating at an easy-to-understand semantic level (clocks, bus interfaces, etc.).
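To give a feel for what "semantic level" means here, this is a hypothetical sketch; none of these class or method names are the real Arteris API, and they only illustrate scripting at the level of instances and bus interfaces rather than individual nets.

```python
# Hypothetical semantic-level assembly script (illustrative names only).
class Design:
    def __init__(self):
        self.instances = {}
        self.connections = []

    def add_instance(self, name, ip_type):
        self.instances[name] = ip_type
        return name

    def connect(self, initiator, target):
        # one call per bus interface, not one per wire
        self.connections.append((initiator, target))

d = Design()
cpu = d.add_instance("cpu0", "rv_core")
ddr = d.add_instance("ddr_ctrl", "ddr4_ctrl")
d.connect(f"{cpu}.axi_m0", f"{ddr}.axi_s0")
print(d.connections)
```

The point of working at this level is that one interface-level `connect` call expands into the dozens of per-signal hookups a hand-written netlist script would otherwise have to enumerate.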

In summary, it’s never been easier to switch and now you have compelling reasons to switch. If you want to learn more, click HERE.

Also read:

Experimenting for Better Floorplans

An Ah-Ha Moment for Testbench Assembly

Business Considerations in Traceability