Making UVM faster through a new configuration system
by Daniel Payne on 12-26-2023 at 10:00 am

The Universal Verification Methodology (UVM) is a popular way to help verify SystemVerilog designs, and it includes a configuration system that unfortunately has some speed and usage issues. Rich Edelman from Siemens EDA wrote a detailed 20-page paper on how to avoid these issues, and I’ve gone through it to summarize the highlights for you. Verification engineers use a UVM configuration database to set values, then get those values later in their UVM test. One example of setting and getting a value of type ‘T’ is:

uvm_config#(T)::set(scope, instance_path_name, field_name, value);

uvm_config#(T)::get(scope, instance_path_name, field_name, value);
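The calls above follow the paper’s shorthand. For reference, standard UVM expresses the same idea through uvm_config_db; below is a minimal sketch of the common virtual-interface use case mentioned next. The interface, hierarchy path, and component names (dut_if, uvm_test_top.env.agent, my_driver, my_test) are illustrative assumptions, not taken from the paper.

import uvm_pkg::*;
`include "uvm_macros.svh"

// Testbench top: publish the virtual interface for everything under the agent.
// dut_if is a user-defined interface assumed to exist elsewhere.
module top;
  dut_if dif ();   // interface instance wired to the DUT (DUT not shown)
  initial begin
    uvm_config_db#(virtual dut_if)::set(null, "uvm_test_top.env.agent*", "vif", dif);
    run_test("my_test");  // "my_test" assumed to be a registered uvm_test
  end
endmodule

// Driver: retrieve the same handle by field name "vif" during build_phase.
class my_driver extends uvm_driver;
  `uvm_component_utils(my_driver)
  virtual dut_if vif;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    if (!uvm_config_db#(virtual dut_if)::get(this, "", "vif", vif))
      `uvm_fatal("NOVIF", "virtual interface not set in uvm_config_db")
  endfunction
endclass

The string instance path ("uvm_test_top.env.agent*") is the kind of name that later makes wildcard matching, and its lookup cost, an issue.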

Connecting the UVM testbench to the device under test uses the configuration database to pass the virtual interfaces. There are three problems with using the UVM configuration:

  • Large code base: roughly 2,600 lines of code
  • Exact type matching is required, so ‘int’ and ‘bit’ are not interchangeable
  • Slow execution

Consider the slow-code problem: with thousands of calls to set() using names with wildcards, it can take up to 30 minutes to complete the ‘set’ and ‘get’ phases.

Rich proposes a new approach to UVM configuration that is much faster, completing in just a few seconds by comparison.

If your UVM code avoids using wildcards and has few ‘set’ commands, then your code will run faster.

Possible solutions to the UVM configuration issues are:

  • Use a global variable instead
  • Use UVM configuration with one set() call
  • Use UVM configuration with a few set() calls
  • Use a configuration tree
  • Try something different

That last approach, trying something different, is the new solution. It keeps the set() and get() API, then simplifies things by removing the parameterization of the configurations, removing precedence, and simplifying the lookup algorithm. The result of this new approach is much faster execution.

Your new configuration item is defined in a class derived from ‘config_item’, and the example below shows ‘int value’ as the property being set. For debug purposes you add a pretty-print function.

class my_special_config_item extends config_item;
  function new(string name = "my_special_config_item");
    super.new(name);
  endfunction

  int value;  // the property carried by this configuration item

  // Pretty-print helper for debug
  virtual function string convert2string();
    return $sformatf("%s - value=%0d <%s>", get_name(), value, super.convert2string());
  endfunction
endclass

The ‘config_item’ has a name attribute, and lookups use this name together with the instance name. The configuration object also has a get_name() function to return the name. To find any “instance_name.field_name” combination, the configuration database uses an associative array, which makes both lookup and creation fast.
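As a minimal sketch of that idea (my own illustration based on the description above, not code from the paper), an exact-match store can be keyed by the concatenated instance and field names:

// Sketch only: exact-match storage keyed by "instance_name.field_name".
class config_store;
  static config_item db[string];  // associative array gives fast create and lookup

  static function void set(string inst_name, string field_name, config_item value);
    db[{inst_name, ".", field_name}] = value;
  endfunction

  static function bit get(string inst_name, string field_name, output config_item value);
    string key;
    key = {inst_name, ".", field_name};
    if (!db.exists(key)) return 0;
    value = db[key];
    return 1;
  endfunction
endclass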
For traceability you can find out who called set() or get(), because a file name and line number are passed as arguments to the set() and get() calls.

set(null, "top.a.b.*", "SPEED", my_speed_config, `__FILE__, `__LINE__) 
get(null, "top.a.b.c.d.monitor1", "SPEED", speedconfig, `__FILE__, `__LINE__)

The accessor queue can be printed during debug to see who called set() and get().
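One way such an accessor record could be kept, again just a sketch rather than the paper’s actual data structure, is a queue of file/line entries on each item:

// Sketch only: record who touched an item, using the file/line passed to set()/get().
typedef struct {
  string kind;         // "set" or "get"
  string file_name;
  int    line_number;
} accessor_s;

class traced_config_item extends config_item;
  accessor_s accessors[$];  // accessor queue, printable during debug

  function new(string name = "traced_config_item");
    super.new(name);
  endfunction

  function void record_access(string kind, string file_name, int line_number);
    accessor_s a;
    a.kind        = kind;
    a.file_name   = file_name;
    a.line_number = line_number;
    accessors.push_back(a);
  endfunction

  function void print_accessors();
    foreach (accessors[i])
      $display("%s %s:%0d", accessors[i].kind, accessors[i].file_name, accessors[i].line_number);
  endfunction
endclass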

Supporting wildcards required adding a lookup mechanism based on containers. Consider the instance name ‘top.a.b.c.d.*_0’.

Container Tree

The wildcard part of the instance name is handled by using the container tree, instead of the associative array.
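The paper’s container tree itself is not reproduced here, but the flavor of the idea can be sketched (my own illustration): split the instance name into hierarchy segments, keep one container per segment, and fall back to glob matching only for segments that contain a wildcard. Only the lookup side is shown, and uvm_is_match() from uvm_pkg is borrowed for the wildcard compare:

// Sketch only: hierarchical containers with wildcard fallback.
class container;
  string      segment;              // e.g. "top", "a", "b", or a wildcard like "*_0"
  container   children[string];     // child containers keyed by segment name
  config_item items[string];        // items stored at this level, keyed by field name

  function new(string segment = "");
    this.segment = segment;
  endfunction

  // Walk the tree: exact segments hit the associative array directly,
  // wildcard segments fall back to glob matching.
  function config_item lookup(string path[$], string field_name, int idx = 0);
    if (idx == path.size())
      return items.exists(field_name) ? items[field_name] : null;
    if (children.exists(path[idx]))
      return children[path[idx]].lookup(path, field_name, idx + 1);
    foreach (children[seg])
      if (uvm_pkg::uvm_is_match(seg, path[idx]))
        return children[seg].lookup(path, field_name, idx + 1);
    return null;
  endfunction
endclass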

Summary

Sharing data between the module/instance world and the class-based world in a UVM testbench can be done using the UVM configuration database; just be aware of the potential slowdowns. If your methodology uses lots of configurations, then consider the new approach introduced here, which is implemented in a package of about 300 lines of code instead of the 2,600 lines in the UVM configuration database file.

Read the full 20-page paper, “Avoiding Configuration Madness The Easy Way,” at Siemens EDA.

SPIE 2023 Buzz – Siemens Aims to Break Down Innovation Barriers by Extending Design Technology Co-Optimization
by Mike Gianfagna on 12-26-2023 at 6:00 am

Preventing the propagation of systematic defects in today’s semiconductor design-to-fabrication process requires many validation, analysis and optimization steps. Tools involved in this process can include design rule checking (DRC), optical proximity correction (OPC) verification, mask writing, wafer printing metrology/inspection (to gauge the process), and physical failure analysis to confirm failure diagnosis. The exchange of information and co-optimization between these steps is a complex process, with many feed-forward and feed-back loops. Communication is often hampered by “walls” between various parts of the process technology, slowing innovation. At the recent SPIE conference, Siemens EDA presented a keynote address that proposed a series of approaches to break down these walls and improve the chip design-to-manufacturing process. Read on to see how Siemens aims to break down innovation barriers by extending design technology co-optimization.

About the Keynote

SPIE is the international society for optics and photonics. The organization dates back to 1955 and its conference has become a premier event for advanced design and manufacturing topics. At this year’s event, Siemens presented the keynote that is the topic of this post. There were many contributors to the presentation, including Le Hong, Fan Jiang, Yuansheng Ma, Srividya Jayaram, Joe Kwan, Siemens EDA (United States); Doohwan Kwak, Siemens EDA (Republic of Korea); Sankaranarayanan Paninjath Ayyappan, Siemens EDA (India). The title of the talk was Extending design technology co-optimization from technology launch to HVM.

The talk was part of a session on design technology co-optimization (DTCO). This concept isn’t new, but Siemens looked at its application across a broader scope of the process, from design to high-volume manufacturing (HVM). The ideas and results presented have significant implications. Let’s take a closer look.

What Was Presented

First, a look at the current state of DTCO usage across key parts of the ecosystem was presented. From a design perspective, many advanced fabless companies have a DFM team that is seeing the limits of a pattern-based approach. What is really needed is new technology to facilitate yield learning without foundry dependence.

The foundries are using brute-force pattern-based machine learning approaches, which are costly but not completely effective. They are also seeking efficient information mining of the massive manufacturing data they create. Equipment vendors and EDA vendors have been working closer together and are coming up with more efficient machine learning solutions.

Stepping back a bit, it was pointed out that there are walls between the design and manufacturing phases of the process. Fabless companies create the design, perform DRC and design for manufacturing (DFM), then toss it over the wall to the OPC/RET team within the foundry or IDM. Tasks such as OPC and verification are done on the design, and then the data is tossed over another wall for mask writing and metrology/inspection. The final wall is for fabrication. Here, electrical test and failure analysis are done. By the time a root cause of failure is found, 6-18 months have passed. That’s a very long feedback loop. The graphic at the top of this post depicts this process.

DTCO attempts to break down the walls, but the methodologies available are incomplete. Traditional DTCO starts very early in process development. Starting with a scaling need, a standard cell is defined, and synthesis, place, and route are performed to come up with basic patterns and measure the performance and power. SRAM yielding is also done and that data loops back to the standard cell design.

What was presented at the SPIE keynote was a way to extend this co-optimization concept to the entire process from design to manufacturing. The approach involves enabling an easier flow of information from design all the way to the final process and physical analysis by creating an information channel.

While this sounds straightforward, it is not. Many challenges were discussed, along with concrete approaches to mitigate the issues. For example, early designs can be created with layout synthetic generators to help calibrate the process to real design issues as the process is developed. This can alleviate many of the surprises currently faced with early process tapeouts.

Dealing with massive data volumes is another challenge. Using new sophisticated compression techniques, a 30X improvement is possible. This improves the data handling and analysis tasks quite a bit.  A concept called explainable AI can help to find root causes of problems much faster. The ability to re-train AI models later in the manufacturing process without invalidating earlier results is another area for improvement. Also in the data analysis area are techniques to deal with “imbalanced data”. For example, there may be one hot spot found in 100,000,000 patterns.

Putting all this together can create a much more efficient end-to-end design flow, as shown in the figure below.  

Platform details

To Learn More

The impact of the approaches outlined in this keynote presentation is substantial. You can view the presentation and access a white paper on the process here.  There’s a lot of useful information to be gained.  And that’s how Siemens aims to break down innovation barriers by extending design technology co-optimization.


Application-Specific Lithography: Sense Amplifier and Sub-Wordline Driver Metal Patterning in DRAM
by Fred Chen on 12-25-2023 at 10:00 am

On a DRAM chip, the patterning of features outside the cell array can be just as challenging as those within the array itself. While the array contains the most densely packed features, at least they are regularly arranged. Outside the array, the regularity is lost, yet in the most difficult cases the pitches can still be comparable with those within the array, though generally larger. Such features are the lowest metal lines in the periphery for the sense amplifier (SA) and sub-wordline driver (SWD) circuits. A key challenge is that these lines are meandering in appearance, and the pitch varies over a range (Figure 1). The max/min pitch ratio can range from ~1.4 to 2. The imaging performance of two or more pitches together cannot be judged from the imaging performance of each of those pitches by itself.

Figure 1. Varying pitch in metal lines in DRAM periphery. From the right, the pitch is at a minimum, but from the left, it is nearly twice the minimum pitch.

The image of lines for a fixed pitch is constructed from the interference of at least two beams that emerge from the pupil and go through the final objective with numerical aperture NA. The maximum phase error between any two of the beams affects the degradation of the image as it goes out of focus. In an EUV system with 0.33 NA, a 44 nm pitch image can only be formed from two beams, while the 66 nm pitch can be formed from two, three, or four beams. Figure 2 shows the interesting result that the two-beam image has the lowest maximum phase error. This underlies the existence of forbidden pitches with dipole illumination [1]. This has driven the two-mask exposure approach [2].

Figure 2. Phase errors for various images at 66 nm pitch vs 44 nm pitch under 45 nm defocus in a 0.33 NA EUV system. Two-beam images give the least phase error.

Unfortunately, only 10% of the pupil in the 0.33 NA EUV system supports two-beam imaging for both 44 nm and 66 nm pitches (Figure 3). Light cut out at the condenser reduces the light available for exposure [3]. The usable pupil fill is further reduced to 0 by considering pupil rotation across slit [4].

Figure 3. Portion of 0.33 NA EUV pupil supporting two-beam imaging for 44 nm and 66 nm pitches, at slit center (blue) and slit edge (orange) after 18 deg rotation. No part of the pupil supports the imaging consistently across slit.

It gets worse for the High-NA 0.55 NA EUV system, as there will definitely be at least three beams emerging from the pupil and the depth of focus is reduced further by the higher NA.

If a DUV 1.35 NA system were used instead, double patterning would be necessary to achieve both the 44 nm and 66 nm pitches. Thus, 88 nm and 132 nm pitches would actually be exposed. Both of these use two-beam imaging, which bodes well for finding an illumination that has sufficient depth of focus for both pitches (Figure 4).

Figure 4. Phase errors for 88 nm and 132 nm pitches under 45 nm defocus in an 1.35 NA ArF (DUV) system, for an optimized dipole illumination shape (inset).

At this point, we can generalize to set some lithographic requirements for metal line patterning for SA and SWD circuits. In order to maintain two-beam imaging, the maximum-to-minimum pitch ratio should be <2, corresponding to half-pitch k1=0.5 and k1=0.25, respectively. For a max/min pitch ratio of 1.5, current 1.35 NA DUV systems can support down to 80 nm minimum pitch, 120 nm maximum pitch without double patterning. Once double patterning is used, however, the maximum line pitch should not exceed ~90 nm. The max/min pitch ratio may need to be correspondingly adjusted. Due to the meandering nature of the metal lines, it would not be unreasonable to have, for example, 3 metal lines (2 pitches) in one section span the same extent as 4 metal lines (3 pitches) in another section. This even-odd disparity could be resolved by splitting and stitching the odd metal feature, as shown in Figure 5 [5,6].

Figure 5. Splitting a layout containing even and odd numbers of lines can be resolved by splitting the odd feature to be stitched back together with the double patterning.
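As a quick back-of-the-envelope check (my own arithmetic, not taken from the article), the quoted 80 nm minimum / 120 nm maximum pitch window is consistent with keeping the half-pitch k1 factor inside the two-beam range of 0.25 to 0.5 for a 193 nm, 1.35 NA system:

\[
k_1 = \frac{\text{half-pitch} \times \mathrm{NA}}{\lambda}, \qquad
k_1(80\,\mathrm{nm\ pitch}) = \frac{40 \times 1.35}{193} \approx 0.28, \qquad
k_1(120\,\mathrm{nm\ pitch}) = \frac{60 \times 1.35}{193} \approx 0.42
\]

Both values sit inside the 0.25 to 0.5 window quoted above.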

When the minimum line pitch gets smaller than ~40 nm (beyond 13nm DRAM node [7]), we should expect the DUV double patterning to become quadruple patterning (Figure 6). But why not consider EUV single exposure patterning?

Figure 6. Quadruple patterning (with a 1.35 NA DUV system); each color represents a separate exposure.

An additional consideration for SA and SWD metal patterning is that the layout requires two dimensions to accommodate the perpendicular bit line and word line directions. This entails the use of X+Y dipole, or cross-dipole illumination, which will restrict the mask types used. Essentially, the illumination can only support pitches in one orientation, and degrades pitches with the other orientation. Masks without pre-designed phase shifts (aka binary masks) suffer from a drop in normalized image log slope (NILS) (Figure 7). EUV currently contends with a lack of the necessary phase-shift masks [8,9]. Hence, two exposures (each already more than the cost of two DUV exposures [10]), one for X orientation, one for Y orientation, would be required.

Figure 7. Cross-dipole illumination reduces NILS for 2-beam imaging with binary masks.

DUV attenuated phase-shift masks (attPSMs) can be designed with 180 deg phase shifts between the bright and dark regions, mitigating this issue (Figure 8).

Figure 8. Cross-dipole illumination still reduces NILS for 2-beam imaging with attPSM masks, but the value stays above 2.

The scenarios described above are summarized in the table below.

Table 1. Scenarios for SA and SWD minimum pitch metal patterning in DRAM.

References

[1] M. Eurlings et al., Proc. SPIE 4404, 266 (2001).

[2] D. Nam et al., Proc. SPIE 4000, 283 (2000).

[3] M. van de Kerkhof et al., Proc. SPIE 10143, 101430D (2017).

[4] A. Garetto et al., J. Micro/Nanolith. MEMS MOEMS 13, 043006 (2014).

[5] Y. Kohira et al., Proc. SPIE 9053, 90530T (2014).

[6] S-Min Kim et al., Proc. SPIE 6520, 65200H (2007).

[7] J. Lee et al., Proc. SPIE 12495, 124950S (2023).

[8] F. Chen, “Phase Shifting Masks for NILS Improvement – A Handicap For EUV,” https://www.linkedin.com/pulse/phase-shifting-masks-nils-improvement-handicap-euv-frederick-chen

[9] A. Erdmann, H. Mesilhy, and P. Evanschitzky, J. Micro/Nanopatterning, Materials, and Metrology 21, 020901 (2022).

[10] L. Liebmann, A. Chu, and P. Gutwin, Proc. SPIE 9427, 942702 (2015).

This article first appeared in LinkedIn Pulse: Application-Specific Lithography: Sense Amplifier and Sub-Wordline Driver Metal Patterning in DRAM

Also Read:

BEOL Mask Reduction Using Spacer-Defined Vias and Cuts

Predicting Stochastic Defectivity from Intel’s EUV Resist Electron Scattering Model

China’s hoard of chip-making tools: national treasures or expensive spare parts?


Preventing SOC Schedule Delays Using the Cloud
by Ronen Laviv on 12-25-2023 at 6:00 am

Figure 1: Variable compute usage over one development week

In my previous article, we touched on ways to pull in the schedule. This time I’d like to analyze how peak usage affects project timeline and cost. The graph above is based on a real usage pattern taken from one development week of Annapurna Labs’ 5nm Graviton development.

The graph shows the number of variable servers per hour, per day. A baseline of “always on” compute was removed from the graph to focus on how usage varies across the week. We’ll cover how to address the baseline with savings plans or reserved instances in a different article.

Navigating the Uncertain Waters of On-Premises Compute

Looking at the next compute refresh cycle, estimating the required number of CPUs, memory size, and CPU type for diverse projects involves a significant amount of guesswork, often leading to inefficient resource allocation. While future articles will dive deeper into these specific aspects, this piece focuses on the limitations removed when adding AWS to your on-premises cluster.

When using on-premises compute, companies are forced to choose between oversizing the cluster (creating waste but enabling engineers) or undersizing it, with the opposite results. Imagine purchasing compute resources to accommodate the highest peak you see in the graph of Figure 1. The orange areas surrounding the peak represent unused, paid-for compute. This highlights the crucial tradeoff companies face: balancing the cost of resources against the potential impact on schedule requirements, quality and license utilization.

The Bottleneck Conundrum

So, the IT team, together with R&D leadership, would work to guess the right compute number and reduce the amount of unused compute (upper orange block). Nevertheless, a significant portion of unused compute still remains, representing unutilized, purchased resources, as you can see in the graph of Figure 2.

However, this is not the most critical issue.

The true challenge lies in daily peak usage (white areas). During these periods, projects may face delays due to insufficient on-premises compute capacity that causes engineers or licenses to wait for their jobs to run. This forces companies to make difficult choices, potentially compromising on crucial aspects like:

  • Test coverage: Running fewer tests can lead to undetected issues, impacting product quality and potentially resulting in costly re-design, backend re-runs or ECOs after tapeout.
  • Number of parallel backend trials: Limiting parallel trials slows down the development process, hindering overall project progress. In the new era of AI-driven runs, EDA tools such as Cerebrus (Cadence) or DSO.ai (Synopsys) could directly affect schedule and PPA (Power, Performance, Area) results.
  • Simulations: Reduced simulation or regression testing may lead to problems later in the development cycle. As we all know, bugs found earlier in the product cycle have minimal impact on schedule. Bugs found late in the process often cause re-design, new place & route, timing closure work, … and add redundant cycles to the critical path to tapeout.
  • Formal methodologies: Similar to reduced simulations, cutting back on formal methodologies increases the risk of design flaws and errors. Same problems discussed above for simulations are also relevant here.

These compromises can add risk to both tapeout quality and time to market (posing a potential tapeout delay). Both factors are critical in today’s fiercely competitive landscape.

Some companies attempt to address peak usage issues by shifting capacity from other projects. While this might seem like a quick fix, it has cascading effects, impacting the schedule and quality of the other projects as well. Ultimately, if resources are limited, shifting them around is not always a sustainable solution. If you find yourself in a situation where you don’t have the compute resources for your project, your schedule would probably slip or you may have to compromise on specifications, quality and PPA (Power Performance Area).

Embracing the AWS Cloud
Fortunately, there exists a simple yet powerful solution: AWS Cloud computing eliminates the need for guesswork and waiting for jobs. You only pay for the resources you use, ensuring efficient utilization and eliminating the “orange cost” of unused resources. AWS cloud will help you overcome capacity limitations. Your engineering team should not have any license waiting for a compute node. This will increase your EDA license utilization and engineering productivity.

Addressing the License Challenge

One valid concern regarding cloud adoption might be the question of licenses: “While the cloud solves the compute issue, what about EDA licenses?”.

Your company might not have enough licenses, and you’ll be constrained by them anyway.

This is a legitimate point. Addressing run issues requires a comprehensive approach that considers both compute and licenses. Fortunately, all key EDA vendors are now offering innovative licensing as well as technical solutions specifically designed for cloud usage. By collaborating with your EDA vendors (Cadence, Synopsys, Siemens EDA), you can explore these options and find the solution that best suits your needs.

However, it’s crucial to remember that you may also have EDA licenses that are waiting for compute before they can run. Those also represent a potentially large hidden cost. Idle engineers waiting for runs translate to wasted money on licenses (…and engineering) as well as lost time to market. Similar to the unused compute waste in Figure 2, the white area also affects licensing cost, which can significantly impact your bottom line and time to market. We will explore this topic further in future articles.

Unlocking the Potential of the Cloud for Faster Time to Market

By embracing the cloud, companies can secure several key benefits:

  • Avoid delays: Cloud computing ensures access to the necessary compute power during peak usage periods. This prevents project delays and ensures smooth development.
  • Optimize resource allocation: With the cloud’s pay-as-you-go model, companies can eliminate the waste associated with unused on-premises resources.
  • Variety of compute types: Beyond access to capacity and not waiting for compute, there is also the aspect of the machine type itself: memory size, CPU type, etc. These parameters affect performance as well, and we’ll touch on this topic in a future article.
  • Faster time to market: By eliminating delays and optimizing resources, cloud computing helps companies bring their products to market faster, giving them a competitive edge. ARM reported doing 66% more projects with the same engineering team.

The Cloud: Beyond Technology, a Mindset Shift

To sum up, the cloud is about enabling your team to achieve its full potential and optimize its workflow without having to guess the capacity. The ability to handle peak usage has a direct effect on quality, schedule and engineering/license utilization. In my next article, I’ll talk about more cloud aspects that would result in faster time to market and better utilization of your engineering and licenses.

Disclaimer: My name is Ronen, and I’ve been working in the semiconductor industry for the past ~25 years. In the early days of my career, I was a chip designer and manager; I then moved to the vendor side (Cadence), and I now spend my days assisting companies in their journey to the cloud with AWS. Looking at our industry over the years, and examining its pain points as both a customer and a service provider, I am planning to write a few articles to shed some light on how chip design could benefit from the cloud revolution that is taking place these days.

C U in the next article.


Is Intel cornering the market in ASML High NA tools? Not repeating EUV mistake
by Robert Maire on 12-24-2023 at 9:00 am

  • Reports suggest Intel will get 6 of 10 ASML High NA tools in 2024
  • Would give Intel a huge head start over TSMC & Samsung
  • A big gamble but a potentially huge pay off
  • Does this mean $4B in High NA tool sales for ASML in 2024?

News suggests Intel will get 6 of first 10 High NA tools made by ASML in 2024

An industry news source, Trendforce, reports that Intel will get up to 6 of the 10 High NA ASML tools likely to be shipped in 2024.

The article also quotes Samsung Vice Chairman Kyung Kye-hyun as saying “Samsung has secured a priority over the High-NA equipment technology.”

This seems to imply that Intel is getting the most High NA tools followed by Samsung due to its recently announced $755M investment with ASML in Korea.

This would put TSMC in an unusual third place, which is the opposite of its current dominance of EUV, with 70 percent of the EUV tools in the world.

Trendforce article on High NA tools

$3.5 to $4B in high NA sales for ASML in 2024??

If we assume a tool cost of $350M to $400M and ten tools, that would be between $3.5B and $4B in High NA sales in 2024, even though not all the revenue would likely be recognized in the calendar year.

We think the potential upside from High NA for ASML is not built in to the stock price yet as ten tools sounds like a stretch by most accounts, but if that could be done it would suggest strong upside.

We had suggested in past articles that any weakness due to Chinese sanctions would be more than made up by other sales, and this is just one example.

15 tools in 2025?…..20 tools in 2026? Zeiss is the gatekeeper….

If ASML does indeed get 10 tools shipped in 2024 that sort of ramp would imply an easy ramp to 15 or more tools in 2025 and likely over 20 in 2026.

As with regular EUV tools, lens availability from Zeiss will again be the gating factor with high NA lenses being far more complex than current already difficult EUV lenses.

Given that the EUV source will remain somewhat unchanged and the stage incremental it is really all about the lenses.

Who will get them? Where and When?

If we assume Intel does get the first 6 High NA tools as suggested we would imagine that Intel Portland will get at least the first two tools with other tools following in 2024 going to Arizona and/or Ohio.

Of the other four tools, maybe at least two to three to Samsung with TSMC getting one or two tools.

It’s our guess that TSMC will likely push multi-patterning of current EUV harder rather than jump to High NA right away.

There are numerous industry suggestions that High NA EUV tools are going to be difficult to cost justify versus multi-pattern existing EUV tools. This could either be a smart move by TSMC or a mistake…time will tell. From Intel’s perspective they have no choice but to push hard as their slowness to EUV in the past was one of the reasons that TSMC raced past them in Moore’s law.

It’s clear that Intel does not want a repeat of what happened with the original EUV tools and thus committed early to ASML to get the first copies of High NA tools, as announced over a year ago.

If High NA does indeed pan out, this would be the “leap frog” that Intel needs to catch up.

If our guess of 15 tools in 2025 is close, we would expect the share between Intel, Samsung and TSMC to be a bit more even, with IMEC in Europe, which works closely with ASML, likely getting one as well for R&D.

In 2026, 20 tools is more of a toss-up between the big three foundry/logic makers, likely with the addition of a High NA tool for the recently announced New York R&D center.

We don’t expect memory makers to engage with High NA for the first several years as they are just starting to engage with regular EUV just now and remain 4 to 6 years behind foundry/logic need for EUV lithographic technology.

We would not be surprised if, after 3 years of High NA shipments, Intel has the majority of High NA tools, much as TSMC has the lion’s share of current EUV technology.

A big but necessary bet for Intel

If Intel buys 6 High NA tools in 2024, that implies $2.1B to $2.4B in High NA tool capex alone. That’s a pretty big bet….but if it pays off in catching up to TSMC it will be money well spent. There is no choice, as not doing so would relegate Intel to a trailing, or at best (if lucky) equal, position with TSMC.

High NA will get into production faster than standard EUV did

Although there are significant differences between EUV and High NA EUV tools, the basic concepts are the same. It’s not the quantum leap that EUV was over ARF immersion.

At $350 to $400M a copy, chip makers are going to need to put those new tools to work as quickly as possible.

They are also going to have to focus (pardon the pun) on getting an economic return on single-pattern High NA versus multi-pattern standard EUV, which is probably not a slam dunk in the beginning, just as EUV was hard to prove over multi-pattern ARF immersion (there is still controversy as to the cost advantage).

Does High NA finally leave China behind??

As we have seen with 7NM, China has been able to get EUV like fidelity using multi-patterning ARF immersion.

This workaround seems extendable down to 5NM.

We have our doubts about extending ARF immersion down to 3NM in any sort of usable/economically feasible way. This would tend to suggest that those implementing High NA EUV will be beyond the reach of China’s fabs….at least for the foreseeable future.

The Stocks

We view this news if true, or even close to true, as a significant positive for both ASML & Intel. We however would not view it as a negative, just yet, for TSMC as they likely remain in a strong position in EUV overall.

We think that ASML and its stock could ride the High NA EUV wave throughout 2024 assuming no hiccups in production.

We would further suggest that the recently announced retirements of the CEO and CTO at ASML would not likely have happened unless and until High NA EUV was safely on its way and over any major technology humps or other issues, as I doubt either one would have retired on the weak note of a poor High NA rollout.

High NA is going out on a high note!

Although it may take a while for Intel to see the fruits of its High NA bet we think investors would be willing to overlook the negative impact of the spend required for High NA if the reward is regaining technology leadership…..

Have a great Holiday!!

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

About Semiwatch

Semiconductor Advisors provides this subscription based research newsletter, Semiwatch, about the semiconductor and semiconductor equipment industries. We also provide custom research and expert consulting services for both investors and industry participants on a wide range of topics from financial to technology and tactical to strategic projects. Please contact us for these services as well as for a subscription to Semiwatch

Also Read:

AMAT- Facing Criminal Charges for China Exports – Overshadows OK Quarter

The Coming China Chipocalypse – Trade Sanctions Backfire – Chips versus Equipment

KLAC- OK quarter in ugly environment- Big China $ – Little Process $ – Legacy good


Podcast EP199: How Rambus is Helping to Counter the Security Threats of Quantum Computing with Scott Best
by Daniel Nenni on 12-22-2023 at 10:00 am

Dan is joined by Scott Best, technical director at Rambus. His research areas are memory architectures, 3D packaging, and security processors. Scott joined Rambus in 1998 and has served in many and varied technical roles. He has become one of the most prolific inventors in the company’s history. Over the course of his career at Rambus, he is a named inventor on over 200 patents worldwide.

Dan explores the Rambus Quantum Safe Engine (QSE) with Scott. The architecture is discussed, along with key markets and applications and how Rambus will help with the massive worldwide infrastructure update required to achieve quantum safe cryptography.

In this informative discussion, Scott explains the significant security implications presented by quantum computing – the timeline for when the threat will be real, what is being done across many industries to ensure security is maintained and how Rambus technology fits into the plans.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Giving Back – The Story of One Silicon Valley Veteran’s Journey
by Mike Gianfagna on 12-22-2023 at 6:00 am

The concept of giving back is something many of us have contemplated. Giving back to the community or to support a particular cause. How to respond to those inquiries from our alma mater is another example. These conversations typically focus on giving money to provide needed support. As engineers, we are surrounded by a massive problem that needs attention and money by itself won’t fix it. According to the US Bureau of Labor Statistics, economic projections point to a need for approximately 1 million more STEM professionals than the U.S. will produce at the current rate over the next decade if the country is to retain its historical preeminence in science and technology. That should get your attention; it’s a big problem that money alone can’t fix. This post is about someone who is helping to address this crisis in a personal way. Read on to learn the story of one Silicon Valley veteran’s journey to giving back.

The Changing Landscape

We have all witnessed the shift in innovation that has occurred over the past decade or so. Software is now the central focus for innovation. It defines new products, new user experiences, expanded deployment of AI and essentially sets the stage for what’s next. It’s been said that all companies are software companies. I tend to agree. Underpinning this shift is the fact that custom chips are required to bring new software ideas to life.

The demand for custom silicon is at an all-time high. Growing up in the ASIC business, this shift brings a smile for me. Software and silicon now take center stage around the world. Governments are getting involved as well. The Chips Act is fueling some big demands for growth. According to the Semiconductor Industry Association, “robust federal incentives for domestic chip manufacturing would create an average of nearly 200,000 American jobs annually as fabs are built and add nearly $25 billion annually to U.S. economy.”

Filling all these jobs is a critical part of success, and that means encouraging more enrollment in STEM curricula to increase the flow of new engineering graduates. A well-known Silicon Valley veteran has decided to work with his alma mater to help address this challenge. This is his story.

Rick Carlson Teams Up with Illinois Tech

Rick Carlson

Rick Carlson is vice president of sales, Verific Design Automation. He’s been there for 20 years after joining Verific from AccelChip. Prior to AccelChip, he held positions as vice president of sales for Averant, Synplicity (now Synopsys), Escalade (now Siemens), and EDA Systems. He is also a co-founder of the EDA Consortium (now the ESD Alliance) in 1987 and is currently a Lanza techVentures Investment Partner. To say that Rick is well connected in the technology community is an understatement.

What can a person with all that experience and all those connections do, beyond making companies like Verific successful? Rick discovered an opportunity to give back using those skills. It all began with the traditional email from the Illinois Institute of Technology (Illinois Tech) in Chicago, asking for a donation. Rick called the alumni office and asked how he could give back to the school that helped launch his career in high tech so many years ago.

The person speaking to Rick was, I believe, a truly creative thinker. After learning about Rick’s background, the director of the Alumni Giving Office suggested that instead of writing a check, Rick should volunteer, using his 40+ years of experience in EDA and business development to help the school. Not knowing what that would lead to, Rick said yes.

So, he started doing what he does best: networking and making connections. This led to an introduction to the Dean of the College of Computing, Dr. Lance Fortnow, who authored “The Golden Ticket: P, NP and the Pursuit of the Impossible.” For Rick, a math major, the book re-ignited his interest in mathematics, and that led to additional introductions to other colleges, including the College of Electrical and Computer Engineering (ECE).

Rick’s angel investment in a Colorado-based company called Mountain Flow, which makes a plant-based ski wax, led to an introduction to the chemical engineering department, which is now doing a paid research project on a next-generation ski wax. His interest in learning about technology transfers led him to the Executive Director of Illinois Tech’s Kaplan Institute of Technology and Entrepreneurship. Kaplan teaches students what it takes to start a company.

That led to a deep connection of networking, volunteering, and mentoring at Illinois Tech. Rick is now on three boards with a fourth board seat imminent at Illinois Tech. He is also producing a tech documentary series about a Chicago company called Influit Energy that developed a new form of energy that uses nano electric particles. The three founders are all affiliated with Illinois Tech as researchers and professors.

The school’s leadership is delighted with Rick’s involvement and impact. Rick takes pride in highlighting the significance of alumni giving back to their schools. In the case of Illinois Tech, he is following in the footsteps of, and is inspired by, Illinois Tech alumni including Chris Gladwin, founder of CleverSafe; Ed Kaplan, co-founder of bar code technology company Zebra Technologies; Rohit Prasad, creator of Amazon’s Alexa; Marty Cooper, who led the team that built the first mobile phone; and Victor Tsao, who founded Linksys and enabled high-speed home internet.

Work/Life Balance

Verific is the leading provider of SystemVerilog, VHDL, and UPF front-ends. Its software is used worldwide in synthesis, simulation, formal verification, emulation, debugging, virtual prototyping and design-for-test applications, which combined have shipped over 60,000 copies. The company has a well-deserved reputation for excellent customer support and flexibility, and this culture is what helps its 20-year veteran focus on giving back.

Rob Dekker, Verific’s founder, and Michiel Ligthart, its president, have been supportive and they are embracing Rick’s efforts. Naturally, Illinois Tech has Verific tools for use in the ECE department.

What’s Next – It’s Up to You

Rick hopes to inspire individuals to get involved with alumni groups at their colleges and universities, not only through monetary donations but also by becoming active participants in student activities. It’s a great way for students to get a feel for our industry and look more closely at semiconductor careers. It’s also a great way for us to give back to the place that gave us a start.

And that’s the story of one Silicon Valley veteran’s journey to giving back.

Also Read:

Bespoke Silicon Requires Bespoke EDA

Verific Sharpening the Saw

COO Interview: Michiel Ligthart of Verific


Agile Analog Partners with sureCore for Quantum Computing
by Daniel Nenni on 12-21-2023 at 10:00 am

Quantum computing is the next big thing for the computing world. The semiconductor industry has been talking about it for years. It’s shiny, mysterious, and capable of some incredible things. Instead of using classical bits to represent information (which can be either a 0 or a 1), quantum computers use quantum bits or qubits. What’s special about qubits is their ability to exist in multiple states simultaneously, thanks to a phenomenon called superposition. This allows quantum computers to perform complex calculations at speeds that would make traditional computers melt.

Quantum computing isn’t here to replace your laptop or iPhone; it’s more like a sledgehammer for specific compute-intensive tasks: cyber security, meteorology, medical diagnostics, finance, cryptography, AI, etc. Imagine solving complex problems in seconds that would take today’s computers lifetimes.

To get there, however, we need a quantum-capable semiconductor ecosystem, and part of that ecosystem is cryo-CMOS IP:

***

Agile Analog, the analog IP innovators, is collaborating with sureCore, the ultra-low power embedded memory specialist, to implement a cryogenic control ASIC on the GlobalFoundries 22FDX process, as part of the Innovate UK funded project: “Development Of Cryogenic CMOS To Enable The Next Generation Of Scalable Quantum Computers.”

The consortium members created cryogenic SPICE models for the GF 22FDX process technology, and sureCore has used these to recharacterize standard cell and IO cell libraries, as well as to develop low-power SRAM, ROM and register file compilers. These cryogenic IP libraries are being used to enable the development of a test chip that will allow measurement of performance at cryogenic temperatures. Agile Analog is working closely with sureCore to implement and verify its solution.

According to Barry Paterson, CEO of Agile Analog:

“We were delighted when sureCore approached us to undertake the physical design required to create a test chip for this cutting-edge project. Integrating control and measurement electronics capable of operation down to 4 Kelvin is critical to enabling quantum computer scaling. The UK is leading innovation in the quantum technology space and I am pleased that Agile Analog can participate in the development of this technology.”

Agile Analog completed the synthesis, floor planning, place and route, and design closure steps to ensure that the cryogenic test chip would be able to act as a qualification test vehicle, in order to prove that the approach adopted by this project could be a viable solution for cryogenic control ASICs.

Semiconductor process technologies are typically characterized for operation from -40C to 125C. However, in the world of quantum computing, where operational qubits demand temperatures even lower than 4K, co-locating the control electronics close to the qubits within the cryostat is crucial for quantum computer scaling. To achieve their true potential, there is a need to dramatically increase the number of qubits, from the several hundred that is possible today to millions. These qubits have to be controlled, and currently this is done by using external control electronics housed outside of the cryostat at room temperature. By generating semiconductor IP that can operate at cryogenic temperatures, quantum computing developers can quickly design their own control ASICs that can be co-located with the qubits in the cryostat.

Paul Wells, CEO of sureCore, commented on this collaboration:

“Agile Analog have a unique blend of skillsets that make them the ideal partner. Their expertise and sheer professionalism ensured that we were able to work extremely closely and identify critical issues early in the project, meaning that the physical design flow proceeded in a smooth and predictable manner. Having been on both sides of the fence I can say that they are one of the best design services teams I have come across in my 35 year career.”

Barry Paterson concluded:

“The Agile Analog team is pleased to be able to support sureCore, and the other consortium members, with the implementation of this platform. We have gained invaluable experience in working at challenging temperatures. The pathway to advanced quantum computers with millions of qubits relies on integrating the control and measurement within the cryostat. We can use the knowledge acquired during this project to make a range of our analog IP, including our data converters, available with support for these cryogenic temperatures. We are already having initial discussions with potential partners about delivering these solutions.”

***

Agile Analog – Analog IP the way you want it

Agile Analog is transforming the world of analog IP with Composa™, its innovative, highly configurable, multi-process analog IP technology. Headquartered in Cambridge, UK, with a growing number of customers across the globe, Agile Analog has developed a unique way to automatically generate analog IP that meet the customer’s exact specifications, for any foundry and on any process, from legacy nodes right up to the leading edge. The company provides a wide-range of novel analog IP and subsystems for data conversion, power management, IC monitoring, security and always-on domains, with applications including; data centers/HPC, IoT, AI, quantum computing, automotive, aerospace and defense. The digitally wrapped and verified solutions can be seamlessly integrated into any SoC, significantly reducing complexity, time and costs, helping to accelerate innovation in semiconductor design. www.agileanalog.com

sureCore – When low power is paramount

sureCore, the ultra-low power, embedded memory specialist, is the low-power innovator who empowers the IC design community to meet aggressive power budgets through a portfolio of ultra-low power memory design services and standard IP products. sureCore’s low-power engineering methodologies and design flows meet the most exacting memory requirements with a comprehensive product and design services portfolio that create clear market differentiation for customers. The company’s low-power product line encompasses a range of close to near-threshold, silicon proven, process-independent SRAM IP. www.sure-core.com

Also Read:

Agile Analog Visit at #60DAC

Counter-Measures for Voltage Side-Channel Attacks

Slashing Power in Wearables. The Next Step


Seven Silicon Catalyst Companies to Exhibit at CES, the Most Powerful Tech Event in the World
by Mike Gianfagna on 12-21-2023 at 6:00 am

According to its website, CES® is the global stage for innovation, delivering the most powerful tech event in the world — the proving ground for breakthrough technologies and global innovators. Owned and produced by the Consumer Technology Association (CTA)®, it is the only trade show that showcases the entire tech landscape at one event. This is indeed a prestigious show. If you’ve never gone, you should. It takes over essentially all of Las Vegas for a few days in early January. There will be more information about the show later, but first I’d like to explore a rather impressive statistic. An extensive list of Silicon Catalyst incubator companies will be exhibiting this year at CES, presenting significant advances in AI, sensing and vision. Let’s examine the seven Silicon Catalyst companies to exhibit at CES, the most powerful tech event in the world.

Kura Technologies – https://www.kura.tech

Kura’s mission is to build the best personal AI assistant to empower people 100x in working, learning, and communicating. To do that, the company started by building its first product – the world’s best-performing direct-see-through augmented reality glasses and SaaS collaboration + AI platform, which solves today’s biggest adoption challenges in augmented reality, AI data and AI tool deployment. The goal is to expand the market 100x or more.

Today’s AR glasses are bottlenecked by a small field-of-view, low transparency (less than 25%, similar to dark sunglasses), low resolution, poor brightness (unsuited for outdoor or bright indoor environments), and bulky form factor, all of which prevent widespread adoption.

Kura’s AR glasses and telepresence+collaboration platform feature a 150° field-of-view (9x that of existing AR), 95% transparency (4x existing), 8K resolution (4x existing), high brightness, wide range of depth for variable focus, and a compact form factor, solving the bottlenecks which prevent the adoption of AR.

The company has achieved successful tapeout of its custom backplane at TSMC on a production CMOS node; this backplane uses its proprietary drive algorithms and is the world’s fastest micro-display driver. Kura has paid orders from 400+ companies with demand of 500K+ units, and over $30M in PO requests in 2022 and 2023.

Kura is not new to CES. Last year, they won the CES Innovation Award and Best of CES.

Oculi – https://www.oculi.ai

Oculi® is an alumnus of the Silicon Catalyst incubator.  It is a fabless semiconductor company that produces the OCULI SPU™ (Sensing & Processing Unit), a novel architecture and product in the world of AI and vision technology. The OCULI SPU is the only single chip Software-Defined Vision Sensor™ that delivers Real-Time Vision Intelligence (VI) at the edge. Oculi makes computer/machine vision faster and more efficient by embedding intelligence in the sensor starting at the pixel, the true edge!

The OCULI SPU is the product of over 18 years of R&D led by Dr. Charbel Rizk, Founder, CEO and CTO of Oculi. The work started at Johns Hopkins University and was specifically focused on developing efficient vision intelligence on a single chip that delivers fast response and low bandwidth, power, size, and weight.

With Oculi vision, its partners can deliver a natural and immersive user experience under any lighting conditions, indoors and outdoors, with up to 30X reduction in power consumption and latency with a lower bill of materials.

Owl Autonomous Imaging – https://www.owlai.us

Owl is also an alumnus of the Silicon Catalyst incubator.  The company is all about safety, especially pedestrian safety. It is developing HD thermal image sensors and computer vision software. Headquartered in the heart of image sensor innovation, Rochester NY, it is singularly focused on improving visibility in no light, bright light, and degraded visual environments.

As a fabless semiconductor company, Owl supplies a patented HD thermal digital image sensor, auto qualified camera cores and a thermal camera reference design. Its computer vision algorithms are uniquely designed for thermal applications. For example:

  • Classification CNNs
  • Ranging CNNs
  • 3D fusion & object segmentation

The current markets served include automotive ADAS L2+, L3/4 autonomy, industrial, construction, agriculture, and smart infrastructure. Its thermal computer vision platform is in active trials today.

Quadric – https://quadric.io

Another alumnus of the Silicon Catalyst incubator, Quadric has created the industry’s first GPNPU (general purpose Neural Processing Unit). ML models are created using known labeled datasets (training phase) and are subsequently used to make predictions when presented with new, unknown data in a live deployment scenario (inference). Because an enormous amount of computing resources is required both for training and inference, an ML accelerator is now vital for handling computational workloads.

For most high-volume consumer products, chips are designed with tight cost, power, and size limitations. This is the market Quadric serves with innovative semiconductor intellectual property (IP) building blocks. Its ML-optimized Chimera processors allow companies to rapidly build leading-edge SoCs and more easily write application code for those chips.

After proving the innovative Chimera architecture in 2021 by producing a test chip, Quadric introduced its first licensable IP product – the industry’s first GPNPU (general purpose Neural Processing Unit) in November of 2022, and began product deliveries in Q2 2023. You can learn more about Quadric on SemiWiki here.

SigmaSense – https://sigmasense.com

SigmaSense® is pioneering a programmable continuous DSP-based sensing technology for use in touch displays, automotive surfaces, and battery impedance sensing applications. SigmaSense has created a radical technology transformation of how digital systems interact with the physical analog world, enabling reconfigurable mixed signal solutions that can be continuously optimized.

The new platform measures current direct-to-digital, eliminating the signal preconditioning of traditional voltage mode ADCs. This enables simplified designs, better quality data capture, and systems that can continuously improve. The Society for Information Displays recognized the company’s first touch controller (SDC100) as the “Display Component of the Year” for its innovation in leading the transition away from high voltage, time-base sensing, to current and frequency-based systems.

SigmaSense has substantial investments in its patent filings. The enabling IP is protected with over 250 patents already issued and 180 more pending. The patent filings extend to a range of applications including traditional touch screens, power regulation, batteries, electric motors, data communications, medical, automotive, and IoT applications.

Sonical – https://www.sonical.ai

Sonical is another alumnus of the Silicon Catalyst incubator.  The company is enabling Headphone 3.0, the next industry standard for ear worn products. Sonical is building a platform for Headphone 3.0 that unlocks the secret potential of your ears using more effective wearable products.  The team at Sonical is developing its own operating system, CosmOS, along with a dedicated chip specifically designed for hearables running downloadable plugins. 

This will unlock the large number of app developers that have created advanced AI based algorithms.  The company’s mission is to empower hearables manufacturers, as well as individual users, to select which features and combinations of apps they want to include in their new hearables products in the same way one currently chooses which apps for a laptop, tablet, or smartphone. Sonical enables App developers to have direct access to consumers to deliver a differentiated experience, exactly what they need, when they need it.

Sonical and XMOS, a semiconductor company at the leading edge of the intelligent IoT, will be providing early demonstrations of a collaboration for Headphone 3.0 at CES.

SPARK Microsystems – https://www.sparkmicro.com

Also a Silicon Catalyst alumnus, SPARK Microsystems is a fabless semiconductor company focused on enabling a new generation of wireless extended reality, audio, gaming, human interface and IoT devices.

The company is building next generation short-range wireless communication devices. SPARK UWB provides high data rate and very low latency wireless communication links at an ultra-low power profile, making it ideal for personal area networks (PANs) used in mobile, consumer and IoT-connected products. Leveraging patented technologies, SPARK Microsystems strives to minimize and ultimately eliminate wires and batteries from a wide range of applications.

The company offers wireless transceivers, software development kits, hardware kits, and a SPARK headset reference platform.

About CES

CES will be held in Las Vegas, NV from January 9 – 12, 2024. You can learn more about the show and register to attend here. CES is a trade-only event for individuals affiliated with the consumer technology industry. I expect most of SemiWiki’s readers qualify for attendance. It is a show unlike any other, if you haven’t been there, consider registering. It will be an unforgettable trip and you’ll get to see the seven Silicon Catalyst companies to exhibit at CES, the most powerful tech event in the world.


ReRAM Integration in BCD Process Revolutionizes Power Management Semiconductor Design
by Kalar Rajendiran on 12-20-2023 at 10:00 am

Weebit Nano, a leading developer of advanced memory technologies, recently announced a significant collaboration with DB HiTek, one of the world’s top ten foundries. The collaboration is designed to enable integration of Weebit’s Resistive Random-Access Memory (ReRAM) into DB HiTek’s 130nm Bipolar-CMOS-DMOS (BCD) process. It marks a significant milestone in semiconductors for analog, mixed-signal and power designs, setting the stage for higher levels of system integration and more competitive solutions.

Monolithic Integration and Streamlined Design

A key highlight of this announcement is the goal of monolithic integration, allowing analog, digital, and power components to coexist on the same chip seamlessly. This consolidation not only simplifies the design process but also enhances overall system performance. The synergy between ReRAM and BCD technology enables the creation of highly integrated circuits, reducing the complexity associated with external components and multiple dies.

Market Adoption and Industry Implications

The adoption of Weebit ReRAM by DB HiTek could be an early indicator of widespread industry adoption in the future. This collaboration establishes a precedent for other foundries and semiconductor manufacturers to explore the integration of Weebit ReRAM in their processes, further accelerating the industry’s trajectory towards advanced semiconductor solutions.

Versatility and Application Diversity

The announcement underscores the versatility of ReRAM, positioning it as a solution across diverse applications. From consumer electronics to industrial systems and Internet of Things (IoT) devices, ReRAM’s adaptability makes it a go-to memory technology for emerging technologies and next-generation electronic devices. This versatility opens up new possibilities for innovation and customization across various industries.

ReRAM is BEOL technology

ReRAM, as a back-end-of-line (BEOL) technology, is integrated in the later stages of semiconductor manufacturing. This simplifies the integration process, avoiding interference with front-end components and allowing for straightforward incorporation into existing BCD processes. The simple stack design of ReRAM facilitates its seamless integration in the BEOL. This simplicity allows for the addition of ReRAM to existing layers of metal and insulating materials without extensive modifications.

ReRAM Advantages Over Flash Memory

ReRAM’s ease of integration, combined with high endurance, reliability at high temperatures, and low-power characteristics, positions it as an advantageous choice over Flash memory in BCD processes. The simplified integration process and reduced need for process tuning enhance the overall efficiency and performance of semiconductor designs.

Benefits to the Semiconductor Industry

The integration of ReRAM in DB HiTek’s BCD process brings forth several key benefits for the semiconductor ecosystem.

Cost-Effective Manufacturing: Weebit’s ReRAM technology requires only 2 added masks in the manufacturing process, compared to more than 10 such masks for flash. This has a direct effect on manufacturing cost and cycle time. In addition, the monolithic integration reduces the need for external components, streamlining manufacturing and lowering production costs. This cost-effectiveness aligns with industry demands for efficient and economical solutions.

Enhanced System Integration: ReRAM’s integration in the BCD process enables the seamless coexistence of analog, digital, and power components on a single chip. This not only simplifies the design process but also contributes to more compact and efficient electronic systems.

Energy Efficiency: Leveraging ReRAM’s low-power characteristics in conjunction with the power management capabilities of the BCD process enhances overall energy efficiency. This is particularly crucial in mobile applications where power consumption is a critical consideration.

Versatility and Customization: The collaboration empowers semiconductor designers with the flexibility to customize integrated circuits based on specific application requirements. The adaptability of ReRAM ensures that the technology can be tailored to meet the diverse needs of different industries.

Applications and Future Potential

The integration of ReRAM in the BCD process holds immense promise for various applications. Consumer devices like smartphones, tablets, and wearables stand to benefit from the enhanced energy efficiency and space-saving attributes of ReRAM in BCD processes. ReRAM’s robustness at high temperatures and integration with BCD’s power management make it well-suited for industrial control systems, automation, and robotics. IoT devices, with their compact size and low power requirements, could significantly benefit from the integration of ReRAM in the BCD process. ReRAM integration into BCD process also offers a powerful and efficient memory solution for the growing landscape of IoT applications.

Summary

By licensing Weebit’s ReRAM for integration in DB HiTek’s BCD process, the two companies are pioneering a path to more efficient, cost-effective, and versatile semiconductor designs. The benefits of cost-effective manufacturing, enhanced system integration, energy efficiency, and application versatility position ReRAM in BCD as a transformative solution for the semiconductor industry. As the integration gains momentum, we can anticipate a wave of innovation across various applications, ultimately shaping the future of semiconductor design.

You can access the Weebit Nano – DB HiTek joint press release here.

For more details about Weebit’s technology and solution offerings, visit www.weebit-nano.com

Also Read:

A preview of Weebit Nano at DAC – with commentary from ChatGPT

Weebit ReRAM: NVM that’s better for the planet

How an Embedded Non-Volatile Memory Can Be a Differentiator