
Analog to Digital Converter Circuits for Communications, AI and Automotive
by Daniel Payne on 12-29-2022 at 6:00 am


Sensors are inherently analog in nature, and their outputs are digitized for processing by an Analog to Digital Converter (ADC) block. At the recent IP SoC event I had the chance to see the presentation by Ken Potts, COO of Alphacore, on their semiconductor IP for ADCs. I learned that Alphacore started out in 2012 and now offers both standard and custom IP for AMS, RF, imaging and radiation-hardened electronics through a global organization headquartered in Arizona.

Data converters can be designed in any IC process node, but FD-SOI technology provides the lowest power while being tolerant to radiation effects. A 28nm FD-SOI chip consumes about 70% less power than one built in a bulk CMOS process.

RF data converters need both high bandwidth and low power to fit applications like phased-array architectures, direct-to-RF sampling, beamforming and 5G radios.

Alphacore designed a hybrid ADC named the A11B5G with a sampling rate of 5GS/s, a resolution of 11 bits, an 800mV supply, and a power of just 50mW, using a 22nm FD-SOI process from GlobalFoundries. One useful feature of this ADC is integrated auto-calibration, which eliminates interleaving spurs.

Output spectrum before calibration
Spurs removed after calibration
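
To see why that calibration matters, here is a minimal numpy sketch (my own illustration, not Alphacore's algorithm; all values are hypothetical): a two-way time-interleaved converter whose sub-ADCs have mismatched DC offsets produces a spur at fs/2, and estimating and subtracting each channel's offset in the background removes it.

```python
import numpy as np

fs, n = 5e9, 4096                 # hypothetical 5 GS/s rate, like the A11B5G
k = 240                           # test tone placed on an FFT bin (~293 MHz)
x = np.sin(2 * np.pi * k * np.arange(n) / n)

# Two-way time-interleaved ADC: each sub-ADC contributes its own DC offset.
offsets = np.array([0.02, -0.03])
raw = x + offsets[np.arange(n) % 2]

def spur(sig):
    """Magnitude of the interleaving spur at fs/2."""
    return np.abs(np.fft.rfft(sig * np.hanning(n)))[n // 2]

# Background "auto-calibration": estimate and remove each channel's offset.
cal = raw.copy()
for ch in range(2):
    cal[ch::2] -= cal[ch::2].mean()

print(f"fs/2 spur before: {spur(raw):.1f}, after: {spur(cal):.2e}")
```

The same idea generalizes to gain and timing-skew mismatch, which produce spurs at offsets around fs/M for an M-way interleaved converter.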

Another Analog to Digital Converter with even lower power is the A10B3G with a sampling rate of 3GS/s, 8.6 Effective Number Of Bits (ENOB) at 100MS/s, consuming just 13mW, fabricated on the 22nm FD-SOI process from GlobalFoundries.

A10B3G ADC

The first low-power Digital to Analog Converter (DAC) that Ken showed was the D6B5G, which consumes only 16mW with 5.4 ENOB, a 6-bit input, and a 5GS/s rate.

Phase Locked Loop (PLL) circuits can be used to demodulate a signal, distribute clock signals inside an SoC, create a new clock frequency multiple, or recover a signal from a communication channel. The PLL5G is a very low-jitter (<150fs) design, taping out in January 2023 in the 22FDx node.

For serial communications a SerDes circuit is used, and Alphacore has a 22FDx-based design taping out in January 2023, dubbed the SD16G, supporting data rates from 1Gb/s to 16Gb/s with either an 8- or 16-bit serialization/de-serialization width. All the popular protocols are supported: PCIe, JESD204, SATA, SRIO, SG-MII, USR/XSR.

All IP from Alphacore comes with a design kit that includes everything that you’ll need for customization:

  • GDSII
  • RTL
  • Schematics
  • DRC/LVS logs
  • Abstract
  • Extracted View
  • Extracted simulation model
  • Verilog-AMS models
  • Integration guide: DFT, I/O

Roadmaps for ADC, DAC, PLL and SerDes were shared for four foundry nodes: TSMC 28HPC+, TSMC 12FFCP, Intel16 and GF 22FDx, so 2023 is a very busy year for silicon-proven IP.

Alphacore are experts at designing radiation-hardened circuits, taking special care with effects like Total Ionizing Dose (TID) and Single Event Effects (SEE). They have rad-hard ADCs and DACs in GF 22FDx now, with plans for Intel16 in Q2’23, GF 22FDx in Q3’23, and SkyWater RH90 in Q4’23.

Three more rad-hard design examples were Power Management ICs (PMIC), a 2-color in-pixel ADC, and an imager/camera with a high frame rate of 120 FPS.

Summary

Low-power and radiation-hardened applications are a niche market, requiring specialized expertise. Alphacore has a strong track record of delivering a growing family of ADC, DAC, PLL, SerDes, PMIC and imager IP. The tapeout schedule for 2023 looks quite full, meaning that you get even more silicon-proven IP for your designs in 5G, space communications, automotive, and even quantum computing.


Siemens Aspires to AI in PCB Design
by Bernard Murphy on 12-28-2022 at 6:00 am


AI in PCB design is not a new idea; other PCB software companies also make that claim. But when a mainstream systems technology company like Siemens talks about the subject, that is noteworthy. They already have an adaptive user interface (UI) for their mechanical modeling suite and to assist in low-code development for application flows. Now they are starting to look at how artificial intelligence (AI) could accelerate and improve PCB development and analysis.

A PCB Moonshot

No one sees PCB design as a career goal – it’s a means to an end. Crafting PCBs is a necessary but thankless task in system design, one which nevertheless requires massive expertise: designers must jointly optimize the layout, performance (signal integrity, electromagnetic compatibility, and thermal considerations), and design for manufacturability. It would seem a natural for more automation, yet this is truly a multiphysics problem crossed with supply chain and business constraints. Established automation is generally good at solving point problems, not so much cross-domain problems. And even within that limitation, some problems like layout and routing continue to depend heavily on heuristics and expert user guidance. In computer science jargon these problems are NP-complete, meaning that exact algorithms with reasonable run times are believed infeasible.
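
As a concrete example of the heuristic core that autorouters build on (a generic textbook algorithm, not a Siemens capability), the classic Lee maze router finds a shortest rectilinear path for one net by BFS wavefront expansion; it is the interaction between many competing nets that pushes the full problem into NP-complete territory:

```python
from collections import deque

def lee_route(grid, src, dst):
    """Classic Lee (BFS wavefront) maze router: shortest rectilinear path
    for one net on a grid, where 1 marks a blocked cell. Routing a single
    net is polynomial; ordering and ripping up many competing nets is
    what makes full-board routing depend on heuristics."""
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}          # visited set doubling as backtrace pointers
    q = deque([src])
    while q:
        cell = q.popleft()
        if cell == dst:         # retrace the wavefront back to the source
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None                 # net is unroutable on this grid

# Hypothetical 4x4 board with keep-out cells marked 1.
board = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 0],
         [1, 0, 0, 0]]
path = lee_route(board, (0, 0), (3, 3))
print(path)
```

Real routers layer cost functions, rip-up-and-reroute, and designer guidance on top of this kind of kernel, which is exactly where the learned heuristics discussed here could plug in.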

Which naturally turns attention to AI and machine learning (ML). Since expert PCB designers routinely build high quality PCB designs, perhaps ML systems could capture that expertise. The DARPA IDEA program within the Electronics Resurgence Initiative (ERI) is a moonshot effort aiming at “no human in the loop” electronic design, in which PCB design is an obvious target. This could span new ML approaches to NP-complete problems, establishing electronic libraries of components, and ML applied to analysis and optimization tools across those multiple domains.

The Siemens view

Siemens recently released a white paper on this topic. The paper is quite high level, not saying much about specific tools or capabilities, which leads me to believe it is an aspirational stage-setter for more detailed announcements to come. I will make that assumption in what follows. The paper talks first about component selection and model creation; I am sure Siemens is following the ERI initiative in their work. For model creation, they suggest tools like natural language processing and image recognition to gather data from documentation sources and build machine-processable models.

For schematic generation, they suggest an intelligent schematic auto-complete feature. When you place a component such as a processor, other components can be added and connected automatically, based on prior experience in similar design applications. They also suggest ML techniques to recognize and suggest reusability potential from earlier generation completed designs.

Constraint creation and management is a natural for AI assistance. Common constraint choices and values should carry over between designs, at least in concept. The tricky part here I would imagine is to minimize the need for supervised learning which could be as cumbersome as simply recreating constraints from scratch. Learning at a meta level, or semi-supervised learning, would be preferable.

Place and route is also an obvious target for AI. Siemens suggests building on their sketch-routing technology (which is pretty neat), using these sketch patterns as information to carry between designs (again, at a meta level). Finally, they propose using AI for analysis and verification. This to my mind would form the center of a multiphysics design and analysis platform. Siemens is already very experienced in this class of design and analysis for mechanical design. If they bring the same kind of expertise to PCB design, you could imagine this turning into a powerful suite.

If you missed the link above, you can read the white paper HERE.

 


The Era of Chiplets and Heterogeneous Integration: Challenges and Emerging Solutions to Support 2.5D and 3D Advanced Packaging
by Kalar Rajendiran on 12-27-2022 at 6:00 am


From the multi-chip modules (MCM) of yesteryear to today’s System-in-Package (SiP) implementations, package technology has progressed a lot. The chiplet movement is not only a big beneficiary of today’s advanced package technologies but also drives further advances in this area. While a chiplets-based implementation addresses the yield issues of monolithic SoC implementations at 5nm and below, it introduces or exacerbates other challenges: signal integrity issues, longer latencies, increased power, and test complexity. This puts the spotlight on Die-to-Die (D2D) interfaces for successful SiP implementations. Solution leaders involved with SiP implementations have developed proprietary D2D interfaces but also recognize the need for standardization to accommodate heterogeneous chiplet implementations. The result is a push from an industry consortium to promote an open standard called Universal Chiplet Interconnect Express™ (UCIe™). The consortium includes a long list of technology leaders, including heavyweights such as AMD, Arm, ASE, Google, Intel, Meta, Qualcomm and TSMC.

All these initiatives are great, but chiplets-based system implementations still have to grapple with the “elephant in the room” requirement of delivering defect-free products over their guaranteed lifetimes. Scrapping a defective device in an advanced package is a very expensive proposition. BIST techniques detect gross failures such as opens and shorts but are often unable to detect the small variations that may cause catastrophic system failures in the future. The current approach to handling this challenge is to implement spare lanes that can replace defective ones. But how do you identify which lanes are candidates for replacement? This is the context and crux of a recent webinar hosted by proteanTecs. The presenters were Stefan Chitoraga, technology and market analyst at Yole Intelligence; Igor Elkanovich, CTO of GUC; and Nir Sever, senior director of business development at proteanTecs. Click here to listen to the entire recorded webinar. If you are involved in the advanced packaging or chiplets implementation space, you will find the webinar quite informative.

Following are some salient points from the webinar.

Yole Intelligence – Stefan Chitoraga

The design cost growth trend from the 65nm to the 5nm node is one of the drivers behind heterogeneous chiplets adoption. High-computation applications in markets such as datacenter networking, high-performance computing and autonomous vehicles are another.

Yole Intelligence, part of Yole Group, predicts the high-end performance packaging market will grow at a 19% CAGR between 2021 and 2027 to reach $7.87B. The growth rate of technologies such as UHD FO, HBM, 3DS, 3DNAND, etc., will far outrun that of Si interposer technology. The barrier to entry into the high-end packaging supply chain is getting higher. Intel, Samsung and TSMC are making heavy investments and offering innovative products and services for high-end performance applications.

Global Unichip Corporation (GUC) – Igor Elkanovich

The GLink™-2.5D interface improves power efficiency by more than 80% and reduces end-to-end latency by more than 75% compared to a 2D interface. The GLink-3D interface improves power efficiency by more than 96% and reduces end-to-end latency by more than 97%, compared to a 2D interface.

GUC offers its 2.5D and 3D Multi-die Advanced Packaging Technology (APT) platform to its ASIC customers as part of its services. proteanTecs’ interconnect monitoring solution is integrated into GUC’s GLink D2D interface, which is used to implement heterogeneous chiplet-based SiP solutions. proteanTecs’ technology monitors signal quality trends and repairs low-signal-quality lanes to prevent future failures, which in turn improves the quality and production yield of the final product. Without it, many marginal lanes would go unnoticed until they failed during field operation.

proteanTecs – Nir Sever

proteanTecs’ D2D interconnect monitoring enables comprehensive visibility and parametric lane grading. The signal monitoring solution for D2D connectivity is supported on InFO, CoWoS®, 3DFabric™ and EMIB technologies and can be implemented with GLink™, AIB, HBM3, OpenHBI and UCIe interfaces.

The solution covers the 2GHz to 8GHz speed range, with full eye visibility provided on DDR signals. Lane performance is monitored for all lanes over the PVT range during the characterization phase. During mass production, the solution identifies marginal pins and recommends candidates for spare-lane swapping, with early alerting and spare-lane activation as available. In the field, the solution makes predictive maintenance possible by alerting on pins that show signs of wear-out. With this information, lane swapping or module swapping is executed during the next boot of the system, avoiding a catastrophic system failure.
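
The decision flow described above might be sketched as follows; the lane scores, threshold, and function names are hypothetical illustrations, not proteanTecs’ actual API:

```python
# Hypothetical eye-margin grades per D2D lane (arbitrary units); the lane
# IDs, threshold and names below are illustrative assumptions only.
lane_margin = {0: 0.92, 1: 0.88, 2: 0.41, 3: 0.95, 4: 0.37, 5: 0.90}
spare_lanes = [6, 7]
MARGIN_THRESHOLD = 0.5      # below this, a lane is graded as marginal

def plan_lane_swaps(margins, spares, threshold):
    """Map marginal lanes to spares, worst lane first, and alert early
    if the spares cannot cover all marginal lanes."""
    marginal = sorted((lane for lane, m in margins.items() if m < threshold),
                      key=margins.get)
    if len(marginal) > len(spares):
        raise RuntimeError("more marginal lanes than spares: early alert")
    return dict(zip(marginal, spares))

# The resulting remapping would be applied at the next system boot.
swaps = plan_lane_swaps(lane_margin, spare_lanes, MARGIN_THRESHOLD)
print(swaps)
```

The key point the sketch captures is that lanes are swapped based on parametric grading trends, before a hard failure occurs, rather than after a lane has already gone bad.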

The power of proteanTecs’ technology extends beyond the lane-monitoring benefit covered above. Earlier posts on SemiWiki cover how proteanTecs technology-based solutions can also benefit the development phase and the device testing phase, minimizing scrap.

Also Read:

proteanTecs Technology Helps GUC Characterize Its GLink™ High-Speed Interface

Elevating Production Testing with proteanTecs and Advantest’s ACS Edge™ Platforms

How Deep Data Analytics Accelerates SoC Product Development


Micron Ugly Free Fall Continues as Downcycle Shapes Come into Focus
by Robert Maire on 12-26-2022 at 6:00 am


-Micron off the proverbial cliff and falling faster
-Looking at a much longer/deeper decline in memory
-Layoffs, capex cuts, slowdowns- battening down the hatches
-Micron seems to imply more of a “U” or “L” shaped downcycle

Micron’s numbers were as bad as we expected, and much worse than most on the street expected

We have been involved with the semiconductor industry for over 30 years and have always felt that upsides are usually more than expected and down cycles are usually worse than expected. All you need is a little imbalance, in either direction, between supply and demand, and the industry seems to start a runaway reaction.

Memory is always the worst given its commodity-like nature. Micron suggested we are in one of the worst memory downturns of the last 13 years.

Revenue was off an astounding 39% quarter over quarter as both pricing and demand have collapsed.

We see no signs of it getting any better any time soon and neither does Micron as they announced further cuts to capex to “survival” levels, cuts in wafer production and layoffs of 10%. We had projected the layoffs in prior notes as well as capex cuts.

We again would not be surprised to see things get even worse before any signs of improvement. While Micron does have cash, its net cash position is not great, especially if it hemorrhages cash through increasing losses. Management is clearly aware of this and will obviously take further steps to slow losses.

We remain concerned about Samsung and other memory makers

We are very worried that Samsung and other large memory makers could use the industry weakness combined with their deeper pockets to continue to spend to try to take share from Micron in the downturn. It would not be totally out of character for Samsung to press its size advantage here and now.

Not surprisingly, the number of memory makers in the world almost always gets reduced in down cycles.

Maybe a small reprieve from Yangtze memory risk

We have been even more concerned about the new and rapidly rising Yangtze Memory in China. They tend to aggressively price at or below a 20% discount to the market to take share; even Apple was headed in their direction. They clearly want to duplicate in memory the takeover China achieved in both LED and solar, where non-Chinese manufacturers were wiped out by aggressive pricing.

Yangtze has finally been put on the “entity” list (Santa’s naughty list) by the US government, which means they will be starved of equipment. However, this may be a bit of closing the barn door after the cows have left, as Yangtze seems to have caught up with Micron and Samsung in leading-edge NAND technology.

We had heard that there was a huge rush of equipment going to Yangtze as equipment makers wanted to ship before the embargo door was closed.
They probably have enough equipment still in crates to unbox for the next year….

The embargo of Yangtze will eventually help Micron but not before they grab further share of the NAND market over the next year or two.

Even Apple has now been scared away from doing business with Yangtze, but Yangtze has more than enough internal demand inside China to keep their fab running flat out, in contrast to Micron’s slowing production.

The big question? Is it a “V”, “U” or “L” shaped downcycle/recovery?

It seems fairly clear that the memory market looks a lot more like an “L” or “U” shaped downcycle at best. 2023 will be a weak year and 2024 is obviously in question. Foundry is a bit of a mixed story, with the leading edge weaker due to slow PC/server and high-end chips while the low end and automotive still need capacity. The issue for equipment makers is that the part of the market that needs capacity is older 200mm fabs making cheap parts, while the memory market, which buys 300mm tools by the boatload, is weak.

This obviously does not bode well for a recovery in semiconductor equipment. We have never seen a “real” upcycle without the memory makers participating. With memory makers’ capex weak, the equipment market will do little more than tread water.

The bottom line is that memory has to get better before we have a “true” upcycle. Memory also can’t get better just through cuts in production; we have to see demand ticking up. Unlike oil, you can’t cut your way to a supply/demand balance.

For now, it feels like 2023 will be the bottom of the “U” shaped down cycle.

Hair on fire mode

We recently visited Silicon Valley, meeting with industry participants. We got the sense that manufacturers are still frantically rearranging their order books to cope with the China embargo fallout and other cancellations.

They are trying to figure out what’s real and what’s not. We don’t think anyone has a clear handle on what the final, bottom level of business will look like.

This suggests that we still don’t have a clear idea of where the bottom is. It also implies that we don’t have a good idea of how long before we get there or how long we stay at the bottom. Once the cancellations and order books stabilize we should get a better idea of where we have wound up and for how long. Until then, all bets are off.

The stocks

We see no good reason to go near Micron any time soon. The odds are that things will continue to worsen in terms of production cuts, layoffs, capex reductions and further delays in new projects. We remain concerned about conservation of cash.

This is obviously not a good harbinger for the rest of the industry, as Intel and others cut and delay projects. The macroeconomic picture is not driving semiconductor demand in the near term, at least not enough to stimulate an upcycle.

We certainly remain very positive about the long-term prospects for the industry, as strong secular, long-term growth remains; however, the next year or two could be rough.

One of the issues is that the industry hasn’t gone through a “real”, “cleansing” down cycle in quite some time, so many investors and participants think this is just a one- or two-quarter blip and then back to the races.

We don’t think this is a short blip, as evidenced by layoffs, production cuts and cancellations that didn’t happen in prior short blips in an otherwise strong growth pattern. This is a definitive direction change.

We would prefer to sit on the sidelines, perhaps being opportunistic if something is overdone, but generally staying out of the semiconductor space. We don’t see any significant good news coming in fourth-quarter results, and perhaps some more negative surprises such as the one we just got from Micron.

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor) specializing in technology companies, with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants and investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

AMAT and Semitool Deja Vu all over again

KLAC- Strong QTR and Guide but Backlog mutes China and Economic Impact

LRCX down from here – 2023 down more than 20% due to China and Downcycle

Is ASML Immune from China Impact?


Podcast EP133: OpenLight and Their Revolutionary Approach to Photonics
by Daniel Nenni on 12-23-2022 at 10:00 am

Dan is joined by Dr. Thomas Mader (Tom), the Chief Operating Officer of OpenLight, a newly formed independent company created through investments from Synopsys and Juniper Networks. Dr. Mader’s experience spans 27 years across the photonics and consumer electronics industries. Prior to the formation of OpenLight, he led the same team within Juniper Networks. His previous experience includes six years at Intel, where he founded Light Peak, which eventually became Thunderbolt. He also drove innovation at Amazon for six years, creating several new devices such as the Amazon Dash Button.

Tom describes OpenLight’s unique ability to integrate optical components – lasers, amplifiers and modulators – directly onto silicon, enabling a new level of integration for silicon photonics. He also covers the benefits of OpenLight’s approach and the unique business model they employ to bring that technology to market.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


A Five-Year Formal Celebration at Axiomise
by Daniel Nenni on 12-23-2022 at 6:00 am


It’s been a bit more than a year since I interviewed Dr. Ashish Darbari, founder and CEO of Axiomise. I’ve been keeping an eye on Ashish and his colleagues, and I was surprised to learn that they recently celebrated their fifth anniversary as a company. I thought that this would be a good time to catch up with him to find out what’s happened over the year since we talked and to learn more about the last five years.

Ashish, congratulations on reaching five years! Can you summarize your journey?
Thank you, Dan. If I had to pick just one word, it would be “amazing.” When I set up Axiomise in October 2017 to offer training and consulting services around formal verification, it was because I knew that there was a big hole in the available industry solutions and methodologies, and therefore a big need and a big opportunity. But I have to say that the past five years have met or exceeded every expectation that I had. Clients have responded enthusiastically and provided us the business to grow and prosper. I’m very grateful for their support, and for the interest that you and others in the EDA community have shown in our company.

How has Axiomise evolved since we last spoke?
2022 has been incredible for us. For a start, we’ve been growing as a company. Gurudutt (GD) Bansal joined as COO and Neil Dunlop joined as CTO. They’ve been great partners in taking our business to the next level. We opened new offices in Hemel Hempstead, just outside of London, with room for more team members going forward.

I see on Wikipedia that Hemel Hempstead has been a village for more than 1000 years and was granted its town charter by Henry VIII in 1539. So you’re part of that great European tradition of doing cutting-edge technical work in historic settings?
That’s a nice way to look at it. The U.K. is a great location for a services company. We can work with Asia in our morning, with North America in our afternoon, and with Europe all day. We have a worldwide client base, which is part of our amazing story.

Speaking of expanding your scope, I see that you recently joined the ESD Alliance. What role will that play in your future?
As part of the SEMI Technology community, the ESD Alliance is closely tied to the semiconductor industry, and that’s where our clients are. So we’re becoming a more integral part of the chip ecosystem and are already networking, making contacts, and forging relationships that will help us grow further. The benefits of membership work both ways. The ESD Alliance is the voice of the EDA industry, and we feel that any successful company should be part of it and give back to the community by sharing experiences and offering advice to fellow members.

One of the things that has impressed me about you personally is your ability to act as an industry spokesperson for formal while running a company and doing hands-on verification work. Have you continued your speaking activity in 2022?
My goodness, yes. It’s been a really busy year on that front. Probably my highest profile activity was participating in the panel “Those Darn Bugs” at the Design Automation Conference (DAC) in San Francisco. Brian Bailey of Semiconductor Engineering led a lively discussion on whether it will ever be possible to eradicate all bugs from chip designs. Of course, there is no chance of that happening without formal taking a lead role. A video of the panel is available online and I think it’s worth watching.

Also at DAC, I talked about “Taming the Beast: RISC-V Formal Verification Made Easy” in the Cadence Theatre. I explained how 32-bit and 64-bit processor cores are verified with formal verification using the Axiomise formalISA app. A video of this talk is also available. Please thank your colleague Daniel Payne for covering our DAC activities.

I joined the “SoC Leaders Verify with Synopsys” panel at the Synopsys Users Group (SNUG) event, and a recording of that is online as well. At DVCon Europe in Munich, I appeared on the panel “5G Chip Design Challenges and their Impact on Verification.” Also in Munich, I presented “Accelerating Debug and Signoff for RISC-V Processors Using Formal Verification” at CadenceLIVE Europe. Finally, I discussed how formal can address safety and security as well as functional verification at a Cadence Club Formal event in the U.K.

The Axiomise team participates in all kinds of events. Thanks to your colleague GD for doing a podcast with me earlier this year. The immediacy and directness of a podcast seemed to work well for explaining the potentially scary topic of formal. Have you done others?
I had the pleasure of recording a “Fish Fry” with Amelia Dalton of EE Journal on “The Art of Predictability” and the three pillars of formal verification. In fact, I like podcasts so much that I have my own series and have now recorded 50 episodes.

That’s really impressive; I can’t imagine how you possibly find the time. You write quite a bit as well, don’t you?
Yes, this year, I’ve published articles in EDN magazine and Electronic Design magazine. We also do webinars, white papers, and more. You can go to the Knowledge Hub menu on our website to get a complete list.

Surely all this external activity doesn’t prevent you from continuing to innovate in formal?
Not a chance. Speaking and writing is fun, but it’s the work with our clients that keeps us in business. They’re designing and verifying some of the biggest and baddest chips in the world, so they are constantly pushing the limits of formal technology. We have no choice but to innovate constantly, and that’s a big part of the value we bring to the industry.

As you can see from some of our talks and articles, our biggest innovation this year was expanding our solution for RISC-V verification. We announced this late last year and since then have been very busy helping clients verify their processors. Again and again, we have found serious bugs in RISC-V designs when they were thought to be correct based on massive amounts of simulation testing and even, in some cases, had been fabricated and tested in silicon.

How do you work with your clients?
Our primary goal is to offer maximum ROI to the client in the shortest possible time. This often means we take the formal verification work on as a turnkey services project, which allows the client to see how formal is done on actual designs at a fast pace, with excellent proof convergence finding bugs and establishing proofs of bug absence. Apart from the turnkey services work, which has been our primary focus, we also offer training to complement the services.

Do you have any final thoughts for our readers?
I just want to thank everyone who has supported us for the last five years. We’re excited to have hit this milestone, but it’s only the beginning of what we can do to lead the industry in creating chips that are functionally correct, safe, and secure. To learn more, you can email us at info@axiomise.com or contact us through www.axiomise.com. We are here to help.

Thank you for your time, Ashish.
You’re most welcome!

Also Read:

CEO Interview: Dr. Ashish Darbari of Axiomise

Accelerating Exhaustive and Complete Verification of RISC-V Processors

Life in a Formal Verification Lane

Why I made the world’s first on-demand formal verification course


Building better design flows with tool Open APIs – Calibre RealTime integration shows the way forward
by Peter Bennet on 12-22-2022 at 10:00 am


You don’t often hear about the inner workings of EDA tools and flows – the marketing guys much prefer telling us about all the exciting things their tools can do rather than about the internal plumbing. But the plumbing matters when making design flows – and building these has largely been left to the users to sort out. That’s an increasing challenge as designs and EDA tools get more complex, and it has sometimes become necessary to run a part of one tool from within another. To enable that, EDA companies have to pick up their share of the work.

That’s particularly the case for point tools in largely integrated vendor design flows. Calibre is perhaps the best-known point tool out there, and one common to all major analog, digital and mixed-signal design flows. So it’s interesting to hear what Siemens EDA is doing here with Calibre.

Calibre’s RealTime interface supports this closer flow integration and is an established presence in all major digital and custom implementation flows.

How modern design flows drive closer tool interactions

While the Siemens EDA white paper here (link at the end) spends some time discussing the costs and benefits of best-in-class tool flows (like most Calibre ones) versus full single-vendor flows, that’s really a subject in itself (interesting, but perhaps for a separate article). The reality today is that users frequently need some third party point tools to be integrated into flows and it’s likely that many will always demand this.

We used to think in terms of a serial design flow where each tool has a distinct flow step and there’s little overlap between the tools. Something like this example for place and route:

Of course, we’ve simplified a bit here – we check (verification) after each step and have frequent iterative loops back to try things like alternative placements.

With today’s huge, hierarchical designs, leaving the entirety of signoff steps like DRC and LVS to the end of the flow is inefficient and puts signoff schedules at risk. Many of these checks can be done earlier in the flow. We just need an efficient way to do it. Similarly, it’s often helpful to do some local resynthesis within placement to minimise total flow run time.

What we’re looking for here might be called “on demand checking” (or implementation) – doing a local operation on a part of a design exactly when we need to, by pulling forward functionality from one tool to run within an earlier one in the flow. As ever, we want to run things as early as we possibly can – what Siemens calls “shift left”.

It’s a real change in how we think about tools and flows.

How APIs help us here

We’ve always been able to add custom menus in tool GUIs and inject custom Tcl scripts into tool run scripts to access other tools. What’s usually been lacking is the ability to call external tool functionality with the lowest interfacing delay and smallest memory footprint, through a clean, documented, reliable integration that regular users can configure. We certainly don’t want to pass the entire design or invoke all the functionality of another tool if we can possibly avoid it.
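To make the contrast concrete, here is a purely illustrative sketch of what such an interface looks like in spirit – a narrow, documented call that checks a small region of a design on demand, rather than launching a whole external tool run. Every name here is invented for illustration; this is not a real Calibre or vendor API:

```python
# Hypothetical sketch of a narrow "on-demand check" API, as contrasted with
# re-running a whole signoff tool. All names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Region:
    """A small window of the layout to check, instead of the full design."""
    x0: float
    y0: float
    x1: float
    y1: float

@dataclass
class Violation:
    rule: str
    location: tuple

class OnDemandChecker:
    """Wraps an external checker behind a minimal, documented interface."""
    def __init__(self, rule_deck: str):
        self.rule_deck = rule_deck  # signoff rule deck, loaded once up front

    def check_region(self, region: Region, shapes: list) -> list:
        # Toy stand-in for a real DRC: flag shapes narrower than a minimum
        # width, but only within the requested window of the layout.
        min_width = 0.05
        violations = []
        for name, x0, y0, x1, y1 in shapes:
            inside = (x0 >= region.x0 and y0 >= region.y0 and
                      x1 <= region.x1 and y1 <= region.y1)
            if inside and (x1 - x0) < min_width:
                violations.append(Violation("MIN_WIDTH", (x0, y0)))
        return violations

checker = OnDemandChecker("signoff.rules")
shapes = [("M1_a", 0.0, 0.0, 0.04, 1.0),   # too narrow -> violation
          ("M1_b", 0.2, 0.0, 0.40, 1.0)]   # wide enough -> clean
errs = checker.check_region(Region(0.0, 0.0, 1.0, 1.0), shapes)
print(len(errs))  # 1
```

The point is the shape of the interface – a small, configurable call on a subset of the design, returning structured results – rather than any specific tool behaviour.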

At first glance, this would appear to be a decisive advantage for more integrated single vendor flows and an increasing drag on integrating other vendor tools.

But that’s not necessarily the case.

Users can interface with EDA tools in a variety of ways, including:

  • Native command shell (usually Tcl)
  • Tool commands
  • Direct database access (query, modification)
  • GUI (sometimes menu customisation, sometimes GUI scripting interface)
  • Reports (native, user-defined through scripting)
  • Logs

Anyone who’s spent too much time with a tool has also run across some hidden (or private) settings and perhaps further, less documented interfaces with unusual naming styles. When a tool pulls the command side together into a more complete and consistent, documented interface, this becomes an Application Programming Interface (API).

The limiting factor in tool flow integration is often the quality, consistency and scope of the API and the inherent ability of the tool to support rapid surgical interventions on critical parts of a design – regardless of whether it’s a single or multi-vendor flow.

These are often determined in the initial tool architectural design when the core data structures and envisaged use models are considered (otherwise they’re shoehorned back in much later). As so often in engineering, it’s the interfaces that are critical. As these are so critical for point tools, they often get more attention. You soon learn what type of tool you have from the consistency of the interfaces (and single-vendor flows may not yet be quite as streamlined as we might assume).

The white paper goes into more details about how this is implemented (Figure 1). Calibre functionality is added to a layout tool through both customisation of the GUI menus and a direct interface to the Calibre API through the layout tool scripting language.

Individual design groups can use an off-the-shelf Calibre integration (EDA vendors can do this through the Siemens EDA Partner Program) or quickly and easily fine tune an integration for their exact flow needs.

Putting this into practice, Calibre RealTime can provide on-demand signoff DRC checking with the signoff rule deck, giving designers a significant run time and productivity gain over running separate Calibre DRC checks.

Another application in use today is running Calibre SmartFill within the layout flow to get more accurate parasitics earlier in the flow.

Summary

There are many cases where design flows benefit from closer tool integration, and as we optimise flows to run checks exactly where and when we want them, we’ll likely need more and tighter interaction between what we used to think of as separate tools in a waterfall flow. But getting there requires determined effort from EDA vendors to improve tool usability with interfaces like APIs.

The Calibre RealTime interface shows what’s possible here. It’s being widely used in all major flows (Synopsys Fusion Compiler, Cadence Innovus and Virtuoso, as well as many others).

Find out more in the original white paper here:

https://resources.sw.siemens.com/en-US/white-paper-open-apis-enable-integration-of-best-in-class-design-creation-and-physical

Related Blogs and Podcasts

The Siemens EDA website contains a wealth of further material in white papers and videos:

https://eda.sw.siemens.com/en-US/ic/calibre-design/

This paper looks at the related importance of tool ease of use, again from a Calibre perspective:

https://blogs.sw.siemens.com/calibre/2022/03/16/ease-on-down-the-roadwhy-ease-of-use-is-the-next-big-thing-in-eda-and-how-we-get-there/

You can also learn more about Calibre here:

https://www.youtube.com/@ICNanometerDesign

Also Read:

An Update on HLS and HLV

Cracking post-route Compliance Checking for High-Speed Serial Links with HyperLynx

Calibre: Early Design LVS and ERC Checking gets Interesting


How an Embedded Non-Volatile Memory Can Be a Differentiator

by Kalar Rajendiran on 12-22-2022 at 6:00 am


Embedded memory makes computing applications run faster. In the early days of the semiconductor industry, the desire to utilize large amounts of on-chip memory was limited by cost, manufacturing difficulties and technology mismatches between logic and memory circuit implementations. Since then, advancements in semiconductor manufacturing have been bringing on-chip memory costs down.

Fast forward to today: applications such as AI, machine learning, mobile and other low-power applications have been fueling demand for large amounts of embedded memory. A challenge with SRAM-based memory processing elements is that they consume a lot of power, which many of the above-mentioned applications cannot afford. In addition, many existing embedded non-volatile memory (NVM) technologies such as flash face challenges as the process node goes below 28nm, due to additional material layers and masks, supply voltages, speed, read & write granularity and area.

Resistive RAM (ReRAM or RRAM) is a promising technology specifically designed to work in finer-geometry process nodes where charge-based NVM technologies face challenges. It is true that ReRAM as a technology has spent many decades in the research phase; for satisfying NVM needs, Flash technology had the edge for many applications down to 28nm.

ReRAM’s manufacturing simplicity makes it easy to integrate into the Back End of Line (BEOL) with only a few extra masks and steps. ReRAM technology enables high-speed, low-power write operations and increased storage density, all critical for AI computing-in-memory applications, as an example.

At the IP-SoC Conference 2022, Eran Briman of Weebit Nano talked about their ReRAM offering and how a wide range of markets and applications could benefit from it.

Who is Weebit Nano?

Weebit Nano is a leading developer of ReRAM-based IP. They license their IP to fabless semiconductor companies and fabs, which manufacture the chips embedding this IP. From their early days in 2015, Weebit Nano has strategically partnered with CEA-Leti to leverage research in NVM, and specifically in ReRAM.

Weebit Nano’s ReRAM Technology

Weebit Nano’s ReRAM technology is based on the creation of a filament of oxygen vacancies in a dielectric material, and is hence called OxRAM. The dielectric layer is deposited between two metal stacks at the BEOL, and by applying different voltage levels a filament is either created, representing a logical 1, or dissolved, representing a logical 0. The technology is inherently resistant to tough environmental conditions, as the information is retained within the stack itself. As a result, OxRAM remains resilient at high temperatures and under exposure to radiation and EM fields. The technology also utilizes materials and tools commonly used in standard CMOS fabs.
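As a toy model of the set/reset behaviour described above – with voltage thresholds and polarities invented purely for illustration, not Weebit parameters – the bit cell can be sketched like this:

```python
# Toy model of an OxRAM bit cell: a filament of oxygen vacancies either exists
# (low resistance, logical 1) or is dissolved (high resistance, logical 0).
# The thresholds and polarities below are illustrative assumptions only.

SET_V = 2.0     # "set" voltage that forms the filament      (assumed value)
RESET_V = -2.0  # "reset" voltage that dissolves it           (assumed value)

class OxramCell:
    def __init__(self):
        self.filament = False  # starts dissolved: logical 0

    def apply(self, volts: float) -> None:
        if volts >= SET_V:
            self.filament = True    # filament formed -> logical 1
        elif volts <= RESET_V:
            self.filament = False   # filament dissolved -> logical 0
        # Small voltages (e.g. a 0.2V read) leave the state untouched,
        # which is why the stored bit survives with no power applied.

    def read(self) -> int:
        return 1 if self.filament else 0

cell = OxramCell()
cell.apply(2.0)        # program a 1
print(cell.read())     # 1
cell.apply(0.2)        # a read-level voltage does not disturb the bit
print(cell.read())     # 1
cell.apply(-2.0)       # erase back to 0
print(cell.read())     # 0
```

The design point the toy captures is that state lives in the material stack itself rather than in stored charge, which is what makes the cell non-volatile and environmentally robust.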

The resulting Weebit Nano-based NVM solution is also very cost-effective, requiring only two additional masks compared to around 10 additional masks for embedded Flash. It is also power-efficient: programming can be done below 2V, compared to around 10V for embedded Flash, and during operation memory reads can be accomplished at 0.2V.

Weebit ReRAM Status/Availability

The technology has been production-ready since November 2022, with wafers manufactured in nodes from 130nm to 28nm to date. Getting to production-ready status required passing the JEDEC industry-standard qualification process for NVM memories, which includes rigorous tests for endurance, retention, retention after cycling, solder reflows, etc., on hundreds of blindly selected dies from three independent wafer lots.

Weebit Nano’s first production manufacturing partner SkyWater recently produced 130nm wafers embedding Weebit’s 256Kb ReRAM module. The dies are now going through the JEDEC qualification process and are available for customers to integrate into a range of target SoCs.

ReRAM: Why Now?

As noted earlier, Flash memory faces scaling limitations beyond 28nm along both cost and complexity dimensions. At the same time, increasing pressure for lower-power, lower-cost solutions is pushing products toward more advanced process nodes. ReRAM technology scales nicely beyond 28nm and fits easily in bulk CMOS, FD-SOI and FinFET processes. It can also support low-power, high-performance, RF CMOS, high-voltage and other process variants. This opens up target markets including mixed-signal, power management, MCUs, Edge AI, Automotive, Industrial and Aerospace & Defense applications. According to Yole, a market research firm, the embedded ReRAM market is projected to grow from less than $20M in 2021 to around $1B in 2027.

Why Weebit Nano ReRAM?

Refer to the following table which highlights how Weebit Nano’s ReRAM IP addresses key requirements of various applications in fast growing markets.

Those looking into designing chip solutions for applications that could benefit from embedded memories should reach out to Weebit Nano to get more insights about their ReRAM solutions.


Regulators Wrestle with ‘Explainability’​

by Roger C. Lanctot on 12-21-2022 at 10:00 am


The letter from the San Francisco Municipal Transportation Authority (SFMTA) to the National Highway Traffic Safety Administration (NHTSA) shines a bright spotlight on a major weakness of current automated vehicle technology – explainability. The letter is in reply to a request for comment from interested parties by NHTSA regarding General Motors’ request for exemptions from traditional safety regulations for GM’s Cruise Automation Origin vehicle to operate driverlessly on public roads.

The letter highlights the performance shortcomings of Cruise’s existing fleet of self-driving vehicles – based on Chevy Bolts – and the agency’s disappointment in Cruise’s responsiveness to multiple concerns that the agency has expressed. While the letter also highlights the clashing jurisdictions of Federal and local authorities, it also raises alarms regarding the potential negative impact that might result from Cruise unleashing hundreds or even thousands of vehicles on San Francisco streets – or, in fact, the streets of any city.

At the core of the SFMTA’s concerns though is not only Cruise’s failure to respond to questions regarding day to day operation of its vehicles or to concerns regarding particular incidents – the letter raises questions as to Cruise’s ability to explain how or why its vehicles are doing what they are doing. This breakdown reflects a shortcoming in artificial intelligence and machine learning technology where users or creators are unable to explain the output of their own algorithms.

The first evidence of this explainability breakdown emerged earlier this year when NHTSA opened investigations into phantom braking incidents plaguing Tesla vehicles operating in Autopilot or Full Self-Driving mode. Tesla vehicles are known to periodically come to a stop on highways – and the company has been unable to either explain or remedy the problem.

This experience echoed the “unintended acceleration” events that struck Toyota vehicles years ago, spurring a Congressional investigation and NHTSA’s outreach to the National Aeronautics and Space Administration (NASA) to try to explain the phenomenon. Of course, the Toyota incidents were not tied to artificial intelligence or machine learning.

The SFMTA letter cites multiple circumstances of Cruise vehicles slowing or stopping mid-block in the flow of traffic for no reason – including situations where emergency responders were impeded. The agency also expressed its unhappiness with Cruise vehicles not pulling out of traffic lanes and over to available curb space to pick up or drop off passengers – as required by law.

The agency further raised questions as to the timeliness of Cruise’s responsiveness in the event of vehicle failures. In these situations there were delays both in making contact with appropriate Cruise personnel and in those personnel coming to rescue inoperable vehicles.

SFMTA’s concerns were elevated by its anticipation of the Bolt-based Cruise vehicles – which are equipped with steering wheels and brake and accelerator pedals – being replaced with much larger Cruise Origin vehicles that lack such manual vehicle controls. In fact, the lack of those controls is the motivation for GM’s request for regulatory waivers for as many as 5,000 Cruise Origin AVs.

Cruise personnel can easily reposition or remove Bolt-based AVs, but Origin vehicle failures are expected to require the involvement of flatbed or tow trucks to remove or reposition the vehicles.

One of the SFMTA’s greatest concerns, though, expressed early in the letter, is the anticipated impact of steering wheel-less AVs operating in San Francisco in substantial numbers. According to the SFMTA’s own research, the introduction of a total of 5,700 Uber and Lyft vehicles six years ago was responsible for 25% of all travel delays in the city at that time.

Cruise’s current driverless fleet operating in San Francisco consists of considerably fewer than 100 vehicles. The company had logged fewer than 20,000 miles of autonomous operation through May 22, according to SFMTA.

Cruise’s failures to adequately respond to local regulatory authorities and/or to explain the failure or idiosyncratic functioning of its vehicles marks an important turning point for the AV industry. SFMTA reported a significant uptick in 911 calls to emergency responders in connection with the erratic behavior or apparent failures of unmanned Cruise vehicles – even at their currently low on-road volume.

Cruise has made some efforts to reach out to the public with marketing messages and to try to explain itself, its operations and its goals. The more salient authority that is in desperate need of this kind of outreach is the SFMTA and local emergency responders who have been forced to cope with the evolving operational shortcomings of Cruise vehicles and the public’s reaction to them.

It’s worth noting that few such complaints, and little such pushback, have arisen from the operation of Waymo’s AVs. Waymo has thus far been operating traditionally equipped, regulatory-compliant vehicles that do not require waivers from NHTSA.

With its letter, the SFMTA posits a nightmare scenario where Cruise might – on its own – decide to introduce hundreds or thousands of its driverless AVs on the streets of San Francisco. The waiver request from GM to NHTSA forces the SFMTA to ponder the impact of such a prospective deployment.

For the average San Francisco native, the letter suggests that it might be time to put the brakes on all robotaxi activities until and unless the city decides that robotaxis are indeed a desired transportation objective. One hint as to the unlikelihood of this is the set of additional objections and concerns expressed by the SFMTA regarding accommodations for residents with disabilities. It was only after a major regulatory and legal tussle that the SFMTA was able to obtain appropriate concessions from Uber and Lyft for such residents.

In the end, Cruise needs to come clean and clean up its act. And the SFMTA has now raised questions that all municipalities must ask: Do we want robotaxis? How do we want them to operate? And how many are we prepared to accommodate?

Letter from the SFMTA: https://regmedia.co.uk/2022/09/26/letter_to_nhtsa.pdf

Also Read:

U.S. Automakers Broadening Search for Talent and R&D As Electronics Take Over Vehicles

Super Cruise Saves OnStar, Industry

The Truly Terrifying Truth about Tesla


Validating NoC Security. Innovation in Verification

by Bernard Murphy on 12-21-2022 at 6:00 am


Network on Chip (NoC) connectivity is ubiquitous in SoCs, and therefore an attractive attack vector. Is it possible to prove robustness against a broad and configurable range of threats? Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Towards the Formal Verification of Security Properties of a Network-on-Chip Router. The authors presented the paper at the 2018 IEEE ETS and are/were at the Technical University of Munich in Germany.

NoCs are the preferred connectivity fabric for modern SoCs. In mesh form, NoCs are fundamental to arrayed processor architectures for many-core servers and AI accelerators. Given this, mesh NoCs are a natural target for direct and side-channel software-based attacks. Further supporting recent interest, Google Scholar shows nearly 2k papers on NoC security for 2022.

Use of formal methods as presented here is appealing: mesh structures are regular, and the approach proposed is inductive (or uses a related algorithm), which should imply scalability. The authors illustrate with a set of properties to check for security against data alteration, denial of service and timing side-channel attacks.

Paul’s view

This is a perfect paper to close out 2022; a tight and crisp read on an important topic. Table II on page 4 is the big highlight for me, where the authors beautifully summarize formal definitions of security attacks on a NoC with a hierarchy of just 21 concise properties covering data modification, denial-of-service, and side-channel (interference, starvation) attacks. These definitions can be pulled out of the table and plugged directly into any commercial model checking tool, or implemented as SystemVerilog assertions for a logic simulation or emulation run.

Since the paper was published in 2018 we have seen NoC complexity rise significantly, with almost every SoC from mobile to datacenter to vehicle deploying some form of NoC. At Cadence we are seeing model checking become an industry must-have for security verification. Today, even complex speculative execution attacks like Spectre and Meltdown can be effectively formulated and proven with modern model checking tools.

Raúl’s view

This month’s paper tackles Network on Chip (NoC) vulnerabilities by defining a large set of security-related properties of NoC routers. These are implementation-independent. Subsets of these properties provide specific security and correctness checks; for example, avoiding false data in a router comes down to 1) data that was read from the buffer was at some point in time written into the buffer, and 2) during multiplexing, the output data is equal to the desired input data (properties b5 and m1 in Table II and Fig. 3). Nine such composite effects of the properties are listed; in addition to avoiding false data (above), examples include buffer overflow, data overwrite, packet loss, etc.
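As a rough paraphrase (not the paper’s exact formulas), the two properties above can be written in temporal logic, where \(\mathbf{G}\) means “globally” and \(\mathbf{O}\) is the past-time operator “once”:

```latex
% b5 (roughly): any data read from the buffer was written at some earlier time
\mathbf{G}\,\bigl(\mathit{read}(d) \rightarrow \mathbf{O}\,\mathit{write}(d)\bigr)

% m1 (roughly): while multiplexing, the output equals the selected input
\mathbf{G}\,\bigl(\mathit{sel} = i \rightarrow \mathit{out} = \mathit{in}_i\bigr)
```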

These 21 properties are formalized in Linear Temporal Logic (LTL) and checked using the Phaeton framework [25,26], which uses an unbounded model checking solver. The actual model checked is the router synthesized together with a verification module; the latter includes the properties to be verified and acts as a wrapper to read and write the router’s inputs and outputs. All properties could be verified in between 142 and 4185 seconds on a small Intel i5 CPU with 4GB of memory.

To show the effectiveness of the approach, six different router implementations were used. Examples include round-robin and time-division multiplexing, as well as trojan versions such as Gatecrasher, which issues grants without any request. Time-division multiplexing turns out to be the only implementation protected against all threats, i.e., satisfying all 21 properties. Table III summarizes the results.
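To see why a trojan like Gatecrasher trips this family of properties, here is a toy bounded check of one of them – “a grant is only issued in response to a pending request” – against two simplified arbiters. These are stand-ins invented for illustration, not the paper’s router implementations or the Phaeton flow:

```python
# Toy bounded check of one arbiter property: every grant must correspond to a
# pending request. The arbiters are simplified stand-ins, not the paper's RTL.

from itertools import product

def round_robin(requests, state):
    """Grant the next requester after `state`, in circular order."""
    n = len(requests)
    for offset in range(1, n + 1):
        i = (state + offset) % n
        if requests[i]:
            return i, i          # (grant, new state)
    return None, state           # no request -> no grant

def gatecrasher(requests, state):
    """Trojan arbiter: issues a grant even when nobody asked."""
    return state % len(requests), state + 1

def grants_imply_requests(arbiter, n=2, steps=3):
    """Exhaustively enumerate all request sequences and check every grant."""
    for seq in product(product([0, 1], repeat=n), repeat=steps):
        state = 0
        for requests in seq:
            grant, state = arbiter(list(requests), state)
            if grant is not None and not requests[grant]:
                return False     # counterexample found: spurious grant
    return True

print(grants_imply_requests(round_robin))   # True
print(grants_imply_requests(gatecrasher))   # False
```

A real model checker proves such properties over unbounded time rather than enumerating short sequences, but the failure mode it catches is the same: the trojan grants access with no corresponding request.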

For someone well versed in formal verification the paper is not difficult to follow, but it is definitely not self-contained or an easy read. It is a nice contribution towards secure NoCs (and router implementations). The correctness and security properties can be verified with “any other verification tool supporting gate-level or LTL model checking”, e.g., commercial model checkers.