
DUV, EUV now PUV Next gen Litho and Materials Shortages worsen supply chain

by Robert Maire on 04-01-2022 at 8:00 am


-New PUV light source will push litho into the Angstrom era
-Rare earth element shortages add to supply chain woes
-Could strategic wafer reserve releases lower memory pricing?
-Can we cut off/turn off Russian access to chip equipment?

DUV, EUV and now “PUV” to become next generation lithography

Lithography is the locomotive that pulls the entire semiconductor industry along the Moore’s Law curve to ever smaller feature sizes, using ever shorter wavelengths of light to print smaller and smaller transistors. Light is the paintbrush, and shorter wavelengths are akin to finer and finer brushes.

The industry has gone from using “g-line” (436nm) and “i-line” (365nm) light to DUV, deep ultraviolet light generated by KrF (248nm) and ArF (193nm) laser sources. We have now transitioned to the smaller “paintbrush” of EUV, or extreme ultraviolet, at 13.5nm.

The industry is already forced to use optical tricks to print 5nm- and 3nm-node features from 13.5nm light, much as with prior nodes, but that approach is running out of gas, so we have to move to an even shorter wavelength than EUV for future nodes.
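The “finer paintbrush” intuition can be made concrete with the Rayleigh criterion, CD ≈ k1 · λ / NA. A minimal sketch in Python (the k1 and NA values here are illustrative assumptions, not any vendor’s specifications):

```python
# Rayleigh resolution sketch: smallest printable half-pitch
# scales with wavelength over numerical aperture.
# k1 and NA values below are illustrative assumptions.

def min_feature_nm(wavelength_nm: float, na: float, k1: float = 0.3) -> float:
    """Approximate smallest printable half-pitch per the Rayleigh criterion."""
    return k1 * wavelength_nm / na

# ArF immersion DUV: 193nm light, assumed NA ~1.35
duv = min_feature_nm(193, 1.35)
# EUV: 13.5nm light, assumed NA ~0.33
euv = min_feature_nm(13.5, 0.33)

print(f"DUV ~{duv:.0f} nm, EUV ~{euv:.0f} nm")
```

Halving the wavelength again (as a 6.5nm source would) halves the minimum feature for the same optics, which is the whole appeal of shorter-wavelength light.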

PUV is the next generation of lithographic light, at 6.5nm. This is still considered “soft X-ray,” so we can use current lens and reticle technology, as we are not yet into the “tender” or “hard” X-rays of sub-nanometer wavelengths.

One of the key advantages of PUV is that the light source is polarized (hence the name “P”UV) which allows for more accurate printing. In fact the proposed litho tools will have two light sources of opposing polarity (P & S) for higher contrast.

The other nickname for the new technology is “Plaid” which also indicates the cross hatching of crossed polarity printing.

Plaid UV (PUV) will be generated by a new methodology rather than the current laser-shot tin droplet (LPP) source. The new source will require short but very, very high spikes of power, generating almost fusion/fission-like conditions. This will require battery power technology, as local power grids already strain under EUV power requirements.

It is rumored that Intel is working with Tesla as the supplier of the battery technology for Plaid UV as it has significant experience in this area.

Plaid could accelerate Intel past TSMC in race to leading edge

If it is indeed Intel working on PUV, and they are successful, it could be the secret weapon that accelerates them past TSMC in the race to the leading edge of technology. Not only does Plaid offer much smaller feature sizes and higher contrast, it also offers higher throughput (speed) thanks to its much higher-power light source.

In addition, there is the potentially huge benefit that existing EUV tools could be retrofitted with Plaid sources, as the EUV lenses and reticles are compatible (in theory). All of this makes Plaid well beyond Ludicrous…

Demonstration of Plaid Technology

Rare materials shortages add to supply chain woes in chip industry

The conflict in Ukraine has had far-reaching impact around the globe as international trade has slowed and may grind to a halt between East and West. This is of key concern for rare earth elements and especially noble gases, of which Ukraine was a large exporter, as production has essentially stopped.

Neon, which is used in lithography tools, seems to be in adequate supply for the time being but could run short over time. Xenon, potentially used in future EUV reticle inspection tools, is also hard to find.

Rare earth elements come primarily from China, with some from Russia and other countries with lower-cost labor and lax mining regulations. Yttrium (Y) is used not only in LEDs but also as a protective lining inside deposition and etch chambers. Neodymium (Nd) is used in magnets in electronics.

Phlebotnium (Ph) is a key nanotechnology ingredient mechanically applied to enhance wafers. Unobtainium (Uo) is high on the periodic table and is used in matter/anti-matter experiments due to its instability and isotope half life. It is currently one of the key raw materials in advanced, atomic layer, selective deposition and etch tools due to these unique, otherworldly, characteristics.
Alternate sources of these rare materials need to be developed quickly. The US used to have rare earth mines, but few if any are left. A new mined source of Unobtainium is Pandora, but it will take time to develop and may require help from Elon Musk as well, for transport.

It’s not just money and fabs that are needed for US self-sufficiency in chips; it’s elusive, hard-to-find, rare materials as well.

Strategic Wafer Reserves may be released to alleviate shortages

The US recently announced that it is releasing a million barrels of oil a day from its strategic reserves. It is not widely known, but there is also a strategic reserve of semiconductor wafers. Obviously it would be difficult to store chips of every type and vintage, but common devices, mainly memory and CPU types, are stockpiled by the defense electronics agencies. This is especially important as many semiconductor devices used in defense applications were discontinued long ago.

While not huge, a release of strategic wafer reserves could help in some areas in which chips are in short supply. Rationing of those released devices will obviously be on a prioritized basis for the applications that need them most.
Memory pricing could take a hit if stockpiles of memory chips are released too quickly into the market, but that would also have the effect of lowering prices and inflation in semiconductors in general.

Is there a “kill switch” built in to semiconductor equipment?

Russia does not have much in the way of a semiconductor industry. Unlike China, Russia has not figured out the critical importance that the semiconductor industry plays in today’s world. It is perhaps the only remaining manufacturer of vacuum tubes, because much of its electronics still uses them. There are a handful of ancient fabs in Russia producing 10- and 15-year-old technology semiconductors, which is probably right in line with the current state of the art of Russian electronics as well.

Pretty much all of the semiconductor equipment in Russia comes from the West. We wonder if any of that equipment has remote “diagnostic” capabilities. Could it be disabled or turned off remotely? Or could suppliers just stop shipping spares and consumables?

It’s no wonder TSMC is so paranoid about the tools in its fabs. USB sticks are not allowed in, and no live internet connections are allowed in the fab for fear of remote hacking.

Maybe tool makers should install wireless backdoors into their equipment for non-paying customers…. or maybe they already have… you never know.

Happy April First!!!!

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read

AMAT – Supply Constraints continue & Backlog Builds- Almost sold out for 2022

Intel buys Tower – Best way to become foundry is to buy one

KLAC- Great quarter and year – March Q is turning point of supply chain problem


CEO Interview: Kelly Peng of Kura Technologies

by Daniel Nenni on 04-01-2022 at 6:00 am

Kelly Peng, Co-founder, and CEO of Kura Technologies

This interview is with Kelly Peng, Co-founder and CEO of Kura Technologies. Kura Gallium, Kura’s first product, was named Best of CES 2022 and received a 2022 CES Innovation Award as well. Kelly is an inventor, engineer, and entrepreneur who leads a team of dedicated innovators redefining the term “Augmented Reality”. She was a recipient of the prestigious Forbes “30 Under 30” in 2019 for her work in developing the most advanced augmented reality glasses, which will start sampling in 2022. Based in Silicon Valley, Kura Technologies is eclipsing the competition in field of view, resolution, brightness, transparency, depth of field, size, and other critical metrics.

Web3, and in particular Augmented Reality, is a hot topic currently. Can you help our readers understand what differentiates Kura from the many other early-stage companies in this market?

As you have indicated, interest in Virtual Reality and Augmented Reality is currently expanding in all directions. The fact is that we are experiencing a rapid change in the way AR and VR will be deployed in the very near future. These changes will open up new markets that were inconceivable a few years ago and will allow us to interact with information in the world around us in practical and interesting ways. The emerging applications will enable new activities like medical diagnosis and treatment, training, 3D design visualization, industrial inspection, and face-to-face virtual communications, all through a pair of glasses.

To facilitate this, Kura has focused our technology development to create a visualization system that is as natural as wearing glasses, and allows the wearer to experience the enhanced content necessary to optimize the desired reality. The Kura Gallium is the first pair of AR glasses to offer a 150-degree full frame field of view, 95% transparency, 8K resolution, unlimited range of depth, and many other features that provide the seamless view of the natural and augmented surroundings often referred to as the “Metaverse”.

What is the status of the glasses that were demonstrated at CES in January?

We showed demos with the world’s biggest field of view in AR, high transparency, and high brightness; newly assembled eyepieces for our upcoming dev kits; and software applications, including 3D model viewers and telepresence and remote collaboration tools, running on our headset with 9-degrees-of-freedom head tracking and gesture input. The team is focused on a couple of major developments on the hardware and telepresence platform side of Gallium. One of the larger projects is the development of the ASIC that forms the backbone of the system electronics. Custom ASICs and silicon were taped out last fall at some of the world’s biggest production foundries, and we are pleased to announce that the silicon has already come back from the fab and has been packaged. We have been testing and running characterization recently, and the results look great. The tape-out is a big success!

Can you provide some more details about the ASIC?

Yes, the internal code name for the chip is “Mill’s Creek”. The chip incorporates the control and driver circuits for the micro-LED displays; it is the world’s fastest micro-LED display driver ASIC and the core enabler for our 8K resolution. This is one of the most critical components in the system because it provides more than 100x resolution expansion, on-the-fly pixel repair, high dynamic range, and full-color images.

The Augmented Reality user experience is dependent on high-quality display capabilities. The Gallium glasses use fully customized micro-LED displays to create the brightness and image sharpness that people expect. However, the micro-LED technology is not optimized unless the display driver and our optical architecture are optimized specifically for the application. The “Mill’s Creek” ASIC is fully customized specifically for the Kura Micro-LED display hardware with a completely unique architecture, which is unlike practically all of the other headsets that use off-the-shelf components for the display driver. This successful tape-out is a big milestone for us toward pushing Gallium to production.

Startup companies are all about the team of engineers and innovators that are driving the development. Can you give us a little more insight into the team?

Kura is currently made up of over 35 people, all contributing to product development. We have been very fortunate to pull together a great mix of talent. More than half of Kura’s founding and leadership team are from MIT, and 3 of our lead engineers together hold 400+ patents. Kura’s in-house ASIC design team is led by Mark Flowers, Kura’s Director of Technology, who was previously the founding CTO of Leapfrog (which IPO’d and was valued at $1B+). He has over 30 years of leadership experience designing and delivering custom mixed-signal ASICs. In the past, he was responsible for the shipment of tens of millions of customized chips as well as integrated consumer and enterprise platforms and products. In an earlier startup, Mark was co-inventor of DSL, with over 800 million installed lines; that company was acquired by Texas Instruments. He graduated from MIT with a Master’s and Bachelor’s in electrical engineering with a specialty in IC design and computer science.

We are also fortunate to have a strong operations organization. That team is led by Gregory Gallinat, our COO, and Chuck Alger, Director of Supply Chain and Manufacturing. A core focus of the Ops team is defining and facilitating the worldwide supply chain and business practices to support Kura’s product launch and growth trajectory. Chuck has more than 20 years of experience with Intel, Microsoft, and CP Display, including multiple manufacturing site launches for products like HoloLens and Surface. He also has extensive expertise in semiconductor quality and reliability from his time at Intel. Chuck also worked as Director of Supply Chain at Compound Photonics (just acquired by Snap), a company building ASICs for driving micro-displays like micro-LED and LCoS.

What’s the demand and upcoming adoption of Kura Gallium?

We are a platform company poised to reshape the landscape of AR. Interest in Kura Gallium has been fantastic over the last year. The CES awards and the exposure we received through various other venues have opened up the path to adopting the platform in a broad range of applications. Invitations to speak at conferences such as SPIE have exposed our thought leadership to this market. Kura currently has orders from over 350 companies, 100% of which are in-bound; over 50 of those companies are in the Fortune 500, and total order requests from paid Fortune 500 companies exceed 100K units. These companies recognize the superiority of our performance and plan to use our product and platform in areas such as remote collaboration, telepresence, virtual showrooms, training, entertainment, and tele-medicine.

Many of these clients have also become investors in Kura. We also have several active projects with government agencies that see Augmented Reality as a critical technology for training, visualization, and remote collaboration and assistance. As you can see, the need for AR in various enterprise applications is very high now, and we see many of these adopting our product quickly, as clients repeatedly express that they want the headset deployed as soon as possible. The CEO of Tokens.com, a publicly traded company that invests in Web3 assets, recently said in an interview on CNBC that “within the next 24 months all major companies will have a presence in the Metaverse like they have a website.”

Your initial focus seems to be on the enterprise and B2B2C side. When will consumers be buying Kura Gallium?

As with most emerging technologies, the early adopters are in the enterprise market. The enormous benefit of real-time information augmenting the user’s forward-looking view can be realized quickly in many industries. We are launching our hardware + software platform (global holographic telepresence platform, computer vision/AI SDK, and AR data platform), and many of our clients are industry leaders or some of the biggest companies in the world in automotive, training, design, telecommunication, entertainment, etc.

AR is an industry whose demand has long been waiting for a product with acceptable visual quality in a good form factor. Kura’s product and platform serve the biggest demand in the industry and greatly expand the range of use models. Not to mention, Kura’s performance, combined with the comfort of our first product, already out-competes all of today’s “consumer-targeted” AR glasses and solutions. The consumer demand is there and will grow rapidly following enterprise adoption, with a rich set of applications like the App Store. We have already designed many of the core technologies for our future generations of products, which will launch for both consumers and enterprises with improved performance in an even more compact form, with a deeper level of silicon integration.

Also read:

CEO Interview: Aki Fujimura of D2S

CEO Interview: Frankwell Lin, Chairman and CEO of Andes Technology

CEO Interview: Tamas Olaszi of Jade Design Automation


AMIQ EDA Adds Support for Visual Studio Code to DVT IDE Family

by Kalar Rajendiran on 03-31-2022 at 10:00 am

State machine diagram in DVT IDE for VS Code

“A picture is worth a thousand words” is a widely known adage across the world. Recognizing patterns and cycles becomes easier when data is presented pictorially. Naturally, data visualization technology has a long history from the early days, when people used a paper and pencil to graph data, to modern day visualization platforms. While visualization products have gotten fancier, driven by the data age we are in, the semiconductor industry was among the early industries that needed them. Electronics is all about signals and waveforms. It is easier to comprehend and analyze that data graphically than in the form of a table of data points. While Microsoft Excel has always offered visualization through its graphing feature, visualization solutions received broad market attention after Tableau introduced its visualization platform in 2003.

Visual Studio (VS)

Over the last couple of decades, rapid advances in software have led to the introduction of integrated development environment (IDE) platforms. While there are many development platforms available to software developers, Eclipse and Visual Studio are two well-known and widely used IDEs. Is an IDE platform a visualization platform per se? Well, the platform itself is an environment that enables visualization of all sorts through the various specific tools that work within it. The platform makes this possible through the ongoing addition of extensions that interface to various analysis and visualization tools.

So, why is Visual Studio called Visual Studio? And does that mean Eclipse IDE is not a visualization platform? The Visual Studio name has Visual Basic to thank: the developer GUI of Visual Basic earned it the name almost three decades ago. While the development environment has expanded since then, Microsoft has kept the “Visual” prefix for its modern-day IDE. Eclipse IDE is also a visualization platform, even though it does not have “visual” in its name.

Visual Studio (VS) Code

Visual Studio Code is a source code editor developed by Microsoft for Windows, Linux, and macOS. Despite the name, it is a standalone editor rather than part of the Visual Studio IDE, and it includes support for debugging, embedded Git control, and the IntelliSense feature. IntelliSense is a code-completion feature that is context-sensitive to the language being edited. VS Code is also customizable via the editor’s themes, keyboard shortcuts, and preferences. In November 2015, Microsoft released the VS Code source code under the MIT License and posted it on GitHub.

A large ecosystem of extensions and themes is available for VS Code, making it a very popular editor. Its open-source nature also attracts a large section of the developer community. The editor’s speed is not shabby either.
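The customization mentioned above is driven by plain JSON preference files. As an illustration, a user-level settings.json might look like the following (the keys are standard VS Code settings; the values are arbitrary examples, not recommendations):

```json
{
  "workbench.colorTheme": "Default Dark+",
  "editor.tabSize": 2,
  "editor.rulers": [80, 120],
  "files.trimTrailingWhitespace": true
}
```

Extensions such as AMIQ EDA’s DVT IDE hook into this same mechanism, contributing their own settings and commands alongside the editor’s built-in ones.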

Semiconductor Design and Verification

Design and verification of semiconductors involves code development too. While VHDL and Verilog are not standard software languages, they are programming languages nonetheless. Design and verification tasks can benefit from an IDE just as software coding and testing do. As such, there has been interest in, and a push for, IDE offerings that support the semiconductor community.

AMIQ EDA

AMIQ EDA provides software tools that enable hardware design and verification engineers to improve productivity and reduce time to market. Prior to its spinoff from AMIQ Consulting, the team had observed three recurring challenges semiconductor companies faced: developing new code to meet tight schedules, understanding legacy or poorly documented code, and getting new engineers up to speed quickly. In the software world, IDEs are commonly used to overcome such challenges. But in the early 2000s, no IDE was available for design and verification languages such as Verilog, VHDL, e and SystemVerilog. So, they developed an IDE for internal use.

In 2008, AMIQ EDA was spun off from AMIQ Consulting and Design and Verification Tools (DVT) Eclipse IDE was launched. They launched DVT Debugger in 2011, Verissimo SystemVerilog Testbench Linter in 2012, and Specador Documentation Generator in 2014. As a company that strongly believes in user-driven development and building solutions based on real-life experiences, they recently launched DVT IDE for VS Code. You can read their press announcement here.

AMIQ EDA’s products help customers accelerate code development, simplify legacy code maintenance, speed up language and methodology learning, and improve source code reliability.

DVT IDE for VS Code

DVT IDE for Visual Studio Code (VS Code) is an integrated development environment (IDE) for SystemVerilog, Verilog, Verilog-AMS, and VHDL. The DVT IDE consists of a parser, the VS Code (editor), an intuitive graphical user interface, and a comprehensive set of features that help with code writing, inspection, navigation, and debugging. It provides capabilities that are specific to the hardware design and verification domain, such as design diagrams, signal tracing, and verification methodology support. The VS Code platform’s extensible architecture allows the DVT IDE to integrate within a large extension ecosystem and work flawlessly with third-party extensions.

DVT IDE for VS Code shares all its analysis engines with DVT Eclipse IDE, which has been field-proven since 2008. The product enables engineers to inspect a project through diagrams. Designers can use HDL diagrams such as schematic, state machine, and flow diagrams. Verification engineers can use UML diagrams such as inheritance and collaboration diagrams. Diagrams are hyperlinked and synchronized with the source code and can be saved for documentation purposes. Users can easily search and filter diagrams as needed, for example visualizing only the clock and reset signals in a schematic diagram. Both tools also offer important non-visualization features such as hyperlinked navigation, auto-complete, code refactoring, and semantic searches for usages, readers, and writers of signals and variables.

For a couple of screenshots showing the DVT IDE for VS Code in action, refer to the Figures below.

DVT IDE for VS Code is available for download from the VS Code marketplace. For more details, refer to the product page.

Also read:

Automated Documentation of Space-Borne FPGA Designs

Continuous Integration of RISC-V Testbenches

Continuous Integration of UVM Testbenches


Shift left gets a modulated signal makeover

by Don Dingee on 03-31-2022 at 6:00 am

Modulated signals uncover combined effects in a shift left approach

Everyone saw Shift Left, the EDA blockbuster. Digital logic design, with perfect 1s and 0s simulated through perfect switches, shifted into a higher gear. But the dark arts – RF systems, power supplies, and high-speed digital – didn’t shift smoothly. What do these practitioners need in EDA to see more benefits from shift left? Higher fidelity behavioral models. Authentic waveforms. Fast, accurate simulation schemes. And, looking forward, components with an “executable datasheet,” reproducing physical results when simulated in various contexts. Let’s go inside Keysight’s strategy in this series on how shift left gets a modulated signal makeover.

Unlocking more value across the ecosystem

The fundamental value of shift left – in fact, the whole purpose of EDA tools – is earlier visibility for design teams in virtual space. Waiting until problems show up in hardware drives up cost and risk and takes away flexibility. Earlier virtual validation reduces hardware re-spins with their trial-and-error. Teams can explore architectural options and avoid over-design trying to protect margins.

But shift left has the potential to unlock more value by pulling the ecosystem together. Project “visibility” improves when predictions show how close a design is to performance goals, and what remains to be done. Virtual sampling in the customer’s and the customer’s-customer contexts would help vendors demonstrate parts quickly and close sales faster.

Ultimately, shift left sets the stage for digital twins, enabling physical engineering activities to move to virtual space. For example, design teams and end customers would be able to gauge difficult physical experiences, like a satellite in orbit with interference, sun loading, jamming, and other dynamics. Digital twins need unprecedented levels of accuracy and trust in EDA tools.

Paving a two-way path for waveforms and data

When electromagnetic (EM) behaviors appear, these higher value wins only happen if modulated signals come to life in virtual space. Consider the case of a power amplifier (PA) designed using S-parameters and simulated with sine waves. Basic validation of a PA might fall apart when a team designs a hardware prototype around it and applies complex modulation like 5G or Wi-Fi 7.

Think about it this way: if testing with modulated signals is a given for hardware, why is it optional for simulating a design? The answer is most RF EDA workflows are one-way. They lack any path for bringing physical measurements back through simulation, either as a stimulus waveform or as improvements to behavioral modeling parameters. Bringing real-world effects back into a math-based model is at best applying a fudge factor with who knows what margin. When data and models don’t line up, system-level experiences get lost in translation.

Now consider Keysight’s vision for a two-way path with high-fidelity transportable models in a common modeling language. A device under test comes packaged with authentic waveforms and enhanced modeling data – an executable that goes beyond a datasheet. A single workflow connects design and simulation from detailed componentry to world-level scenario planning, enabling regression not possible with discontinuous point-based tools, waveforms, and data.

An example of why modulation is not an option

When shift left works for EM design, workflows must be two-way, and modulation is not an option. An example is error vector magnitude (EVM), a reliable figure of merit in wireless applications.
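As a rough illustration of the metric itself, EVM is the RMS error between measured and ideal constellation symbols, normalized to the reference power. A minimal sketch with made-up QPSK points (not from any real measurement):

```python
# EVM sketch: RMS vector error between measured and ideal
# constellation symbols, normalized to reference power.
# The symbol values below are toy numbers for illustration.
import math

def evm_percent(measured, ideal):
    """RMS EVM (%) over pairs of complex symbols."""
    err = sum(abs(m - i) ** 2 for m, i in zip(measured, ideal))
    ref = sum(abs(i) ** 2 for i in ideal)
    return 100.0 * math.sqrt(err / ref)

ideal = [1+1j, 1-1j, -1+1j, -1-1j]  # QPSK reference points
measured = [1.05+0.98j, 0.97-1.02j, -1.02+1.04j, -0.96-0.95j]
print(f"EVM = {evm_percent(measured, ideal):.1f}%")
```

EVM is often quoted in dB as 20·log10 of the RMS fraction; either way, effects like PA compression and memory push the measured points away from their ideal locations and the number up.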

PA designers do an excellent job of focusing on power-dependent parameters, such as non-linearity and peak-to-average power ratio (PAPR). EVM outcomes rely on other important effects at work. Quickly varying effects determine performance in power, frequency, and time domains. Slower varying effects alter performance around temperature, load, and bias.

For example, a complex waveform can set off time-dependent memory effects that present as self-heating problems. Or wideband operation uncovers impedance mismatches at some points. Shouldn’t designers see those problems developing virtually, before getting surprised by measurements on a hardware prototype, or observations from a distant deployed platform?

Most EDA approaches have models that look at one effect at a time. Modern EM problems have more interrelated dimensions and demand simulation of combinations of effects. Keysight provides a unique approach, blending EDA with test and measurement expertise, to see deeper. Modulated signals are the key to bringing the customer into the design environment. That requires stronger simulation engines, higher-fidelity models, multiple effects in play, and one workflow from schematic capture through validation.

How does this work? Here’s a new Keysight video on modulated signals in EVM performance, using test and measurement insights and simulation techniques to get faster results on complex waveforms with combined effects.

Beyond the “choose your own compliance adventure”

A final point on changes in the complex EM systems business. Systems now have carefully defined waveforms from industry specifications. Choosing your own compliance adventure with some random stimulus won’t get the job done. Teams using shift left with modulated signals in a Keysight EDA environment can demonstrate with confidence that a virtual design hits requirements before hardware materializes.

It’s true not only for RF communications systems, but also for adjacent markets. Switched-mode power supply design must hit EMI compliance profiles in many markets. High-speed digital design is now dictated by specifications like PCIe Gen 5 and DDR5. Shift left brings those specifications and their waveforms into view much sooner in the design cycle.

System complexity shouldn’t be left to the reader to solve alone. Keysight has invested in creating EDA tools and workflows for RF system design with world-class measurement science built in. Over the next installments in this series, we’ll see more detailed examples in Keysight’s EDA solutions for shift left with modulated signals. Next up: digital pre-distortion design.

Also read:

WEBINARS: Board-Level EM Simulation Reduces Late Respin Drama

Synopsys Announces FlexEDA for the Cloud!

by Daniel Nenni on 03-30-2022 at 10:30 am


There’s been a lot of discussion and hype regarding use of the cloud for chip design for quite a while, more than ten years I would say. I spoke with Synopsys to better understand their recent Synopsys Cloud announcement to determine if it is different. Briefly, it is different, and here is why:

If you’re trying to design a complex SoC, or more like a system-in-package today, you have many hurdles to negotiate regarding the design infrastructure required. Things like:

  1. Compute power: Has your IT department provisioned enough CPUs, memory and disk (with the required access speed) to support your next huge design project? What about peak load requirements? This is a tough one since it takes a long time to justify, procure and provision this kind of asset.
  2. EDA tools: This one can be quite tricky as well. Like IT infrastructure, EDA procurement cycles can be long and complicated. You may have a good idea of how many of each kind of license you need for that next huge design. But there will be surprises. You may need more of some tools to meet time to market. There may be a new tool that gets released during the design – a tool that is perfect for this project. If only you knew about it beforehand. Now that the procurement cycle is done, you’re faced with more difficulty to get what you need.
  3. Building the design flow: Has your CAD department hooked up the required tools in the right flow for each step in the huge project ahead of you? This isn’t the end of it. Besides the actual flow, the tools need to be matched to the right compute environment. Some design steps need a lot of memory. Some need a lot of compute and others need both. Everything seems to need more disk space than you’ll ever have.

And all of this just gets you to the starting line. You still must design that next huge project.

The widespread move to the cloud for chip design has really helped with the first item, above. But the second two are still essentially an exercise for the design team (and CAD team) to address. Whether you’re on the cloud or on premises, configuring machines for the workload at hand and ensuring you had the foresight to buy enough of all the right tools are challenges. Negotiating peak load license capacity can help, but did you ask for enough? And what about new tools that aren’t in your contract?

What’s New?

The second and third items on the list, above, are what sets Synopsys Cloud apart. Two business models are supported – bring your own cloud (BYOC) and a unique software-as-a-service (SaaS) approach to chip design on the cloud.

BYOC is similar to traditional models – you use the cloud vendor of your choice to flatten the compute requirement problem and the EDA vendor provides cloud-certified tools that you can purchase and run there. The SaaS model takes the user experience a step further by providing pre-packaged design flows for all the workflows you’ll need that are optimized and matched to the right compute resources. This is all provided by Synopsys, so you don’t need either an IT or CAD department.

BYOC is just that, pick your favorite cloud provider (Azure, AWS, Google Cloud Platform) and Synopsys will provide cloud-certified tools. The SaaS model is a joint development with Microsoft Azure.

There is more, however. Another innovation that is part of Synopsys Cloud is something called FlexEDA. This one is a game-changer. FlexEDA is patent-pending metering technology that provides access to a growing catalog of Synopsys EDA tools on a pay-per-use basis, by the minute or hour. There are no pre-determined licensing requirements: decide what you need and provision it for as long as you need it. This makes EDA deployment just like cloud computing deployment – ask for whatever you need and pay for what you use. FlexEDA is available for both the BYOC and SaaS models, so there are lots of options. Synopsys is also working with foundries to simplify access to resources like PDKs. There is no EDA company closer to the foundries than Synopsys.
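To make the pay-per-use idea concrete, here is a back-of-the-envelope sketch. The tool names and per-minute rates are invented for illustration and are not Synopsys pricing:

```python
# Hypothetical sketch of pay-per-use EDA metering: each tool checkout is
# billed by the minute, with no pre-purchased license count.
# Rates and tool names are invented for illustration only.

RATE_PER_MINUTE = {"synthesis": 0.50, "simulation": 0.20, "sta": 0.35}

def metered_cost(usage_log):
    """usage_log: list of (tool, minutes_used) tuples."""
    total = 0.0
    for tool, minutes in usage_log:
        total += RATE_PER_MINUTE[tool] * minutes
    return round(total, 2)

# A peak-load day: 300 minutes of synthesis, 1200 of simulation, 90 of STA.
log = [("synthesis", 300), ("simulation", 1200), ("sta", 90)]
print(metered_cost(log))  # 421.5
```

The point is that cost tracks actual usage, just as cloud compute does, rather than a pre-negotiated license count.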

The FlexEDA model is what cloud enthusiasts like myself have been patiently waiting for, and it could fundamentally change the EDA landscape, absolutely.

Companies can sign up for Synopsys Cloud immediately.

Also read:

Use Existing High Speed Interfaces for Silicon Test

Getting to Faster Closure through AI/ML, DVCon Keynote

Upcoming Webinar: 3DIC Design from Concept to Silicon


Symbolic Trojan Detection. Innovation in Verification

by Bernard Murphy on 03-30-2022 at 6:00 am


We normally test only for correctness of the functionality we expect. How can we find functionality (e.g. Trojans) that we don’t expect? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Trojan Localization Using Symbolic Algebra. The paper was published at the 2017 Asia and South Pacific Design Automation Conference (ASP-DAC). The authors are from the University of Florida and the paper has 30 citations.

Methods in this class operate by running an equivalence check between a specification in RTL, presumed Trojan-free, and a gate-level implementation potentially including Trojans. Through equivalence checking, the authors can not only detect the presence of a Trojan but also localize that logic. This approach extracts and compares polynomial representations from specification and gate-level views, unlike traditional BDD-based equivalence checking.

Implementation polynomials are easy to construct, e.g. NOT(a) → 1-a and OR(a,b) → a+b-a*b. One writes specification polynomials to resolve to 0, e.g. 2*Cout+Sum-(A+B+Cin), with one polynomial per output. In sequential logic, flop inputs act as outputs. Checking then proceeds by “reducing” specification polynomials, tracing backwards in a defined order from outputs. At each step backwards, the method replaces output/intermediate terms with the implementation polynomials creating those values. On completion, each polynomial remainder should be zero if specification and implementation are equivalent. Any non-zero remainder flags suspect logic.
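This reduction can be sketched in miniature for a one-bit full adder. The paper performs a true symbolic (Gröbner-basis style) reduction; since two Boolean polynomials are equal exactly when they agree on every 0/1 assignment, this toy version simply evaluates the specification remainder on all eight inputs. The injected “Trojan” (an XOR swapped for an OR on the sum bit) is my own invention for illustration:

```python
# Gate polynomials over 0/1 values, matching the examples in the text:
# NOT(a) = 1-a, AND = a*b, OR = a+b-a*b, XOR = a+b-2*a*b.
def NOT(a):    return 1 - a
def AND(a, b): return a * b
def OR(a, b):  return a + b - a * b
def XOR(a, b): return a + b - 2 * a * b

def full_adder(a, b, cin):
    """Golden gate-level full adder: returns (Sum, Cout)."""
    p = XOR(a, b)
    return XOR(p, cin), OR(AND(a, b), AND(cin, p))

def trojan_adder(a, b, cin):
    """Same netlist with one gate silently swapped: XOR -> OR on the sum."""
    p = XOR(a, b)
    return OR(p, cin), OR(AND(a, b), AND(cin, p))

def max_remainder(impl):
    """Evaluate the spec polynomial 2*Cout + Sum - (A + B + Cin) on every
    0/1 assignment and return the largest |remainder|."""
    rems = []
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, cout = impl(a, b, cin)
                rems.append(abs(2 * cout + s - (a + b + cin)))
    return max(rems)

print(max_remainder(full_adder))    # 0: equivalent, Trojan-free
print(max_remainder(trojan_adder))  # 1: non-zero remainder flags suspect logic
```

A zero remainder everywhere certifies equivalence; any non-zero remainder points at the cone feeding that output as suspect.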

Since the method localizes suspect areas quite closely, all non-suspect logic can be pruned out, leaving a much smaller region of logic to examine. The authors then run ATPG to generate vectors that trigger the Trojan.

Paul’s view

This is a fun paper and an easy read. The basic idea is to use logical equivalence checking (LEC) between specification and implementation (e.g. RTL vs. gates) to see if malicious trojan logic has been added to the design. The authors perform the equivalence check by forming arithmetic expressions to represent the logic, rather than using a more traditional BDD- or SAT-based approach. As with commercial LEC tools, their approach first matches sequential elements in the specification and implementation, which then reduces the LEC problem to a series of Boolean equivalence checks on the logic cones driving each sequential element.

The key insight for me in this paper is the observation that if a non-equivalent “suspicious” logic cone overlaps with another equivalent logic cone (i.e. they share some common logic) then this overlap logic cannot be suspicious – i.e. it can be removed from the set of potential trojan logic.
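This pruning insight is essentially a set computation. A minimal sketch, with gate names and cone memberships invented for illustration:

```python
# Cone-overlap pruning: any gate that also feeds an equivalent (passing)
# logic cone cannot be part of the Trojan, so the suspect set is the union
# of failing cones minus the union of passing cones.
# Gate names and cone contents are invented for illustration.

passing_cones = [{"g1", "g2", "g3"}, {"g3", "g4"}]
failing_cones = [{"g3", "g5", "g6"}, {"g6", "g7"}]

def suspect_gates(passing, failing):
    cleared = set().union(*passing)    # proven-good gates
    suspects = set().union(*failing)   # everything in a failing cone
    return suspects - cleared          # prune the shared (good) logic

print(sorted(suspect_gates(passing_cones, failing_cones)))  # ['g5', 'g6', 'g7']
```

Note that g3 appears in a failing cone but is cleared because it also feeds a passing cone.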

Having used this insight to identify a minimal set of suspicious gates, the authors then use an automatic test pattern generation (ATPG) tool on this set to identify a design state that activates the trojan logic.

To be honest I didn’t quite follow this part of the paper. Once the non-equivalent logic is identified, commercial LEC tools just perform a SAT operation on the non-equivalent logic itself. This generates a specific example of design state where the specification and implementation behave differently. It isn’t necessary to use an ATPG tool for this purpose.
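The SAT step Paul describes can be shown with a toy miter on the sum bit of a full adder: XOR the specification and implementation outputs and ask for any input that drives the miter to 1. Commercial LEC tools hand that query to a SAT solver; at three inputs, plain enumeration suffices. The swapped gate is an invented example, not from the paper:

```python
# Miter check in miniature: search for an input assignment where the
# specification and implementation disagree. The gate swap (XOR -> OR on
# the sum bit) is an invented Trojan for illustration.
from itertools import product

def spec_sum(a, b, cin):
    """Golden sum bit of a full adder."""
    return a ^ b ^ cin

def impl_sum(a, b, cin):
    """Implementation with one gate swapped (XOR -> OR)."""
    return (a ^ b) | cin

def miter_counterexample():
    """Return an input where spec and implementation differ, else None."""
    for a, b, cin in product((0, 1), repeat=3):
        if spec_sum(a, b, cin) ^ impl_sum(a, b, cin):  # miter output = 1
            return (a, b, cin)
    return None  # miter unsatisfiable: the designs are equivalent

print(miter_counterexample())  # (0, 1, 1): a concrete distinguishing state
```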

As another fun side note, Cadence has a sister product to our Conformal LEC tool called Conformal ECO. It uses this same overlapping logic cone principle (together with a bunch of other neat tricks) to identify a minimal set of non-equivalent gates from a LEC compare. A designer can use the tool to automatically map last-minute functional RTL patches onto an existing post-layout netlist late in the tapeout process. This is a big advantage when re-running the whole implementation flow is not feasible.

Raúl’s view

Detecting Trojans is difficult because the trigger conditions are deliberately designed to be satisfied only in very rare situations. Random or other test pattern generation will likely not activate the Trojan and the circuit will appear to be functioning correctly. If a specification is available, formal verification, i.e., equivalence checking, of this specification against an implementation will uncover that they are not equivalent.

This paper uses a method based on extracting polynomials from an implementation and comparing them to a specification polynomial. This works only for combinational circuits; retiming through movement of flip-flops would presumably break this assumption. The basics are explained in the introduction to this blog; it is an application of Gröbner basis theory.

The paper claims that the algorithms scale linearly, which would mean that equivalence checking is linear (unlikely at best); I am not clear which aspect they claim is linear. As Paul points out, most of this is state of the art in commercial tools. However, the ability to narrowly identify the part of the circuit that contains the Trojan is a nice result.

My view

I understood that this was an LEC problem, approached in a different way. However, it didn’t occur to me that it was still subject to the same bounds. Paul (an expert in this area) set me straight!

Also read:

Leveraging Virtual Platforms to Shift-Left Software Development and System Verification

Using a GPU to Speed Up PCB Layout Editing

Dynamic Coherence Verification. Innovation in Verification


Data Processing Unit (DPU) uses Verification IP (VIP) for PCI Express

by Daniel Payne on 03-29-2022 at 10:00 am


Domain-specific processors are a mega-trend in the semiconductor industry, so we see new three-letter acronyms like DPU, for Data Processing Unit. System-level performance can actually be improved by moving some tasks away from the CPU. Companies like Xilinx (Alveo), Amazon (Nitro) and NVIDIA (BlueField) have been talking about DPU architectures for a while now, and the SmartNIC is now being called a DPU in hyper-scale data centers.

Last month I read about a new company, Fungible, as they announced their own DPU; for verification of PCI Express they used VIP from Avery Design Systems. Fungible presented their F1 DPU architecture at the Hot Chips conference, and here’s the block diagram, where PCIe is on one side and Ethernet on the other:

Source: Hot Chips


To learn more about DPU and PCI Express VIP I scheduled a Zoom call with Chakravarthy Kosaraju, SVP, Silicon Design and Validation at Fungible, and Christopher Browy, VP of Sales/Marketing at Avery Design Systems. In the big picture of things the CPU used to be powerful enough to handle all networking tasks, but now with so much data traffic it simply overwhelms the CPU cycles, so both the SmartNIC and DPU approaches are growing in popularity to get around CPU bottlenecks.

Disaggregation is the big trend in the data center now, because it allows more efficient use of resources like storage and data. CPUs paired with GPUs are trying to coordinate all the other PUs. The PCIe slot handles data between the server and storage. The Fungible F1 DPU goes into a storage server, manages all of the SSDs, and even handles cryptography.

Avery Design Systems and Fungible have been working together for the past 3-4 years on PCIe VIP. On the VIP side, the engineering team at Avery has now developed support for over 60 protocols, where PCIe is just one of their high-speed IO protocols. The PCIe standard started way back in 2003, created by Intel, Dell, HP and IBM; the specifications are now managed by the PCI-SIG, a group of 900 companies.

When using PCIe in an SoC, you really don’t want to re-invent the wheel by hand-coding your own VIP, because it takes too many man-years of effort to do so. Avery has been involved with the PCIe standard since version 1.0, and we’re now up to version 6.0 of the spec. Avery is a member of the PCI-SIG and has many customers using its PCIe VIP, which gave the team at Fungible the confidence to choose Avery for the version 5.0 VIP.

The team of globally dispersed verification engineers at Fungible were able to readily contact Avery with support questions about their new VIP. Fungible used VIP from Avery, among others, for its F1 DPU project. Complimentary feedback was provided by Fungible regarding the helpfulness of the VIP output files for debugging, the usefulness of the tracker files, the extensive protocol checks, the speed of switch enumeration they experienced, and the ability to verify a system topology with ease. The F1 chips came back working on first silicon, proof that VIP improved their chances of success. There are over 100 customers using the PCIe VIP from Avery, which speaks volumes about its stability and value.

The VIP from Avery is sold with time-based licensing and has a flexible spending model (monthly remixing). Each simulation run checks out both a simulator license and a VIP license.

The latest PCIe version is 6.0, and there have already been two updates, even before full approval by the PCI-SIG. Typically Avery will do a quarterly VIP update, tracking the standards so that all features are implemented. They have thousands of test cases and protocol checks, and bug fixes and patches are part of their normal procedure.

Summary

Fungible was able to get first silicon success using a methodology of IP and VIP re-use on their F1 DPU chip, aimed at the datacenter market. Choosing Avery Design Systems as a partner for PCIe VIP was part of a multi-year relationship between the companies, and I expect them to continue that into the future.

Related Blogs


Path Based UPF Strategies Explained

by Tom Simon on 03-29-2022 at 6:00 am


The development of the Unified Power Format (UPF) was spurred on by the need for explicit ways to enable specification and verification of power management aspects of SoC designs. The origins of UPF date back to its first release in 2007. Prior to that several vendors had their own methods of specifying power management aspects of a design. The IEEE 1801 specification that emerged has become widely accepted by designers and EDA tools that are related to power. Each new revision of the IEEE 1801 specification has worked to clarify and improve the effectiveness of UPF.

Yet, with such a novel and comprehensive scope, ideas that initially seemed workable have shown weaknesses. The very fact that there is no guarantee of backwards compatibility between revisions of IEEE 1801 shows that the working committee is willing and able to update and improve aspects of the specification that experience has shown need to change. One such area was highlighted during a presentation at DVCon 2022 by Progyna Khondkar of Siemens EDA. His paper and presentation, titled “Path Based UPF Strategies Optimally Manage Power on Your Designs,” clearly and concisely cover the changes in UPF 3.0/3.1 relating to strategies for UPF protection elements such as isolation cells, level shifters and repeaters.

Previously, the UPF syntax and semantics used to specify the location of isolation cells, level shifters or repeaters – which are placed between power domains to ensure proper operation of the circuit – were ad hoc and port based. In specifying the location of these protection elements, a few kinds of problems can arise: failure to insert a needed element, incorrect insertion of an element, or duplicate placement of an element. The expansion in semantics from port based to path based is a significant change that addresses all of these issues.

Path Based UPF Semantics

UPF has added explicit use of -sink and -diff_supply_only TRUE to control the inference of UPF protection cells. This is coupled with new precedence rules to eliminate unnecessary cells. Previously, port-based semantics allowed port splitting, which led to redundant protection cell insertion; now port splitting is an error. UPF protection elements can be placed along a net so that only connections to specified sinks are affected, which leads to placement of the protection elements as close to the sink domain as possible.
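As a hedged illustration of the new semantics (the domain names, control signal and option mix below are invented, and exact behavior depends on the tool and the IEEE 1801 revision in use), a path-based isolation strategy might look like:

```tcl
# Illustrative only: isolate PD_A outputs, but only on paths that sink in
# PD_B, and only where the two ends are on different supplies.
set_isolation iso_a_to_b \
    -domain PD_A \
    -applies_to outputs \
    -sink PD_B \
    -diff_supply_only TRUE \
    -isolation_signal iso_en -isolation_sense high \
    -clamp_value 0 \
    -location self
```

Changing -location to parent or fanout moves the inferred cell up the hierarchy or toward the sink connections, which is where the precedence rules the paper examines come into play.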

There are a lot of nuances to this change in UPF. The Siemens paper and presentation do an excellent job of going through various scenarios to illustrate the effects of using various path based options, while also comparing them to how port based semantics would perform.

There are three context options that can be used for UPF protection elements: -location self, -location parent and -location fanout. They have a profound effect on protection element placement. At the same time, they allow very precise tuning of this placement and remove ambiguity – leading to more precise results. The Siemens paper goes through each of them with illustrative examples to show how they differ. There is also a comparison of how the effects of the location directives are influenced by the choice of port or path based semantics.

There is a lot to absorb with this change. Tools supporting path-based UPF protection elements need to perform consistently and also issue meaningful warnings when results would be unexpected. The author suggests an approach for this. The paper and presentation conclude with a number of caveats and suggestions for designers switching to path-based semantics. Overall, however, it looks as though this is a welcome addition that will improve design quality and verification efficiency. The paper and presentation are available on the DVCon 2022 website.

Also read:

Co-Developing IP and SoC Bring Up Firmware with PSS

Balancing Test Requirements with SOC Security

Siemens EDA on the Best Verification Strategy


CEVA PentaG2 5G NR IP Platform

by Kalar Rajendiran on 03-28-2022 at 10:00 am


There are currently a number of attractive markets for technology-oriented businesses to pursue. One such area is the 5G cellular market, with opportunities to develop products for many use cases. A recent Ericsson Mobility Report forecasts incredible growth opportunities for various use cases within the cellular market. For example, cellular IoT connections are expected to grow from 1.9 billion in 2021 to 5.5 billion in 2027. Fixed Wireless Access (FWA) connections are projected to grow from 90 million in 2021 to 230 million in 2027. With such high-growth market opportunities, many semiconductor companies and systems OEMs are already pursuing various use cases, and many companies are also pursuing new entry into this market. With 5G New Radio (NR) as the radio access technology for the 5G mobile network, market success relies on rapid, cost-effective implementation.

All businesses desire a few things that are essential for profitable growth, no matter what markets they compete in: easy development at low cost, rapid time to market, not being captive to high-cost suppliers, and of course easy entry into attractive market segments. The 5G cellular market is no different, and a Software Defined Radio (SDR) based implementation may appear to be a good approach to play in that market. But the downside of a software-centric implementation is power consumption. A key aspect of the 5G NR specification is its focus on significant enhancements to solution flexibility, scalability, efficiency and power usage. A hardware implementation approach can deliver well on these aspects and does not have to be cumbersome or cost-prohibitive.

The above is the context for a recent product announcement from CEVA. Their PentaG2 5G NR IP platform substantially lowers barriers for semiconductor companies and OEMs to enter the cellular market segments. As the leading licensor of wireless connectivity, smart sensing technologies and integrated IP solutions, CEVA was the first to offer a 5G NR IP platform (PentaG) back in 2018. The platform has found wide adoption and has shipped in millions of 5G NR smartphones and mobile broadband devices to date. The current announcement is the 2nd generation of the platform and includes all the key building blocks for a full LTE/5G modem design.

The following provides some insights into the PentaG2 5G NR IP platform.

Optimizing Modem Processing Chains

The PentaG2 IP platform integrates low-power DSPs with many specialized programmable accelerators for optimal modem processing chains. The accelerators provide complete end-to-end acceleration of uplink and downlink processing for both data and control channels, offloading the DSP cores from all data-path operations. The platform remains highly flexible, using efficient DSP controller cores to configure the hardware elements. Each accelerator comes with a standard AXI interface for ease of integration, allowing customers to add their own IP and secret sauce. Accelerators can be directly cascaded to form modulation and demodulation chain pipelines, without any need to buffer data or access the DSP core for each operation. The platform includes a complete L1 SW functional implementation of the main 5G Rx and 5G Tx processing chains. The result is a 4X improvement in power efficiency over its predecessor, the PentaG platform. Refer to the Figures below for the various CEVA accelerators included with the PentaG2 platform.

DSP Capabilities

The platform also includes field-proven low-power scalar and vector DSPs. The scalar DSP is used for PHY control, hardware acceleration scheduling and running the protocol stack. The vector DSP with 5G ISA extensions is used for channel estimation related workloads.

Current PentaG2 Platform Configurations

The PentaG2 platform is currently offered in two configurations. Both configurations allow customers to incorporate their proprietary algorithms and IP, as the platform supports standard AXI interfaces.

The PentaG2-Max configuration supports eMBB use cases in handsets and CPE/FWA terminals, as well as mmWave, NR-Sidelink, cellular V2X (C-V2X) and URLLC-enabled AR/VR use cases.

The PentaG2-Lite configuration is a compact and lean implementation, supporting reduced capability (RedCap) use cases including LTE Cat1 and future 3GPP Rel 17/Rel 18 RedCap. This platform configuration is ideally suited for tight integration into SoCs and for the IoT.

To learn more details, visit the PentaG2 product page.

Support for Simulation and Emulation

The PentaG2 platform deliverables include a SystemC simulation environment for modeling and debugging designs. The PentaG2 SoC simulator interfaces with the MATLAB platform for algorithm development. A PentaG2-based system can be emulated on an FPGA platform for final verification.

Availability

PentaG2 is immediately available for licensing to lead customers, with general licensing in the second half of 2022.

Intrinsix IP Integration and Design Services

Customers can implement their PentaG2-based SoC using their in-house chip design teams or leverage CEVA’s Intrinsix IP integration services division. CEVA acquired Intrinsix in 2021 to bring additional offerings and services to its customer base. An example of a recent such offering is the CEVA Fortrix™ SecureD2D IP for securing communications between heterogeneous chiplets. Read more about SecureD2D IP here.

Also read:

CEVA Fortrix™ SecureD2D IP: Securing Communications between Heterogeneous Chiplets

AI at the Edge No Longer Means Dumbed-Down AI

RedCap Will Accelerate 5G for IoT


Analog Bits and SEMIFIVE is a Really Big Deal

by Daniel Nenni on 03-28-2022 at 6:00 am


Given the recent acquisitions, the ASIC business is coming full circle as a critical part of the fabless semiconductor ecosystem. The most recent is the SEMIFIVE acquisition of IP industry stalwart Analog Bits. These two companies came to the industry from opposite directions, which makes them a perfect match, absolutely.

Analog Bits was founded in 1995 here in Silicon Valley the traditional way: started by a group of engineers as a consulting company. In 2003 they pivoted to an IP company in concert with the foundries. It has been a bootstrap operation (no debt) focused on customer success. I don’t recall my first engagement with Analog Bits, but it was many years ago, and for the last 4 years we have collaborated on SemiWiki.

Analog Bits is a critical supplier of leading-edge mixed-signal IP to the SoC, mobile, hyperscale, AI, and automotive communities. They started with PLLs, DLLs, I/Os and memory IP, and have expanded to include SERDES, PVT, and POR. They are now serving customers down to 3nm, which requires intimate foundry relationships.

They have customers all over the world, but more importantly Analog Bits is closely partnered with the top foundries – TSMC, Samsung, GlobalFoundries, UMC – and had a recent announcement with Intel Foundry Services. As a foundry person myself I know the inside story here, and let me tell you that it is an amazing achievement for a 50-person company.

SEMIFIVE took the opposite approach. After getting his PhD in Computer Architecture from MIT in 2012, Brandon Cho spent five years at Boston Consulting Group in Korea. In 2018 he joined SiFive in Korea and SEMIFIVE was spun out eight months later. Brandon and company have raised more than $100M in Korea thus far and now with a Silicon Valley based IP division (Analog Bits) expect them to raise more funds in California.

Here is a 2020 video explaining more about SEMIFIVE and what they do:

After the Analog Bits acquisition, SEMIFIVE has more than 350 employees and a solid base in North America. My prediction is SEMIFIVE will raise more money outside of Korea, do more acquisitions, and evolve into a multinational ASIC powerhouse.

The key to the ASIC business, of course, is IP and foundry relationships. SEMIFIVE has a close relationship with Samsung but does not currently work with TSMC. Analog Bits works closely with all foundries but has a very close relationship with TSMC. Seriously, it seemed like every time I was in Taiwan the Analog Bits team was there. To ensure these relationships continue unaffected by the acquisition, Analog Bits will operate separately to remain foundry neutral.

Bottom line: To me this acquisition is another 1+1=3. SEMIFIVE gets a strong IP base in North America plus foundry and customer relationships that have been silicon proven for 20+ years. Analog Bits gets the ability to scale rapidly and increase the depth and breadth of their IP offering.

About SEMIFIVE
SEMIFIVE is the pioneer of platform based SoC design, working with customers to implement innovative ideas into custom silicon in the most efficient way. Our SoC platforms offer a powerful springboard for new chip designs and leverage configurable domain-specific architectures and pre-validated key IP pools. We offer comprehensive spec-to-system capabilities with end-to-end solutions so that custom SoCs can be realized faster, with reduced cost and risks for key applications such as data center or AI-enabled IoT. With a strong partnership with Samsung Foundry as a leading SAFE™ DSP partner, as well as the larger ecosystem, SEMIFIVE provides a one-stop shop solution for any SoC design needs. For more information, please visit www.semifive.com.

About Analog Bits
Analog Bits, Inc. is the leader in developing and delivering low-power integrated clocking, sensor and interconnect IP that is pervasive in virtually all of today’s semiconductors. Products include a wide portfolio of precision clocking macros (PLLs, XTAL and RC oscillators) and sensors to monitor temperature, voltage drops, voltage spikes and system power integrity, with integrated or separately available bandgaps and ADCs. We connect the logic voltage of synthesized digital logic to the external physical world using our unique programmable interconnect solutions, such as multi-protocol SERDES, C2C I/Os and differential transmitters and receivers. For more information, please visit analogbits.com.

Also Read:

Low Power High Performance PCIe SerDes IP for Samsung Silicon

On-Chip Sensors Discussed at TSMC OIP

Package Pin-less PLLs Benefit Overall Chip PPA