AMIQ EDA Adds Support for Visual Studio Code to DVT IDE Family
by Kalar Rajendiran on 03-31-2022 at 10:00 am

“A picture is worth a thousand words” is a widely known adage across the world. Recognizing patterns and cycles becomes easier when data is presented pictorially. Naturally, data visualization technology has a long history, from the early days, when people used paper and pencil to graph data, to modern-day visualization platforms. While visualization products have gotten fancier, driven by the data age we are in, the semiconductor industry was among the early industries that needed them. Electronics is all about signals and waveforms. It is easier to comprehend and analyze that data graphically than in the form of a table of data points. While Microsoft Excel has always offered visualization through its graphing feature, visualization solutions received broad market attention after Tableau introduced its visualization platform in 2003.

Visual Studio (VS)

Over the last couple of decades, rapid advances in the field of software have led to the introduction of integrated development environment (IDE) platforms. While there are many development platforms available to software developers, Eclipse and Visual Studio are two well-known and widely used IDE platforms. Is an IDE platform a visualization platform per se? Not exactly. The platform itself is an environment that enables visualization of all sorts through the various specific tools that work under it. The platform makes this possible through the ongoing addition of extensions that interface to various analysis and visualization tools.

So, why is Visual Studio called Visual Studio? Does it mean Eclipse IDE is not a visualization platform? The Visual Studio name has Visual Basic to thank for it. The developer GUI for Visual Basic earned it the name almost three decades ago. While the development environment has expanded since then, Microsoft has maintained the “visual” prefix for their modern-day IDE. Eclipse IDE is also a visualization platform, even though it does not have “visual” in its name.

Visual Studio (VS) Code

Visual Studio Code is a source code editor developed by Microsoft for Windows, Linux and OS X. It works within the Visual Studio IDE environment and includes support for debugging, embedded Git control and the IntelliSense feature. IntelliSense is a code-completion feature that is context-sensitive to the language being edited. VS Code is also customizable for the editor’s theme, keyboard shortcuts and preferences. In November of 2015, Microsoft released the VS Code source code under the MIT License and posted it on GitHub.

A large ecosystem of extensions and themes is available for VS Code, making it a very popular editor. Its open-source nature also attracts a large section of the developer community, and the editor’s performance is not shabby either.

Semiconductor Design and Verification

Design and verification of semiconductors involves code development too. While VHDL and Verilog are not standard software languages, they are programming languages nonetheless. Design and verification tasks can benefit from an IDE just as software coding and testing do. As such, there has been interest and a push for IDE offerings to support the semiconductor community.

AMIQ EDA

AMIQ EDA provides software tools that enable hardware design and verification engineers to improve productivity and reduce time to market. Prior to its spinoff from AMIQ Consulting, the team had observed three recurring challenges semiconductor companies faced: developing new code to meet tight schedules, understanding legacy or poorly documented code, and getting new engineers up to speed quickly. In the software world, IDEs are commonly used to overcome such challenges. But in the early 2000s, no IDE was available for design and verification languages such as Verilog, VHDL, e and SystemVerilog. So, they developed an IDE for internal use.

In 2008, AMIQ EDA was spun off from AMIQ Consulting and Design and Verification Tools (DVT) Eclipse IDE was launched. They launched DVT Debugger in 2011, Verissimo SystemVerilog Testbench Linter in 2012, and Specador Documentation Generator in 2014. As a company that strongly believes in user-driven development and building solutions based on real-life experiences, they recently launched DVT IDE for VS Code. You can read their press announcement here.

AMIQ EDA’s products help customers accelerate code development, simplify legacy code maintenance, speed up language and methodology learning, and improve source code reliability.

DVT IDE for VS Code

DVT IDE for Visual Studio Code (VS Code) is an integrated development environment (IDE) for SystemVerilog, Verilog, Verilog-AMS, and VHDL. The DVT IDE consists of a parser, the VS Code (editor), an intuitive graphical user interface, and a comprehensive set of features that help with code writing, inspection, navigation, and debugging. It provides capabilities that are specific to the hardware design and verification domain, such as design diagrams, signal tracing, and verification methodology support. The VS Code platform’s extensible architecture allows the DVT IDE to integrate within a large extension ecosystem and work flawlessly with third-party extensions.

DVT IDE for VS Code shares all of its analysis engines with DVT Eclipse IDE, and those engines have been field proven since 2008. The product enables engineers to inspect a project through diagrams. Designers can use HDL diagrams such as schematic, state machine, and flow diagrams. Verification engineers can use UML diagrams such as inheritance and collaboration diagrams. Diagrams are hyperlinked and synchronized with the source code and can be saved for documentation purposes. Users can easily search and filter diagrams as needed, for example, visualizing only the clock and reset signals in a schematic diagram. Both tools also have important non-visualization features such as hyperlinked navigation, auto-complete, code refactoring, and semantic searches for usages, readers, and writers of signals and variables.

For a couple of screenshots showing the DVT IDE for VS Code in action, refer to the Figures below.

DVT IDE for VS Code is available for download from the VS Code marketplace. For more details, refer to the product page.

Also read:

Automated Documentation of Space-Borne FPGA Designs

Continuous Integration of RISC-V Testbenches

Continuous Integration of UVM Testbenches


Shift left gets a modulated signal makeover
by Don Dingee on 03-31-2022 at 6:00 am

Everyone saw Shift Left, the EDA blockbuster. Digital logic design, with perfect 1s and 0s simulated through perfect switches, shifted into a higher gear. But the dark arts – RF systems, power supplies, and high-speed digital – didn’t shift smoothly. What do these practitioners need in EDA to see more benefits from shift left? Higher fidelity behavioral models. Authentic waveforms. Fast, accurate simulation schemes. And, looking forward, components with an “executable datasheet,” reproducing physical results when simulated in various contexts. Let’s go inside Keysight’s strategy in this series on how shift left gets a modulated signal makeover.

Unlocking more value across the ecosystem

The fundamental value of shift left – in fact, the whole purpose of EDA tools – is earlier visibility for design teams in virtual space. Waiting until problems show up in hardware drives up cost and risk and takes away flexibility. Earlier virtual validation reduces hardware re-spins with their trial-and-error. Teams can explore architectural options and avoid over-design trying to protect margins.

But shift left has the potential to unlock more value by pulling the ecosystem together. Project “visibility” improves when predictions show how close a design is to performance goals, and what remains to be done. Virtual sampling in the contexts of the customer, and of the customer’s customer, would help vendors demonstrate parts quickly and close sales faster.

Ultimately, shift left sets the stage for digital twins, enabling physical engineering activities to move to virtual space. For example, design teams and end customers would be able to gauge difficult physical experiences, like a satellite in orbit with interference, sun loading, jamming, and other dynamics. Digital twins need unprecedented levels of accuracy and trust in EDA tools.

Paving a two-way path for waveforms and data

When electromagnetic (EM) behaviors appear, these higher-value wins only happen if modulated signals come to life in virtual space. Consider the case of a power amplifier (PA) designed using S-parameters and simulated with sine waves. Basic validation of a PA might fall apart when a team designs a hardware prototype around it and applies complex modulation like 5G or Wi-Fi 7.

Think about it this way: if testing with modulated signals is a given for hardware, why is it optional for simulating a design? The answer is that most RF EDA workflows are one-way. They lack any path for bringing physical measurements back through simulation, either as a stimulus waveform or as improvements to behavioral modeling parameters. Bringing real-world effects back into a math-based model is at best applying a fudge factor with who knows what margin. When data and models don’t line up, system-level experiences get lost in translation.

Now consider Keysight’s vision for a two-way path with high-fidelity transportable models in a common modeling language. A device under test comes packaged with authentic waveforms and enhanced modeling data – an executable that goes beyond a datasheet. A single workflow connects design and simulation from detailed componentry to world-level scenario planning, enabling regression not possible with discontinuous point-based tools, waveforms, and data.

An example of why modulation is not optional

For shift left to work in EM design, workflows must be two-way, and modulation is not optional. An example is error vector magnitude (EVM), a reliable figure of merit in wireless applications.

PA designers do an excellent job of focusing on power-dependent parameters, such as non-linearity and peak-to-average power ratio (PAPR). EVM outcomes rely on other important effects at work. Quickly varying effects determine performance in power, frequency, and time domains. Slower varying effects alter performance around temperature, load, and bias.

For example, a complex waveform can set off time-dependent memory effects that present as self-heating problems. Or wideband operation uncovers impedance mismatches at some points. Shouldn’t designers see those problems developing virtually, before getting surprised by measurements on a hardware prototype, or observations from a distant deployed platform?
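
For readers who want to poke at the numbers, EVM itself is a simple ratio: the RMS magnitude of the error vector between measured and reference symbols, normalized to the RMS reference magnitude. Here is a minimal Python sketch under illustrative assumptions – a QPSK constellation and a crude additive impairment standing in for the combined PA effects – not Keysight’s implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Illustrative QPSK reference constellation with unit average power
ref = (rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)

# "Measured" symbols: reference plus an additive impairment, a crude
# stand-in for the combined PA effects described above
meas = ref + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# EVM: RMS error-vector magnitude normalized to RMS reference magnitude
evm = np.sqrt(np.mean(np.abs(meas - ref) ** 2) / np.mean(np.abs(ref) ** 2))
print(f"EVM = {100 * evm:.2f}%")  # ~7% for this noise level
```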

Most EDA approaches have models that look at one effect at a time. Modern EM problems have more interrelated dimensions and demand simulation of combinations of effects. Keysight provides a unique approach, blending EDA and test and measurement expertise, to see deeper. Modulated signals are the key to bringing the customer into the design environment. That requires stronger simulation engines, higher fidelity models, multiple effects in play, and one workflow from schematic capture through validation.

How does this work? Here’s a new Keysight video on modulated signals in EVM performance, using test and measurement insights and simulation techniques to get faster results on complex waveforms with combined effects.

Beyond the “choose your own compliance adventure”

A final point on changes in the complex EM systems business. Systems now have carefully defined waveforms from industry specifications. Choosing your own compliance adventure with some random stimulus won’t get the job done. Teams using shift left with modulated signals in a Keysight EDA environment can demonstrate with confidence that a virtual design hits requirements before hardware materializes.

It’s true not only for RF communications systems, but also for adjacent markets. Switched-mode power supply design must hit EMI compliance profiles in many markets. High-speed digital design is now dictated by specifications like PCIe Gen 5 and DDR5. Shift left brings those specifications and their waveforms into view much sooner in the design cycle.

System complexity shouldn’t be left to the reader to solve alone. Keysight has invested in creating EDA tools and workflows for RF system design with world-class measurement science built in. Over the next installments in this series, we’ll see more detailed examples in Keysight’s EDA solutions for shift left with modulated signals. Next up: digital pre-distortion design.

Also read:

WEBINARS: Board-Level EM Simulation Reduces Late Respin Drama


Synopsys Announces FlexEDA for the Cloud!
by Daniel Nenni on 03-30-2022 at 10:30 am

There’s been a lot of discussion and hype regarding use of the cloud for chip design for quite a while, more than ten years I would say. I spoke with Synopsys to better understand their recent Synopsys Cloud announcement to determine if it is different. Briefly, it is different, and here is why:

If you’re trying to design a complex SoC, or more like a system-in-package today, you have many hurdles to negotiate regarding the design infrastructure required. Things like:

  1. Compute power: Has your IT department provisioned enough CPUs, memory and disk (with the required access speed) to support your next huge design project? What about peak load requirements? This is a tough one since it takes a long time to justify, procure and provision this kind of asset.
  2. EDA tools: This one can be quite tricky as well. Like IT infrastructure, EDA procurement cycles can be long and complicated. You may have a good idea of how many of each kind of license you need for that next huge design. But there will be surprises. You may need more of some tools to meet time to market. There may be a new tool that gets released during the design – a tool that is perfect for this project. If only you knew about it beforehand. Now that the procurement cycle is done, you’re faced with more difficulty to get what you need.
  3. Building the design flow: Has your CAD department hooked up the required tools in the right flow for each step in the huge project ahead of you? This isn’t the end of it. Besides the actual flow, the tools need to be matched to the right compute environment. Some design steps need a lot of memory. Some need a lot of compute and others need both. Everything seems to need more disk space than you’ll ever have.

And all of this just gets you to the starting line. You still must design that next huge project.

The widespread move to the cloud for chip design has really helped with the first item, above. But the other two are still essentially an exercise for the design team (and CAD team) to address. Whether you’re on the cloud or on premises, configuring machines for the workload at hand and ensuring you had the foresight to buy enough of all the right tools remain challenges. Negotiating peak load license capacity can help, but did you ask for enough? And what about new tools that aren’t in your contract?

What’s New?

The second and third items on the list, above, are what sets Synopsys Cloud apart. Two business models are supported – bring your own cloud (BYOC) and a unique software-as-a-service (SaaS) approach to chip design on the cloud.

BYOC is similar to traditional models – you use the cloud vendor of your choice to flatten the compute requirement problem and the EDA vendor provides cloud-certified tools that you can purchase and run there. The SaaS model takes the user experience a step further by providing pre-packaged design flows for all the workflows you’ll need that are optimized and matched to the right compute resources. This is all provided by Synopsys, so you don’t need either an IT or CAD department.

BYOC is just that, pick your favorite cloud provider (Azure, AWS, Google Cloud Platform) and Synopsys will provide cloud-certified tools. The SaaS model is a joint development with Microsoft Azure.

There is more, however. Another innovation that is part of Synopsys Cloud is something called FlexEDA. This one is a game-changer. It is patent-pending metering technology that provides access to a growing catalog of Synopsys EDA tools on a pay-per-use basis, by the minute or hour. No pre-determined licensing requirements. Decide what you need and provision it for as long as you need it. This makes EDA deployment just like cloud computing deployment. Ask for whatever you need and pay for what you use. FlexEDA is available for both the BYOC and SaaS models, so there are lots of options. Synopsys is also working with foundries to simplify access to resources like PDKs. There is no EDA company closer to the foundries than Synopsys.

The FlexEDA model is what cloud enthusiasts like me have been patiently waiting for, and it could fundamentally change the EDA landscape, absolutely.

Companies can sign up for Synopsys Cloud immediately.

Also read:

Use Existing High Speed Interfaces for Silicon Test

Getting to Faster Closure through AI/ML, DVCon Keynote

Upcoming Webinar: 3DIC Design from Concept to Silicon


Symbolic Trojan Detection. Innovation in Verification
by Bernard Murphy on 03-30-2022 at 6:00 am

We normally test only for correctness of the functionality we expect. How can we find functionality (e.g. Trojans) that we don’t expect? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Trojan Localization Using Symbolic Algebra. The paper was published at the 2017 Asia and South Pacific Design Automation Conference (ASP-DAC). The authors are from the University of Florida and the paper has 30 citations.

Methods in this class operate by running an equivalence check between a specification in RTL, presumed Trojan-free, and a gate-level implementation potentially including Trojans. Through equivalence checking, the authors can not only detect the presence of a Trojan but also localize that logic. This approach extracts and compares polynomial representations from specification and gate-level views, unlike traditional BDD-based equivalence checking.

Implementation polynomials are easy to construct, e.g. NOT(a) → 1-a and OR(a,b) → a+b-a*b. One writes specification polynomials to resolve to 0, e.g. 2*Cout+Sum-(A+B+Cin) for a full adder, with one polynomial per output. In sequential logic, flop inputs act as outputs. Checking then proceeds by “reducing” specification polynomials, tracing backwards in a defined order from outputs. At each step backwards, the method replaces output/intermediate terms with the implementation polynomials creating those values. On completion, each polynomial remainder should be zero if specification and implementation are equivalent. Any non-zero remainder flags suspect logic.
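
To make the mechanics concrete, here is a minimal Python sketch of the reduction for the full adder example above, using sympy. The idempotence loop (x² = x for Boolean variables) and the injected Trojan gate are my illustrations of the general idea, not the authors’ code, which rests on Gröbner basis machinery.

```python
from sympy import symbols, expand

A, B, Cin, T = symbols('A B Cin T')

# Implementation polynomials for the basic gates, as in the text
def NOT(a):    return 1 - a
def AND(a, b): return a * b
def OR(a, b):  return a + b - a * b
def XOR(a, b): return a + b - 2 * a * b

def boolean_reduce(poly, variables):
    """Expand, then enforce idempotence (x**2 = x) for Boolean variables."""
    poly = expand(poly)
    while True:
        new = poly
        for v in variables:
            for p in range(6, 1, -1):  # per-variable degree stays small here
                new = new.subs(v ** p, v)
        new = expand(new)
        if new == poly:
            return poly
        poly = new

# Gate-level full adder: implementation polynomials built bottom-up
s1   = XOR(A, B)
Sum  = XOR(s1, Cin)
Cout = OR(AND(A, B), AND(s1, Cin))

# Specification polynomial: remainder must reduce to 0 if equivalent
spec = 2 * Cout + Sum - (A + B + Cin)
print(boolean_reduce(spec, [A, B, Cin]))        # -> 0

# Hypothetical Trojan: an extra XOR flips Sum whenever T*A*B is true
Sum_t  = XOR(Sum, AND(T, AND(A, B)))
spec_t = 2 * Cout + Sum_t - (A + B + Cin)
print(boolean_reduce(spec_t, [A, B, Cin, T]))   # non-zero remainder flags it
```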

Since the method localizes suspect areas quite closely, the authors can prune out all non-suspect logic, leaving a much smaller region of logic. They then run ATPG to generate vectors that trigger the Trojan.

Paul’s view

This is a fun paper and an easy read. The basic idea is to use logical equivalence checking (LEC) between specification and implementation (e.g. RTL vs. gates) to see if malicious trojan logic has been added to the design. The authors do the equivalence check by forming arithmetic expressions to represent logic, rather than taking the more traditional BDD- or SAT-based approach. As with commercial LEC tools, their approach first matches sequential elements in the specification and implementation, which reduces the LEC problem to a series of Boolean equivalence checks on the logic cones driving each sequential element.

The key insight for me in this paper is the observation that if a non-equivalent “suspicious” logic cone overlaps with another, equivalent logic cone (i.e. they share some common logic), then this overlap logic cannot be suspicious – i.e. it can be removed from the set of potential trojan logic.
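
In set terms the pruning is essentially a one-liner. A toy Python illustration, with hypothetical gate names of my own choosing:

```python
# Each cone is the set of gate IDs driving one matched sequential element.
equiv_cones = [{"g1", "g2", "g3"}, {"g3", "g4"}]   # cones that passed LEC
nonequiv_cones = [{"g2", "g5", "g6"}]              # cones that failed LEC

# A gate shared with any equivalent cone cannot implement the Trojan,
# so it drops out of the suspect set
cleared = set().union(*equiv_cones)
suspicious = set().union(*nonequiv_cones) - cleared
print(sorted(suspicious))  # ['g5', 'g6'] -- the minimal set handed to ATPG
```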

Having used this insight to identify a minimal set of suspicious gates, the authors then use an automatic test pattern generation (ATPG) tool on this set to identify a design state that activates the trojan logic.

To be honest I didn’t quite follow this part of the paper. Once the non-equivalent logic is identified, commercial LEC tools just perform a SAT operation on the non-equivalent logic itself. This generates a specific example of design state where the specification and implementation behave differently. It isn’t necessary to use an ATPG tool for this purpose.

As another fun side note, Cadence has a sister product to our Conformal LEC tool, called Conformal ECO. It uses this same overlapping logic cone principle (together with a bunch of other neat tricks) to identify a minimal set of non-equivalent gates from a LEC compare. A designer can use the tool to automatically map last minute functional RTL patches onto an existing post-layout netlist late in the tapeout process. This is a big advantage when re-running the whole implementation flow is not feasible.

Raúl’s view

Detecting Trojans is difficult because the trigger conditions are deliberately designed to be satisfied only in very rare situations. Random or other test pattern generation will likely not activate the Trojan and the circuit will appear to be functioning correctly. If a specification is available, formal verification, i.e., equivalence checking, of this specification against an implementation will uncover that they are not equivalent.
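
A quick back-of-the-envelope calculation in Python shows why random stimulus alone is unlikely to help. Assume, purely for illustration, a trigger that fires on one specific 32-bit input pattern under uniform random stimulus:

```python
# Probability that N uniform random vectors activate a Trojan whose
# trigger matches one specific k-bit pattern (illustrative numbers)
k = 32
p_fire = 2.0 ** -k                  # chance a single random vector triggers
for n_vectors in (10**6, 10**9):
    p_detect = 1.0 - (1.0 - p_fire) ** n_vectors
    print(f"{n_vectors:>13,} vectors -> P(activation) ~ {p_detect:.4f}")
# ~0.0002 after a million vectors; only ~0.21 even after a billion
```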

This paper uses a method based on extracting polynomials from an implementation and comparing them to a specification polynomial. This works only for combinational circuits; retiming through movement of flip-flops would presumably break this assumption. The basics are explained in the introduction to this blog; the method is an application of Gröbner basis theory.

The paper claims that the algorithms scale linearly, which would mean that equivalence checking is linear (unlikely at best). I am not clear what feature they call linear. As Paul points out, most of this is state of the art in commercial tools. However, the ability to narrowly identify the part of the circuit that contains the Trojan is a nice result.

My view

I understood that this was an LEC problem, approached in a different way. However it didn’t occur to me that it was still subject to the same bounds. Paul (an expert in this area) set me straight!

Also read:

Leveraging Virtual Platforms to Shift-Left Software Development and System Verification

Using a GPU to Speed Up PCB Layout Editing

Dynamic Coherence Verification. Innovation in Verification


Data Processing Unit (DPU) uses Verification IP (VIP) for PCI Express
by Daniel Payne on 03-29-2022 at 10:00 am

Domain specific processors are a mega-trend in the semiconductor industry, so we see new three-letter acronyms like DPU, for Data Processing Unit. System level performance can actually be improved by moving some of the tasks away from the CPU. Companies like Xilinx (Alveo), Amazon (Nitro) and NVIDIA (BlueField) have been talking about DPU architecture for a while now, and the SmartNIC is now being called a DPU in the hyper-scale data centers.

Last month I read about a new company, Fungible, as they announced their own DPU, and for verification of PCI Express they used VIP from Avery Design Systems. Fungible presented their F1 DPU architecture at the Hot Chips conference, and here’s the block diagram where PCIe is on one side, and Ethernet on the other:

Source: Hot Chips

To learn more about DPU and PCI Express VIP I scheduled a Zoom call with Chakravarthy Kosaraju, SVP, Silicon Design and Validation at Fungible, and Christopher Browy, VP of Sales/Marketing at Avery Design Systems. In the big picture of things the CPU used to be powerful enough to handle all networking tasks, but now with so much data traffic it simply overwhelms the CPU cycles, so both the SmartNIC and DPU approaches are growing in popularity to get around CPU bottlenecks.

Disaggregation is the big trend in the data center now, because it allows more efficient use of resources like storage and data. CPUs paired with GPUs are trying to coordinate all the other PUs. The PCIe slot handles data between the server and storage. The Fungible F1 DPU goes into a storage server, manages all of the SSDs, and even handles cryptography.

Avery Design Systems and Fungible have been working together for the past 3-4 years on PCIe VIP. On the VIP side, the engineering team at Avery has developed support for over 60 protocols, where PCIe is just one of their high-speed IO protocols. The PCIe standard started way back in 2003, created by Intel, Dell, HP and IBM; now the specifications are managed by the PCI-SIG, a group of 900 companies.

When using PCIe in an SoC, you really don’t want to re-invent the wheel by hand-coding your own VIP, because it takes too many man-years of effort. Avery has been involved with the PCIe standard since version 1.0, and now we’re up to version 6.0 of the spec. Avery is a member of the PCI-SIG and has many customers using their PCIe VIP, which gave the team at Fungible the confidence to choose Avery for the version 5.0 VIP.

The team of globally dispersed verification engineers at Fungible were able to readily contact Avery with support questions about their new VIP. Fungible used VIP from Avery, among others, for its F1 DPU project. Complimentary feedback was provided by Fungible regarding the helpfulness of the VIP output files for debugging, the usefulness of the tracker files, appreciation for the extensive protocol checks, the speed of switch enumeration they experienced, and the ability to verify a system topology with ease. The F1 chips came back working successfully on first silicon, proving that VIP improved their success rate. There are over 100 customers using the PCIe VIP from Avery, which speaks volumes about its stability and value.

The VIP from Avery is sold with time-based licensing and a flexible spending model (monthly remixing). Every simulation checks out both a simulator license and a VIP license.

The latest PCIe version is 6.0 right now, and there have been two updates already, even before full approval by the PCI-SIG. Typically Avery will do a quarterly update for VIP, tracking the standards so that all features are implemented. They have thousands of test cases and protocol checks, plus bug fixes and patches are part of their normal procedure.

Summary

Fungible was able to get first silicon success using a methodology of IP and VIP re-use on their F1 DPU chip, aimed at the datacenter market. Choosing Avery Design Systems as a partner for PCIe VIP was part of a multi-year relationship between the companies, and I expect them to continue that into the future.

Related Blogs


Path Based UPF Strategies Explained
by Tom Simon on 03-29-2022 at 6:00 am

The development of the Unified Power Format (UPF) was spurred on by the need for explicit ways to enable specification and verification of power management aspects of SoC designs. The origins of UPF date back to its first release in 2007. Prior to that several vendors had their own methods of specifying power management aspects of a design. The IEEE 1801 specification that emerged has become widely accepted by designers and EDA tools that are related to power. Each new revision of the IEEE 1801 specification has worked to clarify and improve the effectiveness of UPF.

Yet, with such a novel and comprehensive scope, ideas that initially seemed workable have been shown to have weaknesses. The very fact that there is no guarantee of backwards compatibility between revisions of IEEE 1801 shows that the working committee is willing and able to update and improve aspects of the specification that experience has shown need to be changed. One such area was highlighted during a presentation at the 2022 DVCon by Progyna Khondkar from Siemens EDA. His paper and presentation, titled “Path Based UPF Strategies Optimally Manage Power on Your Designs,” clearly and concisely cover the changes in 3.0/3.1 relating to strategies for UPF protection elements such as isolation cells, level shifters and repeaters.

Previously, the UPF syntax and semantics used to specify the location of isolation cells, level shifters or repeaters – which are placed between power domains to ensure proper operation of the circuit – were ad hoc and port based. In specifying the location of these protection elements or cells, a few kinds of potential problems can arise: failure to insert a needed element, incorrect insertion of an element, or duplicate placement of an element. The expansion in semantics from port based to path based is a significant change that addresses all of these issues.

Path Based UPF Semantics

UPF has added explicit use of -sink and -diff_supply_only TRUE to control inferring UPF protection cells. This is coupled with new precedence rules to eliminate unnecessary cells. Previously, port based semantics allowed port splitting, which led to redundant UPF protection cell insertion. Now port-splitting is an error. UPF protection elements can be placed along a net so that only connections to specified sinks are made. This leads to the placement of UPF protection elements as close to the sink domain as possible.

There are a lot of nuances to this change in UPF. The Siemens paper and presentation do an excellent job of going through various scenarios to illustrate the effects of using various path based options, while also comparing them to how port based semantics would perform.

There are three context options that can be used for UPF protection elements: -location self, -location parent and -location fanout. They have a profound effect on protection element placement. At the same time, they allow very precise tuning of this placement and remove ambiguity – leading to more precise results. The Siemens paper goes through each of them with illustrative examples to show how they differ. There is also a comparison of how the effects of the location directives are influenced by the choice of port or path based semantics.

There is a lot to absorb with this change. Tools supporting path based UPF protection elements need to perform consistently and also issue meaningful warnings when there are going to be unexpected results. The author suggests an approach for this. The paper and presentation conclude with a number of caveats and suggestions for designers switching to path based semantics. Overall, however, it looks as though this is a welcome addition that will improve design quality and verification efficiency. The paper and presentation are available at the DVCon 2022 website.

Also read:

Co-Developing IP and SoC Bring Up Firmware with PSS

Balancing Test Requirements with SOC Security

Siemens EDA on the Best Verification Strategy


CEVA PentaG2 5G NR IP Platform
by Kalar Rajendiran on 03-28-2022 at 10:00 am

There are currently a number of attractive markets for technology-oriented businesses to pursue. One such area is the 5G cellular market, with opportunities to develop products for many use cases. A recent Ericsson Mobility Report forecasts incredible growth opportunities for various use cases within the cellular market. For example, cellular IoT connections are expected to grow from 1.9 billion in 2021 to 5.5 billion in 2027. Fixed Wireless Access (FWA) connections are projected to grow from 90 million in 2021 to 230 million in 2027. With such high-growth market opportunities, many semiconductor companies and systems OEMs are already pursuing various use cases for offering products. Many companies are also pursuing entry into this market for the first time. With 5G New Radio (NR) as the radio access technology for the 5G mobile network, market success relies on rapid, cost-effective implementation.

All businesses desire a few things that are essential for profitable growth, no matter what markets they compete in: easy development efforts at low cost, rapid time to market for their products, freedom from captivity to high-cost suppliers and, of course, easy entry into attractive market segments. The 5G cellular market is no different, and a Software Defined Radio (SDR) based implementation may appear to be a good approach to play in that market. But the downside of a SW-centric implementation is power consumption. A key aspect of the 5G NR specification is its focus on significant enhancements to solution flexibility, scalability, efficiency and power usage. A hardware implementation approach can deliver well on these aspects and does not have to be cumbersome and cost-prohibitive.

The above is the context for a recent product announcement from CEVA. Their PentaG2 5G NR IP platform substantially lowers barriers for semiconductor companies and OEMs to enter the cellular market segments. As the leading licensor of wireless connectivity, smart sensing technologies and integrated IP solutions, CEVA was the first to offer a 5G NR IP platform (PentaG) back in 2018. The platform has found wide adoption and has shipped in millions of 5G NR smartphones and mobile broadband devices to date. The current announcement is the 2nd generation of the platform and includes all the key building blocks for a full LTE/5G modem design.

The following provides some insights into the PentaG2 5G NR IP platform.

Optimizing Modem Processing Chains

The PentaG2 IP platform integrates low power DSPs with many specialized programmable accelerators for optimal modem processing chains. The accelerators are used for complete end-to-end acceleration of uplink and downlink processing for both data and control channels, offloading the DSP cores from all data-path operations. The platform remains highly flexible by using efficient DSP controller cores to configure the HW elements. Each accelerator comes with a standard AXI interface for ease of integration, allowing customers to add their own IP and secret sauce. Accelerators can be directly cascaded to form modulation and demodulation chain pipelines, without any need to buffer or access the DSP core for each operation. The platform includes a complete L1 SW functional implementation of the main 5G Rx and 5G Tx processing chains. The result is a 4X improvement in power efficiency over its predecessor, the PentaG platform. Refer to the Figures below for the various CEVA accelerators included with the PentaG2 platform.

DSP Capabilities

The platform also includes field-proven low-power scalar and vector DSPs. The scalar DSP is used for PHY control, hardware acceleration scheduling and running the protocol stack. The vector DSP with 5G ISA extensions is used for channel estimation related workloads.

Current PentaG2 Platform Configurations

The PentaG2 platform is currently offered in two configurations. Both configurations allow customers to incorporate their proprietary algorithms and IP, as the platform supports standard AXI interfaces.

The PentaG2-Max configuration supports eMBB use cases in handsets and CPE/FWA terminals; mmWave, NR-Sidelink and cellular V2X (C-V2X) applications; and URLLC-enabled AR/VR use cases.

The PentaG2-Lite configuration is a compact and lean implementation, supporting reduced capability (RedCap) use cases including LTE Cat1 and future 3GPP Rel 17/Rel 18 RedCap. This platform configuration is ideally suited for tight integration into SoCs and for the IoT.

To learn more details, visit the PentaG2 product page.

Support for Simulation and Emulation

The PentaG2 platform deliverables include a SystemC simulation environment for modeling and debugging designs. The PentaG2 SoC simulator interfaces with the MATLAB platform for algorithmic development. A PentaG2-based system can be emulated on an FPGA platform for final verification.

Availability

PentaG2 is immediately available for licensing to lead customers and for general licensing in the second half of 2022.

Intrinsix IP Integration and Design Services

Customers can implement their PentaG2-based SoC using their in-house chip design teams or leverage CEVA’s Intrinsix IP integration services division. CEVA acquired Intrinsix in 2021 to bring additional offerings and services to its customer base. An example of a recent such offering is the CEVA Fortrix™ SecureD2D IP for securing communications between heterogeneous chiplets. Read more about SecureD2D IP here.

Also read:

CEVA Fortrix™ SecureD2D IP: Securing Communications between Heterogeneous Chiplets

AI at the Edge No Longer Means Dumbed-Down AI

RedCap Will Accelerate 5G for IoT


Analog Bits and SEMIFIVE is a Really Big Deal
by Daniel Nenni on 03-28-2022 at 6:00 am

Given the recent acquisitions, the ASIC business is coming full circle as a critical part of the fabless semiconductor ecosystem. The most recent is the SEMIFIVE acquisition of IP industry stalwart Analog Bits. These two companies came to the industry from opposite directions, which makes them a perfect match, absolutely.

Analog Bits was founded in 1995 here in Silicon Valley the traditional way: started by a group of engineers as a consulting company. In 2003 they pivoted to an IP company in concert with the foundries. This was a bootstrap operation (no debt) focused on customer success. I don’t recall my first engagement with Analog Bits but it was many years ago, and for the last 4 years we have collaborated on SemiWiki.

Analog Bits is a critical supplier of leading edge mixed signal IP in the SoC, mobile, hyperscale, AI, and automotive communities. They started with PLLs, DLLs, IOs and memory IP, and have expanded to include SERDES, PVT, and POR. They are now serving customers down to 3nm, which means intimate foundry relationships.

They have customers all over the world, but more importantly Analog Bits is closely partnered with the top foundries: TSMC, Samsung, GlobalFoundries, UMC, and they recently announced a partnership with Intel Foundry Services. As a foundry person myself I know the inside story here, and let me tell you that it is an amazing achievement for a 50 person company.

SEMIFIVE took the opposite approach. After getting his PhD in Computer Architecture from MIT in 2012, Brandon Cho spent five years at Boston Consulting Group in Korea. In 2018 he joined SiFive in Korea, and SEMIFIVE was spun out eight months later. Brandon and company have raised more than $100M in Korea thus far, and now, with a Silicon Valley based IP division (Analog Bits), expect them to raise more funds in California.

Here is a 2020 video explaining more about SEMIFIVE and what they do:

After the Analog Bits acquisition, SEMIFIVE has more than 350 employees and a solid base in North America. My prediction is SEMIFIVE will raise more money outside of Korea, do more acquisitions, and evolve into a multinational ASIC powerhouse.

The key to the ASIC business of course is IP and foundry relationships. SEMIFIVE has a close relationship with Samsung but does not currently work with TSMC. Analog Bits works closely with all foundries but has a very close relationship with TSMC. Seriously, it seemed like every time I was in Taiwan the Analog Bits team was there. To ensure these relationships continue unaffected by the acquisition, Analog Bits will operate separately to remain foundry neutral.

Bottom line: To me this acquisition is another 1+1=3. SEMIFIVE gets a strong IP base in North America plus foundry and customer relationships that have been silicon proven for 20+ years. Analog Bits gets the ability to scale rapidly and increase the depth and breadth of their IP offering.

About SEMIFIVE
SEMIFIVE is the pioneer of platform based SoC design, working with customers to implement innovative ideas into custom silicon in the most efficient way. Our SoC platforms offer a powerful springboard for new chip designs and leverage configurable domain-specific architectures and pre-validated key IP pools. We offer comprehensive spec-to-system capabilities with end-to-end solutions so that custom SoCs can be realized faster, with reduced cost and risks for key applications such as data center or AI-enabled IoT. With a strong partnership with Samsung Foundry as a leading SAFE™ DSP partner, as well as the larger ecosystem, SEMIFIVE provides a one-stop shop solution for any SoC design needs. For more information, please visit www.semifive.com.

About Analog Bits
Analog Bits, Inc. is the leader in developing and delivering low-power integrated clocking, sensor and interconnect IP that are pervasive in virtually all of today’s semiconductors. Products include a wide portfolio of precision clocking macros such as PLLs, XTAL and RC oscillators, plus sensors to monitor temperature, voltage drops, voltage spikes and system power integrity, with integrated or separately available bandgaps and ADCs. We connect the logic voltage of synthesized digital logic to the external physical world using our unique programmable interconnect solutions, such as multi-protocol SERDES, C2C I/Os and differential transmitters and receivers. For more information, please visit analogbits.com.

Also Read:

Low Power High Performance PCIe SerDes IP for Samsung Silicon

On-Chip Sensors Discussed at TSMC OIP

Package Pin-less PLLs Benefit Overall Chip PPA


Auto Safety – A Dickensian Tale
by Roger C. Lanctot on 03-27-2022 at 10:00 am

As I prepare to join the International Telecommunications Union’s Future Networked Car Symposium – today through Friday – I am reminded of Charles Dickens’ “A Tale of Two Cities” and its unforgettable opening paragraph – modified for a modern context here:

It was the best of times, it was the worst of times, it was the age of self-driving cars, it was the age of Tesla Autopilot, it was the epoch of safety system mandates, it was the epoch of consumer confusion, it was the season of LiDAR, it was the season of false positives, it was the spring of vision zero, it was the winter of escalating highway fatalities, we had solved all challenges, we had achieved nothing, we were all going to relinquish individual car ownership, we were all fleeing public transportation.

As the four-day International Telecommunications Union’s Future Networked Car Symposium kicks off this morning, the transportation industry stands at the fulcrum of a transformation that promises to save lives and rejuvenate economies. Or maybe it’s just a mirage.

New automotive safety systems offer the promise of collision avoidance, and self-driving technology suggests the possibility of driverless transport – but these opportunities appear to be farther away the faster we approach them. In spite of the widespread deployment of new sensors and systems in cars, highway fatalities continue to rise, and insurance companies have yet to prepare a path toward less expensive insurance for consumers who buy cars with more safety enhancements.

LexisNexis research tells us that the wider deployment of so-called advanced driver assist systems has, in fact, reduced the number and expense of claims. Yet those results have failed to manifest in measurably lower insurance rates.

Some observers point to data showing the declining number of claims, but note the higher cost of repairing (and recalibrating) cars with sophisticated safety systems. LexisNexis itself points to the confusion of ADAS naming conventions – lane keeping, lane departure warning, etc. – that has complicated marketing messages and consumer-facing educational campaigns.

A recently published report from Strategy Analytics highlights the challenges faced by automotive engineers in bringing safety and self-driving systems to market. Titled “Human Performance Properties in Automated Driving,” the report points to a range of issues and previously published research addressing topics including “trust,” “mode confusion,” “motion sickness,” “situational awareness,” “workload,” and “emotional response.”

“Human Performance in Assisted and Automated Driving” – Strategy Analytics

The report concludes: “Regarding the development of objective methods and thresholds, it is worth highlighting the unique work that the AVT Consortium is carrying out using real-world driver behavior data to assist OEMs, policy makers and other stakeholders to understand what is acceptable in the operation of assisted and automated driving features, and when drivers may be drifting towards unsafe conditions.”

The report very much captures my own personal experiences with semi-automated driving. Now that I drive a BMW equipped with lane keeping technology I am experiencing all of the issues described by the report’s author.

The lane keeping in my BMW is sufficiently aggressive – practically ripping the steering wheel out of my hands if I attempt a lane change without signaling – that it generates an immediate emotional response and undermines my trust. At the same time, the user interface on the start-stop system is sufficiently confusing that I am never sure whether it is on or off – until it actually engages.

I know I am not alone and I know we won’t see broader consumer adoption of active safety system technology until we, as an industry, master the engagement with the consumer – be that with better research techniques or educational outreach. These and other topics will be discussed as part of the ITU’s Future Networked Car Symposium. You can register here: https://fnc.itu.int/

The four three-hour sessions are as follows (beginning today):

March 22, 2022

Opening + Session 1: Government Authorities’ Coordination for Automated Driving and Their Intelligent Transport

13:00-16:00 CET, Geneva

Register: https://fnc.itu.int/government-authorities-advances-in-intelligent-transport-systems/

March 23, 2022

Session 2: Artificial General Intelligence Applied to Vehicle Safety, Services, and Transport Management: Current Status and Future Directions

13:00-16:00 CET, Geneva

Register: https://fnc.itu.int/session-2-artificial-general-intelligence-applied-to-vehicle-safety-services-and-transport-management-current-status-and-future-directions/

March 24, 2022

Session 3: Automated Driving Systems for Consumer and Other Vehicles (Trucks, Delivery, Shuttles, Robotaxis, etc.)

13:00-16:00 CET, Geneva

https://fnc.itu.int/session-3-automated-driving-systems-for-consumer-and-other-vehicles/

March 25, 2022

Session 4: Wireless Communications Applied to Vehicle Safety Services, and Transport Management – Current Status and Future Directions

13:00-16:00 CET, Geneva

https://fnc.itu.int/session-4-wireless-communications-applied-to-vehicle-safety-services-and-transport-management-current-status-and-future-directions/

Also read:

No Traffic at the Crossroads

GM’s Super Duper Cruise

Emergency Response Getting Sexy


Etch Pitch Doubling Requirement for Cut-Friendly Track Metal Layouts: Escaping Lithography Wavelength Dependence
by Fred Chen on 03-27-2022 at 6:00 am

The 5nm foundry node saw the arrival of 6-track standard cells with four narrow routing tracks between wide power/ground rails (Figure 1a), with minimum pitches of around 30 nm [1]. The routing tracks require cuts [2] with widths comparable to the minimum half-pitch, to enable the via connections to the next metal layer with the same minimum pitch. In order to achieve this reliably, it becomes necessary to have alternate lines made of different materials (Figure 1b) that can be selectively etched with different etch chemistry [3,4], for example, silicon nitride and spin-on carbon. In this way, etch pitch is effectively doubled from the target metal pitch. This has some beneficial impact on the lithography.

Figure 1 (a) Left: Four narrow routing lines between wider power/GND rail lines. (b) Right: Alternating lines should be constructed from materials that are etched by selective chemistry so that cut lines can cross over intervening lines without affecting them.

Producing this series of lines is possible by use of SAQP (Self-Aligned Quadruple Patterning), as detailed in Figure 2. To target a 30 nm final pitch, the starting core pitch for SAQP would be 120 nm which is a comfortably manageable pitch for a leading edge immersion tool (193 nm wavelength, 1.35 NA). This approach can be foreseeably extended down to 20 nm minimum track pitch.
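
The pitch arithmetic is easy to sanity-check: SAQP divides the starting core pitch by four, and the core pattern must still print in a single immersion exposure, whose hard resolution limit is roughly 2·k1·λ/NA with k1 = 0.25. A small Python sketch (the k1 = 0.25 limit is a standard lithography rule of thumb, not a number from the cited papers):

```python
WAVELENGTH_NM = 193.0   # ArF immersion
NA = 1.35

# Theoretical single-exposure pitch limit: 2 * k1 * lambda / NA, k1 = 0.25
min_pitch = 2 * 0.25 * WAVELENGTH_NM / NA   # ~71.5 nm

for target_pitch in (30, 20):               # final track pitches from the text
    core_pitch = 4 * target_pitch           # SAQP divides core pitch by 4
    printable = core_pitch > min_pitch
    print(f"{target_pitch} nm track pitch -> {core_pitch} nm core pitch "
          f"(limit {min_pitch:.1f} nm, printable: {printable})")
```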

Figure 2. SAQP with a starting core pattern (green) that will result in alternating material tracks. The blue lines are defined by the first spacers (yellow), while the red lines are defined by the material that fills the gaps after the second spacers (gray) are formed. Note the merger of the first spacers in the middle of the pitch.

The alternating material line arrangement can also be extended to include SRAM patterning (Figure 3) [5].

Figure 3. Alternating material line arrangement for an SRAM M0 layer example [5].

The cuts for the lines shown in the above figures are achieved in two steps, requiring two masks, one for cutting the blue lines (not affecting the red lines), the other for cutting the red lines (not affecting the blue lines). The spaces between cuts may be narrow enough to warrant double patterning without EUV as well. Self-aligned double patterning (SADP) is the preferred approach [6]. Figure 4 shows the outlines of the SADP cut patterns for the case of Figure 3.

Figure 4. Crossing SADP patterns for line cuts for the pattern of Figure 3.

Since the two-material line arrangement is necessary for the reliable line cutting in and of itself, this minimum three-mask approach (SADP or SAQP, blue line cut, red line cut) would also be required for EUV, not just DUV. Regarding via patterning, in a trench-first, via-last dual-damascene scheme [4,7], self-aligned vias would only require one DUV mask (possibly with SADP), while making use of the prior multipatterned metal trench line pattern. Thus, we see that enforcing cut-friendly layouts leads to an unexpectedly wavelength-agnostic outcome, at least as far as mask count is concerned. In light of the difficulty of getting hold of an EUV tool these days, this is indeed a welcome scenario.

References

[1] J. U. Lee et al., “SAQP spacer merge and EUV self-aligned block decomposition at 28 nm metal pitch on imec 7nm node,” Proc. SPIE 10962, 109620N (2019).

[2] W. Gillijns et al., “Impact of a SADP flow on the design and process for N10/N7 Metal layers,” Proc. SPIE 9427, 942709 (2015).

[3] F. Lazzarino et al., “Self-aligned block technology: a step toward further scaling,” Proc. SPIE 10149, 1014908 (2017).

[4] B. Vincent et al., “Self-aligned block and fully self-aligned via for iN5 metal 2 self-aligned quadruple patterning,” Proc. SPIE 10583, 105830W (2018).

[5] S. Sakhare et al., “Layout optimization and trade-off between 193i and EUV-based patterning for SRAM cells to improve performance and process variability at 7nm technology node,” Proc. SPIE 9427, 94260O (2015).

[6] K. Oyama et al., “The enhanced photoresist shrink process technique toward 22nm node,” Proc. SPIE 7972, 79722Q (2011).

[7] H. Tomizawa et al., “Robust Self-Aligned Via Process for 64nm Pitch Dual-Damascene Interconnects using Pitch Split Double Exposure Patterning Scheme,” 2011 IITC.

This article originally appeared in LinkedIn Pulse: Etch Pitch Doubling Requirement for Cut-Friendly Track Metal Layouts: Escaping Lithography Wavelength Dependence

Horizontal, Vertical, and Slanted Line Shadowing Across Slit in Low-NA and High-NA EUV Lithography Systems

Pattern Shifts Induced by Dipole-Illuminated EUV Masks

Revisiting EUV Lithography: Post-Blur Stochastic Distributions