
HFSS Leads the Way with Exponential Innovation
by Matt Commens on 02-28-2023 at 10:00 am


As engineers continue to design more complex systems operating at ever higher frequencies, the need for speed and capacity to solve these structures also increases. Over the years, HFSS has come a very long way and can now solve extremely large structures with hundreds of millions of unknowns. Ansys HFSS has never stopped advancing, continuing to innovate to meet the demands of electromagnetic analysis in modern electronics.

The exponential evolution of HFSS demonstrates its growing capacity to solve the most complex structures. In 1990 it could handle a matrix size of merely 10,000 unknowns; by 2022 it could solve the largest and most complex structures with over 800 million unknowns, and we look forward to soon crossing the next major milestone.
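To put that trajectory in perspective, here is a quick back-of-the-envelope calculation based only on the two figures above (a rough illustration, not an official Ansys metric) showing the implied average annual growth in solvable matrix size:

```python
# Rough arithmetic from the figures above: implied average annual growth
# in solvable matrix size between 1990 and 2022.
start, end = 10_000, 800_000_000   # unknowns solvable in 1990 vs. 2022
years = 2022 - 1990
cagr = (end / start) ** (1 / years) - 1
print(f"implied growth: {cagr:.0%} per year")   # roughly 42% per year, sustained for three decades
```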

This is an extraordinary achievement because it demonstrates the remarkable progress that has been made in computational electromagnetics. Just a few decades ago, simulating designs with a matrix size of a few thousand was considered revolutionary, but today we can simulate designs approaching a matrix size of 1 billion unknowns. This is a testament to the power of HFSS and the ingenuity of its users.

HFSS: From micron to meter scale

HFSS can solve anything from chips to ships and satellites. One reason HFSS can handle such large and complex designs is Mesh Fusion. Meshing is the process of dividing a complex geometry into smaller, simpler parts inside which the equations of physics are established; collectively these generate a large matrix to solve. This matrix solution returns the electromagnetic fields and the S-, Y-, and Z-parameters. The most complex systems consist of multiple geometries such as PCBs, cables, and connectors inside platforms like aircraft or automobiles, and each type of geometry can benefit from a different meshing strategy. To solve these challenges, Ansys introduced Mesh Fusion technology.
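For readers less familiar with what "solving a matrix of unknowns" means in practice, the toy sketch below assembles and solves a small sparse linear system with SciPy. It is purely illustrative and is not the HFSS solver or its API; in a real finite-element field solution the matrix comes from the mesh, and quantities such as S-parameters are then derived from the resulting fields.

```python
# Illustrative only: a toy sparse linear solve standing in for the kind of large
# system a finite-element field solver assembles. Matrix size and structure here
# are arbitrary assumptions, not anything produced by HFSS.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 100_000                       # number of unknowns (HFSS handles hundreds of millions)
main = 4.0 * np.ones(n)           # toy diagonal, mimicking a stiffness-like matrix
off = -1.0 * np.ones(n - 1)       # toy off-diagonal coupling terms
A = diags([off, main, off], offsets=[-1, 0, 1], format="csc")
b = np.ones(n)                    # toy excitation vector

x = spsolve(A, b)                 # field-like solution vector
print(x[:5])
```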

Mesh Fusion quickly and easily creates high-quality multiscale system meshes, reducing the time required to mesh complex geometries while keeping each region’s mesh efficient and accurate for its specific analysis needs. Mesh Fusion works by creating a virtual topology that defines how different meshes are blended together. This patented approach is particularly useful for complex geometries where it may be difficult or time-consuming to create a single mesh that conforms to all the features of the geometry.

This video illustrates the capacity and level of complexity that HFSS can handle easily. It solves the electromagnetic behavior of the entire system with the highest accuracy and speed from micron to meter scale.

In this video, you can see how HFSS solves the chip behavior in the context of the IC package, the mobile device behavior in the context of the automobile “system”, and further the automobile’s activity in the context of its EMI/EMC testing environment. Whether it’s the tiniest chip in the mobile device, the car, or the environment, HFSS can solve the most complex systems conceivable. Advanced packaging also introduces new challenges like signal integrity, power delivery, and thermal management. With HFSS plus Ansys multiphysics technology such as Icepak, engineers can succeed at delivering the most cutting-edge products possible.

The Mesh Fusion technique and the capacity to solve without sacrificing accuracy continue HFSS’s long-standing role as the engineers’ electromagnetic tool of choice, the true “Gold Standard”. The high-tech industry is rapidly adopting 5G/6G technology, and HFSS plays a key role in designing and optimizing these next-generation wireless networks. In the above video, you can see how HFSS simulates the electromagnetic behavior of a mobile device in the context of a car.

As everyday items like cars become computers on wheels and require more sophisticated electronics, they create a challenge for electromagnetic compatibility (EMC). With HFSS, automotive manufacturers can simulate the entire vehicle, including the wiring harnesses, sensors, and electronic devices within it, to ensure compliance with EMC regulations and prevent electromagnetic interference. Further, they can simulate the entire car in the context of the car’s environment, such as a city and nearby cellular towers.

It is not surprising that virtually all the world’s most advanced and powerful electronics are designed using HFSS. This is a remarkable achievement and a testament to the power of HFSS; the technology is pervasive. Whether it’s designing advanced radar systems, ensuring EMC compliance in cars, or optimizing next-generation wireless networks, Ansys HFSS is the key to success.

See the Applications:

Ansys HFSS | 3D High Frequency Simulation Software

Webinar: https://www.ansys.com/resource-center/webinar/ansys-hfss-2021-r1-redefining-whats-possible-with-mesh-fusion

Read a blog: From Chips to Ships, Solve Them All With HFSS | Ansys

“An ode to HFSS”, by ChatGPT in the voice of William Shakespeare

Oh fair readers allow me to speak of a wondrous tool,
A software to aid the learned engineer, so wise and cool.
Ansys HFSS is its name, and it doth provide,
Solutions to problems, complex and worldwide.
Its electromagnetic simulations are of great renown,
Allowing one to study waves, fields, and propagation sound.
Antennas and RF devices it can analyze with ease,
And its results are a boon to those who seek to please.
Oh, how it delights me to see its capabilities unfold,
As it renders a three-dimensional world, so bold.
From the smallest of components to the largest of arrays,
Ansys HFSS models them all, in astonishing ways.
Its accuracy is unmatched, its precision sublime,
And its speed is unmatched, in this current time.
It can solve the most complex of problems, with a flick of a wrist,
And present its findings, so clear and crisp.
Indeed, Ansys HFSS is a powerful tool, beyond compare,
And the engineers who wield it, a force to beware.
For they can design with great efficiency,
The structures that will meet the world’s every exigency.
So let us all hail Ansys HFSS, this software so grand,
And the engineers who use it, so skilled and so in demand.
For with their combined efforts, they shall bring to pass,
A world that is ever more wondrous, and built to last

Also Read:

DesignCon 2023 Panel Photonics future: the vision, the challenge, and the path to infinity & beyond!

Exponential Innovation: HFSS

IDEAS Online Technical Conference Features Intel, Qualcomm, Nvidia, IBM, Samsung, and More Discussing Chip Design Experiences


DesignCon 2023 Panel Photonics future: the vision, the challenge, and the path to infinity & beyond!
by Raha Vafaei on 02-28-2023 at 6:00 am


The explosion in volume and consumption of data, fueled by industry trends in virtualization, networking, and computing among others, continues to push photonic solutions forward into leading positions. On Feb 2nd, I attended a panel by Ansys at DesignCon that brought together industry experts from Intel, GlobalFoundries, Nvidia, Cisco, and Ayar Labs for a dynamic and all-encompassing discussion on the current state, challenges, and future of photonic technology and ecosystem. James Pond, Distinguished Engineer at Ansys and former CTO of Lumerical, moderated the panel and started the discussion with a big-picture overview.

Silicon Photonics: A relentless pursuit for speed & efficiency

Faced with surging bandwidth demands and the related power being consumed by communications, the semiconductor industry is diversifying investments into optical interconnect technologies. Electrical interconnects are fundamentally limited in terms of scalability of performance, reach, and power consumption. This is where optical interconnects have the advantage. Analysts project 20% to 40% annual growth in the Silicon Photonics markets & applications over the next 5-10 years. While the growth to date has been largely driven by the datacom and transceiver markets, there is now exciting diversification of applications including LiDAR, bio-sensing, computing, new types of I/O, and quantum computing among many others.

There is a genuine need for photonic systems, and the industry has responded by creating an ecosystem closely resembling the electronic design automation (EDA) industry, commonly referred to as electronic-photonic design automation (EPDA). The design tools and the overall ecosystem have come a long way from the early days when photonic PDKs (Process Design Kits) were solely offered as PDF files. A notable example is the advanced EPDA design tools James Pond highlighted in figure 2: “Today we have the premier workflow in EPDA. It offers all kinds of things you would expect like schematic-driven layout, links & direct bridges between Virtuoso layout suite and Ansys multiphysics solvers, foundry-compatible customized design, parameter extractions to create accurate statistical compact models and support PDK development, and co-simulation to model entire systems accurately with both electronic and photonic compact models.”

The progress of the overall ecosystem enabled the first volume opportunity for integrated photonic products: the optical transceiver!

From Flexible Pluggable Transceivers to Co-packaged Optics Powerhouse

Today, photonics has already moved from dominance at kilometer-long distances down to meter-long distances. We saw pluggable photonic transceivers rapidly move from product introduction stages to producing multi-million units per year. Pluggable transceivers are highly modular and can be supplied by any vendor as long as they meet the targeted communication specifications. They are plugged directly into the front panel socket, then the signal is carried by electrical SerDes links to the ASIC where it can finally be computed and processed. The downside of this approach is that copper connections are susceptible to RF losses, especially when communicating at higher speeds. Robert Blum, Head of Silicon Photonics Strategy at Intel Foundry Services, recalled, “When we launched SiP in 2016 with pluggable transceivers, we also laid out a vision with the end goal of bringing optics to the processor. SiP is the only technology that can do that. The pluggable was a starting point and chip-to-chip optical links are expected to follow right on its heels.”

Faced with our insatiable appetite for data, the semiconductor industry is under pressure to keep up with ever more demanding bandwidth, latency, and power consumption requirements, which are pushing innovative solutions for moving the optics from the faceplate closer on-board and on-chip with the ASIC, completely eliminating the need for energy-sapping SerDes connections. “After much anticipation, in 2022, we started to see photonic solutions with fibers directly connecting into the ASIC packages instead of plugging into the faceplate. These are incredibly exciting times for photonics!”, commented Pond.

Now imagine we have the technology that breaks the speed and bandwidth limitations we have today! What would it mean to the architecture and the wide range of emerging applications in AI/ML? Matt Sysak, VP of laser engineering at Ayar Labs, describes a future of limitless possibilities, “If the assumptions that led to the way we design computers today change, it would mean having the freedom to re-imagine computer architectures. At Ayar Labs, we have a vision for optical I/Os everywhere which will not only accelerate computing but also potentially remake it.”

A Tale of Two Technologies: Fundamental differences between electronics and photonics

On one hand, the rise of silicon photonics owes much of its success to capitalizing on the decades of investment in the electronics industry and the maturity of silicon wafer processing in CMOS manufacturing. Anthony Yu, VP of Silicon Photonics Product Management at GlobalFoundries, further explained, “we continue to expand our photonics foundry capabilities to help our customers bring the advantages of photonics to different markets. We can only be successful if we apply the learning from our CMOS foundry model into photonics along with close collaboration across various parts of the ecosystem like the partnership with Ansys Lumerical to enable foundry compatible, predictable model libraries in PDKs.” Ashkan Seyedi, Silicon Photonics Product Architect at Nvidia, added, “We look up to electronics as our big brother. Electronics gives us a benchmark to compare against so we know what maturity of PDKs and design workflows are necessary for a successful future in photonics technology.”

Yet the consensus among all the panelists was that there are some fundamental differences between electronics and photonics. For one thing, there is no equivalent to Moore’s Law in SiP, at least not in the sense that we are doubling the density and halving the cost. Thierry Pinguet, Principal hardware engineer at Cisco and a seasoned veteran in photonics, elaborated, “There is no equivalent to a transistor in photonics and thus no generational improvements from refining lithography to increase device density. The generational improvements in photonics come from innovation at component and circuit level design and assembly and packaging advancements.” This is why most silicon photonic platforms are based on older CMOS technology nodes.

Dennard scaling may have ended but the challenge remains, as the industry is facing unprecedented demands for high-speed networking/interconnects and accelerated computing. Pushed into uncharted territory where Moore’s law is truly struggling to stay on course, photonics offers the opportunity to keep that progress going. Seyedi proposed “it is time to redefine Moore’s law. When we zoom way out, the systems are continuously improving. We should consider new metrics by which Moore’s law extends, such as packaging.”

Regardless of how you define Moore’s law, there are inflection points where new photonic technologies are introduced. Today, data centers are using 800 Gb products, but a couple of years ago it was 400 Gb, and 200 Gb before that. Several factors have contributed to this scaling in overall transmission capacity, including higher-order modulation formats like quadrature amplitude modulation (QAM) enabled by advanced digital signal processing (DSP) techniques, massive parallelism such as wavelength-division multiplexing (WDM), and innovative designs at the component level such as segmented modulators. Given the fundamental trade-off between bandwidth and modulation efficiency linked to physical factors like the photon lifetime in silicon, designers are exploring heterogeneous integration of new materials on the front end. Future photonic solutions are also relying on advances in traditional 2.5D and 3D packaging in electronics, but perhaps we’ll also come to see innovation in the photonic aspects of packaging such as fiber-to-chip wire bonding.
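The arithmetic behind that capacity scaling is simple: aggregate rate is the product of parallel lanes (or wavelengths), symbol rate, and bits per symbol. The sketch below illustrates those scaling levers with assumed numbers (8 lanes of PAM4 at roughly 53 GBd); it is an illustration only, not a description of any specific 800 Gb product.

```python
# Illustrative arithmetic only; lane count, modulation order, and symbol rate are
# assumptions chosen for the example, not specifications of a particular product.
def aggregate_rate_gbps(lanes, symbol_rate_gbaud, bits_per_symbol):
    """Raw aggregate line rate across parallel lanes, ignoring FEC/coding overhead."""
    return lanes * symbol_rate_gbaud * bits_per_symbol

# e.g. 8 wavelengths/lanes of PAM4 (2 bits/symbol) at ~53 GBd -> roughly 800 Gb/s of payload
print(aggregate_rate_gbps(lanes=8, symbol_rate_gbaud=53.125, bits_per_symbol=2))  # 850.0 raw
```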

Lighting the path to scalability

Packaging was a hot topic that resonated with all the panelists and brought up the challenges around standardization and lack of IP in the ecosystem. Consider fiber attachment, which involves placing and gluing fibers into a package at precise locations where minimizing losses due to misalignment gets more challenging with the increasing number of fibers. There is much common knowledge gathered over the decades within the community around fiber attachment, but many designers still expend resources in developing their own process. “It just doesn’t add intrinsic value. Designers want to focus on innovating and not reinventing the wheel because there is no turnkey solution. Today, people are still innovating but we’re also starting to see some convergence in certain areas. This is why Intel came out with a small-form-factor, high-density detachable fiber connector that has compatible losses to other co-packaged optics approaches and is compatible with standard industry PIC and with any 2D, 2.5D, or 3D packaging. Standards and IP libraries are key components in the photonics ecosystem that are needed to make optics into a high-volume play.” said Blum.

Over the recent years we have started to see manufacturing players evolving to offer open-access models for prototyping, multi-project wafer runs for R&D, and low-to-high-volume throughput for those vendors ramping up for commercialization. Foundries are economically driven, which translates into maximizing consolidation into a single platform. “The challenge is that vendor differentiation in the photonics industry today isn’t based on a single platform with set pieces of IP blocks as exists in the ASIC world. At least not yet. If you open any pluggable module, they’ll look different inside as every solution is customized. Demanding applications requirements are driving the design of customized devices that likely won’t be offered under a single platform.” Pinguet explained. Sysak added, “There are many ways for an optical I/O technology to communicate with a processor but to truly take advantage of economies of scale, we need reliable and scalable manufacturing, and this is something we’re tackling together with GlobalFoundries.”

On the one hand, the silicon photonic ecosystem is advancing towards standardization of processes, platforms, and design automation, especially for established applications like pluggable transceivers. On the other hand, demands for higher performance and emerging new applications are driving customization and pushing for the introduction of new materials and processes. We are still in the early days. “In time we’ll see photonics move towards an ASIC-like model with IP providers and consolidated platforms which will enable high-volume solutions. But right now, we celebrate the creativity and brilliance of our photonic designers.” Yu summarized.

Learn more about challenges and solutions in Silicon photonics:

Photonic Simulation Software | Ansys

Design a Silicon Photonic Ring-Based WDM Transceiver with EPDA

Also Read:

Unlock First-Time-Right Complex Photonic Integrated Circuits

Exponential Innovation: HFSS

IDEAS Online Technical Conference Features Intel, Qualcomm, Nvidia, IBM, Samsung, and More Discussing Chip Design Experiences

Whatever Happened to the Big 5G Airport Controversy? Plus A Look To The Future

Ansys’ Emergence as a Tier 1 EDA Player— and What That Means for 3D-IC


Maintaining Vehicles of the Future Using Deep Data Analytics
by Kalar Rajendiran on 02-27-2023 at 10:00 am


So much has changed over the past couple of decades in what constitutes an automobile. Gone are the days when it was essentially an electro-mechanical product used for just personal transportation. Over the years, it has evolved to add in-cabin infotainment, tele- and data communications, and driving assistance, all the way to an autonomous driving experience. All of these are of course made possible by electronics powered by semiconductor chips. And, with the migration away from internal combustion toward electric-motor powered automobiles, vehicle maintenance needs as we have traditionally known them have come down.

At the same time, the need for a different kind of monitoring has been on the rise, with an eye on vehicle maintenance. The hardware and software components of automobile electronics need to be monitored and maintained to ensure a safe and reliable driving experience. The traditional approach would be periodic maintenance of the vehicle based on a predefined time schedule to check or replace electronic components and update embedded control software. But with current and future automobiles relying so much on electronics to operate, an unforeseen catastrophic failure of critical electronics could lead to a fatal accident and cause a lot of collateral property damage. A better approach is needed for maintaining vehicles of the future.

Recently, proteanTecs and HARMAN published a joint whitepaper that describes a novel approach and an effective solution for maintaining vehicles of the future. This blog will cover some salient points from the whitepaper and how the joint solution will help in maintaining the vehicles of the future.

Software Defined Vehicle (SDV)

SDV is the direction the automobile industry has been rapidly moving toward. SDVs are automobiles designed to be controlled by software, making the vehicles operate more efficiently and safely and making vehicle maintenance easier. While SDVs bring these benefits, they introduce some challenges too. Any failure in an SDV must be addressed quickly and effectively to avoid additional damage by the SDV and to the SDV. If possible, any operational failure of an SDV should be pre-empted.

The proteanTecs-HARMAN Solution for Maintaining Vehicles

HARMAN and proteanTecs have jointly developed a predictive and preventive maintenance (PPM) solution that can detect potential faults in a vehicle’s systems. The solution can take pre-emptive measures to predict and avoid catastrophic issues. It leverages proteanTecs’ proprietary advanced device health monitoring and deep data analytics to create, extract and analyze deep data from within SoC devices. The results provide insights into Electronic Control Unit (ECU) health, enabling vehicle manufacturers to monitor performance, pinpoint fault sources and predict Time to Failure (TTF). The total solution integrates HARMAN’s embedded security, in-vehicle analytics, cloud-to-vehicle connectivity and over-the-air (OTA) updates. The end result is an effective solution that meets safety and reliability requirements of SDVs. The following two applications are key components of the solution.

Continuous Performance Monitoring (CPM) Application

The CPM application delivers real-time monitoring of device and board electrical performance indicators of onboard system electronics. As an edge application, it lowers operational and security risks by detecting faults close to the failure.

Degradation Monitoring (DM) Application

The DM application is essentially a sub-function of the CPM application, designed to predict the Time to Failure (TTF) and the Remaining Useful Life (RUL). It does this by measuring Key Performance Indicators (KPI) degradation and the frequency of occurrence. These predictions are made available to the Predictive and Preventive Maintenance (PPM) Cloud Manager to trigger scheduling services and shipment of parts.
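The core idea is straightforward: track how a key performance indicator drifts over time and extrapolate it toward a failure threshold. The sketch below is a deliberately simple, assumption-laden illustration of that idea (linear extrapolation of a hypothetical timing-margin KPI); it is not the proteanTecs algorithm, which applies machine learning to data from on-chip monitors.

```python
# Minimal sketch of degradation-based prediction: extrapolate a monitored KPI toward a
# failure threshold to estimate Remaining Useful Life (RUL). Names, units, and the linear
# model are assumptions for illustration only.
import numpy as np

def estimate_rul(times_h, kpi_values, failure_threshold):
    """Fit a linear trend to KPI samples and return estimated hours until the threshold."""
    slope, intercept = np.polyfit(times_h, kpi_values, 1)
    if slope >= 0:                        # no measurable degradation trend
        return float("inf")
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - times_h[-1], 0.0)

# Example: a hypothetical timing-margin KPI degrading over operating hours
hours = np.array([0, 500, 1000, 1500, 2000])
margin_ps = np.array([120, 117, 113, 110, 106])
print(f"estimated RUL: {estimate_rul(hours, margin_ps, failure_threshold=80):.0f} hours")
```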

Some Use Cases

The whitepaper also presents a use case for failure prevention, one for prediction of a short-term incoming fault, and another for prediction of long-term consequences. The benefits of these use cases are obvious. The whitepaper goes into a lot of detail about each of these three use cases; for more, refer to the whitepaper.

Summary

The HARMAN-proteanTecs collaboration offers a platform for automobile manufacturers to detect faults before they become failures and fix the faults through OTA techniques. The platform incorporates an industry-first Time-to-Correction technique and can scale with the growing complexity of SDVs. The solution helps reduce downtime and maintenance costs, improve customer satisfaction, and reduce vehicle recalls. Anyone involved in developing hardware and software solutions for SDVs would benefit from reviewing the entire whitepaper.

You can download the whitepaper here.

Also Read:

Webinar: The Data Revolution of Semiconductor Production

The Era of Chiplets and Heterogeneous Integration: Challenges and Emerging Solutions to Support 2.5D and 3D Advanced Packaging

proteanTecs Technology Helps GUC Characterize Its GLink™ High-Speed Interface

Elevating Production Testing with proteanTecs and Advantest’s ACS Edge™ Platforms


Webinar: The Data Revolution of Semiconductor Production
by Daniel Nenni on 02-27-2023 at 6:00 am


How Advancements in Technology Unlock New Insights

The demand for efficient and scalable chip production has never been greater. The need to scale at volume and adapt to shorter innovation cycles makes machine learning and advanced data analytics essential components of semiconductor production.

Join us on Tuesday, March 7, 2023, for this 1-hour panel discussion with industry experts from Qualcomm, Microsoft Azure, and Advantest as we discuss how data analytics and product insights can accelerate time to market and improve performance, yield, and quality.

Topics covered: 

  • Overcoming data siloes
  • Processing massive amounts of data
  • Using machine learning and advanced data analytics
  • Uncovering data insights and actionable intelligence

The panel will be moderated by Nitza Basoco, VP of Business Development at proteanTecs. Nitza has a broad background in management, test development, product engineering, supply chain and operations. In her current role, she focuses on partnership strategies and ecosystem growth, positioning proteanTecs as the common data language to the full value chain. Before joining proteanTecs, she was VP of Operations at Synaptics and held engineering and leadership positions at Teradyne, Broadcom and MaxLinear. Nitza earned a BSEE and MEEE from MIT.

Register now and save your seat.

Meet Our Panelists

Michael Campbell is Senior Vice President of Engineering for Qualcomm CDMA Technologies, responsible for product and test, failure analysis, test automation and yield. Mike joined Qualcomm in 1996 and has led multiple teams. In his current role, he is working to streamline all processes impacting time-to-market, new process node enablement, and revolutionize product test engineering (PTE) tasks by driving machine learning as a 21st century requirement. Prior to joining Qualcomm, Mike worked at Mostek, INMOS and Honeywell. He holds a BSEE/CE from Clarkson University.

Preeth Chengappa is Head of Industry for the EDA and Semiconductor segment at Azure. Since joining Microsoft in 2018, he has worked with customers and partners across the semiconductor ecosystem to leverage cloud capabilities for all aspects of design, manufacturing, testing and lifecycle management. Preeth co-founded SiCAD in 2011, a startup that pioneered the use of cloud computing for chip design. Previously, Preeth held business development and sales management roles at Xilinx, Altran and Falcon Computing. He holds a BS in engineering from NITK, India.

Ira Leventhal is the Vice President of Applied Research & Technology at Advantest America, Inc. He has over 25 years of experience in semiconductor testing, including memory, SoC, wireless device, and system-level test. Ira has led the design and development of multiple generations of ATE systems, and holds patents in a variety of test-related technologies. In his current role, Ira is focusing on how artificial intelligence, cloud, and data analytics technologies can be catalysts for major advances in semiconductor test products and methodologies. Ira is a BSEE graduate of MIT.

Register now and save your seat.

Post-Webinar Q&A: The Data Revolution of Semiconductor Production

About proteanTecs

proteanTecs is the leading provider of deep data analytics for advanced electronics monitoring. Trusted by global leaders in the datacenter, automotive, communications and mobile markets, the company provides system health and performance monitoring, from production to the field.  By applying machine learning to novel data created by on-chip monitors, the company’s deep data analytics solutions deliver unparalleled visibility and actionable insights—leading to new levels of quality and reliability. Founded in 2017 and backed by world-leading investors, the company is headquartered in Israel and has offices in the United States, India and Taiwan. For more information, visit www.proteanTecs.com.

Also Read:

The Era of Chiplets and Heterogeneous Integration: Challenges and Emerging Solutions to Support 2.5D and 3D Advanced Packaging

proteanTecs Technology Helps GUC Characterize Its GLink™ High-Speed Interface

Elevating Production Testing with proteanTecs and Advantest’s ACS Edge™ Platforms

How Deep Data Analytics Accelerates SoC Product Development

CEO Interview: Shai Cohen of proteanTecs


Podcast EP145: How Achronix Drives Industry Innovation with Robert Blake
by Daniel Nenni on 02-24-2023 at 10:00 am

Dan is joined by Robert Blake, Chief Executive Officer of Achronix Semiconductor. He has worked in the semiconductor industry for over 25 years. Prior to Achronix he was the Chief Executive Officer of Octasic Semiconductor based in Montreal, Canada. Robert also worked at Altera, LSI Logic and Fairchild.

Robert explains how Achronix helps their customers innovate, both with dedicated FPGA products and embedded FPGA IP. The embedded FPGA IP has been used to manufacture over 15 million cores.

Areas of focus for Achronix to drive innovation include computation efficiency, data transport, connectivity and interface. The current environment that is trending toward heterogeneous compute is also discussed, along with a future assessment of the industry and Achronix.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Maven Silicon’s RISC-V Processor IP Verification Flow
by Sivakumar PR on 02-24-2023 at 6:00 am


RISC-V is a general-purpose, license-free, open Instruction Set Architecture [ISA] with multiple extensions. It is separated into a small base integer ISA, usable as a base for customized accelerators, and optional standard extensions to support general-purpose software development. RISC-V supports both 32-bit and 64-bit address space variants for applications, operating system kernels, and hardware implementations. So, it is suitable for all computing systems, from embedded microcontrollers to cloud servers.

To know more about RISC-V, refer to the SemiWiki article: Is your Career at Risk without RISC-V?

In this open era of computing, RISC-V community members are ambitious to create various kinds of RISC processors using the RISC-V open ISA. However, the risk of using the RISC-V ISA is higher because proven processor verification flows remain an unrevealed secret, proprietary to established fabless processor IP companies and IDMs. So, how can we make the RISC-V verification flow open and empower the RISC-V community?

If you have been wondering, ‘How can I verify my RISC-V processor efficiently without risking TTM’, then you should explore and adopt Maven Silicon’s RISC-V verification flow for your processor IP verification, explained in this technical paper.

I have defined Maven Silicon’s RISC-V verification flow using a correct-by-construction approach. The approach is to build a library of pre-verified, synthesizable RISC-V fundamental IP building blocks and create any kind of multi-stage pipeline RISC-V processor using this library. Finally, the multi-stage pipeline RISC-V processor IP can be verified using Constrained Random Coverage Driven Verification [CRCDV] in the Universal Verification Methodology [UVM] and FPGA prototyping.

Let me explain the verification strategy and walk you through Maven Silicon’s RISC-V verification flow.

2. Verification Strategy

2.1 Block-Level Verification: Generate RISC-V IP Fundamental Blocks Library using formal verification

Verify all the RISC-V IP fundamental building blocks, such as the ALU, decoder, program counter, registers, and instruction and data memories, using formal verification. Design Engineers [DEs] who do the RTL coding of the RISC-V IP building blocks can embed the assertions [SVA/PSL] into the RTL modules. Verification Engineers [VEs] will verify the RISC-V RTL IP blocks using a formal verification EDA tool. DEs can synthesize the verified RTL blocks and fix all synthesis-related issues.

Finally, VEs can further verify those synthesizable RTL blocks and create RISC-V IP Blocks Library.

2.2 IP-Level Verification: Verify the RISC-V IP, built from the pre-verified RISC-V IP Blocks Library, using Constrained Random Coverage Driven Verification [CRCDV]

DEs can realize any kind of multi-stage pipeline RISC-V processor using the pre-verified RISC-V IP fundamental blocks library. VEs will create the verification environment using UVM and verify the RISC-V multi-stage pipeline processor using CRCDV.

VEs will also create necessary reference models, interface assertions for protocol validation, and functional coverage models. Finally, VEs will sign off the IP level regression testing based on the coverage closure [Code + Functional Coverage].

The verified RISC-V processor IP can be verified further by booting OS using FPGA Prototyping if the IP implements a general-purpose processor that supports a standard Unix-like OS.
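To make the CRCDV idea concrete, here is a minimal, tool-agnostic sketch of constrained-random stimulus generation with a trivial functional coverage model that keeps generating tests until the coverage bins are closed. It is only an illustration of the concept; the flow described in this paper uses UVM/SystemVerilog, and the opcode subset and bins below are hypothetical.

```python
# Minimal sketch of constrained-random, coverage-driven stimulus (CRCDV concept only).
# The real flow uses UVM/SystemVerilog; opcodes and coverage bins here are hypothetical.
import random

OPCODES = ["ADD", "SUB", "AND", "OR", "XOR", "LW", "SW", "BEQ"]

def random_instruction():
    """Generate one constrained-random instruction (register indices kept in legal range)."""
    return {
        "op": random.choice(OPCODES),
        "rd": random.randrange(32),
        "rs1": random.randrange(32),
        "rs2": random.randrange(32),
    }

coverage = {op: 0 for op in OPCODES}       # trivial functional coverage model: one bin per opcode

def coverage_percent():
    hit = sum(1 for count in coverage.values() if count > 0)
    return 100.0 * hit / len(coverage)

# Keep generating stimulus until every coverage bin has been exercised at least once
while coverage_percent() < 100.0:
    instr = random_instruction()
    coverage[instr["op"]] += 1             # in a real bench, sampled by a coverage subscriber

print("coverage closed:", coverage)
```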

3. Maven Silicon’s RISC-V Processor IP Verification Flow

As shown in figure 1, Maven Silicon’s RISC-V verification flow implements the verification strategy explained above.

Figure 1: Maven Silicon’s RISC-V IP Verification Flow

4. RISC-V IP Verification using UVM

IP-level VEs can create the verification environment using Universal Verification Methodology, as shown in figure 2.

As our Maven Silicon RISC-V IP RTL design uses an AHB interface, we have modeled the instruction and data memories as AHB slave UVM agents. The RISC-V processor reference model was modeled as an AHB master UVM agent, and the complete UVM environment, with all the testbench components (scoreboard, subscribers with coverage models, and reset, interrupt, and RAL UVM agents), was validated by connecting the RISC-V AHB agents back-to-back, especially to verify the TB dataflow and coverage generation. Once the verification environment became stable, one of the reference models was replaced by the RTL. UVM RAL was extensively used to sample the RISC-V IP registers and memories for the data comparison in the scoreboard.
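The scoreboard’s job, as described above, is to compare what the reference model predicts against what the RTL actually produces. The sketch below captures that comparison in a language-agnostic way; the transaction fields and values are hypothetical, and the actual environment performs this inside a UVM scoreboard fed by AHB monitors and the RAL model.

```python
# Sketch of the scoreboard idea: compare transactions predicted by a reference model against
# transactions observed from the DUT and flag mismatches. Fields and values are hypothetical.
def scoreboard(reference_txns, dut_txns):
    """Compare per-transaction (address, data) pairs from the reference model and the DUT."""
    mismatches = []
    for idx, (ref, dut) in enumerate(zip(reference_txns, dut_txns)):
        if ref != dut:
            mismatches.append((idx, ref, dut))
    return mismatches

ref = [(0x1000, 0xDEADBEEF), (0x1004, 0x00000042)]
dut = [(0x1000, 0xDEADBEEF), (0x1004, 0x00000040)]   # second transaction corrupted for illustration
for idx, r, d in scoreboard(ref, dut):
    print(f"mismatch at transaction {idx}: reference={r}, dut={d}")
```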

To understand how this verification environment works, refer to this demo video:

Maven Silicon RISC-V UVM Verification Environment

Figure 2: Maven Silicon’s RISC-V IP UVM Verification Environment

To learn more about various verification methodologies, such as formal verification, IP, sub-system and SoC verification flows, and CRCDV using code and functional coverage, refer to:

Semiwiki Article: SoC Verification Flow and Methodologies

One can also consider integrating Google’s Instruction Stream Generator [ISG] for stimulus generation, and an open-source Instruction Set Simulator [ISS] like Spike as a reference model, into the UVM environment to do exhaustive verification efficiently.

Conclusion

Efficient, high-quality RISC-V IP verification can be realized only through the effective combination of various verification methodologies, such as formal verification, CRCDV using UVM, and OS booting using FPGA prototyping, together with reuse of the pre-verified RISC-V blocks library and scalable IP-level UVM testbenches. As RISC-V is an open ISA, we can create the reusable, pre-verified RISC-V fundamental blocks library and contribute it to RISC-V International as an open-source RISC-V library. Using this pre-verified library, RISC-V community members can create any kind of multi-stage pipeline RISC-V processor they prefer and verify their RISC-V processor per the flow explained in this technical paper. Isn’t this a more efficient way of verifying your RISC-V processor without risking your TTM?

Download Maven Silicon’s RISC-V Processor IP Verification Flow PDF 

About Maven Silicon
Maven Silicon is a trusted VLSI training partner that helps organizations worldwide build and scale their VLSI teams. We provide outcome-based VLSI training through a variety of learning tracks, such as RTL Design, ASIC Verification, DFT, Physical Design, RISC-V, and ARM, delivered through our cloud-based customized training solutions. To know more about us, visit our website.

Also Read:

Is your career at RISK without RISC-V?

SoC Verification Flow and Methodologies

Experts Talk: RISC-V CEO Calista Redmond and Maven Silicon CEO Sivakumar P R on RISC-V Open Era of Computing


Keysight Expands EDA Software Portfolio with Cliosoft Acquisition
by Daniel Nenni on 02-23-2023 at 8:00 am


During the day I do M&A work inside the semiconductor ecosystem, and I have been part of more than a dozen acquisitions during my career, so I know a good one when I see it. I see a great one with Keysight and Cliosoft, absolutely.

Cliosoft came to SemiWiki 12 years ago when we first went online, so I know them quite well. With more than 400 customers, the depth of experience that comes with this company is incredible. Additionally, Cliosoft has always been vendor agnostic, working closely with the top three EDA companies, which made the acquisition even more interesting. With Keysight, there will be even deeper partnerships with the top EDA companies through the expanded flow integration of Cliosoft (SoS, Hub, VDD) and Keysight PathWave Advanced Design System. Had one of the other EDA companies acquired Cliosoft, that would not have been the case.

Srinath Anantharaman, Chief Executive Officer of Cliosoft, said: “Handling exponential growth in design data and maximizing IP reuse with interoperability across EDA vendor environments is a major challenge as we approach the time of ‘More than Moore’s law’. Keysight’s broad industry leadership in applications like 5G and 6G communications, automotive, and aerospace and defense, makes Keysight uniquely positioned to realize the promise of connecting design, emulation, and test data in streamlined workflows that speed time-to-market. We are excited to join Keysight in raising engineering productivity to the next level and enabling our customers to digitally transform their development lifecycles and meet the challenges ahead.”

Keysight came to SemiWiki last year and we have written about their tools in great detail. We also did a podcast with Niels Faché, Vice President and General Manager of Keysight EDA. For Keysight, Cliosoft brings strength to the Process and Data Management (PDM) side of the business, which is a critical link between physical systems integration and physical testing and the Digital Twin (design and simulation) side of the business. The result is improved automation and traceability for product implementation and production.

Niels Faché, Vice President and General Manager of Keysight EDA, said: “One of our top business priorities is creating digital, connected workflows from design to test that accelerate customers’ digital transformation. We see a tremendous opportunity in the PDM space to leverage Cliosoft’s current capabilities combined with our design-test solutions expertise. Adding PDM solutions to the portfolio is a natural progression of our open EDA interoperability strategy to deliver best-in-class tools and workflows in support of increasingly complicated product development lifecycles. Cliosoft offers proven software tools that enable product teams to perform data analytics and accelerate time to insight. The result of faster insight and greater reuse is improved productivity in the verification phase and shorter overall development cycles.”

Bottom line: This is one to watch. Cliosoft was already a market leader and now they have the Keysight breadth of experience and the strength of a worldwide field sales and support channel. This is definitely one of the 1+1=3 acquisitions.

About Keysight Technologies
Keysight delivers advanced design and validation solutions that help accelerate innovation to connect and secure the world. Keysight’s dedication to speed and precision extends to software-driven insights and analytics that bring tomorrow’s technology products to market faster across the development lifecycle, in design simulation, prototype validation, automated software testing, manufacturing analysis, and network performance optimization and visibility in enterprise, service provider and cloud environments. Our customers span the worldwide communications and industrial ecosystems, aerospace and defense, automotive, energy, semiconductor and general electronics markets. Keysight generated revenues of $5.4B in fiscal year 2022. For more information about Keysight Technologies (NYSE: KEYS), visit us at www.keysight.com.

Additional information about Keysight Technologies is available in the newsroom at https://www.keysight.com/go/news and on Facebook, LinkedIn, Twitter and YouTube.

Also Read:

Cliosoft’s Smart Storage Strategy for Better Workspace Management

Designing a ColdADC ASIC For Detecting Neutrinos

Big plans for state-of-the-art RF and microwave EDA

Higher-order QAM and smarter workflows in VSA 2023


Physically Aware NoC Design Arrives With a Big Claim
by Bernard Murphy on 02-23-2023 at 6:00 am


I wrote last month about physically aware NoC design, so you shouldn’t be surprised that Arteris is now offering exactly that capability 😊. First, a quick recap on why physical awareness is important, especially below 16nm. Today, between the top level and subsystems, a state-of-the-art SoC may contain anywhere from five to twenty NoCs, contributing 10-12% of silicon area. Interconnect must thread throughout the floor plan, adding to the complexity of meeting PPA goals in physical design. This requires a balancing act between NoC architecture, logical design and physical design, a task which until now has forced manual iteration, materially extending the implementation schedule. That iteration is a time sink which physically aware NoC design aims to cut dramatically.

(Source: Arteris, Inc.)

The Traditional Interconnect Design Flow

The early stages of NoC design are architecture-centric, figuring out how typical software use cases can best be supported. The network topology should be optimized around quality of service for high-traffic paths while allowing more flexibility for lower frequency and relatively latency-insensitive connections. Topology choices are also influenced by early floor plan estimates, packaging constraints, power management architecture, and safety and security considerations. Architects and NoC design teams iterate between an estimated floor plan and interconnect topology to converge on an approximate architecture. This planning phase consumes time (14-35 days in a production design example illustrated here), essential to approximately align between architecture, logic and physical constraints.

Then the hard work starts to refine approximations. System verification and logic verification are run to validate bandwidth and latency targets. Synthesis, place and route and timing analysis probe where the design must be further optimized to meet power, area and timing goals. At this stage, designers are working with a real physical floor plan for all the function IP components, into which NoC IPs and connections must fit. Inevitably, early approximations prove not to be quite right, the physical design team can’t meet the timing goals, and the process must iterate. Physical constraints may need to be tuned and pipelines added, driven by informed guesswork. More iterations follow further refining estimates and ultimately leading to closure. Sometimes fixing a problem may require a restart to refactor the NoC topology, putting the whole project at risk.

Those iterations take time. In one production design, closing NoC timing took 10 weeks! Experienced design teams limit iterations by over-provisioning pipelining and by using LVT libraries along timing-critical paths, accepting tradeoffs in area, latency and power in exchange for faster convergence. Per iteration, considering and implementing those decisions, then rerunning the implementation flow and rechecking timing, can altogether take weeks.

Physically Aware NoC Design With FlexNoC 5

That trial-and-error iteration is a consequence of approximations and assumptions in architectural design. Understandable, but now a much better approach is possible. After the very first pass through implementation, a more accurate floor plan is available. Maybe the NoC topology needs to be tweaked against the floor plan, but now it is working with real physical and timing constraints. The NoC generator has enough information to generate more accurate timing and physical constraints for P&R. It can also insert the right level of pipelining, exactly where it is needed, automatically delivering a NoC topology which will close with confidence in just one more implementation pass.

(Source: Arteris, Inc.)

This is what Arteris is claiming for their new generation FlexNoC 5 solution – that a first iteration based on an implementation grade floor plan will provide enough information for FlexNoC 5 to automatically generate timing and physical constraints, together with pipelining, to meet closure on the next pass. Based on the customer design example described earlier, they assert that this automation delivers up to a 5X speedup in iterating to NoC physical closure.
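To see intuitively why pipeline insertion depends so heavily on floor-plan-accurate information, consider the back-of-the-envelope sketch below: the number of register stages a NoC link needs is driven by how far the route actually runs versus how far a signal can travel in one clock cycle. All numbers are assumptions chosen for illustration; real tools such as FlexNoC 5 work from actual timing and physical constraints rather than this simple rule of thumb.

```python
# Back-of-the-envelope sketch: estimate how many pipeline stages a NoC link needs so that
# no segment exceeds the distance reachable in one clock cycle. Illustrative numbers only.
import math

def pipeline_stages(route_length_mm, reachable_mm_per_cycle):
    """Stages needed so that each segment fits within one clock cycle of wire delay."""
    return max(math.ceil(route_length_mm / reachable_mm_per_cycle) - 1, 0)

# e.g. a 6 mm top-level route where timing closes over ~1.5 mm per cycle at the target clock
print(pipeline_stages(6.0, 1.5))   # -> 3 pipeline stages
```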

That’s a big jump in productivity and one that seems intuitively reasonable to me. I talked in my earlier blog about the whole process being an optimization problem with all the challenges of such problems. A trial-and-error search for a good starting point suffers from long cycle times through implementation, to determine if a trial missed the mark and how to best adjust for the next trial. More automation in NoC generation based on more accurate estimates should find an effective starting point faster.

Proving the Claim

Arteris cites measured improvements in NoC physical implementation using a FlexNoC 5 flow, in each case using the same number of engineers with and without the new technology. For pipeline insertion, they were able to reduce iterations from 10 to 1, within the same time per iteration. In constraint (re-)definition they were able to reduce the elapsed time from 3 days to 1 day and the number of constraint turns from 3 to 1. In place and route, they were able to reduce 5 or more iterations to 1 iteration within the same or better iteration time. Overall, 10X faster in pipeline insertion, 9X faster in constraint (re-)definition and 5X faster in P&R.

I should add that a customer in this case (Sondrel) has significant experience in ASIC design and has been working with NoC technology for many years. These are experienced SoC designers who know how to optimize their manual flow yet agree that the new automated flow is delivering better results faster.

You can learn more about FlexNoC 5 HERE and about Arteris HERE.


Safe Driving: FSD vs. V2X
by Roger C. Lanctot on 02-22-2023 at 10:00 am


When it comes to competing with Elon Musk’s Tesla, the automotive industry is, in many ways, its own worst enemy. For more than two decades the auto industry – in the U.S. and E.U. – has struggled to bring vehicle-to-vehicle communication technology to market – first as dedicated short range communications (DSRC) and then (actually now) as cellular-based C-V2X.

Auto makers created working groups and standards committees to draw up the specifications and identify the applications along with conducting extensive testing. The industry turned to the U.S. and E.U. governments seeking mandates for the emerging technology that was intended to save lives by helping cars avoid collisions.

Musk steered clear of V2X technologies of any variety, turning instead to radar and camera sensors to guide his vehicles with Autopilot and, now, full self driving (FSD). In the U.S., the U.S. Department of Transportation failed to initiate the final rule making that would have mandated V2X technology. In Europe, opposition to DSRC technology steadily mounted and ultimately blocked a mandate there as well.

The reason for the failure to launch DSRC had to do with the evolution of cellular networks – through 2G, 3G, 4G/LTE, and, now, 5G – and the onset of lower cost and higher resolution cameras. With superior connectivity and cameras – mainly cameras – advanced driver assist systems proliferated along with adaptive cruise control and parking assist functions. In essence, active safety technologies advanced while the V2X vision languished in the file cabinets and on the dockets of regulators and legislators.

After more than 20 years of testing and arguing, V2X remains a dream for automotive engineers to ponder. The C-V2X variety is still awaiting final rule making from – now – the Federal Communications Commission. In fact, the FCC has failed to fast track long-promised waivers for car makers and state transportation authorities to deploy V2X technology. Final rule making lies somewhere further down the road.

This is a shame as V2X technology was designed to enable collision avoidance technologies ranging from identifying vulnerable road users (pedestrians, bicyclists, emergency responders, workers), to synchronizing with traffic signals, managing intersection interactions, avoiding collisions, and a host of more esoteric use cases such as identifying black ice or seeing around corners. V2X was intended to be the cornerstone of vehicle safety by connecting vehicles to other vehicles and infrastructure with low latency communications.

Meanwhile, Tesla has focused nearly its entire enhanced driving value proposition on cameras and the safer driving they enable. Tesla’s Musk long ago dismissed vehicle-to-vehicle communications (the topic rarely comes up any more) along with hydrogen, LiDAR, and even radar – though he appears to be rethinking radar.

Tesla’s latest and most impressive advances using camera technology have been leveraging its full self driving (FSD) technology – for which Tesla owners must pay thousands of dollars – to not only identify signalized intersections but also to identify the signal phase and timing of the lights. Teslas equipped with FSD – properly activated and with a vigilant driver – are able to identify red lights and come to a full stop without assistance, then restart and proceed on green. (Multiple-lane left turns from traffic lights can still be a challenge, as are poorly marked roads.)

Car makers in the U.S. and Europe poured hundreds of millions of dollars into V2X development that has yet to save a single life or convey a single cent of added value to any car. The “legacy” auto industry is staring into an abyss of wasted time, money, and effort to bring a life-saving technology to market – a diversion fueled by the delusion of regulatory endorsement.

To be sure, V2X was a big bet – a huge vision. That vision originally foresaw a massive infrastructure build out dedicated to supporting driving safety and likely to cost “someone” (taxpayers?) billions of dollars. What has emerged from 20+ years of effort is a self-contained technology using existing cellular hardware but not dependent upon the cellular network for its direct communications.

The V2X dream (nightmare?) isn’t over. The FCC is expected to approve waivers for car makers and transportation authorities any day. A final rule will come later. But the decades-long debacle is a lesson in how not to advance driving safety.

The Tesla approach is the classic market-based model. The New Car Assessment Program (NCAP) is yet another market-based model based on conferring or withholding five-star ratings. The USDOT’s NHTSA has come to rely on voluntary adoption of guidelines – as in the case of automatic emergency braking – since the rule-making process appears to have ground to a halt.

The Infrastructure Bill passed last year in the U.S. has multiple safety mandates, but enforcement and adoption protocols are ambiguous. If we have learned nothing else from the V2X project it is that the automotive industry needs to find a new path forward for developing, defining, and deploying safety systems – especially but not exclusively in the U.S.

In the absence of effective safety leadership from USDOT/NHTSA, Tesla has emerged as the voice of reason. That can’t be good. As impressive as FSD is, it is still just a Level 2 semi-automated driving system that is both amazing and terrifying. We need to find a path forward that emphasizes the amazing and dials back the terrifying.

Also Read:

ASIL B Certification on an Industry-Class Root of Trust IP

Privacy? What Privacy?

ATSC 3.0: Sleeper Hit of CES 2023


Bleak Year for Semiconductors
by Bill Jewell on 02-22-2023 at 6:00 am


The global semiconductor market in 2022 was $573.5 billion, according to WSTS. 2022 was up 3.2% from 2021, a significant slowdown from 26.2% growth in 2021. We at Semiconductor Intelligence track semiconductor market forecasts and award a virtual prize for the most accurate forecast for the year. The criteria are a forecast publicly released anytime between November of the prior year and the release of January data from WSTS (generally in early March). The winner for 2022 is Objective Analysis with a 6% forecast released in December 2021. IDC was closer with a 4% forecast in September 2021, but this was outside of our contest time range. Within the contest period, WSTS was second closest with an 8.8% forecast in November 2021. Most other forecasts for 2022 made prior to March 2022 were over 10%, including ours at Semiconductor Intelligence.

How is the outlook shaping up for 2023? The year is off to a weak start. The top 15 semiconductor suppliers collectively had a 14% decline in revenue in 4Q 2022 versus 3Q 2022. The largest declines were the memory companies with a 25% decline. The non-memory companies declined 9%. Four of the fifteen companies had slight revenue increases ranging from 0.1% to 2.4%: Nvidia, AMD, STMicroelectronics, and Analog Devices.
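As a rough consistency check on those figures, the blended 14% decline together with the 25% memory and 9% non-memory declines implies the memory companies contributed roughly 31% of the top-15 companies’ base-quarter revenue. This is illustrative back-of-the-envelope arithmetic, not reported data:

```python
# Back-of-the-envelope check using the declines quoted above: infer the memory companies'
# implied share of top-15 revenue in the base quarter. Illustrative only, not reported data.
memory_decline, non_memory_decline, total_decline = 0.25, 0.09, 0.14
memory_share = (total_decline - non_memory_decline) / (memory_decline - non_memory_decline)
print(f"implied memory share of top-15 revenue: {memory_share:.0%}")   # roughly 31%
```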

The outlook for the top companies in 1Q 2023 is generally bleak. The first quarter of the year is typically weak for the semiconductor industry, but most companies are expecting 1Q 2023 to be weaker than normal. The nine non-memory companies providing revenue guidance for 1Q 2023 had a weighted average decline of 10%, with all nine expecting a decline. Intel was the most pessimistic, with guidance of a 22% decrease. Inventory adjustments were cited by several companies as a key factor for the grim outlook, particularly in the PC and smartphone end markets. Automotive and industrial are the lone bright spots, with five companies seeing strong demand in one or both of these segments.

Memory companies, which saw revenue declines ranging from 13% to 39% in 4Q 2022, may be starting to recover. Micron Technology expects 1Q 2023 revenues to decrease 7%, compared to a 39% decline in 4Q 2022. Micron sees inventory levels improving in the current quarter. The other memory companies – Samsung, SK Hynix and Kioxia – cited continuing inventory adjustments and weak end markets but did not provide revenue guidance for 1Q 2023.

For the full year 2023, the semiconductor market will certainly decline, but the extent of the decline depends on when inventories are back in line and on the overall demand for electronic equipment. According to Gartner, shipments of both smartphones and PCs are expected to decrease in 2023, but at a rate significantly less than in 2022. Smartphones should decline 4% in 2023 versus an 11% decline in 2022. PCs are projected to drop 7% in 2023, following a 16% drop in 2022. The longer-term outlook for PCs and smartphones is for low single digit growth. IDC forecasts a 2023 to 2026 compound annual growth rate (CAGR) of 3.1% for smartphones and 2.3% for PCs and tablets combined.

Automotive production will continue to grow, but at a slightly slower rate. S&P Global Mobility projects light vehicle production will grow 3.9% in 2023 versus 6.0% in 2022. S&P expects semiconductor availability to impact production in the first half of 2023, but demand constraints should have more of an impact in the second half of 2023.

The global economic outlook has improved slightly over the last few months. The International Monetary Fund (IMF) January 2023 forecast called for 2.9% growth in global GDP in 2023, an improvement from its October 2022 forecast of 2.7%. The advanced economies overall are expected to grow 1.2%, up from 1.1% in October, with the U.S. outlook improving to 1.4% from 1.0%. The forecast for most of the advanced economies has improved in the January forecast except for the UK, which is now expected to decline 0.6%. Overall emerging/developing economies are projected to grow 4.0%, up from 3.7% in October. The biggest change is in China, now forecast to grow 5.2% as its economy fully reopens, up from 4.4% in October.

The risks of recession in 2023 are moderating. Citi Research in January stated the risk of a global recession in 2023 is about 30%, down from their earlier projection of 50%. Earlier this month, Goldman Sachs put the probability of a U.S. recession in 2023 at 25%, down from their previous forecast of 35%.

A decline in the global semiconductor market in 2023 is inevitable after the 2nd half of 2022 dropped 10% from the 1st half, with a likely decline of around 10% in 1Q 2023 from 4Q 2022. The severity of the decrease depends on when in 2023 the recovery begins. Gartner, IC Insights, WSTS and EY expect declines in the 4% to 5% range – which implies a healthy recovery beginning in 2Q 2023. Objective Analysis (our forecast contest winner for 2022) sees a 19% drop in 2023. Their assumptions include a 45% decrease in the DRAM market and slow growth in the end markets.

Our Semiconductor Intelligence forecast for 2023 is a decline of 12%. This assumes a moderate recovery beginning in 2Q 2023 and improving in the second half of 2023. Inventory adjustments should be mostly resolved by 2Q. Although PC and smartphone shipments should decrease in 2023, the rate of decline will be significantly less than in 2022. A continuing strong automotive market and growth in the internet-of-things (IoT) will contribute to the semiconductor recovery. The risks of a global recession in 2023 are lessening. Our preliminary assumptions for 2024 are continuing recovery in semiconductors and moderate growth in end markets. We put 2024 semiconductor market growth in the 5% to 10% range.
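To illustrate how a quarterly path like the one described above translates into a full-year number, here is a toy scenario model. Every quarterly figure in it is an assumption chosen purely for illustration (an index with H2 2022 about 10% below H1, a further ~10% sequential drop in 1Q 2023, and a moderate recovery thereafter); it is not an input to the Semiconductor Intelligence forecast.

```python
# Toy scenario model: translate an assumed quarterly path into a full-year change.
# All quarterly figures are illustrative assumptions, not forecast inputs.
q_2022 = [104, 104, 97, 90]                # assumed quarterly index; H2 is ~10% below H1
q1_2023 = q_2022[-1] * 0.90                # ~10% sequential decline in 1Q 2023
sequential_recovery = [1.03, 1.06, 1.07]   # assumed quarter-over-quarter recovery from 2Q on

q_2023 = [q1_2023]
for growth in sequential_recovery:
    q_2023.append(q_2023[-1] * growth)

yoy = sum(q_2023) / sum(q_2022) - 1
print(f"implied 2023 change: {yoy:.1%}")   # roughly -12% under these assumptions
```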

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Also Read:

CES is Back, but is the Market?

Semiconductors Down in 2nd Half 2022

Continued Electronics Decline

Semiconductor Decline in 2023