
Podcast EP107: The Impact of Arteris IP and Its Partnerships on the Automotive Industry and Beyond

by Daniel Nenni on 09-16-2022 at 10:00 am

Dan is joined by Michal Siwinski, Chief Marketing Officer for Arteris IP. Arteris provides network-on-chip interconnect semiconductor IP and deployment technology to accelerate SoC development and integration for a wide range of applications from AI to automobiles, mobile phones, IoT, cameras, SSD controllers, and servers.

Dan explores the impact Arteris is having on high-growth markets such as automotive with Michal. The company’s partnership with Arm is also explored. The impact of Arteris and its partnerships beyond the automotive market are also discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Samsung Foundry Forum & SAFE™ Forum 2022

by Daniel Nenni on 09-16-2022 at 6:00 am


It has been an exciting time in the semiconductor industry, and the excitement is far from over. 2022 and 2023 will be challenging in many different ways, and live activities have just begun. Foundries are the cornerstone of the semiconductor industry, so I am absolutely looking forward to the live foundry events coming up in October.

Samsung Foundry kicks this series of events off with:

“Join our Samsung Foundry Forum and SAFE Forum 2022. These two forums will be held in several locations around the world. After the in-person events, please join our online on-demand event for even more exciting content. You are invited to take part in these forums where the vision and innovative technology of Samsung Semiconductor will be discussed.”

Now that events are live, the ever-important semiconductor networking has resumed. While most of the conferences I have attended have been 50% full, the foundry events have been 100% full. That is how important they are.

REGISTER NOW

Into the gates of innovation

Samsung Foundry invites you to the Samsung Foundry Forum and SAFE Forum 2022. These forums will be the first in-person events held in three years!

At the Samsung Foundry Forum, join us to gain insight on our vision and latest technological innovations. You will hear from our top technical experts as well as an industry guest speaker. There will be opportunities to speak with our partners at their booths and network with Samsung Foundry and your peers throughout the event.

At the SAFE Forum, join us to hear about industry trends and learn from our SAFE partners as they present solutions to your EDA, IP, DSP, and Packaging challenges. There will be partner and customer keynotes, a panel session, and numerous technical sessions. In addition, similar to SFF, there will be opportunities to visit our partners at their booth and network with Samsung Foundry and your peers throughout the day.

All participants will receive a welcoming gift and be entered into a raffle. We hope you enjoy the 2022 Samsung Foundry Forum and SAFE Forum.

* SAFE Forum 2022 will be held only in the U.S. on October 4.

Why should you attend? I generally go for the food, and Samsung has the best food. Samsung is also very generous with the gifts. But most importantly, the networking. SAFE Forum will be filled with experts from the fabless semiconductor ecosystem. More than 100 Samsung ecosystem partners will attend in addition to the expert speakers from Samsung:

Day 1

Keynote: Siyoung Choi, President and GM of Samsung Foundry Business

Process Technology: Gitae Jeong, Head of Technology Development Team; Jong-Ho Lee, VP of Specialty Technology Team

Advanced Heterogeneous Integration: MoonSoo Kang, Head of Business Development Team

Manufacturing Excellence: YK Hong, VP of Yield Enhancement Team; Jonathan Taylor, VP, Head of SAS Fab Engineering

Design Platform: Ryan Lee, EVP Head of Design Platform Foundry Business

Business Outlook and Service: Sang-Pil Sim, EVP Head of Worldwide Sales and Marketing; Marco Chisari, EVP Head of Americas Office Foundry Sales and Marketing

REGISTER NOW

Day 2

Welcoming Remarks: Ryan Lee, EVP Head of Design Platform Foundry Business

Business Overview: MoonSoo Kang, Head of Business Development Team

Tech Session 1: GAA and More than Moore – Ansys, Cadence, Samsung, Siemens/Qualcomm, Synopsys

Tech Session 2: Multi-Die Integration – Amkor Technology, Cadence, Samsung, Siemens, Synopsys

Tech Session 3: Advanced IP for HPC – Alphawave, Cadence, Rambus, Samsung, Synopsys

Tech Session 4: Advanced Design Platform – ADT, Samsung, Samsung TSP, SiFive

And this is a worldwide event:

US: 03-04 Oct 2022 (PDT, UTC-7), 170 S Market St, San Jose, CA 95113, USA

EMEA: 07 Oct 2022 (CEST, UTC+2), Terminalstraße Mitte 20, 85356 München-Flughafen, Germany

Japan: 18 Oct 2022 (JST, UTC+9), 1 Chome-9-1 Daiba, Minato City, Tokyo 135-8625, Japan

Korea: 20 Oct 2022 (KST, UTC+9), 524 Bongeunsa-ro, Gangnam-gu, Seoul, Republic of Korea

 

REGISTER NOW


Three Ways to Meet Manufacturing Rules in Advanced Package Designs

by Kendall Hiles on 09-15-2022 at 10:00 am


Designers are often amazed at the diversity of requirements fabricators and manufacturers have for metal-filled areas in advanced package designs. Package fabricators and manufacturers do not like solid metal planes or large metal areas. Their strict metal fill requirements address two main issues. First, the dielectric and metal layers can be very thin, 15 µm or less, and during the build-up and RDL process they can suffer from areas of delamination due to trapped pockets of gas. Think of it like adding a screen protector to your smartphone and how hard it is to get the air bubbles out. Second, uneven conductor densities on the same layer or across layer pairs can cause warpage in the package and/or the wafer.

The combination of these issues makes the designer’s job of meeting the manufacturing rules a challenge. Further, the diversity of substrate technologies from numerous vendors means there’s no one-size-fits-all solution. In this article, we will walk through three methodologies that are commonly utilized on advanced package designs to achieve foundry/OSAT requirements for metal areas and planes:

  1. Dynamic hatched filled metal areas
    By far the easiest and fastest methodology. Some additional steps may be needed based on the density requirements.
  2. Outgassing voids in metal areas
    A post process solution that can be customized for about any situation.
  3. Dummy metal fill
    This is the way most silicon designs handle low density areas. Typically used for silicon interposer designs.

Dynamic hatched filled metal areas

One of the simplest ways to solve both outgassing and metal fill coverage is to use dynamic hatched fill. When adding square or diagonal hatch, the package design tool should tell you what the base density will be across the plane, which makes hitting your target density fairly simple.

You will need to make sure to fill any incomplete or partial hatches to prevent acute angle issues, and you should also offset the hatch on adjacent layers to prevent EMI and signal integrity issues. By setting these up at the beginning of the project, you save time compared to doing it at the last minute, just before or during tape-out. Most manufacturers and fabricators have manufacturing sign-off design rules that need to be met before manufacturing can begin. These design rules check for manufacturing and yield issues, like spacing, and problem layout items, like acute angles, density, and fill. When violations are found, designers can save time finding and fixing them by cross-probing from the design rule checking tool to the layout tool, if that capability is supported.

Figure 1. 30 µm void, 40 µm pitch, 43% fill
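The fill fraction in Figure 1 can be sanity-checked with a quick calculation. This sketch assumes the simplest model, one square void per pitch cell, which is my assumption rather than anything specified in the article; real hatch geometries vary by vendor.

```python
def hatch_fill_density(void_um: float, pitch_um: float) -> float:
    """Metal fill fraction of a square hatch, modeled as one square
    void of side void_um per pitch_um x pitch_um unit cell."""
    return 1.0 - (void_um / pitch_um) ** 2

# Figure 1 parameters: 30 µm voids on a 40 µm pitch
print(f"{hatch_fill_density(30, 40):.1%}")  # → 43.8%
```

The result, 43.8%, matches the roughly 43% fill quoted in the figure caption, which is why the tool can report a predictable base density before you start adjusting.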

In some technologies the base hatch can be amended to add other features required for a specific vendor.

Figure 2. HDFOWLP with pad voids and additional plane voids

Dedicated outgassing voids

It is common to see designers utilize standalone outgassing voids. Unlike the dynamic hatched fill, this is a post process. Designers use outgassing voids to get void shapes—like circles, rectangles, oblongs, octagons, or hexagons—or to stagger the voids. Once you find your formula, the process is predictable and very easy to update for layout changes. Using a density-aware, multi-pass outgassing routine enables designers to work on signal integrity and power integrity issues while simultaneously considering the manufacturing process requirements — resulting in significant time savings.

Metal balancing can be a density per layer or a layer pair target. Some manufacturers also utilize sub-layer blocks (125 µm–250 µm windows of density), like walking blocks or adjacent blocks.
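To make the sub-layer block idea concrete, here is a hypothetical sketch of a sliding-window density check over a rasterized metal layer. The window and step parameters are illustrative; actual rule decks define the exact window sizes and pass/fail limits.

```python
import numpy as np

def window_densities(metal: np.ndarray, win: int, step: int) -> dict:
    """Metal density of each win x win sub-block (e.g. a 125 µm window),
    stepped by `step`: 'walking' blocks when step < win, adjacent blocks
    when step == win. `metal` is a boolean grid, True where metal exists."""
    h, w = metal.shape
    out = {}
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            out[(y, x)] = metal[y:y + win, x:x + win].mean()
    return out

# Toy 4x4 grid: left half solid metal, right half empty
grid = np.zeros((4, 4), dtype=bool)
grid[:, :2] = True
densities = window_densities(grid, win=2, step=2)
# Each adjacent 2x2 window here is either all metal or all empty,
# exactly the kind of imbalance a per-window density rule would flag.
```

A real checker would compare each window's value against the vendor's minimum and maximum density limits and report the failing window coordinates.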

Figure 3. 125 µm density sub-block regions

Whatever the rules, make sure the adjacent-layer voids are offset, and keep the voids from going over any differential signal routing. Differential signals or pairs can have issues if the voids are unevenly dispersed on the adjacent layers over the pair. You may also see clearance rules from the void to a micro-via/polyimide opening or to a trace.

Figure 4. Multi-pass density aware voids

Figure 5. Adjacent layer void clearance

In high-speed designs or designs with high current draw, designers utilize automation-guided manual void placement. This helps users meet the manufacturing requirements while being fully aware of where each void is placed. 5G packages are a perfect use case for this method, which is recommended over the shotgun approach of fully automated methods where manual cleanup of unwanted voids is too time consuming.

Figure 6. Staggered rectangle voids to differential pairs

Figure 7. Degassing void analysis identifies areas requiring void insertion

Figure 8. As voids are added, circles show the effective radius.
Green areas still need a void; adjacent layers are also shown

Dummy metal fill

Another metal balancing method, utilized on interposer designs with high-bandwidth memory (HBM) or RDL interconnects, is dummy fill. Dummy fill refers to unconnected metal shapes. This can reduce capacitance and help increase manufacturing yield. It can be multi-pass, with multiple shapes that can grow to a set maximum length. It can also be density aware and add fill to hit a target value.
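The density-aware behavior described above can be sketched as a simple greedy loop. This is an illustrative toy, not any vendor's actual fill engine: it drops a single square dummy shape into free, non-keepout space until the layer hits a target density, whereas a production flow would run multiple passes with several shape types and sizes.

```python
import numpy as np

def add_dummy_fill(metal: np.ndarray, keepout: np.ndarray,
                   target: float, tile: int = 2) -> np.ndarray:
    """Greedy single-shape sketch: place tile x tile dummy squares into
    empty, non-keepout regions until overall density reaches `target`."""
    filled = metal.copy()
    h, w = filled.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            if filled.mean() >= target:
                return filled
            spot = np.s_[y:y + tile, x:x + tile]
            if not filled[spot].any() and not keepout[spot].any():
                filled[spot] = True  # unconnected dummy metal
    return filled

metal = np.zeros((8, 8), dtype=bool)
metal[:2, :] = True                      # 25% starting density
keepout = np.zeros_like(metal)
result = add_dummy_fill(metal, keepout, target=0.5)
print(f"{result.mean():.0%}")  # → 50%
```

In practice the keepout mask would include signal clearances and the density would be evaluated per window (as in the sub-block discussion earlier), not just globally.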

Utilizing a density analysis tool that allows visualization of the density windows in the host layout tool is paramount to finding and fixing areas and layers that do not meet the vendor rules.

In any of these methodologies you will need to simulate to make sure that the solution meets your performance specification. While foundries and OSATs are focused on manufacturability and yield, it falls to the user to ensure compliance with the performance specification. You must simulate your power delivery before you dismiss an outgassing methodology. At first glance, having signals crossing a plane area with hundreds of voids, like we saw in the earlier example, might sound like a bad idea; however, it can behave similarly to solid fill and may not present any issues. Without PDN simulation, you're just guessing at its suitability.

Figure 9. Multi-pass density aware dummy fill

Analysis using the appropriate methodology will ensure that the design meets performance specifications. Recommended types of analysis include DC drop (voltage drop, current density, via currents), PDN impedance analysis, and signal integrity analysis, including return path checks.

Figure 10. Tightly coupled return currents flowing on the cross hatched plane layer underneath the trace

Conclusion

In summary, dynamic hatched fill, outgassing voids, and dummy metal fill are the most common methods to achieve foundry/OSAT requirements for metal areas and planes. The key is choosing the methodology that best meets vendor rules, meets your PDN specifications, allows rapid ECO turns, and is repeatable. To expedite verification, make sure you turn on dynamic cross-probing between the vendor sign off tool and the layout tool.


Also Read:

Connecting SystemC to SystemVerilog

Today’s SoC Design Verification and Validation Require Three Types of Hardware-Assisted Engines

Resilient Supply Chains a Must for Electronic Systems


Synopsys Vision Processor Inside SiMa.ai Edge ML Platform

by Bernard Murphy on 09-15-2022 at 6:00 am


SiMa.ai just announced that they achieved first silicon success on their new MLSoC, for AI applications at the edge, using Synopsys’ design, verification, IP and design services solutions. Notably this design includes the Synopsys ARC® EV74 processor (among other IP) for vision processing. SiMa.ai claim their platform, now released to customers, is significantly more power efficient than competing options and provides hands-free translation from any trained network to the device. (Confirming an earlier post that software rules ML at the edge.) The company has impressive funding and experienced leadership so this is definitely a company to watch.

Strong Vision ML Starts with Strong Imaging

In modern intelligent designs, AI gets the press but would be worthless if presented with low-quality images. A strong image processing stage ensures that, between the camera and the ML stage, images are optimized to the greatest extent possible, particularly to meet or exceed how the human eye – still the golden reference – sees an image.

A dedicated ISP stage can get pretty sophisticated, up to and including its own elements of machine learning. Note: I don't know how much, if any, of the Synopsys ML support is included in the SiMa solution, or the range of EV74 ML capabilities they use. You will have to ask SiMa those questions.

ISP functions include de-mosaicing, which compensates for the raw pixel-based image sensor, overlaid by a color filter array, by interpolating a smooth image from that pixelated input. Especially in surveillance cameras, fisheye lenses require compensation for geometric distortion, another ISP function. Add to this list de-noising, color balance, and a host of other options, essential when matching to similarly compensated training images.

I personally find high dynamic range (HDR) to be one of the most interesting ISP adjustments, especially for AI apps. The opening images for this article illustrate an example HDR application. On the left is an image after other compensations, not including HDR. The right image is HDR compensated. Many ISP functions optimize globally; HDR is a local optimization, balancing between bright areas and darker areas in an image. Before compensation, features in low-light areas are almost invisible. After compensation, features are clear across the image despite a wide range of brightness. This is critically important for ML to detect, say, a pedestrian stepping off a sidewalk in a shaded area on a bright day.
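The local-versus-global distinction can be illustrated with a toy sketch. This is not how the Synopsys ISP implements HDR (that detail isn't public here); it is a minimal per-block contrast stretch I'm using purely to show why local optimization recovers shadow detail that a single global adjustment would miss. Real tone-mapping pipelines blend between neighboring blocks to avoid visible seams.

```python
import numpy as np

def local_equalize(img: np.ndarray, block: int = 64) -> np.ndarray:
    """Stretch contrast independently within each block x block region,
    recovering detail in dark areas without blowing out bright areas."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            lo, hi = tile.min(), tile.max()
            if hi > lo:
                out[y:y + block, x:x + block] = (tile - lo) / (hi - lo)
    return out

# Toy 2x2 "image": a global stretch would leave the dark pixels dark,
# but the local version maps each region to its own full 0..1 range.
img = np.array([[0.0, 10.0], [5.0, 10.0]])
out = local_equalize(img, block=2)
```

An ML detector downstream then sees comparable contrast in shaded and sunlit regions, which is the point of the pedestrian example above.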

While Synopsys doesn’t directly provide application software, they do offer a set of tools that designers use to create and optimize their own ISP software. The Synopsys MetaWare Development Toolkits support C/C++ and OpenCL C programming as well as vision kernels to ease application development. For those of you who don’t have applications expertise in these areas, there are also open-source solutions 😊

Intelligence in image signal processing

The EV74 processor optionally supports ML processing. I suspect this isn’t relevant to the SiMa application, but it is relevant to image processing, even before you get to object identification. Super-resolution methods aim to construct a higher resolution image from a lower resolution input using one of many possible neural net techniques. Consumer and medical applications often apply super resolution for graphic enhancement, using learning to infer reasonable interpolation pixels between existing pixels.

The EV74 DNN option can handle more than just that application. It supports direct mapping from the Caffe and TensorFlow frameworks, and the ONNX neural network interchange format. Edge AI in many applications demands a single chip solution. EV74 can support a standalone implementation (with appropriate memory and other functions in the SoC). Or integrated together with value-added specialist functionality like that from SiMa.ai.

What is coming next in solutions?

I talked more generally with Stelios Diamantidis (Distinguished Architect, Head of Strategy, Autonomous Design Solutions at Synopsys). He mentioned that edge applications are inherently heterogeneous, as data travels from optics, to sensors, to compute, to memory, to display, etc. Maintaining low end-to-end latency across the system is contributing to the chiplet movement. One example application is drones, which demand fast response times to avoid obstacles. He also sees a big pickup in industrial applications, for example LIDAR sensing in production lines to control grippers. Either case requires strong vision and AI to support the performance requirements of increasingly complex neural network models in SoCs.

Stelios added that between industrial and vehicle applications, such designs must be robust to a lot of environmental variation. Design methods and standards prove this in part, complemented by an established track record in design, and supported across a wide variety of applications, from semiconductor leaders to pioneers.

Very interesting stuff. You can read the SiMa.ai press release HERE.


Podcast EP106: SoC Verification Flows and Methodologies with Sivakumar P R of Maven Silicon

by Daniel Nenni on 09-14-2022 at 10:00 am

Dan is joined by Sivakumar P R, the Founder and CEO of Maven Silicon. He is responsible for the company’s vision, overall strategy, business, and technology. He is also the Founder and CEO of ASIC Design Technologies.

Dan and Sivakumar discuss SoC Verification Flows and Methodologies based on his article published on SemiWiki. Dan explores the SoC verification flow starting with verification IP and why verification engineers need to understand electronic systems, and how it helps their long-term career goals.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Machine Learning in the Fab at #59DAC

by Daniel Payne on 09-14-2022 at 8:00 am


It used to be true that a foundry or fab would create a set of DRC files, provide them to designers, and the process yield would be acceptable. However, if the foundry knows more details about the physical implementation of IC designs, then it can improve the yield. Using a digital twin of the design, process, and metrology steps is a better methodology that captures the bidirectional information flow between the physical synthesis tool and fabrication steps. A Machine Learning (ML) framework handles the bidirectional flow using accurate predictive models and efficient algorithms while running in the cloud. Ivan Kissiov of Siemens EDA presented on this topic at #59DAC, so I'll distill what I learned in this blog.

A digital twin for the fab engineer is made up of process tools, metrology tools, a virtual metrology system along with a recipe.

Virtual Metrology – 2006 IEEE International Joint Conference on Neural Network Proceedings (pp. 5289-5293)

Models for the digital twin have to predict how new ICs will be manufactured and how they respond to different process and tool variations. Goals for this approach are improved yield at lower cost, better process monitoring, higher fault detection, and superior process control.

Some process effects are only seen at the lot level, while others show up at the wafer level, and some only appear at the design feature level, so being able to fuse all of this data together becomes a key task. With a digital twin a design can be extracted, along with process and metrology models to produce a predictive process model.

Digital Twin: Process Model

An example data fusion flow from AMD shows how post-process measurements for feed-forward control go into the process model, and how the equipment model returns a modified recipe, along with in-situ sensors providing automatic fault detection.

Source: Tom Sonderman, AMD

Data fusion ties into machine learning for each of the fab process and metrology steps:

Data fusion steps

Delving inside of the train, test, validate stage there is data processing, feature engineering, model training, and finally model quality metrics:

Path to model quality metrics

Statistical Process Control (SPC) has been used in fabs for decades now, and with some adjustments has been modified into Advanced Process Control (APC), where Run-to-Run Control (RtR) and Fault Detection and Classification components are shown below in a flow from Sonderman and Spanos:

Advanced Process Control
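A minimal sketch may help make Run-to-Run control concrete. This is a textbook EWMA R2R controller with a hypothetical linear gain/offset process model, not anything taken from the Sonderman and Spanos flow: after each run, the measured output updates an exponentially weighted estimate of the process offset, and the next recipe input is adjusted to hit the target.

```python
def ewma_r2r(target: float, gain: float, lam: float,
             initial_offset: float, process, runs: int = 5) -> list:
    """EWMA run-to-run control sketch for a process y = gain*x + offset.
    `process(x)` runs one wafer/lot with recipe input x, returning the
    measured output; `lam` is the EWMA weight on new information."""
    offset = initial_offset
    history = []
    for _ in range(runs):
        x = (target - offset) / gain                        # next recipe
        y = process(x)                                      # run + measure
        offset = lam * (y - gain * x) + (1 - lam) * offset  # EWMA update
        history.append(y)
    return history

# Toy process with a true offset of 3.0 the controller must learn
runs = ewma_r2r(target=100.0, gain=1.0, lam=0.5,
                initial_offset=0.0, process=lambda x: x + 3.0)
print(runs)  # first run misses high at 103; later runs converge toward 100
```

Fault Detection and Classification, the other APC component, would sit alongside this loop, flagging runs whose measurements deviate beyond what the controller model can explain.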

Complex deep learning models can analyze IC fab data using Shapley value explanations to evaluate the input feature importance of a given model. During the feature engineering phase, the challenge is feature extraction from images, and Principal Component Analysis (PCA) is the feature extraction method used.
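As a sketch of how PCA reduces metrology or inspection images to a handful of features, here is a minimal SVD-based implementation. The image sizes and component count are arbitrary illustrations, not values from the presentation.

```python
import numpy as np

def pca_features(images: np.ndarray, n_components: int) -> np.ndarray:
    """Project flattened images (rows = samples) onto their top
    principal components via SVD of the mean-centered data."""
    flat = images.reshape(len(images), -1).astype(float)
    centered = flat - flat.mean(axis=0)
    # Right singular vectors are the principal component directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Toy example: 10 "wafer images" of 8x8 pixels reduced to 3 features each
rng = np.random.default_rng(0)
imgs = rng.normal(size=(10, 8, 8))
feats = pca_features(imgs, 3)
print(feats.shape)  # (10, 3)
```

Those few features per image then feed the downstream model, where Shapley value analysis can rank which of them actually drive the prediction.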

Fabs have used a data mining approach for many years now, and it’s the process of extracting useful information from a great amount of data. This data can help process engineers discover new patterns, gaining meaning and ideas to make improvements.

Data mining

Machine Learning, in contrast, is the process of finding algorithms whose results improve as they learn from the data. With the ML approach, we now have machines learning without human intervention. The site MLOps has much detail on the virtues of using ML, and the flow used in fabs is shown below:

ML flow, source: MLOps

Summary

Data is king, and this truism applies to the torrents of data streaming out of fabs each minute of the day. As fabs and foundries adopt the digital twin paradigm in design, process, and metrology, there is a bidirectional flow of information between the physical synthesis tool and each step of the wafers going through a fab. Using a machine learning framework to create predictive models with efficient algorithms helps silicon yield in the fab.

Related Blogs


Ultra-efficient heterogeneous SoCs for Level 5 self-driving

by Don Dingee on 09-14-2022 at 6:00 am

Ultra-efficient heterogeneous SoCs target the AI processing pipeline for Level 5 self-driving

The latest advanced driver-assistance systems (ADAS) like Mercedes’ Drive Pilot and Tesla’s FSD perform SAE Level 3 self-driving, with the driver ready to take back control if the vehicle calls for it. Reaching Level 5 – full, unconditional autonomy – means facing a new class of challenges unsolvable with existing technology or conventional approaches. From a silicon perspective, it requires SoCs to scale in performance, memory usage, interconnect, chip area, and power consumption. In a new white paper, neural network processing IP company Expedera envisions ultra-efficient heterogeneous SoCs for Level 5 self-driving solutions, increasing AI operations while decreasing power consumption in realizable solutions.

TOPS are only the start of the journey

Artificial intelligence (AI) technology is central to the self-driving discussion. Sensors, processing, and control elements must carefully coordinate every move a vehicle makes. But there’s a burning question: how much AI processing is needed to get to Level 5?

If you ask ten people, ten different answers come back, usually with something in common: it's a big number. Until recently, conversations have been in TOPS, or trillions of operations per second. Some observers talk about Level 5 needing 3 or 4 POPS – peta operations per second. It may not sound like that big a deal since, earlier this year, one SoC vendor announced a chip for self-driving applications with 1 POPS performance. They describe it as an "AI data center on wheels." But, when asked what their power consumption is, they're less forthcoming. Ditto for the transistor count and die size, both probably massive.

These aren’t issues in a data center, but they are in a car. Every watt of power and pound of weight going into self-driving electronics cuts electric vehicle range, and bigger die sizes drive up wafer and package costs. Larger, more complex chip footprints often mean higher on-chip latency. Scaling AI inference TOPS without other improvements will soon run into a wall.

The self-driving processing pipeline workload

That's not to say having more TOPS now doesn't reveal helpful information about the compute workload. There are many unknowns – which sensor payloads provide better information, which AI models will perform best in the self-driving software stack, and what form it ultimately takes. Expedera's white paper takes an in-depth look at the processing pipeline, looking for answers, starting from a conceptual diagram.

Changes in the sensor package are ahead. There’s a debate around camera-only systems and whether they can detect all scenarios necessary to ensure safety. More sensors of different types and higher resolutions will likely appear and drive up processing requirements. In turn, more intensive AI models will be needed – and, in a fascinating observation from Expedera based on customer conversations, a self-driving processing pipeline may have ten, twenty, or more AI models operating concurrently.

Expedera expands on each of these phases in the white paper, looking at where compute-intensive tasks may lie. To deal with this self-driving workload, they anticipate a two- to three-order of magnitude increase in AI operations. At the same time, an order of magnitude decrease in power consumption (measured as thermal design power, or TDP) must occur for realizable implementations. According to Expedera, these combined effects are leaving GPUs in the dust when used as a tool for AI inference in vehicles.

Ultra-efficient heterogeneous AI inference for scale

What could take the place of a GPU in more efficient AI inference? A neural network processing unit (NPU) as part of an ultra-efficient heterogeneous SoC, after overcoming the limitations of classical NPU hardware that Expedera identifies. Scaling drives latency up and determinism down. Hardware utilization is low, maybe only 30 to 40%, driving area and power consumption up. Multi-model execution poses problems in scheduling and memory usage. And partitioning TOPS to fit workloads may not be possible within the choices made in a custom SoC architecture.

Some themes Expedera sees in ultra-efficient heterogeneous SoC discussions with customers:

  • Fire-and-forget task scheduling is crucial, with a simple runtime where jobs start and finish predictably, and tasks can be reordered to fit the models and workload.
  • Independent, isolated AI inference engines are a must, where available TOPS are sliced into configurable pieces of processing to dedicate to groups of tasks.
  • Higher resolution, longer-range sensors generate more intermediate data in neural networks, which can oversubscribe DDR memory.
  • IP blocks that worked well in SoCs at lower performance levels prove unrealizable when scaled up – taking too much area, too much power, or both.

Expedera’s co-designed hardware and software neural network processing solution hits new levels of TOPS area density, some 2.7x better than their competition, and pushes hardware utilization as high as 90%. It also enables OEMs to differentiate SoCs and explore different AI models, avoiding risks of impacts from models changing and growing down the road to Level 5.

We’ll save more details of the solution and the discussion of ultra-efficient heterogeneous SoCs for Level 5 self-driving for the Expedera white paper itself – which you can download here:

The Road Ahead for SoCs in Self-Driving Vehicles


WEBINAR: Scalable, On-Demand (by the Minute) Verification to Reach Coverage Closure

by Synopsys on 09-13-2022 at 10:00 am

Synopsys Verification Cloud Solutions

Verification has long been the most time-consuming and often resource-intensive part of chip development. Building out the infrastructure to tackle verification can be a costly endeavor, however. Emerging and even well-established semiconductor companies must weigh the Cost-of-Results (COR) against Time-to-Results (TTR) and Quality-of-Results (QOR).

The Synopsys Cloud Verification Instance is the first scalable, on-demand verification solution. Emerging companies can kick-start their verification with pre-configured flows, available by the minute. Organizations with a verification environment already in place can quickly scale their verification when additional computation power is needed. These ready-to-use and automated verification flows reduce the manual and often error-prone verification effort to reach coverage closure quickly and increase design quality.

Attendees will walk away with an understanding of how Synopsys Cloud Verification Instance can be easily deployed to meet verification challenges associated with COR, TTR, and QOR to achieve confidence in coverage closure.

REGISTER NOW

Speakers

Listed below are the industry leaders scheduled to speak.

Sridhar Panchapakesan

Sr. Director, Cloud Engagements
Synopsys

Sridhar Panchapakesan is the Senior Director, Cloud Engagements at Synopsys, responsible for enabling customers to successfully adopt cloud solutions for their EDA workflows. He drives cloud-centric initiatives, marketing, and collaboration efforts with foundry partners, cloud vendors, and strategic customers at Synopsys. He has 25+ years of experience in the EDA industry and is especially skilled in managing and driving business-critical engagements with top-tier customers. He has an MBA degree from the Haas School of Business, UC Berkeley, and an MSEE from the University of Houston.

Rob van Blommestein

Product Marketing Manager, Sr. Staff
Synopsys

Rob van Blommestein is the product marketing manager for the Verdi Automated Debug System at Synopsys. With over 20 years of experience in marketing verification products from startups to large-scale companies, he is a marketing executive with a demonstrated history of success establishing brands and growing business in a variety of sectors including electronic design automation (EDA), FPGA, high-performance computing, IoT, machine learning, and artificial intelligence (AI).

REGISTER NOW

Faster, Earlier System Verification

With designs at advanced nodes or those approaching reticle limits, it is imperative to verify comprehensively and thoroughly across all stages of the design flow. When in-house compute resources are limited, the cloud provides a welcome advantage. Synopsys cloud-based verification solutions can accelerate software bring-up and system validation while leveraging the scalability and fine-grained parallelism technology of the Verification Continuum® Platform, including the security and flexibility that the hosted ZeBu® Cloud solution provides.

Reduce Simulation Turnaround Time

Exhaustive functional verification calls for high-performance simulation and constraint solver engines. That’s what you get with Synopsys VCS® simulation solution in the cloud. Speed up high-activity, long-cycle tests by allocating more cores when needed. Leverage seamless data-sharing between simulations by using VCS containers.​

With our functional verification flow, you'll benefit from verification planning, coverage analysis, and closure solutions, as well as native integration with the Synopsys Verdi® debug environment, the industry's de facto debug standard, and access to industry-first verification IP (VIP) for the latest protocols and memory models.

About Synopsys

Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software™ partner for innovative companies developing the electronic products and software applications we rely on every day. As an S&P 500 company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP and offers the industry’s broadest portfolio of application security testing tools and services. Whether you’re a system-on-chip (SoC) designer creating advanced semiconductors, or a software developer writing more secure, high-quality code, Synopsys has the solutions needed to deliver innovative products. Learn more at www.synopsys.com.

Also Read:

WEBINAR: Intel Achieving the Best Verifiable QoR using Formal Equivalence Verification for PPA-Centric Designs


Connecting SystemC to SystemVerilog

Connecting SystemC to SystemVerilog
by Bernard Murphy on 09-13-2022 at 6:00 am

UVM Connect

Siemens EDA is clearly on a mission to help verifiers get more out of their tools and methodologies. Recently they published a white paper on UVM polymorphism. Now they have followed with a paper on using UVM Connect, re-introducing how to connect between SystemC and SystemVerilog. I’m often mystified by seemingly overlapping or adjacent efforts between verification capabilities and standards, here in support of co-simulation. My contribution in this article (I hope) is to resolve my own confusion and to answer why this problem is important. I’ll leave the Siemens EDA white paper to handle the details.

Groping through the fog

UVM Connect sounds like it would be a feature of UVM or UVM-SystemC, right? Wrong. UVM Connect is an independent open-source UVM-based library from Siemens EDA, introduced in 2012, enabling TLM communication between UVM/SystemVerilog and SystemC. Separately, the UVM-SystemC Library 1.0-beta4 was released very recently; however, UVM-SystemC does not support language mixing (as of the current beta release). On the other hand, Siemens EDA is very clear that UVM Connect will continue to be valuable even in the presence of UVM-SystemC.

Like I said, confusing. There are areas of apparent overlap, but maybe that overlap isn't important. UVM Connect is an extension to the UVM standard, invented long before UVM-SystemC, to solve a real problem. Will that solution continue to be relevant? Based on the Siemens EDA white paper the answer seems to be yes, whatever may happen to UVM-SystemC. Maybe UVM and UVM-SystemC will eventually settle into one standard, in which case I would expect the functionality of UVM Connect to be absorbed in some manner.

Why connect SystemC and SystemVerilog?

Architectural designers work in SystemC (or C/C++). Implementation designers work in Verilog, or SystemVerilog if they are designing testbenches. How do they check and debug the implementation testbench? Ideally by running the architectural model under that testbench. How do they check that the implementation model matches the architectural model? Through co-simulation, which requires running and comparing the SystemC model and the implementation model under the UVM testbench. Both methods benefit from UVM Connect, which connects the SystemC model to the UVM/SystemVerilog environment and vice-versa.

Equally, having that connection allows verification to use both RTL-based and SystemC-based VIP, expanding and accelerating testbench development. Some might also argue this capability enables UVM to stretch up to system-level verification, allowing constrained-random tests generated in UVM to be applied to SystemC models. Today, I think that is more of a PSS domain, but the UVM Connect approach certainly works in principle.

Why not use DPI?

Isn’t this getting a little too complicated? SystemVerilog provides a Direct Programming Interface (DPI), which offers a standard way to connect SV and C++. Since SystemC is C++, a solution already exists; why add another? My guess is that the DPI approach is too low level for many of these applications, and solutions built on it will invariably be non-portable. In contrast, transaction-level modeling (TLM) is a well-established paradigm for handling data exchange between different domains. SystemC is intrinsically TLM-based, and UVM provides TLM communication interfaces. UVM Connect simply formalizes this connection in a nice, easy-to-use way.

My takeaway? UVM Connect is a practical way to connect SystemC models into a UVM testbench in support of implementation verification, and certainly much easier than DPI. A deeper blend of UVM with SystemC, and perhaps SystemVerilog, may be the long-term goal, but it is not an answer to today’s needs. You can learn more about UVM Connect HERE.


Truechip: Customer Shipment of CXL3 VIP and CXL Switch Model

Truechip: Customer Shipment of CXL3 VIP and CXL Switch Model
by Kalar Rajendiran on 09-12-2022 at 10:00 am

CXL Block Diagram

The tremendous amount of data generated by AI/ML-driven applications and other hyperscale computing workloads has forced the age-old server architecture to change. The new architecture is driven by the resource disaggregation paradigm, wherein memory and storage are decoupled from the host CPU and managed independently through high-speed connectivity. The Compute Express Link (CXL) standard is a direct result of this evolution in server architecture, supporting a high-speed, low-latency, cache-coherent interconnect. The CXL specification delivers high performance while leveraging PCI Express® technology to support rapid adoption. CXL switching features resource pooling, enabling the host CPU to access one or more devices from the resource pool. While the CXL 2.0 specification (CXL2) supports single-level switching, the CXL 3.0 specification (CXL3) supports multi-level switching, wherein the host CPU can leverage different resources in a tiered fashion. CXL3 also introduces fabric capabilities and management, improved memory sharing and pooling, enhanced coherency, and peer-to-peer communication. The spec also doubles the data rate to 64 GT/s with no added latency over CXL2.

The specification is also evolving fast, with CXL3 released just three years after CXL1. Truechip has a long track record of offering VIP solutions to a broad list of customers worldwide. It offers an extensive portfolio of VIP solutions to verify IP components interfacing with industry-standard protocols integrated into ASICs, FPGAs and SoCs. As a Verification IP specialist, Truechip has offered VIP solutions supporting the CXL standard right from the start. For details on their entire portfolio of VIP offerings, visit the products page. They recently expanded their portfolio with the addition of CXL3 and CXL Switch VIP solutions. You can read their press announcement about the first customer shipment of CXL3 verification IP and the CXL switch model.

Truechip’s CXL3 VIP Solution

Truechip’s CXL Verification IP provides an effective and efficient way to verify components interfacing with the CXL connectivity of an IP or SoC. The CXL VIP is fully compliant with the latest CXL specification. The solution is lightweight, with an easy plug-and-play interface, so there is no impact on design cycle time. It is offered in native SystemVerilog (UVM/OVM/VMM) and Verilog.

The following figure depicts a block diagram of Truechip’s CXL3 VIP environment.

Some Salient Features

  • Configurable as CXL Host and Device when operating in Flex Bus mode
  • Configurable as PCI Express Root Complex and Device Endpoint when operating in PCIe mode
  • Supports 64.0 GT/s Data Rate with backward compatibility
  • Supports PIPE Specification 6.1.1 with both Low Pin Count and SerDes architectures
  • Supports Configurable timeout for all three layers
  • Supports different CXL/PCIe Resets
  • Supports Arbitration among the CXL.IO, CXL.cache and CXL.mem packets with interleaving of traffic between different CXL protocols
  • Offers a comprehensive user API for callbacks
  • Provides built-in Coverage analysis
  • Supports all three coherency models (HDM-D, HDM-H and HDM-DB) for accessing HDM memory

Deliverables

CXL Host/Device

CXL BFM/Agents for:

    • Host and Device sequences
    • Transaction layer (CXL.IO and CXL.cache, CXL.mem)
    • Link layer (CXL.IO and CXL.cache, CXL.mem)
    • Arbiter/Mux layer
    • Phy layer

CXL Monitor and Scoreboard

Test Environment & Test Suite:

    • Basic and Directed Protocol Tests
    • Random Tests
    • Error Scenario Tests
    • Cover Point Tests
    • Compliance Tests

Integration Guide, User Manual, Quick start Guide, FAQs and Release Notes

Truechip’s CXL Switch Model

Truechip’s CXL Switch VIP provides an effective and efficient way to verify components interfacing with the CXL Switch interface of an IP or SoC. Truechip’s CXL Switch model is fully compliant with the latest CXL specification. The model supports Hot Add and Hot Remove for a CXL Device and is available in native SystemVerilog (UVM/OVM/VMM) and Verilog.

The following figure depicts a block diagram of the CXL3 VIP environment when the system implementation incorporates the switching capability.

Aspects Common to All of Truechip’s VIP Solutions

Although covered in an earlier blog, it is worth reiterating some advantages that cut across all of Truechip’s VIP solutions. All solutions come with an easy plug-and-play interface to enable a rapid development cycle. The VIPs are highly configurable to suit the user’s verification environment. They also support a variety of error injection scenarios to help stress-test the device under test (DUT). Comprehensive documentation includes user guides for various VIP/DUT integration scenarios. Truechip’s VIP solutions work with all industry-leading dynamic and formal verification simulators. The solutions also include assertions that can be used in formal and dynamic verification as well as with emulation. Finally, the solutions come with the TruEYE GUI-based debugging tool, which makes debugging very easy; this patented tool reduces debugging time by up to 50%.

For more information, refer to Truechip’s website.

Also Read:

Truechip’s Network-on-Chip (NoC) Silicon IP

Truechip’s DisplayPort 2.0 Verification IP (VIP) Solution

Bringing PCIe Gen 6 Devices to Market