2024 Outlook with Adam Olson of Perforce
by Daniel Nenni on 02-29-2024 at 10:00 am

Perforce is a company that provides software solutions primarily focused on version control, especially for large-scale development projects. Version control systems manage changes to documents, computer programs, large web sites, or other collections of information. Perforce’s main product is Helix Core, formerly known as Perforce Helix, which is a version control system that helps software development teams manage and track changes to their source code, documents, and other digital assets. It is widely used in industries such as game development, automotive, aerospace, and finance where managing complex software projects with many contributors is essential.

Tell us a little bit about yourself and your company. 
I'm Adam Olson, Chief Revenue Officer and General Manager of the Digital Creation business unit at Perforce, which includes our integrated semiconductor solutions, Helix Core and Helix IPLM (formerly Methodics).

What was the most exciting high point of 2023 for your company? 
We had several major account wins within our Digital Creation business unit and expanded our DevOps portfolio. Perforce also appointed a new CEO at the very end of 2023, which we’re very excited about. Technology veteran Jim Cassens brings to Perforce over 30 years of experience scaling software organizations with a customer-centric management approach. We’re thrilled to have him leading the organization.

What was the biggest challenge your company faced in 2023? 
Some of our larger semiconductor customers were facing economic headwinds heading into 2023, which slowed their projects. While these headwinds eased up a bit by the end of the year, the challenges and complexities of the semiconductor industry remain. We find that what may have looked like a relatively mundane decision in the past is now met with larger committees and a need for strong defense of ROI models.

How is your company’s work addressing this biggest challenge? 
Perforce is helping our semiconductor clients tame complexity and increase efficiencies across their design flow. We help them accomplish more with less and accelerate time to market, while keeping a lid on costs. Perforce Helix IPLM and Helix Core serve as a scalable, secure foundation for design data management. By tracking all IP and design data in Helix IPLM’s unified, hierarchical data model, our customers benefit from tighter coordination between cross-functional teams, end-to-end traceability, and more efficient requirements verification. And with Helix Core, they get robust, federated, multisite data management for enterprise scale, security, and performance.

What do you think the biggest growth area for 2024 will be, and why? 
Semiconductor IP security will be a big factor in 2024 as global instability and political uncertainty rise and bad actors spend more time trying to compromise networks and other important infrastructure. Organizations – and global, multi-site teams in particular – will need to carefully track and secure design assets to avoid violating rapidly shifting technology restrictions and export control laws. Such violations, often caused by accidental IP leakage, can result in millions of dollars in fines and legal issues, on top of the lost revenue and market setbacks.

How is your company’s work addressing this growth? 
Perforce is addressing this growing need for IP security through advanced new features like Geofencing, as well as by providing fine-grained security and enabling end-to-end traceability across the lifecycle. Helix IPLM's Geofencing feature delivers dynamic security for global, multi-site teams by restricting IP availability in certain geos, regardless of user access permissions. These restrictions can be applied universally, regardless of the underlying data management systems (Perforce Helix Core, Git, DesignSync, etc.). Within Helix Core, organizations can control access down to the file level and even qualify access by IP address.
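
To make the geofencing model concrete, here is a minimal Python sketch of the concept. All names here are hypothetical, and this is not the Helix IPLM API; it only illustrates the key property described above, that the geographic policy is enforced independently of user permissions.

```python
# Conceptual sketch of geofenced IP resolution. Hypothetical names only;
# this illustrates the policy model, not the actual Helix IPLM API.
GEOFENCE_POLICY = {
    # export-controlled IP is deliverable only in certain regions
    "crypto_core_v2": {"US", "EU"},
    "stdcell_lib": {"US", "EU", "APAC"},
}

def can_deliver(ip_name: str, user_has_permission: bool, region: str) -> bool:
    """The geofence check runs after, and independently of, user permissions."""
    if not user_has_permission:
        return False
    allowed = GEOFENCE_POLICY.get(ip_name)
    return allowed is None or region in allowed

# A fully permissioned user in a restricted geography is still denied:
assert can_deliver("crypto_core_v2", user_has_permission=True, region="APAC") is False
assert can_deliver("crypto_core_v2", user_has_permission=True, region="EU") is True
```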

What conferences did you attend in 2023 and how was the traffic?
Our Digital Creation team attended and exhibited at Embedded World and DAC. We had heavy traffic at our booth at both events – substantially more than the previous year, when conference attendance was still affected by the pandemic. We also attended the GSA Executive Forums in Europe and the US, along with Design & Reuse IP-SoC Silicon Valley.

Will you attend conferences in 2024? Same or more?
We’ll be attending the same conferences in 2024, along with a few others such as GOMACTech and DVCon.

Additional questions or final comments? 
Perforce welcomes semiconductor leaders to join our Helix IPLM Monthly User Group (MUG) sessions. It’s a great opportunity to hear from product experts, industry peers, and Helix IPLM users about topics like IP governance, release automation, and IP security, along with best practices and the latest product features. To register, visit https://www.perforce.com/products/helix-iplm/user-group.

Also Read:

The Transformation Model for IP-Centric Design

Chiplets and IP and the Trust Problem

Insights into DevOps Trends in Hardware Design


WEBINAR: Enabling Long Lasting Security for Semiconductors
by Daniel Nenni on 02-29-2024 at 6:00 am

Today we live in a world where technology is part of our everyday lives, touching not only our personal data but all the devices we rely on daily, including our automobiles, cell phones, and home devices. Hackers have found creative and novel ways to corrupt these products, disable systems, steal secrets and threaten our identities. As we look to the future, with technology becoming more entrenched in our lives and impacting our security and safety, we need to move security solutions to the forefront.

WATCH REPLAY NOW

Security is a constantly evolving problem and requires an adaptable solution. In this session, we will address common security problems that we face in today's challenging world and solutions that can mitigate these threats. Fixed solutions that are implemented today will inevitably be challenged in the future. Hackers today have more time, resources, training and motivation to disrupt technology. With technology increasing in every facet of our lives, defending against this presents a real challenge. We also have to consider upcoming threats, namely quantum computing. Many predict that quantum computing will be able to crack current cryptography solutions in the next few years!

Fortunately, semiconductor manufacturers have solutions that can enable cryptography agility, also known as Crypto Agility, which can dynamically adapt to evolving threats. This includes not only the ability to update hardware-accelerated cryptography algorithms, but also obfuscation to strengthen the root of trust and protect valuable IP secrets in products. Advanced solutions like these also give devices the ability to randomly create their own encryption keys, making it harder for algorithms to crack encryption codes. This webinar will demonstrate a variety of solutions and reconfigurable IP from Flex Logix that can be implemented into any semiconductor device to thwart current as well as future threats. We will highlight solutions from partners who specialize in security and have ready-to-go IP that can be deployed on Flex Logix IP and add crypto agility to any semiconductor.
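
To ground the idea, here is a minimal software-level sketch of crypto agility, my own illustration using Python's standard hashlib rather than anything from the webinar; the hardware solutions discussed apply the same indirection to algorithm blocks implemented in reconfigurable eFPGA fabric.

```python
# Minimal sketch of crypto agility at the software level: algorithms are
# resolved through a registry instead of being hard-coded, so a deployed
# system can retire a weakened primitive by changing one registry entry.
# Hardware crypto agility applies the same idea to reconfigurable logic.
import hashlib

HASH_REGISTRY = {
    "sha2-256": lambda data: hashlib.sha256(data).hexdigest(),
    "sha3-256": lambda data: hashlib.sha3_256(data).hexdigest(),
}

ACTIVE_ALGORITHM = "sha2-256"  # single point of change if SHA-2 is ever broken

def agile_digest(data: bytes) -> str:
    """Hash with whatever algorithm is currently active."""
    return HASH_REGISTRY[ACTIVE_ALGORITHM](data)

print(agile_digest(b"firmware image"))
```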

Watch this webinar now to learn how enabling crypto agility in your semiconductor can provide long-lasting security.

Abstract:

Semiconductors are at the forefront of security, protecting our identity, data and daily lives. And we live in a time where hackers have more time, resources, available training and motivation to disrupt our security than ever before. With quantum computing looming and threatening our current security implementations, it is more important than ever to start implementing crypto-agile solutions that can adapt to evolving threats. And this needs to occur at every level, including the transport, MAC and IP layers. By combining embedded programmable logic from Flex Logix with security IP solutions from Xiphera, a hybrid solution can provide long-lasting security for semiconductors.

Speaker Bios:

Jayson Bethurem is responsible for marketing and business development at Flex Logix. Jayson spent six years at Xilinx as Senior Product Line Manager, responsible for about a third of revenues. Before that he spent eight years at Avnet as an FAE, showing customers how to use FPGAs to improve their products. Earlier, he worked at startups using FPGAs to design products.

Dr. Kimmo Järvinen is the co-founder and CTO of Xiphera. Kimmo has had a 20-year career in academia doing cryptography-related research at various European universities. He has a strong academic background in cryptography and cryptographic hardware engineering, having held post-doctoral, research fellow, and senior researcher positions at Aalto University (Espoo, Finland), KU Leuven (Leuven, Belgium), and the University of Helsinki (Helsinki, Finland). Kimmo has published more than sixty scientific articles on cryptography and security engineering, nearly half of them related to elliptic curve cryptography. He has substantial theoretical and practical experience in the secure and efficient implementation of elliptic curve cryptosystems.

Join us in this webinar to learn how enabling crypto agility in your semiconductor can provide long-lasting security.

Also Read:

Reconfigurable DSP and AI IP arrives in next-gen InferX

eFPGA goes back to basics for low-power programmable logic

eFPGAs handling crypto-agility for SoCs with PQC


Soft checks are needed during Electrical Rule Checking of IC layouts
by Daniel Payne on 02-28-2024 at 10:00 am

IC designs rely on physical verification applications like Layout Versus Schematic (LVS) at the transistor level to ensure that layout and schematics are equivalent. In addition, there is an Electrical Rule Check (ERC) for connections to well regions, called a soft check. The connections to all devices need the most consistent voltage levels possible, so the path should run through the metal layers to reduce resistance and effects like IR drop. Detecting connections through other materials, like wells, is mandatory, and soft checks are the method most commonly employed to detect this situation. The Calibre product line from Siemens is the most popular tool for DRC and LVS checking, so I read a technical paper from Terry Meeks to learn more about soft checks.

Connecting two metal layers in an IC layout requires precise alignment of both metal layers and the via layer. Here's a comparison using both a side view and a top-down view: the first example is not connected, because Metal1 and Metal2 do not overlap, while the second example is connected properly.

Connecting two metal layers with a Via layer.

We want our ERC tool to identify well connectivity errors during soft checks, so that they can be fixed. The following IC layout has a well connectivity error and is shown from the side view, where the Metal1 signal texted as Gnd is connected to a diffusion region called a tap diffusion. On the right-hand side is another Metal1 layer with a tap diffusion, but this connectivity creates a high-resistance path through the Rwell to Gnd, and it is flagged as an error by the soft check.

Well connectivity error – side view
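
A rough mental model of what the tool is doing, my own simplification rather than how Calibre nmLVS is actually implemented: treat the layout as a connectivity graph whose edges are tagged with the layer they pass through, and flag any connection whose only path runs through a high-resistance layer such as a well.

```python
# Toy model of a soft check (illustration only, not how Calibre nmLVS
# works internally). Layout connectivity is a graph whose edges are
# tagged with the layer they pass through; a connection is "soft" if
# every path between two shapes is forced through a high-resistance
# layer such as a well.
from collections import defaultdict

HIGH_RESISTANCE_LAYERS = {"nwell", "pwell"}

edges = [
    ("gnd_pin", "tap1", "metal1"),   # good: metal connection
    ("tap1", "tap2", "pwell"),       # soft: only path to tap2 is through the well
]

graph = defaultdict(list)
for a, b, layer in edges:
    graph[a].append((b, layer))
    graph[b].append((a, layer))

def reachable(start, goal, allow_soft):
    """DFS; optionally forbid traversing high-resistance layers."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        for nxt, layer in graph[node]:
            if not allow_soft and layer in HIGH_RESISTANCE_LAYERS:
                continue
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

# Connected through the well, but not through metal alone -> soft check error
assert reachable("gnd_pin", "tap2", allow_soft=True)
assert not reachable("gnd_pin", "tap2", allow_soft=False)
print("soft connection flagged: gnd_pin -> tap2")
```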

Another example of a soft connectivity error occurs in the IC layout below, where only one name can be applied per polygon. The digital power net VDD cannot coexist with the analog power net AVDD, and we need to separate these into two shapes. Soft checks help to flag these issues.

AVDD net to VDD net soft check error

An IC layout with both digital and analog power supplies can become rather complex to lay out properly, so it's even more important to have soft checks.

Undetermined areas have question marks

Soft checks are included during your LVS runs, and with Calibre nmLVS there’s a report of soft check results, which can then be viewed using the Calibre RVE viewer.

Using Calibre RVE to review Soft Check errors

Clicking on RVE results tells you which cell has the soft check error, the net names, upper and lower names, and other properties. This info helps to pinpoint what to fix in the IC layout. Clicking on a lower layer like a PWell for a soft check error displays the geometry in yellow.

Soft check result, lower layer

For the same soft check error, clicking on the upper layer shows:

Soft check result, upper layer

During debug you can also show all the upper layer shapes, the green shapes are the selected net upper layer shapes, while yellow is the rejected net upper layer shape.

All upper layer shapes

Debugging soft check errors with RVE involves clicking on the connectivity of selected and rejected nets. A Net Info window reveals details like which layers are involved and whether shapes are missing connectivity. Looking at which ports are connected to a net reveals whether there are missing VDD or GND errors. This example shows that net 18 is rejected, because it's missing connectivity to Metal1.

Missing connectivity to Metal1

Summary

LVS checks are mandatory to ensure that an IC has an error-free layout, and soft checks are part of your LVS checks. There's a proven debugging flow from Siemens in their Calibre nmLVS tool that uses RVE to help layout designers quickly identify soft check failures, so that designers can make fixes and re-verify until all checks are passing. Siemens has written a technical paper, Detecting and debugging soft check connectivity errors, which is available to read online.



CEO Interview: Michael Sanie of Endura Technologies
by Daniel Nenni on 02-28-2024 at 8:00 am


Michael Sanie is a veteran of the semiconductor and EDA industries. His career spans several executive roles in diverse businesses with multifunctional responsibilities. He is a passionate evangelist for disruptive technologies.

Most recently, he was the chief marketing executive and senior VP of Enterprise Marketing and Communications at Synopsys, where he also held leadership roles as VP of marketing and strategy for the Design Group and VP of product management for the Verification Group.

Michael previously held executive and senior marketing positions at Cadence, Calypto, Numerical, and Actel, as well as IC design and software engineering positions at VLSI Technology (now NXP Semiconductors).

He holds BSECE and MSEE degrees from Purdue University and an MBA from Santa Clara University.

Tell us about your company

Endura Technologies is developing an end-to-end SoC power delivery solution. In addition to our revolutionary, patented power delivery architecture, we have a diverse skill set spanning test silicon, design IP, design services, passives design (the inductors and capacitors required as part of the power delivery solution), partnerships, and silicon manufacturing relationships. This allows us to create end-to-end SoC power delivery solutions.

Our unique architecture, combined with our fully integrated approach to power delivery at the system level is changing the game for challenging applications such as data centers, automotive, and many others.

What problems are you solving?

Energy consumption for advanced products has become a major care-about across many markets and applications. Battery life and heat dissipation for aggressive form factors drive part of this. The substantial operating costs for massive compute infrastructure are another driver.

A bit more specifically, servers and AI chips are driving much higher compute demands, requiring more power to be delivered. At the same time, these chips are built on smaller nodes, which run on lower Vdd's. The only way this equation can work is to provide much higher currents across several power rails, and increasingly this is only achievable by 2.5D or 3D IC integration. These facts are fundamentally changing power delivery approaches.
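
To put illustrative numbers on that equation (my own arithmetic, not Endura's figures): since power is voltage times current, holding or raising the power budget while Vdd falls pushes the required current up sharply.

```python
# Illustrative arithmetic only (not Endura data): the current required to
# deliver a given power budget at representative core voltages, I = P / V.
for power_w, vdd in [(200, 1.0), (500, 0.75), (700, 0.65)]:
    print(f"{power_w} W at {vdd:.2f} V -> {power_w / vdd:.0f} A")
# 200 W at 1.00 V -> 200 A
# 500 W at 0.75 V -> 667 A
# 700 W at 0.65 V -> 1077 A
```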

On top of that, systems in automotive, audio, and switches typically rely on many sensory inputs ranging from MEMs devices to image sensors to radar. These devices require efficient power delivery across many load configurations and at increasing switching frequencies while maintaining ultra-low noise.

These fundamental disruptions are making people take power delivery a lot more seriously — in two ways: power delivery is no longer an afterthought; it needs to be designed/architected at the same time as the SoC AND it needs a much more holistic approach. Off-the-shelf PMICs are quickly running out of steam in how they meet these complex requirements. To get the best power delivery, each SoC needs its own 'application-specific' (or context-aware) power delivery solution.

Powering these systems at scale requires a new approach. One that takes a comprehensive view of power requirements for the chips and chiplets that implement the complete system. And one that optimizes performance, scalability, and efficiency over the broad spectrum of switching frequencies, current loads, voltage ranges, and silicon manufacturing processes.

This is the problem Endura is solving.

What application areas are your strongest?

Endura has applied its technology across a wide range of power-intensive or power-sensitive application areas – mostly data center and automotive. You can find more specific examples on our website that cover data centers, requirements for memories in data centers, a notebook design with a PCIe Gen5 solid state drive, optical modules and automotive.

What keeps your customers up at night?

Advanced system design presents a power delivery balancing act. The drivers for the requirement may differ, but all systems must operate efficiently with the lowest energy consumption possible.

These systems contain many parts, all operating at different frequencies, with varying power demands and obstacles. Solving the complete problem requires a holistic approach to power management and delivery.

But such an approach has been out of reach for most companies, requiring system designers to attempt integration of multiple tools and multiple sets of IP and software to solve the problem. This has been a very difficult problem to solve. Until now.

What does the competitive landscape look like and how do you differentiate?

The traditional approach to power delivery focuses on a component-level strategy. That is, acquire best-in-class power management solutions, typically from tier-1 suppliers, and integrate these devices at the PCB level.

The substantial complexity and power demands of applications such as data centers require a new, fine-grained approach – one that integrates power delivery down to the chip level and one that co-optimizes the architecture for optimal system-level performance.

There are some design teams (typically in larger companies with a broad range of skills) that are making the investment to achieve these results across the supply chain. For everyone else, the complexity of integrating such approaches remains out of reach.  Endura is democratizing this new, system-level approach to power delivery, so it is available to every system design team.

What new features/technology are you working on?

Power management approaches range from traditional, discrete devices (sVR), to embedded chiplets for 2.5/3D integration (eVR), down to on-chip, integrated blocks for optimum point-of-load energy delivery (iVR).

While sVR approaches are well-understood, deployment of fully integrated eVR and iVR strategies is extremely complex and challenging. Endura has the technology and know-how to solve these problems, and this is our development focus.

How do customers normally engage with your company?

Endura Technologies has development facilities in California and Dublin, Ireland. If you would like to explore how we can help you develop a forward-looking power strategy you can reach out at info@enduratechnologies.com.

Also Read: 

CEO Interview: Vincent Bligny of Aniah

CEO Interview: Jay Dawani of Lemurian Labs

Luc Burgun: EDA CEO, Now French Startup Investor


Revolutionizing RFIC Design: Introducing RFIC-GPT
by Jason Liu on 02-28-2024 at 6:00 am

In the rapidly evolving world of Radio Frequency Integrated Circuits (RFIC), the challenge has always been to design efficient, high-performance components quickly and accurately. Traditional methods, while effective, come with a high complexity and a lengthy iteration process. Today, we’re excited to unveil RFIC-GPT, a groundbreaking tool that transforms RFIC design through the power of generative AI.

RF chips are known as the crown jewels of analog chips. RF circuits typically contain not only active circuits, i.e., circuits composed mostly of active devices such as transistors, but also a large number of passive components such as inductors, transformers and matching networks. Fig. 1 shows an example of a one-stage RF power amplifier (PA): the active part of the circuit is a differential common-source PA with cross-coupled varactors, connected between an input matching network and an output matching network. The matching networks are usually a combination of passive devices such as inductors, capacitors and transformers connected in an optimized configuration.

To design such an RF circuit, both the devices in the active circuit and the passive layout patterns in the matching networks need to be optimized. The conventional RFIC design flow is shown in the top half of Fig. 2. On one hand, active circuits must first be designed and simulated, both as schematics and as layouts. On the other hand, the passive components and circuits are iterated repeatedly using physically detailed and tedious electromagnetic (EM) simulation of their layouts, making them a key challenge in RF design.

Thereafter, the parameters of the entire layout are extracted and post-layout simulations are run to compare against the design specifications (Specs). Finally, the designs of both the active circuits and the layouts of the passive circuits are re-adjusted and re-simulated, and the results are compared again. This process is iterated numerous times until the design Specs are achieved. The main difficulties of designing an RFIC can be attributed to:

(1) large design search space of both active and passive circuits;

(2) lengthy and tedious EM simulation required;

(3) interactions between active and passive circuits, and between the RFIC and its surroundings, demanding numerous iterations and optimizations.

Therefore, the traditional RFIC design flow takes a lot of human effort, and the design quality achievable in a constrained time depends largely on the experience of the individual IC designers.
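
To give a feel for how quickly even a tiny corner of this search space grows, here is a toy sketch, my own simplified example and not RFIC-GPT's algorithm: a brute-force sweep of an ideal two-element L-match network, scoring the input match |S11| at a single frequency.

```python
# Toy matching-network search (simplified ideal L-match, not RFIC-GPT's
# algorithm): sweep series-L / shunt-C values and score |S11| at 2.4 GHz.
import math

Z0, RL = 50.0, 10.0                  # system impedance and a resistive load
W = 2 * math.pi * 2.4e9              # angular frequency at 2.4 GHz

def s11(l_henry, c_farad):
    zs = RL + 1j * W * l_henry       # series inductor at the load
    yin = 1 / zs + 1j * W * c_farad  # shunt capacitor on the source side
    zin = 1 / yin
    return abs((zin - Z0) / (zin + Z0))

l_values = [n * 0.1e-9 for n in range(1, 101)]    # 0.1 .. 10 nH
c_values = [n * 0.05e-12 for n in range(1, 101)]  # 0.05 .. 5 pF
best_l, best_c = min(((l, c) for l in l_values for c in c_values),
                     key=lambda lc: s11(*lc))
print(f"best L = {best_l * 1e9:.1f} nH, C = {best_c * 1e12:.2f} pF, "
      f"|S11| = {s11(best_l, best_c):.3f}")
```

Even this two-variable toy requires 10,000 evaluations; a real flow sweeps far more variables and replaces the closed-form elements with EM-simulated layout patterns that take minutes each, which is exactly where the iteration cost explodes.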

Recently, generative AI has been researched and explored extensively for generating content including, but not limited to, dialogue, images and programming code. Analogously, generative AI is also being considered for RFIC design automation in the area of IC design. The bottom half of Fig. 2 shows an example RFIC design flow assisted by generative AI. Essentially, the behavior of small circuit components can be lumped into models, and lengthy simulations can be omitted.

Additionally, the solution-searching "experience" for RFIC design can be "learned", and the solutions, i.e., the initial design of RFIC schematics and layouts, can be quickly "generated". Importantly, the simulated results of the AI-generated RFIC circuits can already be close to the design Specs, and IC design engineers only need to do some final optimization and verifying simulations before the circuits can be applied to the RFIC design blocks for tape-outs. This methodology saves a large number of simulation iterations and drastically improves design efficiency. Furthermore, the results are more consistent run to run, since the task is performed by an "emotionless" computer.

As a pioneer of intelligent chip design solutions, we have launched RFIC-GPT, an AI-based RFIC design automation tool. Using RFIC-GPT, GDSII or schematic diagrams of RF devices and circuits meeting design specifications (such as Q/L/k of a transformer; matching degree S11 and insertion loss IL of a matching circuit; gain and OP1dB of a PA) can be generated directly by the AI algorithm engine. It reduces simulation iterations by over 50%, accelerating the journey from concept to production. This tool is not just about speed; it's about precision. It generates optimized layouts and schematics that meet design specifications with up to 95% accuracy, ensuring high-quality results with fewer revisions.

What sets RFIC-GPT apart? Unlike traditional tools that rely heavily on manual input and trial-and-error, RFIC-GPT leverages AI to predict and optimize design outcomes, making the process faster and more reliable. This means designers can focus more on innovation and less on the repetitive tasks that often slow down development.

In conclusion, RFIC-GPT represents a significant leap forward in RFIC design technology. By harnessing the power of AI, it offers unprecedented efficiency, accuracy, and ease of use. We're proud to introduce this innovative tool and are excited about the potential it holds for the future of RFIC design. Join us in this revolution, try RFIC-GPT today, and take the first step towards more efficient, accurate, and innovative RFIC designs. The author encourages designers to try RFIC-GPT online (www.RFIC-GPT.com) and give feedback. Using RFIC-GPT takes only three steps:

(1) Input your design Specs and requirements;

(2) Consider the design trade-offs and choose the appropriate GDSII or active design;

(3) Click download for your application.

Author:

Jason Liu is a senior researcher working on design automation solutions for RFIC. He holds a Ph.D. in Electrical Engineering and has been in the EDA industry for more than 15 years.

Also Read:

CEO Interview: Vincent Bligny of Aniah

Outlook 2024 with Anna Fontanelli Founder & CEO MZ Technologies

2024 Outlook with Cristian Amitroaie, Founder and CEO of AMIQ EDA


2024 Signal & Power Integrity SIG Event Summary
by Daniel Nenni on 02-27-2024 at 10:00 am

It was a dark and stormy night here in Silicon Valley, but we still had a full room of semiconductor professionals. I emceed the event. In addition to demos and customer and partner presentations, we did a Q&A, which was really great. One thing I have to say is that Intel really showed up for both DesignCon and the Chiplet Summit. Quite a few Intel employees introduced themselves and a couple even took pictures with me; great networking.

The SIPI SIG 2024 event was hosted at the Santa Clara Hilton on Jan 31st on the margins of DesignCon and was over-subscribed with 100 attendees (despite inclement weather). There were 20+ customers and partners represented, including the likes of Apple, Samsung, AMD, TI, Micron, Qualcomm, Google, Meta, Amazon, Tesla, Cisco, Broadcom, Intel, Sony, Socionext, Realtek, Microchip, Winbond, Lattice Semi, Mathworks, Ansys, Keysight, and more:

Synopsys Demos & Cocktail Hour
Interposer Extraction from 3DIC Compiler & SIPI Analysis
TDECQ Measurement for High Speed PAM4 Data Links

Customer Presentations and Q&A:
Optimization of STATEYE Simulation Parameters for LPDDR5 Application
Youngsoo Lee, Senior Manager of AECG Package Development Team, AMD

IBIS and Touchstone: Assuring Quality and Preparing for the Future
Michael Mirmak, Signal Integrity Technical Lead, Intel

Signal and Power Integrity Simulation Approach for HBM3
Hisham Abed, Sr. Staff A&MS Circuit Design Engineer, Solutions Group, Synopsys

Signal Integrity at the Cutting Edge: Advanced Modeling and Verification for High-Speed Interconnects
Barry Katz, Director of Engineering, RF & AMS Products, MathWorks

All the presentations were great, and the panelists had more than 100 years of combined experience, but I must say that Michael Mirmak from Intel was really, really great. Here is a quick summary that Michael helped me with. Michael started his presentation with the standard corporate disclaimer:

“I must emphasize that my statements and appearance at the event was not intended and should not be construed as an endorsement by my employer, or by any organization of particular products or services.”

IBIS and Touchstone: Assuring Quality and Preparing for the Future
  • IBIS and Touchstone are the most common model formats for SI and PI applications today
  • Assessing model quality remains a constant concern for both model users and producers
  • The simulation output log file is often neglected but can provide very useful insights, as it includes model quality reporting and issue detection outside of outputs such as eye diagrams, before actual channel simulation begins
  • Even for high-speed IBIS AMI (Algorithmic Model Interface) simulations, problems can arise from simple analog IBIS data mismatches between impedance and transition characteristics; the simulation log can alert the user and model-maker to these early, before larger and potentially expensive batch runs
  • The simulation output log can also help find issues with the algorithmic portion of IBIS AMI models that may distort output in subtle ways that cannot (yet) be checked with the standard parsing tool
  • IBIS 7.0 and later supports standard modeling of modern, complex component package designs that tend to be represented using proprietary SPICE variants today; S-parameters under Touchstone are now included as well
  • S-parameters using the Touchstone format are frequently used for interconnect modeling, but can become unwieldy when used to describe high-speed links at the system level over manufacturing or environmental variations
  • Touchstone 3.0 is coming and is planned to include a pole-residue format that enables compression of S-parameter data (a toy sketch of the pole-residue idea follows this list)
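
As an illustration of that last bullet (my own toy example, not the draft Touchstone 3.0 format): a pole-residue model stores a handful of complex pole/residue pairs plus a constant and reconstructs the S-parameter at any frequency on demand, instead of tabulating thousands of sampled points.

```python
# Toy pole-residue reconstruction, S(s) = d + sum_k r_k / (s - p_k).
# Hypothetical fitted values; illustrates the compression idea only,
# not the actual (still-draft) Touchstone 3.0 format.
import math

poles    = [complex(-1e9,  2 * math.pi * 5e9),   # stable complex-conjugate pair
            complex(-1e9, -2 * math.pi * 5e9)]
residues = [complex(5e8,  1e8),
            complex(5e8, -1e8)]
d = 0.05                                          # direct (constant) term

def s_param(freq_hz):
    s = 1j * 2 * math.pi * freq_hz
    return d + sum(r / (s - p) for r, p in zip(residues, poles))

# Five stored terms reconstruct an arbitrarily dense frequency sweep:
sweep = [s_param(n * 10e6) for n in range(1, 1001)]  # 10 MHz .. 10 GHz
print(f"stored terms: {len(poles) + len(residues) + 1}, "
      f"points reconstructed: {len(sweep)}")
```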

Congratulations to Synopsys and the semiconductor ecosystem; it was a great event, absolutely.

Also Read:

Synopsys Geared for Next Era’s Opportunity and Growth

Automated Constraints Promotion Methodology for IP to Complex SoC Designs

UCIe InterOp Testchip Unleashes Growth of Open Chiplet Ecosystem


BDD-Based Formal for Floating Point. Innovation in Verification
by Bernard Murphy on 02-27-2024 at 6:00 am

A different approach to formally verifying very challenging datapath functions. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome. We’re planning to add a wrinkle to our verification exploration this year. Details to follow!

The Innovation

This month’s pick is Polynomial Formal Verification of Floating-Point Adders. This article was published in the 2023 DATE Conference. The authors are from the University of Bremen, Germany.

Datapath element implementations must be proved absolutely correct (remember the infamous Pentium floating point bug), which demands formal proofs. Yet BDD state graphs for floating point elements rapidly explode, while SAT proofs are often bounded and hence not truly complete.

The popular workaround today is to use equivalence checking with a C/C++ reference model, which works very well but of course depends on a trusted reference. However, some brave souls are still trying to find a path with BDDs. These authors suggest methods that use case splitting to limit state graph explosion, dropping from exponential to polynomially bounded complexity. Let's see what our reviewers think!

Paul’s view

A compact, easy-to-read paper to kick off 2024, on a classic problem in computer science: managing BDD size explosion in formal verification.

The key contribution of the paper is a new method for "case splitting" in formal verification of floating point adders. Traditionally, case splitting means picking a Boolean variable that causes a BDD to blow up in size, and running two separate formal proofs: one for the "case" where that variable is true and one for the case where it is false. If both proofs pass, then the overall proof for the full BDD including that variable must necessarily also pass. Of course, case splitting on n variables means 2^n cases, so if you use it everywhere you just trade one exponential blow-up for another.

This paper observes that case splitting need not be based only on individual Boolean variables. Any exhaustive sub-division of the problem is valid. For example, prior to normalizing the base-exponent, a case split on the number of leading zeros in the base can be performed, i.e. zero leading zeros in the base, one leading zero in the base, and so on. This particular choice of split, combined with one other cunning split in the alignment shift step, achieves a magical compromise such that the overall proof for a floating point add goes from exponential to polynomial in complexity. A double precision floating point add circuit can now be formally proved correct in 10 seconds. Nice!

Raúl’s view

This short paper introduces a novel approach to managing the size explosion problem in formal verification of floating point adders using BDDs, a classic issue in equivalence checking. Traditionally, this is addressed by case splitting, i.e., dividing the problem based on the values (0, 1) of individual Boolean variables, which leads to exponential growth in complexity with the number of variables split. Based on observations of where the explosion in size happens when constructing the BDDs, the paper proposes three innovative case splitting methods. They are not based on individual Boolean variables and are specific to floating point adders (of course they do not simplify general equivalence checking to P).

  1. Alignment Shift Case Splitting: The paper suggests splitting with regard to the shift amount or exponent difference, significantly reducing the number of cases needed for verification.
  2. Leading Zero Case Splitting: To address the explosion at the normalization shift, the paper proposes creating cases based on the number of leading zeros in the addition result.
  3. Subnormal Numbers and Rounding: Subnormal numbers are handled by adding a simplification in cases where they can occur; rounding does not trigger an explosion in BDD size.

By strategically choosing these case splits, the overall proof complexity for floating point addition can be reduced from exponential to polynomial. As a result, formal verification of double and quadruple precision floating point add circuits, which in classic symbolic simulation timed out at two hours, can now be completed in 10-300 seconds!
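
As a toy analogue of the case-splitting idea (my own brute-force illustration, far simpler than the paper's BDD construction): prove a property of a tiny floating point adder one exponent-difference case at a time. Each case is small, the cases are disjoint, and together they cover the whole input space, so if every case passes the full proof holds.

```python
# Toy analogue of case splitting (brute force, not the paper's BDD method):
# verify a 4-bit-exponent / 3-bit-mantissa unsigned float adder against a
# reference, one exponent-difference case at a time. By the adder's swap
# symmetry it suffices to check e1 >= e2.
from itertools import product

EXP_BITS, MAN_BITS = 4, 3

def decode(e, m):
    """Value of a toy float: 1.m * 2^e (no subnormals, no rounding)."""
    return (1 + m / 2**MAN_BITS) * 2**e

def toy_add(e1, m1, e2, m2):
    """Device under test: align mantissas, then add in fixed point."""
    if (e1, m1) < (e2, m2):                  # put the larger operand first
        e1, m1, e2, m2 = e2, m2, e1, m1
    frac1 = (2**MAN_BITS + m1) << (e1 - e2)  # alignment shift
    frac2 = 2**MAN_BITS + m2
    return (frac1 + frac2) * 2**e2 / 2**MAN_BITS

def check_case(shift):
    """Prove correctness for one case, e1 - e2 == shift."""
    for e1, m1, m2 in product(range(shift, 2**EXP_BITS),
                              range(2**MAN_BITS), range(2**MAN_BITS)):
        e2 = e1 - shift
        assert toy_add(e1, m1, e2, m2) == decode(e1, m1) + decode(e2, m2)

for shift in range(2**EXP_BITS):  # disjoint cases that jointly cover all inputs
    check_case(shift)
print("all exponent-difference cases pass")
```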


New Emulation, Enterprise Prototyping and FPGA-based Prototyping Launched
by Daniel Payne on 02-26-2024 at 10:00 am

General purpose CPUs have run most EDA tools quite well for many years now, but if you really want to accelerate something like simulation then you start to look at using specialized hardware accelerators. Emulators came onto the scene around 1986 and their processing power has greatly increased over the years, mostly in response to the demands of leading-edge companies designing CPUs, GPUs and, more recently, AI-based processors, along with hyperscalers that need to accelerate simulation to ensure that designs are bug-free and will actually boot up and run software properly before tape out.

All modern CPU, GPU, hyperscaler, and AI processor teams are using emulation to accelerate the design and debug of their SoCs, with transistor counts ranging from 25 billion to 167 billion, often using chiplets as the massive number of transistors no longer fits within the maximum reticle size. These systems are challenging to verify, and using a general purpose CPU to run EDA simulations is no longer fast enough, so emulation must be used. Design teams on projects for AI and hyperscale applications are running software loads that demand quick analysis so that trade-offs can be made between power and performance.

Emulation is used early in the design flow, when there are lots of design changes happening, so flexible debug and fast compile features are critical for quick turn-around. When the RTL coding has become stable enough and there's less debugging required, a faster simulation approach using enterprise prototyping can be started, and early firmware and software development can begin. The third stage of accelerated simulation is traditional FPGA-based prototyping, where software developers are the main users and performance and flexibility are the prime needs.

With these three hardware-assisted acceleration techniques you could opt for three hardware systems from multiple vendors; however, I just learned about a new announcement from Siemens: they have launched a next-generation family of products, called Veloce CS, that covers all three use cases.

For emulation, Veloce Strato CS uses a domain-specific chip called CrystalX, which enables fast, predictable compile during design bring-up and speeds iterations. Designers are more productive using its native debug capabilities, and the platform has the scalability to fit the biggest designs. On the prototyping side, the FPGA-based Veloce Primo CS uses the latest AMD chip, the VP1902 Adaptive SoC, which has 2X higher logic density and 8X faster debug performance.

Previous generations of emulators often had unique hardware form factors, but with the new Veloce CS Siemens adopted a blade architecture, which fits into modern data centers more easily.

The previous generation of emulators from Siemens was called the Veloce Strato+, introduced in 2021; now with the new Veloce Strato CS you enjoy 4X gate capacity, 5X performance gain, and a 5X debug throughput boost. Scalability now goes up to 40+B gates, and the modular blade approach spans from 1 to 256 blades.

Veloce Strato CS configurations

For enterprise prototyping Siemens offered the Veloce Primo beginning in 2021; with the new Veloce Primo CS your team will benefit from 4X gate capacity, 5X in performance, and a whopping 50X in debug throughput. Once again, blades are used with Veloce Primo CS, providing a range of 500M gates, all the way up to 40+B gates.

The following diagram shows the common compiler, debug and runtime software shared between the emulator and enterprise prototyping systems, with the major difference being that the emulator uses the custom CrystalX chip and the enterprise prototype employs the AMD VP1902 chips.

Emulator and Enterprise Prototype systems

By using a blade architecture these systems require only air cooling, so no expensive water cooling is needed.

The third new product introduced is Veloce proFPGA CS, and it gives you 2X gate capacity, 2X performance, and a stunning 50X debug throughput advantage over the previous-generation proFPGA system. Scaling starts with just a single FPGA clocking at 100MHz and grows up to 4B gates. The Uno and Quad configurations are well suited for desktop prototyping, and each blade system has 6 FPGAs.

Prototyping used to be limited by slow design bring-up, but now with Veloce proFPGA CS engineers will experience efficient compile without manual RTL edits, enjoy automated multi-FPGA partitioning, benefit from timing-driven performance optimization, and become more efficient with sophisticated at-speed debug due to VPS SW.

Summary

Siemens designed, built and announced three new hardware-accelerated systems that have some immediate benefits, like:

  • Lower power to cool
  • ~10kW per billion gates
  • Fits into data centers using blades and air cooling (cold aisle – hot aisle air flow)
  • Multi-user support, enabling 24×7 use
  • Emulation, Enterprise Prototyping, FPGA-based prototyping

Early users of Veloce CS include tier-one names like AMD and Arm. The new Veloce family has impressive credentials, spans all three types of hardware platforms, and is certainly worth a closer look. Your team can choose just the right size for each platform to meet your project capacity.


Photonic Computing – Now or Science Fiction?
by Mike Gianfagna on 02-26-2024 at 6:00 am

Cadence recently held an event to dig into the emerging world of photonic computing. Called The Rise of Photonic Computing, it was a two-day event held in San Jose on February 7th and 8th. The first day of the event was also accessible virtually. I attended a panel discussion on the topic – more to come on that. The day delivered a rich set of presentations from industry and academic experts intended to help you tackle many of your design challenges. Some of this material will be available for replay in late February. Please check back here for the link. Now let’s look at a spirited panel discussion that asked the question, Photonic computing – now or science fiction?

The Panelists

There is a photo of the panel at the top of this post. Moving left to right:

Gilles Lamant, distinguished engineer at Cadence, moderated the panel. Gilles has worked at Cadence for almost 31 years. He is a Virtuoso platform architect and a design methodology consultant in San Jose, Moscow, Tokyo and Burlington, Vermont. Gilles has a deep understanding of system design and kept the panel moving in some very interesting directions.

Dr. Daniel Perez-Lopez, CTO and co-founder of iPronics, a company that aims to expand photonics processing to all the layers of the industry with its SmartLight processors. The company is headquartered in Valencia, Spain.

Dr. Michael Förtsch, Founder and CEO of Q.ANT, a company that develops quantum sensors and photonic chips and processors for quantum computing based on its Quantum Photonic Framework. The company is headquartered in Stuttgart, Germany.

Dr. Bhavin Shastri, Assistant Professor, Engineering & Applied Physics, Centre for Nanophotonics, Queen's University, located in Kingston, Ontario, Canada. Bhavin presented the keynote address right before the panel on Neuromorphic Photonic Computing, Classical to Quantum.

Dr. Patrick Bowen, CEO and co-founder of Neurophos, a company that is pioneering a revolutionary approach to AI computation, leveraging the vast potential of light. Neurophos leverages metamaterials in its work; the company is based in Austin, Texas.

That’s quite a lineup of intriguing worldwide startups and advanced researchers. The conversation covered a lot of topics, insights and predictions. Watch for the replay to hear the whole story. In the meantime, here are some takeaways…

The Commentary

Gilles observed that some of the companies on the panel look like traditional players in the sense that they use existing materials and fabs to build their products but others are innovating in the materials domain and therefore need to build the factory and the product. This observation highlights the fact that photonic computing is indeed a new field. The players that are building fabrication capabilities may become vertically integrated suppliers or they may become pure-play fab partners to others. It’s a dynamic worth watching.

Bhavin commented on this topic from an academic research perspective. His point of view was that, if you can get it done with mainstream silicon photonics, that's what you do. However, new and exotic materials research is opening up possibilities that are not attainable with silicon, and so advanced work like that will be important to realize the broader potential of the technology.

Other discussions on this topic pointed out that the massive compute demands of advanced AI algorithms simply cannot fit the size or power envelope required using silicon. New materials will be the only way forward. In fact, some examples were given as to how challenging applications such as transformers can be re-modeled in a way that makes them more appropriate for the analog domain offered by photonic processing.

An interesting observation was made regarding newly minted PhD students. What if part of the dissertation were to develop a pitch about the invention and try it out on a VC? This would bring a reality check to the invention process: how does the invention contribute to the real world? I thought that was an interesting idea.

Here is a good quote from the discussion: “Fifty years of Moore’s Law and we are still at the stage where we haven’t found an efficient computer to simulate nature.”  This is a problem that photonic computing has a chance to solve.

Gilles ended the panel with a question regarding when photonic computing would be fully mainstream. 10 years, 20 years? No one was willing to answer. We are at the beginning of a very exciting time.

To Learn More

Much of the first day of the event will be available for replay, including this panel. Check back here around the end of February. In the meantime, you can check out what Cadence has to offer for photonic design here.  The panel Photonic computing – now or science fiction? didn’t necessarily answer the question, but it did deliver a lot of detail and insights to ponder for the future.


Intel Direct Connect Event
by Scotten Jones on 02-23-2024 at 12:00 pm

On Wednesday, February 21st Intel held their first Foundry Direct Connect event. The event had both public and NDA sessions, and I was in both. In this article I will summarize what I learned (that is not covered by NDA) about Intel’s business, process, and wafer fab plans (my focus is process technology and wafer fabs).

Business

Key points in the keynote address from my perspective.

  • Intel is going to organize the company as Product Co (not sure Product Co is the official name) and Intel Foundry Services (IFS) with Product Co interacting with IFS like a regular foundry customer. All the key systems will be separated and firewalled to ensure that foundry customer data is secure and not accessible by Product Co.
  • Intel’s goal is for IFS to be the number two foundry in the world by 2030. There was a lot of discussion about IFS being the first system foundry, in addition to offering access to Intel’s wafer fab processes, IFS will offer Intel’s advanced packaging, IP, and system architecture expertise.
  • It was interesting to see Arm’s CEO Rene Haas on stage with Intel’s CEO Pat Gelsinger. Arm was described as Intel’s most important business partner, and it was noted that 80% of parts run at TSMC have Arm cores. In my view this shows how seriously Intel is taking foundry, in the past it was unthinkable for Intel to run Arm IP.
  • Approximately 3 months ago IFS disclosed they had orders with a lifetime value of $10 billion; today that has grown to $15 billion!
  • Intel plans to release restated financials going back three years breaking out Product Co and IFS.
  • Microsoft’s CEO Satya Nadella appeared remotely to announce that Microsoft is doing a design for Intel 18A.
Process Technology
  • In an NDA session Ann Kelleher presented Intel’s process technology.
  • Intel has been targeting five nodes in four years (as opposed to the roughly 5 years it took to complete 10nm). The planned nodes were i7; i4, Intel's first EUV process; i3; 20A, with RibbonFET (gate all around) and PowerVia (backside power); and 18A.
  • i7 and i4 are in production, with i4 being produced in Oregon and Ireland, and i3 is manufacturing ready. 20A and 18A are on track to be production ready this year; see Figure 1.

 Figure 1. Five Nodes in Four Years.

I can quibble with whether this is really five nodes; in my view i7, i3 and 18A are half nodes following i10, i4, and 20A, but it is still very impressive performance and shows that Intel is back on track for process development. Ann Kelleher deserves a lot of credit for getting Intel process development back on track.

  • Intel is also filling out their offering for foundry: i3 will now have i3-T (TSV), i3-E (enhanced), and i3-P (performance) versions.
  • I can’t discuss specifics, but Intel showed strong yield data for i7 down through 18A.
  • 20A and 18A are due for manufacturing readiness this year and will be Intel's first processes with RibbonFET (gate-all-around stacked horizontal nanosheets) and PowerVia (backside power delivery). PowerVia will be the world's first use of backside power delivery and, based on public announcements I have seen from Samsung and TSMC, will be roughly two years ahead of both companies. PowerVia leaves signal routing on the front side of the wafer and moves power delivery to the backside, allowing independent optimization of the two; it reduces power droop and improves routing and performance.
  • 18A appears to be generating a lot of interest and is progressing well, with the 0.9 PDK released and several companies having taped out test devices. There will be an 18A-P performance version as well. It is my opinion that 18A will be the highest performance process available when it is released, although TSMC will have higher transistor density processes.
  • After 18A Intel is going to a two-year node cadence with 14A, 10A and NEXT planned. Figure 2 illustrates Intel’s process roadmap.

Figure 2. Process Roadmap.

  • Further filling out Intel's foundry offering, they are developing a 12nm process with UMC and a 65nm process with Tower.
  • The first High NA EUV tool is in Oregon with proof points expected in 2025 and production on 14A expected in 2026.
Design Enablement

Gary Patton presented Intel's design enablement in an NDA session. Gary is a longtime IBM development executive and was also CTO at GlobalFoundries before joining Intel. In the past, Intel's nonstandard design flows have been a significant barrier to accessing Intel processes. Key parts of Gary's talk:

  • Intel is adopting industry standard design practices, PDK releases and nomenclature.
  • All the major design platforms will be supported (Synopsys, Siemens, Cadence, Ansys), and representatives from all four presented in the sessions.
  • All the major foundational IP is available across Intel’s foundry offering.
  • In my view this is a huge step forward for Intel; in fact, they discussed how quickly it has now been possible to port various design elements into their processes.
  • The availability of IP and the ease of design for a foundry are critical to success and Intel appears to have checked off this critical box for the first time.
Packaging

Choon Lee, another outsider brought into Intel, presented packaging; I believe he said he had only been there 3 months. Another analyst commented that it was refreshing to see Intel putting people brought in from outside into key positions, as opposed to all the key people being long-time Intel employees. Packaging isn't really my focus, but a couple of notes I thought were key:

  • Intel is offering their advanced packaging to customers and referred to it as ASAT (Advanced System Assembly and Test) as opposed to OSAT (Outsourced Assembly and Test).
  • Intel will assemble multiple die products with die sourced from IFS and from other foundries.
  • Intel has a unique capability for testing singulated die that enables much faster and better temperature control.
  • Figure 3 summarizes Intel’s foundry and packaging capabilities.

Figure 3. Intel’s Foundry and Packaging.

Intel Manufacturing

Also under NDA Keyvan Esfarjani presented Intel’s manufacturing. Key disclosable points are:

  • Intel is the only geographically diverse foundry, with fabs in Oregon, Arizona, New Mexico, Ireland and Israel and planned fabs in Ohio and Germany. Intel builds infrastructure around the fabs at each location.
  • The IFS foundry model will enable Intel to ramp up processes and keep them in production as opposed to ramping up processes and then ramping them down several years later the way they previously did as an IDM.
  • Intel fab locations:
    • Fab 28 in Israel is producing i10/i7, and Fab 38 is planned for that location.
    • Fabs 22/32/42 in Arizona are running i10/i7, with Fabs 52/62 planned for that site in mid-2025 to run 18A.
    • Fab 24 in Ireland is running 14nm with i16 foundry planned; Fabs 34/44, also at that location, are running i4 now and ramping i3. They will eventually run i3 foundry.
    • Fab 9/11x in New Mexico is running advanced packaging and will add 65nm with Tower in 2025.
  • Planned expansions in Ohio and Germany.
  • Oregon wasn’t discussed in any detail presumably because it is a development site although it does do early manufacturing. Oregon has Fabs D1C, D1D and 3 phase of D1X running with rebuilds of D1A and an additional 4th phase of D1X being planned.
Conclusion

Overall, the event was very well executed, and the announcements were impressive. Intel has their process technology development back on track and they are taking foundry seriously and doing the right things to be successful. TSMC is secure as the number one foundry in the world for the foreseeable future, but given Samsung’s recurring yield issues I believe Intel is well positioned to challenge Samsung for the number two position.

Also Read:

ISS 2024 – Logic 2034 – Technology, Economics, and Sustainability

Intel should be the Free World’s Plan A Not Plan B, and we need the US Government to step in

How Disruptive will Chiplets be for Intel and TSMC?