
CEO Interview: Larry Zu of Sarcina Technology

by Daniel Nenni on 03-01-2024 at 6:00 am


Larry has grown Sarcina from designing semiconductor packages for a few small companies, to doing package designs for top semiconductor companies around the world. From 2014 to 2018, Larry led the expansion of Sarcina beyond package design into final test and wafer sort hardware and software development.

Larry is a semiconductor veteran who started his career at Bell Labs before moving on to DEC, Intel, and TSMC. Along the way he developed a proven track record of delivering successful products, including the Alpha, Itanium 2, Pentium 4, and Xbox 360 microprocessors. Over his career, he has taped out nearly 1,000 packages with a greater than 99% first tape-out success rate.

Larry received his B.S. in Physics from Peking University and his Ph.D. in Electrical & Computer Engineering from Rutgers University.  He has many refereed IEEE publications and holds multiple U.S. patents which have been used in leading US companies’ key products.

Tell us about your company?
Sarcina was founded in Palo Alto, CA in October of 2011. The name “Sarcina” refers to the backpack carried by Roman soldiers. Although it didn’t include their arms and armor, Sarcinas provided the essentials for daily living necessary to accomplish their military missions.

We are an Application Specific Advanced Package (ASAP) company that provides integrated WIPO (Wafer-In, Product-Out) services to customers around the globe. Our vision is to be the leading post-silicon service, setting standards for excellence by providing high-quality, dependable, creative, and assured package, test, and production services to our customers.

What problems are you solving?
As designing an advanced semiconductor package becomes more challenging, and sustaining an internal packaging team becomes less economical for small to mid-sized chip companies and system companies, outsourcing chip packaging makes more sense for many ASIC and system companies. These companies often have to work with multiple independent vendors in Asia to accomplish most of their post-silicon tasks. That's why we formed Sarcina: to meet these specific demands.

What application areas are your strongest?
We are the experts in high-power, high pin-count, and high data rate semiconductor packages for high-performance computing applications. Our 100% right-the-first-time success substantiates this claim.

What keeps your customers up at night?
Unresolved technical problems and missed deadlines.

We understand these two pain points. Either one causes companies to work double-shifts: one for their regular day job and the other at night to fix past mistakes. Sarcina’s job is to make sure that never happens and to make working with Sarcina a seamless, time-saving process.

What Does the Competitive Landscape Look Like, and How Do You Differentiate?
That’s an interesting question, and you may find the answer surprising. If you look at this from a service perspective, you’d think we have a large number of competitors across a broad swath of the semiconductor value chain: wafer foundries, ASIC companies, OSAT (Outsourced Semiconductor Assembly and Test) houses. However, if you view this from a business problem-solving perspective, the ASAP space is unique.

We solve both technical and business problems for small to mid-sized ASIC companies and system houses with understaffed post-silicon teams. Our value proposition is advanced packaging, test, assembly, and production at cost points lower than what can be achieved in-house. In the packaging area, our biggest competitors are the low-cost, low-tech, and mature technology entities.

Fortunately, with the boom of AI, mobile devices, autonomous driving, IoT, and the desire to win tomorrow’s tech wars, the market has expanded significantly. We believe the market is large enough to accommodate all of these players. Over time, the inefficient small players may drop out of the race.

Sarcina’s strength and fundamental differentiation is our ability to complete high-performance engineering projects. Over the past 12 years, Sarcina has taped out more than 100 packages, all first-time successes. We’ve never re-taped out a single package. At the same time, we’re able to complete advanced projects with a fraction of the headcount required by other companies. Our engineering efficiency is several times that of the industry norm.

In the networking business, there is a famous rule of thumb: if your product offers a 10X increase in performance (such as speed, efficiency, or capability) but costs only 2X as much as the existing solution, your business will take off. In our business, we believe that if our engineering efficiency is several times that of our competitors, we'll compete effectively, regardless of the size of the competitor.

What New Features/Technology Are You Working On?
Every four years, SerDes and PCIe data rates double, and DDR technology advances by a generation. Today, people are working on 112 Gb/s and 224 Gb/s PAM4 SerDes; 32 Gb/s NRZ PCIe-5 and 64 Gb/s PAM4 PCIe-6, as well as 6400 Mb/s to ~10 Gb/s LPDDR5/DDR5/GDDR6. Sarcina’s package design technology is ready for these high data rate chips in an HVM (High Volume Manufacture) environment. IP companies usually provide a live demo of their highest data rate IP with a few lanes of data communication. In a real chip, there will be many lanes with limited routing space. Our job is to provide the package design that meets our customers’ data rate requirements for their real chips. As of today, Sarcina has designed packages for 112 Gb/s SerDes, 64 Gb/s PCIe-6, and 6.4 Gb/s LPDDR5. Our next task is 224 Gb/s PAM4 SerDes with approximately 100 lanes of data communication inside a single package. We are also supporting these data rates on our final test loadboard loopback test.
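The "doubles every four years" cadence mentioned above is easy to sanity-check with a one-line extrapolation (a rule-of-thumb projection, not a product roadmap):

```python
def projected_rate(base_rate_gbps: float, years: float,
                   doubling_period_years: float = 4.0) -> float:
    """Project a data rate forward under the 'doubles every N years' rule."""
    return base_rate_gbps * 2 ** (years / doubling_period_years)

# 112 Gb/s SerDes today implies ~224 Gb/s one doubling period out
print(projected_rate(112, 4))  # 224.0
```

The same formula reproduces the PCIe progression in the article: 32 Gb/s PCIe-5 projects to 64 Gb/s four years out.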

How Do Customers Normally Engage With Your Company?
Surprisingly, word-of-mouth remains our most efficient way to land business. However, we are stepping up our overall industry presence and visibility as the demand for advanced technology packaging expands. We’re investing more in building collaborations with technology partners and implementing multiple one-on-one outreach channels. While businesses still value face-to-face meetings, we’ve significantly expanded our marketing campaign and assets. We’re appearing at more trade shows, ramping up our earned and paid media, and rebuilding our website, while refreshing our brand assets. All these efforts have dramatically increased our company’s visibility, opening doors to more advanced technology decision-makers.

Also Read:

CEO Interview: Vincent Bligny of Aniah

CEO Interview: Jay Dawani of Lemurian Labs

Luc Burgun: EDA CEO, Now French Startup Investor


WEBINAR: Chipmakers can leverage generative AI to speed up RTL design and verification

by Daniel Nenni on 02-29-2024 at 2:00 pm


The subjects of generative AI and Large Language Models (LLMs) permeate business and public conversation, and not without good reason. Even as this emergent field of AI develops, it is already seen at a minimum as a valuable assistant, and often as a dramatic accelerant to productivity, even for technical workflows.

As we're now seeing with AI-assisted coding in software development, generative AI will play a similar role in IC logic design and verification, with the same dramatic effect. Centralized requirements, increasingly critical, will be the foundational source of truth for both human engineers and automated "AI assistants" responsible for writing RTL and verification requirements and executing tests, ultimately shrinking time to market.

WATCH THE REPLAY

AI-assisted Software Dev: Requirements to Code

Applications of generative AI are disrupting traditional tech-centric fields, like pure software application development, where we see movement from all-human to AI-assisted coding. Tools like GitHub Copilot can act as your micro-level pair programmer. Even with minimal access to small portions of your codebase, it assists developers by proposing sections of code on demand. Copilot is limited, in part, by its inability to understand the high-level requirements that drive the code. AI needs a human developer to interpret the human-readable requirements, and thus clocks out when the developer clocks out.

Software developers recognize this limitation and the community at large is working towards the audacious vision of full automation from centralized, human-readable requirements.  As large-scale commercial LLMs advance, alongside the open and more specialized language models, the fruits of these efforts are becoming increasingly tangible.  Nascent projects like GPT-Engineer, Aider, and GPT-Pilot are blazing the trail and moving us closer to this vision of automated 24×7 requirements to code software development.

AI-accelerated RTL Design and Verification

Clearly documented requirements that serve as a single source of truth are key to automating development with generative AI. We see logic design and verification activities as ideally suited to AI-accelerated development, unlike the pure software development discussed earlier. Pure software applications have human-centric GUIs, which AI has yet to automate. Compounding this, software often suffers from poorly documented requirements, which lead to bugs.

By contrast, the logic designed into semiconductor chips originates from rigorous, well-documented specifications – ripe to be accurately interpreted through LLMs. The requirements (or specs) are the bedrock of the entire IC logic design and verification endeavor. Whether the work is carried out by skilled engineers or sophisticated AI assistants, these detailed specifications serve as the single source of truth upon which every aspect of the design is built.

To fully harness the potential of AI assistants, organizations will need to build AI systems that have access to a unified hub for all product documentation and specifications. This is key now in human-centric workflows, but it will become mission critical as design and verification are accelerated by AI-assisted processes. Access to that unified hub ensures AI tools are aligned with the overarching requirements, operating within the same framework of understanding as their human engineering counterparts.

Meet Sinfonia in our upcoming webinar

Planorama Design is laser focused on the problems of traditional software and IC design and development. We strongly believe that solid, rigorously documented requirements and user experience design are the catalysts to accelerate software and software-hardware (IoT) systems time to market.  To accelerate our internal processes and enhance our tools, Planorama Design built Sinfonia from the ground up with a focus on centralization of requirements.

On March 21, 2024, join us for our webinar, “From Specs to Verilog: AI-assisted logic design on a RISC-V implementation,” where we will demonstrate Sinfonia. In this webinar, we will show how Sinfonia, with knowledge of RISC-V specification documents, can support user-directed enhancements to an existing RISC-V implementation. This approach exemplifies the potential of AI to accelerate logic design and verification, offering a glimpse into the future engineering capabilities available to semiconductor and hardware companies.

WATCH THE REPLAY

Learn more about Sinfonia

Connect with Matt Genovese of Planorama

Also Read:

A Bold View of Future Product Development with Matt Genovese

LIVE WEBINAR – The ROI of User Experience Design: Increase Sales and Minimize Costs

CEO Interview: Matt Genovese of Planorama Design


2024 Outlook with Adam Olson of Perforce

by Daniel Nenni on 02-29-2024 at 10:00 am


Perforce is a company that provides software solutions primarily focused on version control, especially for large-scale development projects. Version control systems manage changes to documents, computer programs, large web sites, or other collections of information. Perforce’s main product is Helix Core, formerly known as Perforce Helix, which is a version control system that helps software development teams manage and track changes to their source code, documents, and other digital assets. It is widely used in industries such as game development, automotive, aerospace, and finance where managing complex software projects with many contributors is essential.

Tell us a little bit about yourself and your company. 
I'm Adam Olson, Chief Revenue Officer and General Manager for the Digital Creation business unit at Perforce, which includes our integrated semiconductor solutions, Helix Core and Helix IPLM (formerly Methodics).

What was the most exciting high point of 2023 for your company? 
We had several major account wins within our Digital Creation business unit and expanded our DevOps portfolio. Perforce also appointed a new CEO at the very end of 2023, which we’re very excited about. Technology veteran Jim Cassens brings to Perforce over 30 years of experience scaling software organizations with a customer-centric management approach. We’re thrilled to have him leading the organization.

What was the biggest challenge your company faced in 2023? 
Some of our larger semiconductor customers were facing economic headwinds heading into 2023, which slowed their projects. While these headwinds eased up a bit by the end of the year, the challenges and complexities of the semiconductor industry remain. We find that what may have looked like a relatively mundane decision in the past is now met with larger committees and a need for strong defense of ROI models.

How is your company’s work addressing this biggest challenge? 
Perforce is helping our semiconductor clients tame complexity and increase efficiencies across their design flow. We help them accomplish more with less and accelerate time to market, while keeping a lid on costs. Perforce Helix IPLM and Helix Core serve as a scalable, secure foundation for design data management. By tracking all IP and design data in Helix IPLM’s unified, hierarchical data model, our customers benefit from tighter coordination between cross-functional teams, end-to-end traceability, and more efficient requirements verification. And with Helix Core, they get robust, federated, multisite data management for enterprise scale, security, and performance.

What do you think the biggest growth area for 2024 will be, and why? 
Semiconductor IP security will be a big factor in 2024 as global instability and political uncertainty rise and bad actors spend more time trying to compromise networks and other important infrastructure. Organizations – and global, multi-site teams in particular – will need to carefully track and secure design assets to avoid violating rapidly shifting technology restrictions and export control laws. Such violations, often caused by accidental IP leakage, can result in millions of dollars in fines and legal issues, on top of lost revenue and market setbacks.

How is your company’s work addressing this growth? 
Perforce is addressing this growing need for IP security through advanced new features like Geofencing, as well as by providing fine-grained security and enabling end-to-end traceability across the lifecycle. Helix IPLM's Geofencing feature delivers dynamic security for global, multi-site teams by restricting IP availability in certain geos, regardless of user access permissions. These restrictions can be applied universally, regardless of the underlying data management system (Perforce Helix Core, Git, DesignSync, etc.). Within Helix Core, organizations can control access down to the file level and even qualify access by IP address.
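To illustrate the idea (a hypothetical sketch, not Perforce's actual API or policy model), geofencing can be thought of as a deny-by-default geographic layer evaluated independently of, and before, ordinary user permissions:

```python
# Hypothetical policy table: which site geographies may access each IP block.
# None means the block carries no geographic restriction.
IP_GEO_POLICY = {
    "serdes_phy_112g": {"US", "DE"},  # export-controlled block
    "generic_padring": None,
}

def can_check_out(ip_name: str, user_has_permission: bool, site_geo: str) -> bool:
    """Apply the geofence first; only then fall back to normal permissions."""
    allowed_geos = IP_GEO_POLICY.get(ip_name)
    if allowed_geos is not None and site_geo not in allowed_geos:
        return False  # geofence denies access regardless of user permissions
    return user_has_permission
```

Even a user with full repository permissions is denied when requesting the restricted block from a site outside its allowed regions.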

What conferences did you attend in 2023 and how was the traffic?
Our Digital Creation team attended and exhibited at Embedded World and DAC. We had heavy traffic at our booth at both events – substantially more than the previous year, when conference attendance was still affected by the pandemic. We also attended the GSA Executive Forums in Europe and the US, along with Design & Reuse IP-SoC Silicon Valley.

Will you attend conferences in 2024? Same or more?
We’ll be attending the same conferences in 2024, along with a few others such as GOMACTech and DVCon.

Additional questions or final comments? 
Perforce welcomes semiconductor leaders to join our Helix IPLM Monthly User Group (MUG) sessions. It’s a great opportunity to hear from product experts, industry peers, and Helix IPLM users about topics like IP governance, release automation, and IP security, along with best practices and the latest product features. To register, visit https://www.perforce.com/products/helix-iplm/user-group.

Also Read:

The Transformation Model for IP-Centric Design

Chiplets and IP and the Trust Problem

Insights into DevOps Trends in Hardware Design


WEBINAR: Enabling Long Lasting Security for Semiconductors

by Daniel Nenni on 02-29-2024 at 6:00 am


Today, technology is a part of our everyday lives: not only our personal data, but all of the devices we rely on daily, including our automobiles, cell phones, and home devices. Hackers have found creative and novel ways to corrupt these products, disable systems, steal secrets, and threaten our identities. As technology becomes even more entrenched in our lives and impacts our security and safety, we need to move security solutions to the forefront.

WATCH REPLAY NOW

Security is a constantly evolving problem and requires an adaptable solution. In this session, we will address common security problems that we face in today's challenging world and solutions that can mitigate these threats. Fixed solutions implemented today will inevitably be challenged in the future. Hackers today have more time, resources, training, and motivation to disrupt technology. With technology increasing in every facet of our lives, defending against this presents a real challenge. We also have to consider upcoming threats, namely quantum computing. Many predict that quantum computing will be able to crack current cryptography solutions within the next few years.

Fortunately, semiconductor manufacturers have solutions that can enable cryptographic agility, also known as crypto agility, which can dynamically adapt to evolving threats. This includes not only the ability to update hardware-accelerated cryptography algorithms, but also obfuscation to strengthen the root of trust and protect valuable IP secrets in products. Advanced solutions like these also enable devices to randomly create their own encryption keys, making encryption codes harder to crack. This webinar will demonstrate a variety of solutions and reconfigurable IP from Flex Logix that can be implemented in any semiconductor device to thwart current as well as future threats. We will highlight solutions from partners who specialize in security and have ready-to-go IP that can be deployed on Flex Logix IP to add crypto agility to any semiconductor.
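As a concept-level sketch (not Flex Logix's or Xiphera's actual implementation), crypto agility amounts to routing cryptographic operations through an updatable registry of algorithms instead of hard-wiring a single one, so a compromised primitive can be retired without changing calling code:

```python
import hmac

# Updatable registry of MAC algorithms; entries can be added or retired
# in one place as the threat landscape evolves.
MAC_ALGORITHMS = {
    "sha2-256": "sha256",
    "sha3-256": "sha3_256",
}

def agile_mac(key: bytes, message: bytes, alg: str = "sha3-256") -> bytes:
    """Compute a MAC using whichever algorithm the registry currently maps."""
    if alg not in MAC_ALGORITHMS:
        raise ValueError(f"algorithm {alg!r} is unknown or has been retired")
    return hmac.new(key, message, MAC_ALGORITHMS[alg]).digest()
```

In hardware the "registry" is reconfigurable logic rather than a dictionary, but the design principle is the same: callers depend on an algorithm identifier, not a fixed implementation.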

Watch this webinar now to learn how enabling crypto agility in your semiconductor can provide long-lasting security.

Abstract:

Semiconductors are on the forefront of security, protecting our identity, data, and daily lives. And we live in a time when hackers have more time, resources, available training, and motivation to disrupt our security than ever before. With quantum computing looming and threatening our current security implementations, it is more important than ever to start implementing crypto-agile solutions that can adapt to evolving threats. And this needs to occur at every level, including the transport, MAC, and IP layers. By combining embedded programmable logic from Flex Logix with security IP solutions from Xiphera, a hybrid solution can provide long-lasting security for semiconductors.

Speaker Bios:

Jayson Bethurem is responsible for marketing and business development at Flex Logix. Jayson spent six years at Xilinx as Senior Product Line Manager responsible for about a third of revenues. Before that he spent eight years at Avnet as FAE showing customers how to use FPGAs to improve their products. Earlier, he worked at startups using FPGAs to design products.

Dr. Kimmo Järvinen is the co-founder and CTO of Xiphera. Kimmo has a 20-year career in academia, where he has done cryptography-related research at various European universities. He has a strong academic background in cryptography and cryptographic hardware engineering, having held post-doctoral, research fellow, and senior researcher positions at Aalto University (Espoo, Finland), KU Leuven (Leuven, Belgium), and the University of Helsinki (Helsinki, Finland). Kimmo has published more than sixty scientific articles on cryptography and security engineering, nearly half of them related to elliptic curve cryptography, and he has substantial theoretical and practical experience in the secure and efficient implementation of elliptic curve cryptosystems.

Join us for this webinar to learn how enabling crypto agility in your semiconductor can provide long-lasting security.

Also Read:

Reconfigurable DSP and AI IP arrives in next-gen InferX

eFPGA goes back to basics for low-power programmable logic

eFPGAs handling crypto-agility for SoCs with PQC


Soft checks are needed during Electrical Rule Checking of IC layouts

by Daniel Payne on 02-28-2024 at 10:00 am


IC designs rely on physical verification applications like Layout Versus Schematic (LVS) at the transistor level to ensure that layout and schematic are equivalent; in addition, there's an Electrical Rule Check (ERC) for connections to well regions, called a soft check. Connections to all devices need consistent voltage levels, so the path should run through the metal layers to reduce resistance and effects like IR drop. Detecting connections through other materials, like wells, is mandatory, and soft checks are the method most commonly employed to detect this situation. The Calibre product line from Siemens is the most popular tool for DRC and LVS checking, so I read a technical paper from Terry Meeks to learn more about soft checks.

Connecting two metal layers in an IC layout requires precise alignment of both metal layers and the via layer. Here's a comparison using both a side view and a top-down view: the first example is not connected, because Metal1 and Metal2 do not overlap, while the second example is connected properly.

Connecting two metal layers with a Via layer.

We want our ERC tool to identify well connectivity errors during soft checks so that they can be fixed. The following IC layout, shown from the side view, has a well connectivity error: the Metal1 signal texted as Gnd is connected to a diffusion region called a tap diffusion. On the right-hand side is another Metal1 layer with a tap diffusion, but this connectivity creates a high-resistance path through the Rwell to Gnd and is flagged as an error by the soft check.

Well connectivity error – side view
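Conceptually, a soft check answers a connectivity question: are two shapes on the same net joined only through a high-resistance well layer rather than through metal? A minimal sketch of that idea (hypothetical shape and layer names; not how Calibre actually implements it):

```python
from collections import defaultdict, deque

def find_soft_connections(shapes, edges, well_layers=frozenset({"NWELL", "PWELL"})):
    """Flag shape pairs whose only connection path runs through a well layer.

    shapes: shape ids on the same logical net
    edges:  (shape_a, shape_b, layer) physical abutments/contacts
    """
    def reachable(start, include_wells):
        adj = defaultdict(list)
        for a, b, layer in edges:
            if include_wells or layer not in well_layers:
                adj[a].append(b)
                adj[b].append(a)
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in adj[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    flagged = []
    shape_list = list(shapes)
    for i, a in enumerate(shape_list):
        hard = reachable(a, include_wells=False)   # metal/contact paths only
        soft = reachable(a, include_wells=True)    # paths allowed through wells
        flagged += [(a, b) for b in shape_list[i + 1:] if b in soft and b not in hard]
    return flagged
```

The Gnd example above maps onto this directly: two tap diffusions reachable from each other only through the Rwell would be flagged, while shapes joined by Metal1 would pass.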

Another example of a soft connectivity error occurs in the IC layout below, where we can apply only one name per polygon. The digital power net VDD cannot coexist with the analog power net AVDD, so we need to separate these into two shapes. Soft checks help to flag these issues.

AVDD net to VDD net soft check error

An IC layout with both digital and analog power supplies can become rather complex to layout properly, so it’s even more important to have soft checks.

Undetermined areas have question marks

Soft checks are included during your LVS runs, and with Calibre nmLVS there’s a report of soft check results, which can then be viewed using the Calibre RVE viewer.

Using Calibre RVE to review Soft Check errors

Clicking on RVE results tells you which cell has the soft check error, the net names, upper and lower names, and other properties. This info helps to pinpoint what to fix in the IC layout. Clicking on a lower layer like a PWell for a soft check error displays the geometry in yellow.

Soft check result, lower layer

For the same soft check error, clicking on the upper layer shows:

Soft check result, upper layer

During debug you can also show all the upper layer shapes, the green shapes are the selected net upper layer shapes, while yellow is the rejected net upper layer shape.

All upper layer shapes

Debugging soft check errors with RVE involves clicking on the connectivity of selected and rejected nets. A Net Info window reveals details like which layers are involved and whether shapes are missing connectivity. Looking at which ports are connected to a net reveals whether there are missing VDD or GND errors. This example shows that net 18 is rejected because it's missing connectivity to Metal1.

Missing connectivity to Metal1

Summary

LVS checks are mandatory to ensure that an IC has an error-free layout, and soft checks are part of your LVS checks. Siemens provides a proven debugging flow in its Calibre nmLVS tool that uses RVE to help layout designers quickly identify soft check failures, so that designers can make fixes and re-verify until all checks pass. Siemens has written a technical paper, Detecting and debugging soft check connectivity errors, available to read online.

Related Blogs



CEO Interview: Michael Sanie of Endura Technologies

by Daniel Nenni on 02-28-2024 at 8:00 am

Michael Sanie

Michael Sanie is a veteran of the semiconductor and EDA industries. His career spans several executive roles in diverse businesses with multifunctional responsibilities. He is a passionate evangelist for disruptive technologies.

Most recently, he was the chief marketing executive and senior VP of Enterprise Marketing and Communications at Synopsys, where he also held leadership roles as VP of marketing and strategy for the Design Group and VP of product management for the Verification Group.

Michael previously held executive and senior marketing positions at Cadence, Calypto, Numerical, and Actel, as well as IC design and software engineering positions at VLSI Technology (now NXP Semiconductors).

He holds BSECE and MSEE degrees from Purdue University and an MBA from Santa Clara University.

Tell us about your company

Endura Technologies is developing an end-to-end SoC power delivery solution. In addition to our revolutionary, patented power delivery architecture, we have a diverse skillset to implement test silicon, design IP, design services, design passives (required inductors and capacitors as part of the power delivery solutions), partnerships, and silicon manufacturing relationships. This allows us to create end-to-end SoC power delivery solutions.

Our unique architecture, combined with our fully integrated approach to power delivery at the system level is changing the game for challenging applications such as data centers, automotive, and many others.

What problems are you solving?

Energy consumption for advanced products has become a major care-about across many markets and applications. Battery life and heat dissipation for aggressive form factors drive part of this. The substantial operating costs for massive compute infrastructure is another driver.

A bit more specifically, server and AI chips are driving much higher compute demands, requiring more power to be delivered. At the same time, these chips are built on smaller nodes, which run on lower Vdds. The only way this equation can work is to provide much higher currents across several power rails, and increasingly this is only achievable with 2.5D or 3D IC integration. These facts are fundamentally changing power delivery approaches.
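The arithmetic behind this is simple Ohm's-law bookkeeping: for a fixed power budget, current scales inversely with supply voltage (I = P / V), so lower-Vdd nodes demand dramatically more current. A quick illustration with made-up numbers:

```python
def rail_current(power_w: float, vdd_v: float) -> float:
    """Current a supply rail must deliver: I = P / V."""
    return power_w / vdd_v

# A hypothetical 500 W compute die: dropping Vdd from 1.0 V to 0.75 V
# raises the required current from 500 A to roughly 667 A.
print(rail_current(500, 1.0), rail_current(500, 0.75))
```

Hundreds of amps cannot practically be routed across a board to a single point, which is why delivery is split across many rails and moved closer to the die.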

On top of that, systems in automotive, audio, and switches typically rely on many sensory inputs ranging from MEMs devices to image sensors to radar. These devices require efficient power delivery across many load configurations and at increasing switching frequencies while maintaining ultra-low noise.

These fundamental disruptions are making people take power delivery a lot more seriously, in two ways: power delivery is no longer an afterthought; it needs to be designed and architected at the same time as the SoC, and it needs a much more holistic approach. Off-the-shelf PMICs are quickly running out of steam in meeting these complex requirements. To get the best power delivery, each SoC needs its own 'application-specific' (or context-aware) power delivery solution.

Powering these systems at scale requires a new approach. One that takes a comprehensive view of power requirements for the chips and chiplets that implement the complete system. And one that optimizes performance, scalability, and efficiency over the broad spectrum of switching frequencies, current loads, voltage ranges, and silicon manufacturing processes.

This is the problem Endura is solving.

What application areas are your strongest?

Endura has applied its technology across a wide range of power-intensive or power-sensitive application areas – mostly data center and automotive. You can find more specific examples on our website that cover data centers, requirements for memories in data centers, a notebook design with a PCIe Gen5 solid state drive, optical modules and automotive.

What keeps your customers up at night?

Advanced system design presents a power delivery balancing act. The drivers for the requirement may differ, but all systems must operate efficiently with the lowest energy consumption possible.

These systems contain many parts, all operating at different frequencies, with varying power demands and obstacles. Solving the complete problem requires a holistic approach to power management and delivery.

But such an approach has been out of reach for most companies, requiring system designers to attempt integration of multiple tools and multiple sets of IP and software to solve the problem. This has been a very difficult problem to solve. Until now.

What does the competitive landscape look like and how do you differentiate?

The traditional approach to power delivery focuses on a component-level strategy. That is, acquire best-in-class power management solutions, typically from tier-1 suppliers and integrate these devices at the PCB level.

The substantial complexity and power demands of applications such as data centers require a new, fine-grained approach – one that integrates power delivery down to the chip level and one that co-optimizes the architecture for optimal system-level performance.

There are some design teams (typically in larger companies with a broad range of skills) that are making the investment to achieve these results across the supply chain. For everyone else, the complexity of integrating such approaches remains out of reach.  Endura is democratizing this new, system-level approach to power delivery, so it is available to every system design team.

What new features/technology are you working on?

Power management approaches range from traditional, discrete devices (sVR), to embedded chiplets for 2.5D/3D integration (eVR), down to on-chip, integrated blocks for optimum point-of-load energy delivery (iVR).

While sVR approaches are well-understood, deployment of fully integrated eVR and iVR strategies is extremely complex and challenging. Endura has the technology and know-how to solve these problems, and this is our development focus.

How do customers normally engage with your company?

Endura Technologies has development facilities in California and Dublin, Ireland. If you would like to explore how we can help you develop a forward-looking power strategy you can reach out at info@enduratechnologies.com.

Also Read: 

CEO Interview: Vincent Bligny of Aniah

CEO Interview: Jay Dawani of Lemurian Labs

Luc Burgun: EDA CEO, Now French Startup Investor


Revolutionizing RFIC Design: Introducing RFIC-GPT

Revolutionizing RFIC Design: Introducing RFIC-GPT
by Jason Liu on 02-28-2024 at 6:00 am

Figure1 (10)

In the rapidly evolving world of Radio Frequency Integrated Circuits (RFIC), the challenge has always been to design efficient, high-performance components quickly and accurately. Traditional methods, while effective, come with a high complexity and a lengthy iteration process. Today, we’re excited to unveil RFIC-GPT, a groundbreaking tool that transforms RFIC design through the power of generative AI.

RF chips are known as the crown jewel of analog chips. RF circuits typically contain not only active circuits, i.e., circuits composed mostly of active devices such as transistors, but also a large number of passive components such as inductors, transformers, and matching networks. Fig. 1 shows an example of a one-stage RF power amplifier (PA): the active part of the circuit is a differential common-source PA with cross-coupled varactors, and it is connected to an input matching network and an output matching network. The matching networks are usually a combination of passive devices such as inductors, capacitors, and transformers connected in an optimized configuration.
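To make the matching-network idea concrete, here is a minimal, hypothetical Python sketch (not taken from RFIC-GPT; function names and values are illustrative assumptions) that sizes a low-pass L-section match between two purely resistive impedances and checks the resulting input impedance:

```python
import math

def l_match(r_source, r_load, f_hz):
    """Size a low-pass L-section: series L at the load, shunt C at the
    source. Assumes purely resistive terminations with r_load < r_source."""
    q = math.sqrt(r_source / r_load - 1)   # required network Q
    w = 2 * math.pi * f_hz
    L = q * r_load / w                     # series inductance, XL = Q * r_load
    C = q / (r_source * w)                 # shunt capacitance, XC = r_source / Q
    return L, C

def input_impedance(r_load, L, C, f_hz):
    """Impedance looking into the shunt-C side of the network."""
    w = 2 * math.pi * f_hz
    z_series = r_load + 1j * w * L
    z_shunt = 1 / (1j * w * C)
    return z_series * z_shunt / (z_series + z_shunt)

# Example: match a 10-ohm PA output to a 50-ohm system at 2.4 GHz.
L, C = l_match(50.0, 10.0, 2.4e9)
z_in = input_impedance(10.0, L, C, 2.4e9)  # ~ 50 + 0j at the design frequency
```

A real matching network must also account for device parasitics, finite inductor Q, and layout coupling, which is exactly why EM iteration dominates the conventional flow.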

To design such an RF circuit, both the devices in the active circuit and the passive layout patterns in the matching networks need to be optimized. The conventional RFIC design flow is shown in the top half of Fig. 2. On one hand, the active circuits must first be designed and simulated, both as schematics and as layouts. On the other hand, the passive components and circuits are iterated repeatedly using more physical and tedious electromagnetic (EM) simulation combined with their layouts, making them a key challenge in RF design.

Thereafter, the parameters of the entire layout are extracted and post-layout simulations are run to compare against the design specifications (Specs). Finally, the designs of both the active circuits and the layouts of the passive circuits are re-adjusted and re-simulated, and the results are compared again. This process is iterated numerous times until the design Specs are achieved. The main difficulties of designing RFICs can be attributed to:

(1) large design search space of both active and passive circuits;

(2) lengthy and tedious EM simulation required;

(3) interactions between active and passive circuits, and between the RFIC and its surroundings, which demand numerous iterations and optimizations.

Therefore, the traditional RFIC design flow takes a great deal of human effort, and the design quality achievable in a constrained time depends largely on the experience of the individual IC designer.

Recently, generative AI has been researched and explored extensively for generating content including, but not limited to, dialogue, images, and program code. By analogy, generative AI is also being considered for RFIC design automation. The bottom half of Fig. 2 shows an example RFIC design flow assisted by generative AI. Essentially, the behavior of small circuit components can be lumped into models, and lengthy simulations can be omitted.

Additionally, the solution-searching “experience” for RFIC design can be “learned”, and the solutions, i.e., initial designs of RFIC schematics and layouts, can be quickly “generated”. Importantly, the simulated results of the AI-generated RFIC circuits can already be close to the design Specs, so IC design engineers only need to do some final optimization and verification simulations before the results can be applied to RFIC design blocks for tape-out. This methodology eliminates a large number of simulation iterations and drastically improves design efficiency. Furthermore, the results are more consistent run to run, since the task is performed by an “emotionless” computer.

As a pioneer of intelligent chip design solutions, we have launched RFIC-GPT, an AI-based RFIC design automation tool. Using RFIC-GPT, GDSII or schematic diagrams of RF devices and circuits meeting design specifications (such as the Q/L/k of a transformer; the matching degree S11 and insertion loss IL of a matching circuit; or the gain and OP1dB of a PA) can be generated directly by the AI algorithm engine. It reduces simulation iterations by over 50%, accelerating the journey from concept to production. This tool is not just about speed; it’s about precision. It generates optimized layouts and schematics that meet design specifications with up to 95% accuracy, ensuring high-quality results with fewer revisions.

What sets RFIC-GPT apart? Unlike traditional tools that rely heavily on manual input and trial-and-error, RFIC-GPT leverages AI to predict and optimize design outcomes, making the process faster and more reliable. This means designers can focus more on innovation and less on the repetitive tasks that often slow down development.

In conclusion, RFIC-GPT represents a significant leap forward in RFIC design technology. By harnessing the power of AI, it offers unprecedented efficiency, accuracy, and ease of use. We’re proud to introduce this innovative tool and are excited about the potential it holds for the future of RFIC design. Join us in this revolution, try RFIC-GPT today, and take the first step towards more efficient, accurate, and innovative RFIC designs. The author encourages designers to try RFIC-GPT online ( www.RFIC-GPT.com ) and give feedback. Using RFIC-GPT takes only three steps:

(1) Input your design Specs and requirements;

(2) Consider the design trade-offs and choose the appropriate GDSII or active design;

(3) Click download for your application.

Author:

Jason Liu is a senior researcher on design automation solutions for RFIC. He holds a Ph.D. in Electrical Engineering and has been in the EDA industry for more than 15 years.

Also Read:

CEO Interview: Vincent Bligny of Aniah

Outlook 2024 with Anna Fontanelli Founder & CEO MZ Technologies

2024 Outlook with Cristian Amitroaie, Founder and CEO of AMIQ EDA


2024 Signal & Power Integrity SIG Event Summary

2024 Signal & Power Integrity SIG Event Summary
by Daniel Nenni on 02-27-2024 at 10:00 am

SIG Event Synopsys

It was a dark and stormy night here in Silicon Valley, but we still had a full room of semiconductor professionals. I emceed the event. In addition to demos and customer and partner presentations, we held a Q&A, which was really great. One thing I have to say is that Intel really showed up for both DesignCon and the Chiplet Summit. Quite a few Intel employees introduced themselves, and a couple even took pictures with me. Great networking.

The SIPI SIG 2024 event was hosted at the Santa Clara Hilton on Jan 31st, on the margins of DesignCon, and was over-subscribed with 100 attendees (despite the inclement weather). There were 20+ customers and partners represented, including the likes of Apple, Samsung, AMD, TI, Micron, Qualcomm, Google, Meta, Amazon, Tesla, Cisco, Broadcom, Intel, Sony, Socionext, Realtek, Microchip, Winbond, Lattice Semi, MathWorks, Ansys, Keysight, and more:

Synopsys Demos & Cocktail Hour
Interposer Extraction from 3DIC Compiler & SIPI Analysis
TDECQ Measurement for High Speed PAM4 Data Links

Customer Presentations and Q&A:
Optimization of STATEYE Simulation Parameters for LPDDR5 Application
Youngsoo Lee, Senior Manager of AECG Package Development Team, AMD

IBIS and Touchstone: Assuring Quality and Preparing for the Future
Michael Mirmak, Signal Integrity Technical Lead, Intel

Signal and Power Integrity Simulation Approach for HBM3 Hisham Abed, Sr. Staff A&MS Circuit Design Engineer, Solutions Group, Synopsys

Signal Integrity at the Cutting Edge: Advanced Modeling and Verification for High-Speed Interconnects Barry Katz, Director of Engineering, RF & AMS Products, MathWorks.

They were all great presentations, and the panelists had more than 100 years of combined experience, but I must say that Michael Mirmak from Intel was really great. Here is a quick summary that Michael helped me with. Michael started his presentation with the standard corporate disclaimer:

“I must emphasize that my statements and appearance at the event was not intended and should not be construed as an endorsement by my employer, or by any organization of particular products or services.”

IBIS and Touchstone: Assuring Quality and Preparing for the Future
  • IBIS and Touchstone are the most common model formats for SI and PI applications today
  • Assessing model quality remains a constant concern for both model users and producers
  • The simulation output log file is often neglected but can provide very useful insights, as it includes model quality reporting and issue detection outside of outputs such as eye diagrams, before actual channel simulation begins
  • Even for high-speed IBIS AMI (Algorithmic Model Interface) simulations, problems can arise from simple analog IBIS data mismatches between impedance and transition characteristics; the simulation log can alert the user and model-maker to these early, before larger and potentially expensive batch runs
  • The simulation output log can also help find issues with the algorithmic portion of IBIS AMI models that may distort output in subtle ways that cannot (yet) be checked with the standard parsing tool
  • IBIS 7.0 and later supports standard modeling of modern, complex component package designs that tend to be represented using proprietary SPICE variants today; S-parameters under Touchstone are now included as well
  • S-parameters using the Touchstone format are frequently used for interconnect modeling, but can become unwieldy when used to describe high-speed links at the system level over manufacturing or environmental variations
  • Touchstone 3.0 is coming and is planned to include a pole-residue format that enables compression of S-parameter data
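As a hedged illustration of why Touchstone data is easy to produce yet unwieldy at scale, here is a minimal Python sketch of a reader for a 2-port Touchstone v1 (.s2p) file in RI (real/imaginary) format. The function name `parse_s2p_ri` is made up, and real files also use MA/DB formats and other conventions that this toy parser deliberately ignores:

```python
def parse_s2p_ri(text):
    """Toy parser for a 2-port Touchstone v1 file in RI format.
    Returns a list of (frequency_hz, [S11, S21, S12, S22]) tuples."""
    freq_scale = {"HZ": 1.0, "KHZ": 1e3, "MHZ": 1e6, "GHZ": 1e9}
    scale = 1e9  # Touchstone's default frequency unit is GHz
    points = []
    for line in text.splitlines():
        line = line.split("!")[0].strip()      # strip "!" comments
        if not line:
            continue
        if line.startswith("#"):               # option line, e.g. "# GHz S RI R 50"
            tokens = line[1:].upper().split()
            scale = freq_scale.get(tokens[0], 1e9)
            continue
        v = [float(x) for x in line.split()]
        f = v[0] * scale
        # Touchstone v1 column order for a 2-port: S11, S21, S12, S22
        s = [complex(v[i], v[i + 1]) for i in range(1, 9, 2)]
        points.append((f, s))
    return points

sample = """! toy data
# GHz S RI R 50
1.0  0.1 -0.2  0.9 0.0  0.01 0.0  0.15 -0.1
"""
pts = parse_s2p_ri(sample)
```

One file like this per corner is manageable; one per manufacturing and environmental variation across a system-level link is where the format becomes unwieldy, which is the motivation for the compressed pole-residue representation planned for Touchstone 3.0.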

Congratulations to Synopsys and the semiconductor ecosystem; it was absolutely a great event.

Also Read:

Synopsys Geared for Next Era’s Opportunity and Growth

Automated Constraints Promotion Methodology for IP to Complex SoC Designs

UCIe InterOp Testchip Unleashes Growth of Open Chiplet Ecosystem


BDD-Based Formal for Floating Point. Innovation in Verification

BDD-Based Formal for Floating Point. Innovation in Verification
by Bernard Murphy on 02-27-2024 at 6:00 am

Innovation New

A different approach to formally verifying very challenging datapath functions. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome. We’re planning to add a wrinkle to our verification exploration this year. Details to follow!

The Innovation

This month’s pick is Polynomial Formal Verification of Floating-Point Adders. This article was published in the 2023 DATE Conference. The authors are from the University of Bremen, Germany.

Datapath element implementations must be proved absolutely correct (remember the infamous Pentium floating-point bug), which demands formal proofs. Yet BDD state graphs for floating-point elements rapidly explode, while SAT proofs are often bounded and hence not truly complete.

The popular workaround today is to use equivalence checking against a C/C++ reference model, which works very well but of course depends on a trusted reference. However, some brave souls are still trying to find a path with BDDs. These authors suggest using case-splitting to limit state graph explosion, dropping from exponential to polynomially bounded complexity. Let’s see what our reviewers think!

Paul’s view

A compact, easy-to-read paper to kick off 2024, on a classic problem in computer science: managing BDD size explosion in formal verification.

The key contribution of the paper is a new method for “case splitting” in formal verification of floating-point adders. Traditionally, case splitting means picking a Boolean variable that causes a BDD to blow up in size and running two separate formal proofs: one for the “case” where that variable is true and one for the case where it is false. If both proofs pass, then the overall proof for the full BDD including that variable must necessarily also pass. Of course, case splitting on n variables means 2^n cases, so if you use it everywhere you just trade one exponential blow-up for another.
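The classic Boolean case split can be sketched in a few lines of Python, with brute-force enumeration standing in for BDDs purely for illustration; the functions `f` and `g` and the helper `equal_under` are made-up examples, not from the paper:

```python
from itertools import product

def equal_under(f, g, fixed, n):
    """Check f == g over all assignments to the n inputs, with some
    variables pinned to constants by the `fixed` {index: value} dict."""
    free = [i for i in range(n) if i not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        env = dict(fixed)
        env.update(zip(free, bits))
        x = [env[i] for i in range(n)]
        if f(x) != g(x):
            return False
    return True

n = 4
f = lambda x: (x[0] & x[1]) | (x[2] ^ x[3])
g = lambda x: (x[2] ^ x[3]) | (x[1] & x[0])   # same function, rewritten

# Case split on variable 0: two smaller proofs instead of one big one.
# If both cofactor checks pass, the full equivalence must also hold.
ok = all(equal_under(f, g, {0: v}, n) for v in (0, 1))
```

Each sub-proof here ranges over half the input space; with BDDs, the payoff is that each cofactor's graph can be far smaller than the combined one, at the cost of 2^n cases if you split on n variables.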

This paper observes that case splitting need not be based only on individual Boolean variables. Any exhaustive sub-division of the problem is valid. For example, prior to normalizing the base-exponent, a case split on the number of leading zeros in the base can be performed, i.e. zero leading zeros in the base, one leading zero in the base, and so on. This particular choice of split, combined with one other cunning split in the alignment shift step, achieves a magical compromise such that the overall proof for a floating-point add goes from exponential to polynomial complexity. A double-precision floating-point add circuit can now be formally proved correct in 10 seconds. Nice!
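Here is a small, hypothetical Python sketch of the idea of splitting on something other than a single variable: a toy 8-bit normalizer is verified against a golden model with one sub-proof per leading-zero count. Since the cases partition the input space exhaustively, the sub-proofs together imply the full proof (brute force again stands in for BDDs, and the circuit is invented for illustration):

```python
W = 8  # toy mantissa width

def leading_zeros(m):
    for i in range(W):
        if m & (1 << (W - 1 - i)):
            return i
    return W

def normalize(m):
    """Device under test: shift left until the MSB is set."""
    shift = leading_zeros(m)
    return (m << shift) & ((1 << W) - 1), shift

def reference(m):
    """Golden model: shift one bit at a time."""
    if m == 0:
        return 0, W
    s = 0
    while not (m & (1 << (W - 1))):
        m = (m << 1) & ((1 << W) - 1)
        s += 1
    return m, s

# One sub-proof per leading-zero count; the split is exhaustive.
cases = {lz: [m for m in range(1 << W) if leading_zeros(m) == lz]
         for lz in range(W + 1)}
assert sum(len(v) for v in cases.values()) == 1 << W
for lz, members in cases.items():
    assert all(normalize(m) == reference(m) for m in members)
```

The point of such a split is that within each case the shift amount is a constant, which is exactly the kind of constraint that keeps a BDD from exploding.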

Raúl’s view

This short paper introduces a novel approach to managing the size explosion problem in formal verification of floating-point adders using BDDs, a classic issue in equivalence checking. Traditionally, this is addressed by case splitting, i.e., dividing the problem based on the values (0, 1) of individual Boolean variables, which also leads to exponential growth in complexity with the number of variables split. Based on observations of where the explosion in size happens when constructing the BDDs, the paper proposes three innovative case splitting methods. These are not based on individual Boolean variables and are specific to floating-point adders (of course, they do not simplify general equivalence checking to P).

  1. Alignment Shift Case Splitting: The paper suggests splitting with regard to the shift amount or exponent difference, significantly reducing the number of cases needed for verification.
  2. Leading Zero Case Splitting: To address the explosion at the normalization shift, the paper proposes creating cases based on the number of leading zeros in the addition result.
  3. Subnormal Numbers and Rounding: Subnormal numbers are handled by adding a simplification in cases where they can occur; rounding does not trigger an explosion in BDD size.

By strategically choosing these case splits, the overall proof complexity for floating-point addition can be reduced from exponential to polynomial. As a result, formal verification of double- and quadruple-precision floating-point add circuits, which times out at two hours in classic symbolic simulation, can now be completed in 10 to 300 seconds!


New Emulation, Enterprise Prototyping and FPGA-based Prototyping Launched

New Emulation, Enterprise Prototyping and FPGA-based Prototyping Launched
by Daniel Payne on 02-26-2024 at 10:00 am

Veloce Strato CS min

General-purpose CPUs have run most EDA tools quite well for many years now, but if you really want to accelerate something like simulation, you start to look at using specialized hardware accelerators. Emulators came onto the scene around 1986, and their processing power has greatly increased over the years, mostly in response to the demands of leading-edge companies designing CPUs, GPUs and, more recently, AI processors and hyperscaler chips, who need to accelerate simulation to ensure that designs are bug-free and will actually boot up and run software properly before tape-out.

All modern CPU, GPU, hyperscaler, and AI processor teams use emulation to accelerate the design and debug of their SoCs, with transistor counts ranging from 25 billion to 167 billion, often using chiplets because the massive number of transistors no longer fits within the maximum reticle size. These systems are challenging to verify, and a general-purpose CPU is no longer fast enough to run the simulations, so emulation must be used. Design teams on AI and hyperscale projects run software loads that demand quick analysis so that trade-offs can be made between power and performance.

Emulation is used early in the design flow, when many design changes are still happening, so flexible debug and fast compile are critical for quick turn-around. Once the RTL has become stable enough that less debugging is required, a faster approach, enterprise prototyping, can be adopted so that early firmware and software development can begin. The third stage of accelerated simulation is traditional FPGA-based prototyping, where software developers are the main users and performance and flexibility are the prime needs.

With these three hardware-assisted acceleration techniques you could opt for three hardware systems from multiple vendors; however, I just learned about a new announcement from Siemens: they have launched a next-generation family of products, called Veloce CS, that covers all three use cases.


For emulation, the Veloce Strato CS uses a domain-specific chip called CrystalX, which enables fast, predictable compile during design bring-up and speeds iterations. Designers are more productive thanks to native debug capabilities, and the platform scales to fit the biggest designs. On the prototyping side, the FPGA-based Veloce Primo CS uses the latest AMD chip, the VP1902 Adaptive SoC, which has 2X higher logic density and 8X faster debug performance.


Previous generations of emulators often had unique hardware form factors, but with the new Veloce CS Siemens adopted a blade architecture, which fits into modern data centers more easily.

The previous generation of emulators from Siemens, the Veloce Strato+, was introduced in 2021; with the new Veloce Strato CS you enjoy 4X gate capacity, a 5X performance gain, and a 5X debug throughput boost. Scalability now goes up to 40+B gates, and the modular blade approach spans from 1 to 256 blades.

Veloce Strato CS configurations

For enterprise prototyping, Siemens has offered the Veloce Primo since 2021; with the new Veloce Primo CS your team will benefit from 4X gate capacity, 5X performance, and a whopping 50X debug throughput. Once again, blades are used with Veloce Primo CS, covering a range from 500M gates all the way up to 40+B gates.

The following diagram shows the common compiler, debug and runtime software shared between the emulator and enterprise prototyping systems, with the major difference being that the emulator uses the custom CrystalX chip and the enterprise prototype employs the AMD VP1902 chips.

Emulator and Enterprise Prototype systems

By using a blade architecture these systems require only air cooling, so no expensive water cooling is needed.

The third new product introduced is Veloce proFPGA CS, which gives you 2X gate capacity, 2X performance, and a stunning 50X debug throughput advantage over the previous-generation proFPGA system. Scaling starts with a single FPGA clocked at 100MHz and grows up to 4B gates. The Uno and Quad configurations are well suited to desktop prototyping, while each blade system has 6 FPGAs.

Prototyping used to be limited by slow design bring-up, but now with Veloce proFPGA CS engineers will experience efficient compile without manual RTL edits, enjoy automated multi-FPGA partitioning, benefit from timing-driven performance optimization, and become more efficient with sophisticated at-speed debug, thanks to the VPS software.

Summary

Siemens has designed, built, and announced three new hardware-accelerated systems with some immediate benefits, like:

  • Lower power to cool
  • ~10 kW per billion gates
  • Fits into data centers using blades and air cooling (cold aisle – hot aisle air flow)
  • Multi-user support, enabling 24×7 use
  • Emulation, Enterprise Prototyping, FPGA-based prototyping

Early users of Veloce CS include tier-one names like AMD and Arm. The new Veloce family has impressive credentials, spans all three types of hardware platforms, and is certainly worth a closer look. Your team can choose just the right size of each platform to meet your project’s capacity needs.

Related Blogs