CEO Interview: GP Singh from Ambient Scientific
by Daniel Nenni on 12-06-2024 at 6:00 am

Gajendra Prasad Singh, also known as GP Singh, is a seasoned tech professional with over 26 years of experience in advanced semiconductor chips. With a zeal for solving the most complex technical problems, he embarked on a difficult journey to create programmable AI microprocessors that deliver high performance cost-effectively while still consuming low power. To realize this vision, he co-founded Ambient Scientific along with a team of visionaries from California’s Silicon Valley. GP’s extensive technical experience and successful leadership record building cutting-edge chips at prestigious global companies gave him a deep understanding of both the scientific first principles required for such grassroots breakthrough innovations and the business acumen to ensure their practical feasibility. With an innate passion for everything electronics and computers, GP Singh is a fierce advocate of using semiconductors for the betterment of human lives.

Tell us about your company?

Ambient Scientific is a fabless semiconductor company born in Silicon Valley, pioneering ultra-low power AI processors that are fully programmable to enable endless AI applications.

Our breakthrough Analog In-Memory Compute technology, called DigAn®, is making AI computing more powerful and efficient than ever before, without compromising on flexibility and programmability. Compared to traditional AI hardware, our processors deliver thousands of times more AI performance at the same power consumption, or thousands of times less power consumption for the same AI performance.

Our first product GPX10 leverages the DigAn® architecture to bring battery-powered, cloud-free, on-device AI applications to life, something considered nearly impossible before. From always-on voice detection to FaceID to predictive maintenance, GPX10 is enabling endless applications in various industries, all while running on as little as a coin cell battery with no dependence on the cloud or an internet connection.

With a full-stack SDK designed to support industry-standard AI frameworks (TensorFlow, Keras, etc.) and an AI compiler to enable custom neural networks, we enable rapid time to market for your AI applications. Order our DVK today and bring the power of AI away from the cloud, right to your fingertips.

What problems are you solving?

While the AI application and software landscape has exploded in complexity, hardware has failed to keep up. The chips currently used for AI processing (GPUs) were designed with graphics processing, not AI computing, in mind, making them inefficient and extremely expensive. This is clearly visible in the rising compute costs and power consumption for all AI, from gigantic LLMs to edge AI for smaller electronic devices. We at Ambient Scientific have solved these problems by inventing not just analog in-memory computing but also a new instruction set architecture designed specifically for AI computing. Our analog matrix multiplication engines deliver 40X the AI performance at 70X lower power consumption compared to equivalent GPUs. Built with scalability and flexibility in mind, our architecture enables AI processors all the way from the cloud and server level to the MCU level for a wide variety of applications across several industries. Ambient Scientific’s mission is to make AI computing powerful, energy efficient and affordable for everyone alike.

What application areas are your strongest?

Our first product, GPX10, is an AI processor targeted at on-device AI applications for the tiniest of battery-powered devices. It moves AI processing from the confines of the cloud directly onto the device, even if the device is running on as little as a coin cell battery. This improves application reliability, latency, data security and total cost of ownership. Some of our strongest application areas popular with customers are industrial predictive maintenance at the edge, anomaly detection on MedTech devices and cloud-free voice control for consumer products. While such applications would ordinarily struggle with latency, reliability or minuscule battery life due to AI processing, our processor solves all of these problems without compromising on capability or affordability.

What keeps your customers up at night?

With the widespread utility of AI, product makers have realized the importance of incorporating AI features into their product roadmaps to remain competitive and maintain differentiation. These product makers are now faced with a difficult choice:

  1. Run AI processing in the cloud and sacrifice latency, data privacy and reliability due to complete dependence on a network connection.
  2. Run AI on device and sacrifice accuracy and power efficiency which translates into significantly compromised battery life.

These limitations, which ultimately translate into higher costs or compromised product quality, are a direct consequence of the processors currently available in the market, none of which was designed for AI processing. They force debilitating sacrifices on product makers, keeping them up at night, stuck between a rock and a hard place.

What does the competitive landscape look like and how do you differentiate?

The AI compute market for small electronic devices includes MCUs, entry-level GPUs and new-age NPUs. While MCUs cannot deliver the performance required for meaningful AI compute, entry-level GPUs consume too much power, occupy too much area and are not affordable enough to fit within the boundaries of commercial viability for battery-powered on-device AI applications. Several new-age NPUs claim to deliver low-power AI solutions, but with a heavy price to pay in lack of programmability. They tend to be fixed-function, with pre-defined neural networks and minimal room for customization. Our ultra-low-power AI chips not only deliver the highest performance per unit of power consumption (>7 TOPS/W), they are smaller than a fingernail, affordable and, most importantly, completely programmable. Product makers care about programmability so they can differentiate their products from competitors’ by owning the software, such as their proprietary AI algorithms. Programmability also makes their products future-proof, with the ability to push updates over the air as the software and application landscape evolves. Compared to fixed-function or application-specific NPUs, our processors offer a versatile and flexible platform for product makers to differentiate themselves with ultra-low-power AI features well into the future.
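To put that figure in perspective, >7 TOPS/W works out as follows (illustrative arithmetic based only on the number quoted above):

\[
7\ \text{TOPS/W} = 7\times 10^{12}\ \text{ops/J}
\quad\Rightarrow\quad
1\ \text{mW}\times 7\times 10^{12}\ \text{ops/J} = 7\times 10^{9}\ \text{ops/s}
\]

In other words, even a 1 mW power budget leaves roughly 7 GOPS of compute, which is the kind of headroom that makes always-on inference on a coin cell plausible.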

What new features/technology are you working on?

Our claim to fame is a breakthrough in analog in-memory computing technology that enables us to leverage a combination of high-speed digital and analog circuits designed specifically for AI computing. By leveraging a cubic in-memory architecture and the analog matrix multiplication circuit, we’ve solved all the bottlenecks for AI computing while minimizing energy consumption to a fraction of contemporary architectures. Not only this, we’ve also created a custom instruction set architecture from the ground up to enable flexibility and scalability in AI computing. This means we can build a wide range of processors, from AI MCUs to high-speed computer vision processors. Similarly, our end-to-end software stack scales with our processors to adapt to the application needs of software developers for a wide variety of applications in several industries.

Also Read:

CEO Interview: Ollie Jones of Sondrel

CEO Interview: Dr. Yunji Corcoran of SMC Diode Solutions

CEO Interview: Rajesh Vashist of SiTime

CEO Interview: Dr. Greg Newbloom of Membrion


SystemReady Certified: Ensuring Effortless Out-of-the-Box Arm Processor Deployments
by Lauro Rizzatti on 12-05-2024 at 10:00 am

When contemplating the Lego-like hardware and software structure of a leading system-on-chip (SoC) design, a mathematically inclined mind might marvel at the tantalizing array of combinatorial possibilities among its hardware and software components. In contrast, the engineering team tasked with its validation may have a more grounded perspective. Figuratively speaking, the team might be more concerned with calculating how much midnight oil will need to be burned to validate such a complex system.

The numerous interactions between hardware components such as large arrays of various processor types, memory types, interconnect networks, and a wide assortment of standard and custom peripherals and logic, with those of the software, like bare-metal software, drivers, and OS hardware-dependent layers, demand exhaustive functional verification. This process is computationally intensive, requiring billions of cycles to establish confidence in a bug-free design before manufacturing. The challenge is magnified by the relentless pace of technology, with new hardware and software versions constantly emerging while support for older iterations persists.

The Economies of Design Debug

A well-known axiom in the field of electronic design emphasizes that the cost of fixing a design bug soars by an order of magnitude at each successive stage of the verification process. What might cost a mere dollar to correct at the basic block-level verification stage can skyrocket to a million dollars when the issue surfaces at the full SoC level, where hardware and software tightly interact.
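Stated as the rule of thumb the axiom implies (a heuristic, not a precise law), if a bug costs c₀ to fix at block level and survives n successive verification stages, its repair cost grows roughly as

\[
\text{cost}(n) \approx c_0 \times 10^{n}
\]

With \(c_0 = \$1\) and six stages between block-level verification and full-SoC hardware/software integration, that is \(\$1 \times 10^{6}\), matching the dollar-to-a-million-dollars span above.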

The stakes become even higher if a design flaw goes undetected until after silicon fabrication. Post-silicon bug detection not only challenges engineering teams but can also lead to exorbitant costs that may drain a company’s financial resources. For small enterprises, such a scenario could be catastrophic, potentially leading to bankruptcy due to the redesign expenses and missed revenues caused by delayed product launches.

In the fiercely competitive semiconductor industry, the margin for error is razor thin. Therefore, rigorous verification at each stage of the design process is not just a best practice—it’s a critical safeguard against the potentially ruinous consequences of post-silicon bug detection.

On the bright side, the electronic design automation (EDA) industry has been investing heavily in resources and innovation to tackle the challenge of pre-silicon verification. The Shift-Left verification methodology is a testament to the industry’s commitment to addressing this challenge.

Arm: Linchpin Example of the Hardware/Software Integration Challenges

Among processor companies, Arm is a case study because of its vast catalog of IP solutions. Arm offers a wide range of IPs, platforms and solutions, including CPUs, GPUs, memory controllers, interconnects, security, automotive, AI, IoT, and other technologies, each designed to meet the needs of different markets and applications. While the exact number is not publicly known, when updates and new releases are counted it amounts to thousands of different parts.

SoC designers using Arm components face an uphill verification challenge. Once they have selected the IP components, they must integrate them into complex SoC designs, add a software stack to bring the design to life, and ensure compliance, that is, compatibility or interoperability of the software with the hardware.

This process is fraught with uncertainties and risks.

Often, root causes of integration issues can be traced to non-compliant hardware, such as a non-standard PCIe enhanced configuration access mechanism (ECAM), PCIe ghost devices, or customized components like universal asynchronous receiver-transmitters (UARTs) or generic interrupt controllers (GICs). These issues can lead to design malfunctions and potentially to serious failures. For instance, systems with complex PCIe hierarchies may lack firmware workarounds, custom OS distributions may receive limited security updates, and Windows servers and clients may be incompatible with non-compliant PCIe ECAM.

To address these issues, a widely used but increasingly outdated method in the electronics industry is post-silicon testing. While it serves the purpose of debugging hardware flaws after fabrication, it is inherently inefficient. This approach contradicts the well-established principle of exponential cost increase, summarized by the phrase “the sooner, the cheaper.” By delaying the detection of design flaws until after silicon manufacturing, companies incur costly silicon re-spins and face extended timelines.

Fortunately, these issues can be mitigated much earlier in the development cycle through pre-silicon design verification. Pre-silicon verification, which includes simulation, emulation, formal and timing verification, allows engineers to identify and resolve problems before chips are fabricated, significantly reducing both costs and risks.

Arm’s Game-Changing Solution: From ServerReady to SystemReady

To mitigate this challenge, specifically to eliminate or at least reduce design re-spins and accelerate time-to-market, Arm introduced the SystemReady Certification Program in 2022. Building on the success of the ServerReady program, which was launched in 2018 and targeted server applications, SystemReady expands the coverage to include designs like edge devices, IoT applications, automotive systems, and more.

In general, hardware platforms provided by semiconductor partners come with their own software stacks, i.e., firmware, drivers and operating systems. These are often siloed, creating challenges for OS vendors and independent software vendors (ISVs) who need to run applications across different platforms, as these setups tend to be highly specific and fragmented. SystemReady aims to break down these silos, enabling software portability and interoperability across all Arm-based A-Class devices. When third-party operating systems are run on devices complying with a minimum set of hardware and firmware requirements based on Arm specifications, they boot seamlessly, and applications run smoothly.

SystemReady Program Foundation

The foundation of Arm’s SystemReady program lies in two key specifications. First, the Base System Architecture (BSA), a formal set of compute platform definitions encompassing a variety of systems from the cloud to the IoT edge, ensures that in-house developed or third-party software works seamlessly across a universe of Arm-based hardware. Second, a set of accompanying firmware specifications, called the Base Boot Requirements (BBR), complements the BSA definitions. These sets of rules are encapsulated in the BSA Compliance Suite, accessible on GitHub.

The suite is designed to run compliance tests during pre-silicon validation, eliminating the need for executing full operating systems to validate the environment. This early-stage validation prevents costly silicon respins, expedites system-level debugging, and accelerates time-to-market.

Arm’s Thriving SystemReady Partner Ecosystem

To reach a vast and diverse customer base while considerably enhancing the value of the Arm ecosystem, Arm has strategically partnered with a wide array of companies, including leaders in EDA, IP, and silicon providers. These collaborations play a critical role in driving the success of Arm’s SystemReady program, a certification initiative that ensures seamless compatibility across hardware platforms and software stacks.

Leading EDA Firms Accelerate SystemReady Certification Success

The pre-silicon validation of software stacks on newly designed hardware platforms demands hardware-assisted verification platforms, such as emulation and FPGA prototyping. These platforms are crucial for ensuring that new designs function correctly across the range of real-world conditions they will face. Best-in-class emulators and FPGA prototypes support comprehensive verification and validation processes, including hardware debugging, hardware-software co-verification, power and performance analysis, and even post-silicon testing for final checks.

Prominent suppliers of hardware-assisted verification platforms have joined Arm’s SystemReady program to enable their customers developing Arm SoCs and components to validate BSA compliance on HAV platforms using Transactors and Verification IPs. By participating in this program, EDA companies enable developers to validate software before silicon is even taped out, significantly reducing risks and development costs while accelerating time-to-market. The “PCIe SystemReady Certification Case Study” is an example of how a collaborative approach to pre-silicon validation can lead to successful certification and market-ready products.

Case Study: PCIe SystemReady Certification

The PCIe protocol is one of the most widely adopted and popular interfaces in the electronics industry, supporting a broad spectrum of applications, including networking, storage, GPU accelerators, and network accelerators. Each of these applications has distinct workload profiles that interact uniquely with system components, making PCIe a versatile yet complex protocol to integrate into hardware platforms.

Arm’s SystemReady certification program, covering Arm architecture implementations including their complex PCIe subsystems, is designed to ensure that these diverse applications can run seamlessly across various hardware environments. Achieving this certification requires adherence to a stringent set of compliance rules. These rules involve injecting specific sequences into the PCIe port and monitoring responses at the PCIe protocol layer, ensuring that the system can handle different types of workloads in real-world scenarios.

Synopsys and PCIe SystemReady Compliance

To streamline this process, Synopsys provides a PCI endpoint model specifically designed to meet Arm’s BSA certification standards. As shown in Figure 1, the SystemReady compliance program is a collaborative effort between Arm, Synopsys, and silicon providers. While the silicon partner focuses on developing the boot code, Synopsys contributes the Platform Abstraction Layer (PAL), a crucial software component that ensures smooth execution of Arm’s Compliance Suite tests on the SoC.

Figure 1: Block diagram showing how Arm, Synopsys, and silicon providers work together

The PAL acts as an intermediary, enabling the Compliance Suite to communicate effectively with Synopsys’ transactors and Verification IPs (VIP), thus maximizing test coverage and capturing corner cases that might otherwise be overlooked. This integration ensures thorough testing of PCIe subsystems, giving developers confidence that their designs meet the highest standards of compatibility and performance.
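To make the PAL’s role concrete, here is a minimal sketch of the kind of boundary such a layer exposes. Every name and signature below is an illustrative assumption, not Synopsys’ or Arm’s actual API; the point is only that compliance tests call portable operations, while the platform decides whether those operations hit transactors, VIP, or real silicon.

```cpp
// Hypothetical sketch of a Platform Abstraction Layer (PAL) boundary.
// All names here are illustrative; this is not the actual BSA/PAL API.
#include <cstdint>

// Operations the compliance tests need from the platform.
struct PalOps {
    // PCIe configuration-space access for a bus/device/function (BDF).
    uint32_t (*pcie_cfg_read)(uint32_t bdf, uint32_t offset);
    void     (*pcie_cfg_write)(uint32_t bdf, uint32_t offset, uint32_t value);
    // Memory-mapped I/O routed to the design under test.
    uint64_t (*mmio_read)(uint64_t addr);
    void     (*mmio_write)(uint64_t addr, uint64_t value);
};

// A test written only against PalOps runs unchanged pre-silicon
// (callbacks bound to emulator transactors/VIP) and post-silicon
// (callbacks bound to real MMIO). Example: enumerate PCIe functions.
uint32_t count_pcie_functions(const PalOps& pal, uint32_t max_bdf) {
    uint32_t found = 0;
    for (uint32_t bdf = 0; bdf < max_bdf; ++bdf) {
        // A vendor ID of 0xFFFF conventionally means "no device here".
        if ((pal.pcie_cfg_read(bdf, /*offset=*/0x0) & 0xFFFF) != 0xFFFF) {
            ++found;
        }
    }
    return found;
}
```

The design choice worth noting is the indirection: because the test logic never touches the platform directly, the same compliance binary can drive an emulator today and production silicon later.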

Performance Verification and PCIe Protocol Evolution

In addition to compliance testing, performance verification is a critical aspect of pre-silicon design validation for PCIe interfaces. Upgrading a system to a newer PCIe protocol generation, such as moving from PCIe Gen 5 to PCIe Gen 6, involves significant investment, so it is vital to verify that the system is fully equipped to handle the additional bandwidth and performance the newer protocol offers. Performance validation helps determine whether an SoC under development can manage various workloads and uncovers any potential bottlenecks that might prevent the system from realizing the full benefits of the upgrade.
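The scale of that bandwidth step is easy to quantify from the published per-lane signaling rates (raw rates; real payload throughput is lower once encoding and FLIT overhead are accounted for):

\[
\text{BW}_{\text{Gen5},\,x16} \approx \frac{32\ \text{GT/s}\times 16\ \text{lanes}}{8\ \text{bits/byte}} = 64\ \text{GB/s},
\qquad
\text{BW}_{\text{Gen6},\,x16} \approx \frac{64\ \text{GT/s}\times 16}{8} = 128\ \text{GB/s}
\]

per direction. Doubling the raw link rate only pays off if the SoC’s interconnect, buffers, and memory system can actually absorb it, which is precisely what pre-silicon performance validation is meant to confirm.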

Synopsys’ support for integrating the Compliance Suite adds an additional layer of performance validation, allowing users to run comprehensive performance scenarios, particularly focused on the PCI subsystem. This ensures that the PCIe subsystem not only complies with Arm architectural requirements but also achieves optimal performance across a range of SoC applications.

Conclusion

By ensuring that software stacks are portable and interoperable across a diverse range of platforms—from cloud servers to edge devices and IoT applications—Arm’s SystemReady program plays a pivotal role in minimizing design risks. This standardization significantly reduces design costs and accelerates time-to-market, enabling companies to deliver products that function seamlessly out-of-the-box.

SystemReady not only enhances design efficiency but also opens new avenues for Total Addressable Market (TAM) expansion. By ensuring compatibility and reducing development complexity, the program allows Arm’s partners to target a broader range of industries and applications, providing them with a distinct competitive advantage.

These efforts underscore Arm’s commitment to empowering its ecosystem and driving innovation across the industry.

Also Read:

The Immensity of Software Development and the Challenges of Debugging (Part 1 of 4)

The Immensity of Software Development and the Challenges of Debugging Series (Part 2 of 4)

The Immensity of Software Development and the Challenges of Debugging (Part 3 of 4)

The Immensity of Software Development and the Challenges of Debugging Series (Part 4 of 4)


PDF Solutions Hosts Executive Conference December 12 on AI’s Power to Transform Semiconductor Design and Manufacturing
by Daniel Nenni on 12-05-2024 at 6:00 am

PDF Solutions, Inc. will host an AI Executive Conference Thursday, December 12, in San Francisco featuring keynotes, presentations, panels and demonstrations offering insights into the power of AI to transform semiconductor design and manufacturing. The conference immediately follows the 70th Annual IEEE International Electron Devices Meeting (IEDM).

Talks from PDF Solutions executives, other industry thought leaders, solutions experts, partners and users will cover the state of the art and best practices for designing, deploying, scaling and managing AI/ML solutions across the global semiconductor industry.

Three keynote presentations will look at how AI is currently being deployed in semiconductor manufacturing. Aziz Safa, Vice President and General Manager at Intel, will describe “How Analytics and AI are helping to transform a leading semiconductor company.” Smitha Mathews from ADI will discuss how semiconductor companies can “Get ready for AI” and the lessons learned from a real-life deployment. John Kibarian, PDF Solutions’ CEO, will explain how AI is the next evolution of the PDF Solutions portfolio.

Five panels will appraise use cases for GenAI, AI for 3D device test, trust, AI-enabled digital transformation and digital twins, including:

  • “GenAI for semiconductor: use cases, solutions and demonstrations” with panelists from PDF Solutions, SAP, Voltai and Yurts.
  • “AI for test in a world of hybrid 3D devices” includes spokespersons from Advantest, Siemens, Teradyne, a leading foundry, and an Outsourced Semiconductor Assembly and Test (OSAT) provider.
  • Panelists from PDF Solutions, Yurts and an enterprise applications group at an independent software vendor discuss “Revisiting the notion of trust in an AI solutions world.”
  • “How can semiconductor companies accelerate their digital transformation with AI” has spokespersons from ADI, PDF Solutions, a foundry and IDMs.
  • A final panel, “AI enabled digital twin for semiconductor manufacturing equipment,” has panelists from PDF Solutions and equipment OEMs.

Additional speakers are:

Mike Campbell, Vice President of Engineering at Qualcomm; Shyam Gooty, Microsoft’s Senior Director Product Engineering; Jean Philippe Fricker, Founder and Chief System Architect at Cerebras; Anton Devilliers, TEL’s Vice President of R&D; and Siemens’ Jayant D’Souza, Principal Technical Product Manager, and Marc Hunter, Director Product Management.

Also speaking are Ken Butler, Senior Director of Applications Marketing with Advantest; Eli Roth, Product Manager at Teradyne; SAP’s Sunil Gandhi, Senior Director, Industry Executive, High Tech; and Yurts’ Jason Schnitzer, CTO, and Steve Mahoney, Vice President of Product Management. Handel Jones, Founder and CEO of International Business Strategies (IBS), will be the dinner keynote speaker.

As part of the program, PDF Solutions will demonstrate its ModelOps product portfolio, the AI infrastructure for the global semiconductor supply chain.

Registration

The one-day Executive Conference will take place Thursday, December 12, at the St. Regis Hotel in San Francisco starting with 8 a.m. registration. The conference begins at 9 a.m. and concludes at 5:30 p.m. A reception and dinner follow. Registration is open.

Date: December 12, 2024, following the 70th Annual IEEE International Electron Devices Meeting.

Location: St. Regis Hotel, 125 3rd St., San Francisco, CA 94103

About PDF Solutions

PDF Solutions (Nasdaq: PDFS) provides comprehensive data solutions designed to empower organizations across the semiconductor and electronics ecosystems to improve the yield and quality of their products and operational efficiency for increased profitability. The Company’s products and services are used by Fortune 500 companies across the semiconductor ecosystem to achieve smart manufacturing goals by connecting and controlling equipment, collecting data generated during manufacturing and test operations, and performing advanced analytics and machine learning to enable profitable, high-volume manufacturing.

Founded in 1991, PDF Solutions is headquartered in Santa Clara, California, with operations across North America, Europe, and Asia. The Company (directly or through one or more subsidiaries) is an active member of SEMI, INEMI, TPCA, IPC, the OPC Foundation, and DMDII. For the latest news and information about PDF Solutions or to find office locations, visit https://www.pdf.com/.

Also Read:

WEBINAR: Elevate Your Analog Layout Design to New Heights

Silicon Creations is Fueling Next Generation Chips

I will see you at the Substrate Vision Summit in Santa Clara


Accelerating Electric Vehicle Development – Through Integrated Design Flow for Power Modules
by Kalar Rajendiran on 12-04-2024 at 10:00 am

The development of electric vehicles (EVs) is key to transitioning to sustainable transportation. However, designing high-performance EVs presents significant challenges, particularly in power module design. Power modules, including inverters, bulky DC capacitors, power management ICs (PMICs), and battery packs, are critical in managing the high voltage and current systems in EVs. These modules often operate at over 1,000V and can supply hundreds of amperes, generating substantial heat, with temperatures potentially rising to 200-250°C. As power distribution systems shrink, effective thermal management becomes essential. Power modules must also meet strict safety standards, making a system-level approach to integrating ICs, packages, and PCBs crucial for avoiding safety risks and delays.

Cadence recently sponsored a webinar on the topic of an integrated design flow for power modules for electric vehicles. The webinar was hosted by Amlendu Shekhar Choubey, Director of Product Management, with Athar Kamal, a Lead Product Engineer, and Ritabrata Bhattacharya, a Senior Principal Product Engineer, both from Cadence.

Current Challenges in Power Module Design

The design process for power modules is often fragmented, with insufficient integration between electrical, mechanical, and thermal design. This leads to miscommunication, delays, and increased costs. Simulation tools are limited, especially for electromagnetic (EM) analyses, requiring specialized expertise. Many designers resort to lab testing, which can be too late to address critical issues impacting safety, performance, and reliability.

Thermal management and parasitic effects are significant challenges in power module design. High power requirements generate heat that must be managed to avoid component failure. Parasitic inductances from bondwires, copper traces, and other components can lead to overshoot during switching, causing performance degradation and electromagnetic interference (EMI). Addressing these issues early in the design cycle is crucial to avoid critical system failures later in the process.
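The switching-overshoot mechanism follows directly from the inductor equation. As an illustrative calculation (the numbers here are ours, not from the webinar):

\[
V_{\text{overshoot}} = L\,\frac{di}{dt} = 10\ \text{nH}\times\frac{100\ \text{A}}{50\ \text{ns}} = 20\ \text{V}
\]

A mere 10 nH of stray bondwire and trace inductance turns a fast 100 A transition into a 20 V spike on top of the bus voltage, which is why extracting and simulating these parasitics before layout sign-off matters so much.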

The Ideal Power Module Design Flow

An ideal power module design flow integrates electrical, mechanical, and thermal considerations from the start. Schematic-driven layouts with SPICE-enabled simulations ensure functionality is validated before layout. Quick extraction of parasitics using the 3D-Quasi-Static solver and integration back into simulations is essential to understand their impact. Auto-generating post-layout schematics aligns the design with the original schematic, reducing errors. Thermal analysis tools, such as Celsius Thermal Solver, help optimize cooling solutions early on. 3D EM tools like Clarity 3D Solver help with management of electromagnetic effects.

Cadence’s Advanced Tools for Optimization

Cadence’s advanced solutions, including Allegro X, PSpice, Clarity 3D Solver, and Celsius Thermal Solver, offer an integrated and thermally aware design flow that ensures both functional safety and reliability. Allegro X enables PCB layout, with advanced capabilities for component placement, routing, and thermal management, integrated with other Cadence tools. PSpice allows for electrical simulations and parasitic effect analysis, ensuring the design meets functional and safety requirements.

Clarity 3D Solver provides EM simulation, optimizing the power module’s electromagnetic characteristics and reducing overshoot to improve reliability. Celsius Thermal Solver predicts temperature distribution, identifies hot spots, and optimizes cooling solutions to mitigate thermal issues early. Together, these tools create an integrated design process, reducing thermal runaway risk and addressing EMI and parasitic effects before they affect the final product.

Seamless Integration of Cadence’s Tools

Cadence’s platform supports collaboration across engineering disciplines. Designers do not need to be experts in every area but must understand how their expertise fits within the larger ecosystem. They can execute the complete design flow without leaving their preferred environment to complete all the analyses needed for a reliable design. This approach improves decision-making and ensures optimized designs. Thermal and warpage analysis can be triggered from within the layout tool, streamlining workflows and reducing errors.

The Future of Power Module Design and Reliability

Looking ahead, the next challenge for EV power module design will be estimating Mean Time Between Failures (MTBF) based on data generated during the design process. Predictive analytics will be key for assessing reliability and preventing failures, ensuring the durability of EV systems.

Summary

An integrated approach to power module design is essential for addressing the complex challenges in EV development. Advanced tools that combine electrical, thermal, mechanical, and EM simulations within a unified platform help streamline the design process, reduce costs, and accelerate time-to-market. Cadence’s design flow bridges traditional gaps, enabling the creation of safer, more efficient, and reliable power modules for the next generation of EVs. With tools like Allegro X, PSpice, Clarity 3D Solver, and Celsius Thermal Solver, the EV industry can benefit from a thermally aware, end-to-end integrated design solution that enhances functional safety and reliability.

For more details, refer to the following:

Cadence whitepaper titled “Power Module Design for Electric Vehicles – Addressing Reliability and Safety.”

Cadence Automotive Solutions page.

Also Read:

Compiler Tuning for Simulator Speedup. Innovation in Verification

Cadence Paints a Broad Canvas in Automotive

Analog IC Migration using AI


A Master Class with Ansys and Synopsys, The Latest Advances in Multi-Die Design
by Mike Gianfagna on 12-04-2024 at 6:00 am

2.5D and 3D multi-die design is rapidly moving into the mainstream for many applications. HPC, GPU, mobile, and AI/ML are application areas that have seen real benefits. The concept of “mix/match” for chips and chiplets to form a complex system sounds deceptively simple. In fact, the implementation and analysis techniques required to achieve success are substantial.

For many years, Synopsys and Ansys have been creating design flows that escort design teams through early exploration, implementation, and final signoff. The two companies are deeply engaged with many customers on advanced multi-die projects and have helped bring many successful designs to market. Synopsys and Ansys recently teamed up to present a webinar on the latest technology for multi-die design. The result is nothing short of a master class. Let’s explore the latest advances in multi-die design.

The Presenters

The effectiveness of a webinar is heavily influenced by the capabilities of the presenters. A slick, polished but shallow presentation may entertain you, but you won’t learn much. A very detailed but scattered presentation may deliver a lot of information, but it’s often hard to find amid the noise. The presenters for this webinar delivered a perfect blend of professional polish and deep technical knowledge. The event runs for 45 minutes, but it will seem more like 20 minutes given how engaging both speakers are. The presenters are:

Marc Swinnen

Marc Swinnen, Product Marketing Director for semiconductor products at Ansys. Before joining Ansys, Marc was Director of Product Marketing at Cadence Design Systems and has worked in Marketing and Technical Support positions at Synopsys, Azuro, and Sequence Design, where he gained experience with a wide array of digital and analog design tools.

Keith Lanier

Keith Lanier, Product Management Director at Synopsys. Keith focuses on multi-die and 3D heterogeneous integration (3DHI) solutions involving the latest advanced packaging technology. He brings over 30 years of experience in custom design, analog/mixed-signal (AMS) and RF/mmWave products, including 8 years designing high-speed data converters and amplifiers at Analog Devices.

A link to the webinar replay is coming. I highly recommend you watch it if 2.5/3D is in your future. First, let’s look at the structure of the webinar and a few key takeaways.

The Topics Covered

Here is the agenda for the webinar:

  • Multi-Die Design Motivation, Adoption, and Challenges
  • Combined Synopsys-Ansys Solutions for Implementation and System Analysis
  • Multi-Die Design Implementation
  • System Analysis
    • Power Integrity: Electromigration / IR Drop
    • Thermal integrity: Multi-Die Design
    • Signal Integrity: High Frequency Electromagnetic Analysis
  • Golden Sign-off Analysis
  • Customer Successes
  • Summary

This is a lot to cover, but Marc and Keith do a great job covering it all in under 30 minutes. What follows is about 15 minutes of Q&A from the webinar audience. The questions are deep and insightful, and the responses are concise and on-point. You will learn a lot.

Some Takeaways

Some macro-trend motivations for multi-die design are worth repeating. Here are the ones mentioned in the webinar:

  • Accelerated scaling of system functionality at a cost-effective price (>2X reticle limits)
  • Reduced risk & time-to-market by re-using proven designs/die
  • Lower system power while increasing throughput (up to 30%)
  • Rapid creation of new product variants for flexible portfolio management

The size, projected growth and application footprint for this design style are also covered. The numbers and scope will surprise you. Some recent examples of completed multi-die designs were covered. Some of these details surprised me. Below are the results presented during the webinar.

Examples of Recent Commercial Multi-Die Designs

What followed was a deep dive into the combined Synopsys-Ansys implementation, analysis and optimization techniques used by both companies to deliver a production, unified flow. A lot of the webinar goes into the details of the complete exploration to signoff flow for multi-die designs, how to use the flow with the multi-physics models and how the tools work together. Case studies are also provided to show application on real designs. Below is a high-level overview of what was covered.

Combined Synopsys-Ansys Implementation, Analysis and Optimization Techniques

Real customer success stories are then presented. This really crystallized for me how advanced this flow is and what kind of impact is being achieved. You need to see the results for yourself, but here are the projects covered:

  • Sanechips Builds Comprehensive Ansys Thermal Signoff Flow for Multi-Die Design
  • GUC Leverages Synopsys 3DIC Compiler to Enable 2.5D/3D Multi-Die Designs

To Learn More

What I’ve covered here is a very small subset of the content of this important webinar. As I mentioned, if 2.5/3D design is in your future it’s a must-see event. You can access the webinar replay here. And that’s a master class with Ansys and Synopsys to explore the latest advances in multi-die design.


SystemC Update 2024
by Daniel Payne on 12-03-2024 at 10:00 am

SystemC version 1.0 came out in 2000 as a C++ class library for system-level modeling and simulation, and on SemiWiki.com there are some 497 references to the language. I wanted to provide an update in this blog so that engineering teams can become more efficient in using SystemC on their SoC projects, saving time and improving product quality.
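For readers who have not seen the language, here is a minimal sketch of what SystemC code looks like: a clocked counter module and a tiny testbench. It is a toy example of ours, not from any project discussed below, but it builds against the Accellera SystemC class library.

```cpp
// Minimal SystemC sketch: a clocked counter and a tiny testbench.
// Illustrative only; builds against the Accellera SystemC library.
#include <systemc>
#include <iostream>
using namespace sc_core;

SC_MODULE(Counter) {
    sc_in<bool>      clk;    // clock input port
    sc_out<unsigned> value;  // current count output port

    unsigned count = 0;

    void tick() { value.write(++count); }  // runs on each rising edge

    SC_CTOR(Counter) {
        SC_METHOD(tick);
        sensitive << clk.pos();
        dont_initialize();
    }
};

int sc_main(int, char*[]) {
    sc_clock            clk("clk", 10, SC_NS);  // 100 MHz clock
    sc_signal<unsigned> value;

    Counter counter("counter");
    counter.clk(clk);
    counter.value(value);

    sc_start(100, SC_NS);  // simulate ten clock cycles
    std::cout << "final count = " << value.read() << std::endl;
    return 0;
}
```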

SystemC Evolution Day

This one-day workshop was co-located with DVCon Europe on October 17th in Munich and had participants from the user community, EDA vendors, and Accellera Working Groups. The keynote from Alex Bennee of Linaro covered synergies between SystemC and QEMU, an open-source machine emulator that runs a guest OS and applications on a host. With co-simulation, both software and hardware components can be simulated together.

A panel discussed what SystemC 4.0 should do to widen the simulation community and what the standardization cycle looks like, with panelists from Qualcomm, Infineon, Arteris, MachineWare and Robert Bosch.

The five working groups presented updates:

  • Language – SystemC 3.0.1 update published
  • AMS – LRM for IEEE 1666.1 update started, call to participate in P1666.1
  • CCI (Configuration, Control & Inspection) – latest developments, established regression/CI flow in GitHub
  • Verification – UVM-SystemC library 1.0beta6 released July 2024
  • Synthesis – restarted in early 2024, plans and development from Fika in May

SystemC ecosystem

Engineers from Intel and MachineWare did a CCI update for an Inspection Proposal in draft form, plus a live demo.

GUI for controlling and inspecting a simulation

Chapman University presented on using SystemC TLM 2.0 for loosely-timed, contention-aware modeling, looking at trade-offs between simulation speed and timing accuracy. The final hour was an open discussion about SystemC multi-kernel support and thread safety, with presenters from MachineWare and COSEDA.
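To illustrate the trade-off that loosely-timed modeling makes, below is a bare-bones sketch of the TLM 2.0 style: the initiator accumulates a local time offset and only synchronizes with the SystemC kernel when a time quantum expires, trading timing accuracy for simulation speed. The tlm_utils classes used are standard; the module contents are our own illustrative example, and a contention-aware model like the one presented would add arbitration on top.

```cpp
// Loosely-timed TLM 2.0 sketch: blocking transport plus a quantum keeper.
// Illustrative only; tlm_utils classes are standard, the rest is ours.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <tlm_utils/tlm_quantumkeeper.h>
#include <cstdint>
using namespace sc_core;

struct Memory : sc_module {
    tlm_utils::simple_target_socket<Memory> socket;
    unsigned char mem[256] = {};

    SC_CTOR(Memory) : socket("socket") {
        socket.register_b_transport(this, &Memory::b_transport);
    }

    // Add latency to the caller's delay instead of waiting: the caller
    // runs ahead of simulated time (the essence of loose timing).
    void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
        delay += sc_time(10, SC_NS);
        uint64_t addr = trans.get_address() % sizeof(mem);
        if (trans.is_write()) mem[addr] = *trans.get_data_ptr();
        else                  *trans.get_data_ptr() = mem[addr];
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

struct Cpu : sc_module {
    tlm_utils::simple_initiator_socket<Cpu> socket;
    tlm_utils::tlm_quantumkeeper qk;

    SC_CTOR(Cpu) : socket("socket") {
        SC_THREAD(run);
        tlm_utils::tlm_quantumkeeper::set_global_quantum(sc_time(1, SC_US));
        qk.reset();
    }

    void run() {
        for (unsigned i = 0; i < 1000; ++i) {
            tlm::tlm_generic_payload trans;
            unsigned char byte = static_cast<unsigned char>(i);
            sc_time delay = qk.get_local_time();
            trans.set_command(tlm::TLM_WRITE_COMMAND);
            trans.set_address(i);
            trans.set_data_ptr(&byte);
            trans.set_data_length(1);
            socket->b_transport(trans, delay);
            qk.set(delay);
            // Context-switch only when the 1 us quantum is used up:
            // a bigger quantum means faster simulation, coarser timing.
            if (qk.need_sync()) qk.sync();
        }
    }
};

int sc_main(int, char*[]) {
    Cpu cpu("cpu");
    Memory mem("mem");
    cpu.socket.bind(mem.socket);
    sc_start();
    return 0;
}
```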

SystemC Fikas

Two to three times a year there are free, virtual workshops called Fikas, after the Swedish tradition of sharing a coffee and talking with the community. The most recent Fika was May 30th, where three of the working groups presented updates: Language, CCI and Synthesis. You can view the latest presentations and watch the recording online.

SystemC 3.0.1

The latest release of the SystemC Class Library, SystemC 3.0.1, represents a significant step forward in aligning with the IEEE 1666-2023 Language Reference Manual. This update introduces a variety of enhancements, bug fixes, and expanded platform support.

Key bug fixes and improvements include:

  • Alignment with IEEE 1666-2023:
    • Completed the remaining changes to match sc_bind with the revised IEEE 1666-2023 definition
    • Updated the implementation of reset event notification in sc_process_b::trigger_reset_event to comply with IEEE 1666-2023
  • Performance Enhancements:
    • Refactored integer tracing and file writing for improved performance.
    • Enhanced the non-regression test suite
  • Compiler and Build Improvements:
    • Cleaned up various compiler warnings and improved support for sanitizers
    • Addressed issues in autotools and enhanced CMake build flows
    • Updated the list of supported operating systems and compilers
    • Removed configurations that are no longer supported from build flows
  • Datatype Management:
    • Fixed various issues in datatype management, resulting in better performance

These updates ensure that SystemC 3.0.1 is more robust, efficient, and compliant with the latest standards, providing a better experience for developers and users alike.

SystemC Synthesis

At DVCon US 2024 the Synthesis Working Group leaders asked for your participation to further high-level synthesis and high-level verification.

Synthesis Working Group

The Synthesis Working Group posts all of its status online and invites you to join the group.

SystemC Community Portal

The best place on the web for all things SystemC is https://systemc.org/, where each working group keeps you up to date, the annual SystemC Evolution Day presentations and Fikas are archived, and there are publications, libraries and projects through which you can be part of the community.

Summary

In the past 24 years the SystemC language has grown in scope and acceptance for system-level modeling, so that engineering teams can design and verify both hardware and software components together at a higher level of abstraction than RTL. SystemC is also useful as a verification framework and even for mixed-signal modeling. Follow up by clicking the links to get more details and improve your engineering skillset.

View presentations from the SystemC Evolution Day online.


Innexis Product Suite: Driving Shift Left in IC Design and Systems Development
by Kalar Rajendiran on 12-03-2024 at 6:00 am

At the heart of the shift-left strategy is the goal of moving traditionally late-stage tasks—such as software development, validation, and optimization—earlier in the design process. This proactive approach allows teams to identify and resolve issues before they escalate, reducing costly rework and shortening the overall development timeline. As IC designs become more complex and software demands increase, shifting left becomes critical. Early defect detection, quicker iterations, and the ability to validate performance and power early in the design process help prevent delays and reduce costs. Ultimately, this approach ensures a higher quality product, faster time-to-market, and a more efficient development cycle.

Siemens EDA recently launched its Innexis Product Suite, a comprehensive set of tools designed to reshape the development and validation of integrated circuits (ICs) and complex systems. Building on the success of its Veloce™ hardware-assisted verification and validation system, the Innexis product suite is engineered to support shift-left software development. And its integration with Veloce ensures that both hardware and software are validated in parallel throughout the development cycle. By enabling early software testing, continuous validation, and rapid debugging across virtual and hardware environments, Innexis complements Veloce to optimize the entire verification process.

The following insights were gained from the various talks at the Innexis Product launch event.

The Innexis Product Suite

The product suite is specifically built to enable this shift-left methodology across various stages of the design process. The suite includes several components, each offering unique capabilities and use cases but all aligned with the goal of accelerating development and enabling early validation of both hardware and software.

Innexis Developer Pro

The Innexis Developer Pro plays a pivotal role in supporting the continuous development flow from virtual models to hybrid systems and eventually to full RTL simulations. This tool offers a seamless platform for hardware-software co-development, validation, and analysis. Developers can work across virtual, hybrid, and RTL environments, ensuring that designs are continuously tested and optimized from the very start. By enabling early power and performance analysis, Innexis Developer Pro helps teams identify issues early in the cycle, preventing rework later on. It supports a wide range of use cases, such as enabling pre-silicon validation and accelerating the creation of complex SoCs with heterogeneous cores.

Samsung shared with the audience how the Innexis suite has accelerated its software development by providing a configurable reference platform that mimics a Samsung A75-based CPU subsystem and integrates Samsung GPU IP. With the Innexis stack, Android boots in under 10 minutes, compared to 20+ hours on traditional emulators, and Veloce Strato enables faster pre-silicon performance analysis by executing GPU RTL. Samsung’s successful shift-left with Innexis has streamlined its development process, enabling software development as early as the first RTL milestone, with RTL-to-Innexis readiness in just one week, pre-verified software stacks, and a configurable testbench for efficient custom driver integration and testing.

Innexis Architecture Native Acceleration (ANA)

For teams looking to develop software early in the process, Innexis ANA provides a high-speed, cloud-based platform. By utilizing Arm-based servers, ANA enables the execution of software workloads at speeds of 2-4 GHz, significantly faster than traditional simulation-based platforms. The cloud-native environment offers scalable resources and enables team collaboration by allowing the sharing of models and workloads across different locations. With Innexis ANA, engineers can develop and test software long before RTL or silicon are available, optimizing performance and identifying software defects early. It also integrates seamlessly with other parts of the suite, enhancing the shift-left workflow and ensuring continuous development without delays.

Arm shared Innexis ANA benchmark numbers with the audience that demonstrate a 50-100X boot-time performance improvement when running realistic software workloads compared to a QEMU-based Instruction Set Simulator (ISS) virtual platform.

Innexis Virtual System Interconnect (VSI)

Another key component, Innexis VSI, facilitates the creation of system-level digital twins. This tool integrates multi-behavioral models of various subsystems, such as sensors, ECUs, and environmental models, to simulate the interactions within a complete system. By providing visualization and analysis capabilities, VSI helps engineers understand system behavior before physical prototypes are available. It is especially useful in industries like automotive, where system-level validation is critical for complex designs such as autonomous driving systems or electric powertrains. VSI can also be cloud-enabled, offering scalable simulations and real-time collaboration, which accelerates the design process and ensures all system components function together as intended.

Innexis Product Suite Benefits

The Innexis suite’s benefits are far-reaching. First, it helps accelerate time-to-market by enabling earlier testing and identification of defects, thus reducing design iterations and re-spins. Second, it offers cost savings by allowing issues to be addressed early, preventing expensive last-minute fixes. Third, it fosters collaboration by enabling teams to work seamlessly across geographic locations, sharing models, data, and workflows in real time. Finally, Innexis contributes to performance optimization by providing tools to run realistic software workloads early, ensuring that power and performance benchmarks are met before hardware is finalized.

Shifting Left Using AWS

The shift-left approach, using software and digital twins through virtual hardware-in-the-loop (vHIL) testing in the cloud, accelerates the development cycle by enabling silicon virtualization before target hardware is available.

AWS highlighted to the audience how Arm’s validated IP subsystems and AWS’s scalable cloud infrastructure ensure that teams have access to high-performance, cloud-native tools, enabling them to scale their development efforts quickly and efficiently. With Innexis ANA offering cloud-based benchmarking and software profiling, these capabilities ensure that developers can test and validate their designs in real-world conditions long before physical hardware is available. By utilizing Arm64-based Graviton instances on AWS, native execution of embedded software offers performance and efficiency gains over traditional emulation, allowing early software development before silicon is available. This approach reduces reliance on upfront HIL testing, enables early issue discovery, and offers scalable cloud-based resources for improved software quality and faster development cycles.

Summary

The Innexis Product Suite represents a paradigm shift in IC and systems development. By enabling shift-left in hardware/software co-design, early defect detection, and comprehensive system-level validation, Innexis empowers engineers to meet the challenges of modern IC design and accelerate the development of complex systems. With its cloud-native capabilities, powerful simulation tools, and integration with Veloce, Innexis provides the tools necessary to deliver high-quality products faster, more cost-effectively, and with higher reliability.

To learn more, visit

The Innexis solution page.

Press announcement page.

Also Read:

Relationships with IP Vendors

Handling Objections in UVM Code

Next Generation of Systems Design at Siemens


How Breker is Helping to Solve the RISC-V Certification Problem
by Mike Gianfagna on 12-02-2024 at 10:00 am

RISC-V cores are popping up everywhere. The growth of this open instruction set architecture (ISA) was quite evident at the recent RISC-V Summit. You can check out some of the RISC-V buzz on SemiWiki here. While all this is quite exciting and encouraging, there are hurdles to clear before ubiquitous, prime-time use of RISC-V processors becomes commonplace. A big one is certification. I’m not referring to verification of the design, but rather certification of the RISC-V ISA implementation. Does the processor reliably do what is expected across its broad range of applications? Can we trust these devices?

It turns out this is a large and complex problem. The graphic at the top of this post illustrates its breadth. Solving it is critical to allow broad deployment of the RISC-V architecture. I decided to poke around and see what was being done. Breker has “verification” in the company name, so it seemed that would be a good place to start. I contacted my good friend Dave Kelf and I wasn’t disappointed. There is a lot going on here and Breker is indeed in the middle of a lot of it. Let’s see how Breker is helping to solve the RISC-V certification problem.

The CEO Perspective

Dave Kelf

I’ve known Dave Kelf a long time. He is currently CEO of Breker Verification Systems. Dave explained there are a lot of RISC-V design efforts underway at large companies, startups and advanced research groups, spanning open-source projects, commercial programs and universities. He told me Breker alone is being used in 15 RISC-V development programs at present.

Dave explained that in the processor world, there are devices from companies such as Arm, Intel and AMD that come with a certification from the vendor. These devices undergo extensive testing. This creates a level of “comfort” that the device will perform as advertised under all conditions. The tests done by these companies can take on the order of 10¹⁵ clock cycles to run. That is indeed a mind-boggling statistic.
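Some quick arithmetic (illustrative, with an assumed clock rate) shows why the number is mind-boggling:

\[
\frac{10^{15}\ \text{cycles}}{10^{9}\ \text{cycles/s}} = 10^{6}\ \text{s} \approx 11.6\ \text{days}
\]

Even on silicon running at 1 GHz, such a suite is nearly two weeks of continuous execution; at RTL-simulation speeds, orders of magnitude slower, it is simply out of reach, which is why certification regimes lean on real silicon, emulation, and carefully architected test portfolios.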

Dave provided an overview of what is involved in certifying a processor architecture like RISC-V. He explained that it’s important to understand that this task is a lot more complex than certifying a point-to-point communication protocol (think Wi-Fi). It’s also a lot broader than verifying a specific processor design. Before the processor gets that golden stamp of approval, it needs to be checked for all potential use cases, not just the one being used on a particular design.

The architecture of the certification test suite needs to be developed and agreed to by a steering committee that has a sufficiently broad ecosystem perspective. Then the actual tests need to be built and verified. Then comes the task of running the certification suite. Do companies self-certify, or does an independent lab do that work? And finally, how is all this funded?

A complex and daunting set of problems to solve, but this kind of proof of capability is what will be needed to achieve mainstream use across a broad range of applications for RISC-V. The good news is that RISC-V International has taken up the cause. Dave explained that work began after the RISC-V Summit last year, so this project is about 10 months old. Breker, along with many other members of the RISC-V community, is providing support and effort to realize these important goals.

Dave explained that there was a presentation on this work at the recent RISC-V Summit. This presentation filled in a lot more details for me.

The President and CTO Perspective

Adnan Hamid

Adnan Hamid, Executive President and CTO at Breker, gave the presentation on RISC-V certification. As mentioned, this effort started after last year’s RISC-V Summit. It is being driven at the RISC-V International Board level. A key part of the RISC-V organization is the technical steering committee (TSC). This is where all the details for components of the RISC-V ecosystem, both hardware and software, are developed. A certification steering committee (CSC) has been created as a peer to the TSC. One way to think about this is that the CSC has the mandate to check the TSC to ensure a coherent path to certification can be developed. The diagram below illustrates the entities involved in the program. The goal is to deliver holistic brand value.

Organizations Involved in Certification

Adnan shared some of the details of the program. Although it is still early days, these are likely to include:

  • Allows implementations to ensure compliance with RISC-V standards
  • Goal is to provide confidence to the RISC-V ecosystem that it will correctly operate on certified implementations
  • Certifies processors, SoC components, and platforms
  • Certifies RTL and silicon
  • Includes commercial-grade certification materials
  • Customers pay to obtain certificate
  • Fee based on certification cost
  • “RISC-V Compatible” Branding Program
  • Certificate must meet customer requirements
    • Must be available in a timely fashion
    • Must be based on ratified RVI standards

Certification is planned to be done in phases as shown below.

Certification Deployment Phases

The CSC is taking shape. It currently has the following five working groups:

Certification Working Groups

You can see a recording of the full presentation Adnan gave here.

Dave and Adnan are actively involved in the Customer Survey and Tests & Models groups. There are about 24 companies involved in this effort so far, and that number is growing.

How You Can Help

There is a lot being done on RISC-V certification. And a lot more to do as well. If you’re like most folks in semiconductors today, you are thinking seriously about how the open architecture of RISC-V could help. If you are interested in RISC-V, the certification team wants to hear from you. They need your input which will be used to shape this program. There is a survey underway to better understand your needs.

Let your voice be heard. You can access the survey here. Do it today! And that’s how Breker is helping to solve the RISC-V certification problem.

Also Read:

Breker Brings RISC-V Verification to the Next Level #61DAC

System VIPs are to PSS as Apps are to Formal

Breker Verification Systems at the 2024 Design Automation Conference


CEO Interview: Ollie Jones of Sondrel

CEO Interview: Ollie Jones of Sondrel
by Daniel Nenni on 12-02-2024 at 6:00 am


Sondrel has just appointed a new CEO, Ollie Jones, so we had a chat with him to find out his vision for the company.

Ollie is a highly driven, commercially astute senior leader with 20+ years of Commercial and Business Development experience across the Technology and Engineering sectors.

Ollie has worked extensively across Europe, North America and Asia and has held a variety of commercial leadership roles in FTSE 100, private equity owned and start-up companies.

Most recently, Ollie was Chief Commercial Officer for an EV battery start-up, where he led the acquisition of new customer partnerships with some of the world’s leading car brands.

Prior to that, his roles included VP Commercial and Business Development for a market-leading global automotive engineering firm, with responsibility for driving the sales growth of its electrification business unit, and VP Customer Business Group, where he was responsible for leading multiple large and complex key accounts across Europe and Asia with over $1bn in cumulative revenues.

Sondrel was founded over two decades ago in 2002 and is a well-known name in Europe but not in the USA. Why do you think that is?
Sondrel is fundamentally a service company. To give customers the best possible service when you are starting out, you need to be close to them. Being headquartered in the UK, Sondrel focussed on the UK, European and Israeli markets for the first phase of the company’s growth, as they are our home region. That enabled us to build a reputation with customers for going above and beyond in order to deliver high quality products. As we grew from pure “design services” towards more turnkey “ASIC” developments, we expanded our skills with design centers in Morocco and India. It is only recently that we have started addressing the American market, as we believe that we offer something that US customers want.

So how are you differentiating yourselves?
We sit perfectly in the zone of just the right size to deliver chips with a high level of personal service. Our rivals are often too large, with too many projects to give the level of personal service that we provide, or too small to have the expertise needed to deliver the kinds of ultra-complex custom chips that are our speciality. This is 100% aligned with my own career, which has always been customer focussed. At Sondrel, we want customers to be successful and are completely focused on that mission. That means giving each customer a level of care and attention to detail that will be almost impossible to match.

How does that tie in with delisting from being on the stock exchange and going private?
The challenge with being a listed company is that you have two objectives that can often conflict: firstly, to deliver for investors, who often have short-timeframe goals, and, secondly, to deliver for customers, where the timescales are measured in many months. There are times when it is very difficult to do both effectively, as many companies have found before going back to being private. Sondrel is a company with amazing engineers, huge experience and a stellar reputation with customers. That’s a solid foundation for the future of any company. And so that became the course of action, with a delisting and restructuring. Then, as planned, I became the CEO to really focus on, capitalise on and commercialize the company’s strengths – customer focus, personal service, high quality and world-class design skills for ultra-complex custom chips.

Your background is from the commercial side of technology. Does that help or hinder you as a CEO?
Absolutely, it helps. When you think about it, the commercial aspects are critical to both our customers and our own ability to grow. Over the past year, when I was VP of Marketing & Sales, I met each of Sondrel’s regular customers. Some of them have been using Sondrel for many years, project after project. In every case, when I asked them why they picked Sondrel, the answer was always because Sondrel cares. We care passionately about the customer, the customer’s project and the commercial success of the chip – and our strong engineering team reflects this. We will do everything we can to be outstanding partners for our customers. The real shame is that most of our work is covered by NDAs, so we cannot talk publicly about all our successes. This customer-first approach is something I’ve embraced throughout my career, rooted in always seeking solid, mutual commercial success. So, “yes”, I think a commercial background is a significant advantage for a CEO.

It sounds like you are going to be very hands on?
This is very much the way I do things and how Sondrel operates: frequent face-to-face meetings and continuous communication. People always want to do business with people they know, like and trust. We really dig into a new business opportunity to fully understand what the customer is trying to achieve and what matters most to them. And then we invariably exceed expectations by providing insights and ideas to make the project better, drawing on our experience from hundreds of successful projects. A design project for a billion-transistor chip is incredibly detailed and complex, and we have the in-house tools, design flows and experience to deliver to agreed budgets and timeframes.

Unlike many of our rivals, we work with the customer early in their definition stages, helping them where we can to make the right decisions for their chip project. This builds a huge level of trust and confidence in our ability to deliver chip designs, which means that customers then start asking us to handle the whole chip supply chain process right through to final silicon. This is now a standard turnkey service that enables customers to focus on their own skill sets, safe in the knowledge that their silicon will be delivered. It is of particular interest to startups, who are skilled in innovation but not in all the challenges of taking a chip through the supply chain stages of manufacturing, testing and packaging, so they need to outsource that to someone like us.

And that is why our US office is located in the heart of Silicon Valley, so we can provide a personal service to all the exciting, innovative startups located there.

There is only one of you, so how can Sondrel provide the level of personal service that you described for every customer?
We do that by having a customer-first mindset across the company. We even assign a Customer Success Manager to customer projects. Their job is to ensure that everything is running to schedule and that the customer is always in the loop, so that the customer’s project is successful and meets their expectations. It’s how we deliver a very personal service to every customer.

You have said personal service is what differentiates Sondrel. What does that mean in practice?
Companies come to Sondrel because they want a chip that is custom made to their exact specifications. Basing your project on standard, off-the-shelf chips means that anyone can copy it. A custom chip is unique, and its Power, Performance and Area have been tuned precisely to deliver the performance and cost required. Determining those parameters is done right at the start of a project discussion, at the Architectural Specification stage. This is a perfect example of where we are different. Our team creates an in-depth, holistic view of the chip, what its functions and features are, and how to make it: for example, what node to use, what IP will be needed, which of our Architecting the Future reference architectures to use, and how to incorporate Design for Test. This means that Sondrel provides customers with an incredibly detailed plan of how it will successfully design the chip. Often it includes improvements that Sondrel has brought to the table, based on its engineers’ experience in successfully delivering hundreds of other projects over the years.

This level of intense personal service inspires confidence and trust that Sondrel will deliver to schedule and requirements, and it continues throughout the project with regular meetings, so that the customer is always fully informed on progress and hears new ideas to make the design even better. For example, in one recent project we were able to reduce the power requirement of the chip, much to the delight of the customer.

You mentioned Architecting the Future. What is that?
This is a family of pre-defined reference architectures that provide a fast start for a new project, rather than starting from scratch every time. Not only can we deliver a project faster, but we can also handle more projects simultaneously, as our engineers can focus on the complex, novel parts of a design knowing that the framework is already tested and ready to be built on.

Reusing trusted IP is fundamental to the ability to design chips, and that is essentially what these architectures are. They reduce risk and time to market to help ensure customer success.

Talking about IP, I note that you have started licensing IP?
One of my first tasks as CEO was to realise and commercialize existing assets. We have a library of IP blocks that we have created over the years for various projects, where we found there was no commercial IP available with the performance or functionality required. They might be a bit unusual, but that’s what we need when creating our ultra-complex chip designs, so if we needed them for a chip design, then others might as well. In fact, we have just licensed our first IP block – our Firewall IP.

It’s yet another way to help customers be successful through our personal service of ensuring that they get exactly what they need.

Also Read:

Sondrel Redefines the AI Chip Design Process

Automotive Designs Have No Room for Error!

Sondrel’s Drive in the Automotive Industry

Transformative Year for Sondrel


Podcast EP263: The Current and Future Impact of the CHIPS and Science Act with Sanjay Kumar

Podcast EP263: The Current and Future Impact of the CHIPS and Science Act with Sanjay Kumar
by Daniel Nenni on 11-29-2024 at 10:00 am

Dan is joined by Sanjay Kumar. Most recently, Sanjay was a senior director at the Department of Commerce on the team implementing the CHIPS and Science Act. Before that, he spent more than 20 years in the industry, up and down the semiconductor value chain, working at systems companies such as Meta, fabless companies such as Infineon, NXP, Broadcom and Omnivision, and manufacturing companies such as Intel Foundry.

Sanjay provides a detailed analysis of the impact across the semiconductor value chain resulting from the CHIPS and Science Act. He details the significant industry investments that have resulted from the initial funding from the US Government.

Sanjay describes the collaboration between ecosystem companies, what its impact has been, and what it could be in the future. He also discusses the impact AI has had. Finally, he outlines possible future collaboration scenarios and their potential positive impact on the US semiconductor manufacturing sector.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.