Enhancing the RISC-V Ecosystem with S2C Prototyping Solution
by Daniel Nenni on 04-11-2024 at 6:00 am

RISC-V’s popularity stems from its open-source framework, enabling customization, scalability, and mitigating vendor lock-in. Supported by a robust community, its cost-effectiveness and global adoption make it attractive for hardware innovation across industries.

Despite its popularity, evolving RISC-V architectures pose design and verification challenges. A significant concern is the potential fragmentation in RISC-V system integration. Exploring RISC-V microarchitectures may result in variants incompatible with each other. Moreover, as the RISC-V ecosystem matures, design complexity escalates, necessitating enhanced verification procedures.

S2C plays a pivotal role in the RISC-V ecosystem as a member of RISC-V International. Let’s explore how S2C aids chip designers in optimizing and differentiating their RISC-V processor-based SoCs across diverse applications.

Key Benefits of the S2C FPGA Prototyping Solution for RISC-V

S2C offers an extensive array of FPGA prototyping systems, ranging from the desktop Prodigy Logic System platform to the high-performance enterprise Logic Matrix solution, catering to the diverse needs of RISC-V system verification and demonstration at any design scale. In addition to traditional partitioning schemes, S2C also provides ChipLink IP, which ensures high-performance AXI chip-to-chip partitioning.

Robust bring-up and debugging methods enhance user efficiency, including FPGA download via Ethernet/USB/SD card, UART/virtual UART, an Ethernet-based AXI transactor, and a custom logic analyzer for multi-FPGA debug (MDM).

S2C also provides a utility to download operating systems and applications from a PC to the FPGA's DDR4 memory. The high-bandwidth transfer enables much faster software boot-up, accelerating time to operation.

General Purpose Partitioning and ChipLink

S2C offers a general-purpose TDM interconnect communication solution that is applicable regardless of IP logic scale or bus interface type. Configured at a 25Gbps line rate, S2C's general-purpose SerDes TDM IP supports TDM partitioning at clock rates of up to 20MHz for large IP design partitions. With a multiplexing ratio of up to 8K:1, it enables long-distance data communication via optical fiber cables, streamlining the networking of large-scale SoC prototype designs with simplicity and efficiency.
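To make the trade-off concrete, here is a minimal back-of-the-envelope sketch of how line rate and multiplexing ratio bound the achievable partition clock. The 25Gbps line rate and 8K:1 maximum ratio come from the paragraph above; the framing-overhead figure and the specific ratios are illustrative assumptions, not S2C specifications.

```python
# Illustrative trade-off for pin-multiplexed (TDM) partitioning: the more
# inter-FPGA signals are time-multiplexed onto one SerDes pair, the lower
# the achievable design clock. Everything except the 25 Gbps line rate and
# the 8K:1 maximum ratio is an assumption for illustration only.

LINE_RATE_BPS = 25e9

def max_tdm_clock_hz(mux_ratio, framing_overhead=0.25):
    """Rough upper bound on the partitioned design's clock: every cycle,
    mux_ratio signal values must cross the link, plus framing overhead."""
    return LINE_RATE_BPS * (1 - framing_overhead) / mux_ratio

for ratio in (512, 1024, 8192):
    print(f"{ratio:>5}:1 mux -> up to ~{max_tdm_clock_hz(ratio)/1e6:.2f} MHz")
```

Under these assumptions, moderate multiplexing ratios land in the ~20MHz range quoted above, while the maximum 8K:1 ratio trades clock speed for far fewer physical connections.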

ChipLink, an AXI-based partitioning solution, facilitates multi-core SoC verification. This low-latency AXI chip-to-chip IP efficiently connects RISC-V cores and peripherals across multiple FPGAs. S2C's ChipLink AXI IP offers high speed and low latency, supporting an AXI DATA_WIDTH of up to 1024 bits. Each bank accommodates up to four sets of AXI protocols. With multiple SerDes line rates, including 12.5G, 16.25G, 20.625G, and 25G, it enables communication at 100MHz between multi-core processors.
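As a rough sanity check, the sketch below estimates how many SerDes lanes such a configuration implies. The 1024-bit data width, 100MHz clock, and 25G line rate are from the description above; the 20% framing/encoding overhead is an assumed figure for illustration.

```python
# Back-of-the-envelope check of ChipLink-style AXI-over-SerDes bandwidth.
# Bus width, clock, and line rate come from the article; the 20% overhead
# is an illustrative assumption, not an S2C specification.

AXI_DATA_WIDTH_BITS = 1024      # max AXI DATA_WIDTH supported
USER_CLOCK_HZ = 100e6           # target inter-FPGA communication clock
LINE_RATE_BPS = 25e9            # fastest supported SerDes line rate
OVERHEAD = 0.20                 # assumed framing/CDC/encoding overhead

payload_bps = AXI_DATA_WIDTH_BITS * USER_CLOCK_HZ          # 102.4 Gbps
effective_lane_bps = LINE_RATE_BPS * (1 - OVERHEAD)        # 20 Gbps/lane
lanes_needed = payload_bps / effective_lane_bps

print(f"AXI payload: {payload_bps/1e9:.1f} Gbps per direction")
print(f"SerDes lanes needed (approx.): {lanes_needed:.1f}")
```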

Strengthened by a Broad Set of Prototyping Tools

S2C offers a comprehensive suite of tools to facilitate and optimize RISC-V SoC design verification. Notably, Prototype Ready IP features over 90 readily deployable daughter cards, simplifying prototyping setup and significantly reducing initialization time and effort.

Additionally, S2C’s multidimensional prototyping software, Prodigy PlayerPro-RT, enables seamless FPGA/Die downloads via USB, Ethernet, and SD Card interfaces. Beyond downloads, PlayerPro-RT offers real-time hardware monitoring, remote system management, and extensive hardware self-testing functionalities, ensuring a smooth and efficient verification process.

S2C further enhances verification with the inclusion of the high-bandwidth AXI transactor, Prodigy ProtoBridge, facilitating swift and efficient data transmission between PC and FPGA prototypes at PCIe speeds of up to 4000MB/s. By offering high bandwidth and fast read/write capabilities, ProtoBridge significantly boosts design productivity.
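To put 4000MB/s in perspective, here is a quick sketch of load times for software images; the image sizes are assumed examples, not S2C data.

```python
# Rough feel for what 4000 MB/s of PC-to-FPGA bandwidth means in practice.
# The 4000 MB/s figure is from the article; image sizes are assumptions.

BANDWIDTH_MB_S = 4000

for name, size_mb in [("Linux kernel + rootfs", 2_000),
                      ("full OS + application image", 16_000)]:
    print(f"{name}: {size_mb / BANDWIDTH_MB_S:.1f} s to load into DDR4")
```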

In the competitive realm of RISC-V SoC development, differentiation is crucial. S2C Prototyping Solutions emerge as a trusted ally, offering a streamlined pathway for verification and demonstration, empowering developers to amplify the unique value propositions of their SoCs.

For more information: https://www.s2cinc.com/riscv.html

Also Read:

2024 Outlook with Toshio Nakama of S2C

Prototyping Chiplets from the Desktop!

S2C’s FPGA Prototyping Accelerates the Iteration of XiangShan RISC-V Processor


Intel is Bringing AI Everywhere
by Mike Gianfagna on 04-10-2024 at 10:00 am

On April 8 and 9, Intel held its Intel Vision event in Phoenix, Arizona. This is Intel's premier event for business and technology executives to come together and learn about the latest industry trends, solutions, and advancements from client, to edge, to data center and cloud. The theme of this year's event was Bringing AI Everywhere. The event was packed with impressive information from all over the industry. Intel provided a briefing before the event that dove into some of the announcements and advances that would be presented. I will dig into what was presented in this post, along with a summary of Pat Gelsinger's keynote at the event. The content is compelling – indeed, it appears that Intel is bringing AI everywhere.

Briefing Overview

Attending the briefing were three key members of the Intel team. Their combined experience is quite impressive. They are:

Sachin Katti, Senior Vice President & General Manager of the Network and Edge Group. Prior to his current role, Sachin was CTO of the Network and Edge Group. Before Intel, he had a long career as an Associate Professor at Stanford University, and he founded or co-founded several companies. Sachin holds a Ph.D. in Computer Science from the Massachusetts Institute of Technology.

Das Kamhout, Vice President & Senior Principal Engineer in the Intel Data Center and AI Group. Das has worked at Intel for 27 years across many areas including AI, cloud, enterprise software, and storage. He has also been a Board member of the Cloud Native Computing Foundation.

Jeff McVeigh, Corporate Vice President & General Manager of the Software Engineering Group. Jeff has also worked at Intel for 27 years. He has held leadership positions in the Software Engineering Group, Super Compute Group, Data Center XPU Products & Solutions, and Visual Computing Products. He holds a Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University.

The presentation began with some macro-observations. Enterprises have reached an AI inflection point, signified by swift adoption and supercharged by GenAI. Gartner estimates that 80% of enterprises will use GenAI by 2026 and at least 50% of edge computing deployments will involve machine learning. IDC expects the $40B enterprise spend on GenAI in 2024 to grow to $151B by 2027.

All this is ahead of us only if we're able to unlock AI's full potential. Intel reported that only 10% of organizations launched generative AI solutions to production in 2023. Furthermore, 46% of experts cited infrastructure as the biggest challenge in productionizing large language models. Barriers to adoption persist: openness and choice are limited, and transparency, privacy, and trust concerns are rising.

Against this backdrop Intel is making several announcements to take down the barriers to adoption, bringing AI everywhere. The five broad areas of focus were defined as follows:

  • A scalable systems strategy to address all segments of AI within the enterprise with an open ecosystem approach
  • Enterprise customer AI deployments, successes and wins
  • Open ecosystem approach to advance enterprise AI
  • Intel Gaudi® 3 AI accelerator to serve unmet demand for Generative AI solutions
  • Edge Platform and Ethernet-based networking connectivity products targeted for AI workloads

Let’s look at some of the details.

A Tour of the Announcements

Today, enterprise data and AI models live in two distinct worlds. Enterprise data is secure and confidential, rooted in specific locations, mature and predictable, and has a CPU-based processing model. AI models, on the other hand, are based on public data, are characterized by rapid change with varied degrees of security, and have an accelerator-based processing model.

Intel aims to unlock the enterprise AI model through the power of open ecosystems. Attributes of this approach include:

  • An Application Ecosystem that is easy and open, by working with industry leaders to provide end-to-end AI enterprise solutions at scale
  • A Software Ecosystem that is secure and responsible, by driving an open software ecosystem that bridges enterprise data and AI models
  • An Infrastructure Ecosystem that is scalable and reference based, by shaping the enterprise AI infrastructure through reference architectures, together with partners
  • A Compute Ecosystem that is accessible and confidential, by building safe and AI capable compute platforms from client to data center

The diagram below is a top-level view of how these pieces fit together. Many more details of the approach were presented, along with a description of the enterprise AI software stack and planned enhancements.

Intel Enterprise AI

The presentation also discussed the Intel Developer Cloud, which is used by leading AI companies. Intel explained that the platform provides everything you need to build and deploy AI at scale. The diagram below shows today's processor lineup.

The newest version of the Intel Gaudi AI accelerator brings speedups of 2X – 4X for AI compute, 2X for network bandwidth, and 1.5X for memory bandwidth. Benchmark data includes 40% faster time-to-train and 50% faster inferencing vs. the NVIDIA H100. The launch partners for this accelerator are impressive: Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro.

The Intel Xeon 6 processor with E-cores was also discussed, with a 2.4X performance-per-watt improvement and a 2.7X performance-per-rack improvement. Comparing second-generation Intel Xeon processors to Xeon 6, over one megawatt of power reduction is delivered. To put that number in perspective, it represents the energy savings of a full year's worth of electricity use for over 1,300 homes.
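As a sanity check on that comparison, the arithmetic below shows how a figure of roughly 1,300 homes follows from one megawatt of continuous savings, assuming an average household consumption of about 6,700 kWh per year (our assumption; Intel did not state a per-home figure).

```python
# Sanity check of the "1,300 homes" comparison. The one-megawatt reduction
# is Intel's figure; the per-home annual consumption is an assumption
# chosen to show how such a comparison is derived.

POWER_SAVED_W = 1_000_000                 # 1 MW continuous reduction
HOURS_PER_YEAR = 8760
KWH_PER_HOME_PER_YEAR = 6_700             # assumed average household usage

energy_saved_kwh = POWER_SAVED_W * HOURS_PER_YEAR / 1000   # ~8.76M kWh/yr
homes = energy_saved_kwh / KWH_PER_HOME_PER_YEAR
print(f"~{homes:,.0f} homes' worth of annual electricity")  # ~1,300
```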

The work Intel is doing with high-profile partners on confidential computing was also discussed, and upcoming work to deliver connectivity designs for AI was previewed as well. The AI PC era was covered too: Intel plans to ship 100 million AI accelerators by the end of 2025, and the company's footprint in this market is substantial. Comprehensive strategies and platforms to support AI processing at the edge were also detailed, with 90,000+ edge deployments and 200M+ processors sold.

Pat Gelsinger’s Keynote

Pat was introduced as Intel’s Chief Geek. He lived up to that description with a 90-minute technology tour-de-force describing Intel’s impact, announcements, and plans. AI was front and center for most of Pat’s presentation. He described Intel Foundry as the systems foundry for the AI era and Intel products as modular platforms for the AI era. A memorable quote from Pat is “every company becomes an AI company.”

Pat then described the major re-tooling that is underway to deploy AI PCs across the entire enterprise. He discussed products that enable AI across the enterprise while reducing power and increasing efficiency. There were several impressive live demos of new technology and its impact, including an AI PC demo livestreamed from inside an Intel fab.

Pat also invited many distinguished guests to join him on stage or via the Internet to describe what their organizations are doing with Intel technology. Among those organizations were Accenture, Supermicro, Arizona State University, Bosch, Naver Corporation, and Dell Technologies (with Michael Dell).

Pat also unveiled, for the first time, the Intel Gaudi 3 AI accelerator. This is only a short summary of a great keynote presentation.

To Learn More

The pre-brief presentation and Pat Gelsinger's keynote covered a lot of detail across an open scalable system strategy, customer/partner momentum, and next-generation products/services. You can learn more about Intel Vision 2024 here and you can watch a replay of Pat Gelsinger's keynote here. You will see that Intel is bringing AI everywhere.

Also Read:

Intel is Bringing AI Everywhere

Intel Direct Connect Event

ISS 2024 – Logic 2034 – Technology, Economics, and Sustainability


Arteris Frames Network-On-Chip Topologies in the Car
by Bernard Murphy on 04-10-2024 at 6:00 am

Opening figure: Arm automotive use cases overlaid with Arteris NoC topologies

On the heels of Arm’s 2024 automotive update, Arteris and Arm announced an update to their partnership. This has been extended to cover the latest AMBA5 protocol for coherent operation (CHI-E) in addition to already supported options such as CHI-B, ACE and others. There are a couple of noteworthy points here. First, Arm’s new Automotive Enhanced (AE) cores upgraded protocol support from CHI-B to CHI-E and Arm/Arteris have collaborated to validate the Arteris Ncore coherent NoC generator against the CHI-E standard. Second, Arteris has also done the work to certify Ncore-generated networks with the CHI-E protocol extension for ASIL B and ASIL D. (Ncore-generated networks are already certified for earlier protocols, as are FlexNoC-generated non-coherent NoC networks.) In short, Arteris coherent and non-coherent NoC generators are already aligned against the latest Arm AE releases and ASIL safety standards. Which prompts the question: where are coherent and non-coherent NoCs required in automotive systems? Frank Schirrmeister (VP Solutions and Business Development at Arteris) helped clarify my understanding.

Automotive, datacenter/HPC system contrasts

Multi-purpose datacenters are highly optimized for task throughput per watt per $. CPU and GPU designs exploit very homogeneous architectures for high levels of parallelism, connecting through coherent networks to maximize the advantages of that parallelism while ensuring that individual processors do not trip over each other on shared data. Data flows into and out of these systems through regular network connections, and power and safety are not primary concerns (though power has become more important).

Automotive systems architectures are more diverse. Most of the data comes from sensors – drivetrain monitoring and control, cameras, radars, lidars, etc. – streaming live into one or more signal processor stages, commonly implemented in DSPs or (non-AI) GPUs. Processing stages for object recognition, fusion and classification follow. These stages may be implemented through NPUs, GPUs, DSPs or CPUs. Eventually, processed data flows into central decision-making, typically a big AI system that might equally be at home in a datacenter. These long chains of processing must be distributed carefully through the car architecture to meet critical safety goals, low power goals and, of course, cost goals. As an example, it might be too slow to ship a whole frame from a camera through a busy car network to the central AI system, and then to begin to recognize an imminent collision. In such cases, initial hazard detection might happen closer to the camera, reducing what the subsystem must send to the central controller to a much smaller packet of data.

Key consequences of these requirements are that AI functions are distributed as subsystems through the car system architecture and that each subsystem is composed of a heterogeneous mix of functions: CPUs, DSPs, NPUs and GPUs, among others.

Why do we need coherence?

Coherence is important whenever multiple processors are working on common data like pixels in an image, where there is opportunity for at least one processor to write to a logical address in a local cache and another processor to read from the same logical address in a different cache. The problem is that the second processor doesn’t see the update made by the first processor. This danger is unavoidable in multiprocessor systems sharing data through hierarchical memory caches.

Coherent networks were invented to ensure disciplined behavior in such cases, through behind-the-scenes checking and control between caches. A popular example can be found in coherent mesh networks common in many-core processor servers. These networks are highly optimized for regular structures, to preserve the performance advantages of using shared cache memory while avoiding coherence conflicts.
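A toy simulation makes the hazard and the fix concrete. Two simulated cores share memory through private caches; without invalidation the second core reads a stale value, and with a simple invalidate-on-write rule it does not. This is a conceptual sketch of why coherence protocols exist, not a model of any particular interconnect.

```python
# Minimal simulation of the stale-read hazard that coherent networks
# prevent. Two "cores" each have a private cache over shared memory.

class Core:
    def __init__(self, memory):
        self.memory = memory
        self.cache = {}          # private cache: address -> value

    def read(self, addr):
        if addr not in self.cache:               # miss: fill from memory
            self.cache[addr] = self.memory[addr]
        return self.cache[addr]                  # hit: may be stale!

    def write(self, addr, value, peers=()):
        self.cache[addr] = value
        self.memory[addr] = value                # write-through
        for peer in peers:                       # coherence: invalidate
            peer.cache.pop(addr, None)           # other cached copies

memory = {0x100: 0}
core0, core1 = Core(memory), Core(memory)

core1.read(0x100)                 # core 1 caches the old value (0)
core0.write(0x100, 42)            # no invalidation -> hazard
print(core1.read(0x100))          # 0: stale read

core0.write(0x100, 43, peers=[core1])   # with invalidation
print(core1.read(0x100))                # 43: coherent read
```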

Coherence needs are not limited to mesh networks threading through arrays of homogeneous processors. Most of the subsystems in a car are heterogeneous, connecting the multiple different types of functions already discussed. Some of these subsystems equally need coherence management when processing images through streaming operations. Conversely, some functions may not need that support if they can operate in separate logical memory regions, or if they do not need to operate concurrently. In these cases, non-coherent networks will meet the need.

A key consequence is that NoCs in an automotive chip must manage both coherent and non-coherent networks on a chip for optimal performance.

Six in-car NoC topologies

Frank illustrated with Arm’s use cases from their recent AE announcement, overlaid with the Arteris view of NoC topologies on those use cases (see the opening figure in this blog).

Small microcontrollers at the edge (drivetrain and window controllers, for example) don't need coherency support. That doesn't mean they don't use AI – predictive maintenance support is an active trend in MCUs – but there is no need for high-performance data sharing. Non-coherent NoCs are ideal for these applications. Since these MCUs must sit right next to whatever they measure/control, they are located far from central or zonal controllers and are implemented as standalone (monolithic) chips.

Per Frank, zonal controllers may be non-coherent or may support some coherent interconnect, I guess reflecting differences in OEM architecture choices. Maybe streaming image processing is handled in sensor subsystems, or some processing is handled in the zonal controller. Then again, he sees vision/radar/lidar processing typically needing mostly non-coherent networks with limited coherent network requirements. While streaming architectures will often demand coherence support, any given sensor may generate only one or a few streams, needing at most a limited coherent core for initial recognition. Zonal controllers, by definition, are distributed around the car so are also monolithic chip solutions.

Moving into the car cockpit, infotainment (IVI) is likely to need more of a mix of coherent and non-coherent support, say for imaging overlaid with object recognition. These systems may be monolithic but also lend themselves to chiplet implementations. Centralized ADAS control (fusing inputs from sensors for lane recognition, collision detection, etc.) for SAE level 2+ and beyond will require more coherence support, yet still with need for significant non-coherent networks. Such systems may be monolithic today but are trending to chiplet implementations.

Finally, as I suggested earlier, the central AI controller in a car is fast becoming a peer to big datacenter-like systems. Arm has already pointed to AE CSS-based Neoverse many-core platforms (2025) front-ending AI accelerators (as they already do in Grace-Hopper and I'm guessing Blackwell). Add to that big engine more specialized engines (DSPs, NPUs and other accelerators) in support of higher levels of autonomous driving, to centrally synthesize inputs from around the car and to take intelligent action on those inputs. Such a system will demand a mix of big coherent mesh networks wrapping processor arrays, a distributed coherent network to connect to some of those other accelerators and non-coherent networks to connect elsewhere. These designs are also trending to chiplet-based systems.

In summary, while there is plenty of complexity in evolving car architectures and the consequent impact on subsystem chip/chiplet designs, connected on-chip through both coherent and non-coherent networks, the intent behind these systems is quite clear. We just need to start thinking in terms of total car system architecture rather than individual chip functions.

Listen to a podcast in which Dan Nenni interviews Frank on this topic. Also read more about Arteris coherent network generation HERE and non-coherent network generation HERE.


LIVE WEBINAR: Automating the Integration Workflow with IP Centric Design
by Daniel Nenni on 04-09-2024 at 10:00 am

Subsystem and full-chip integration plays a crucial role in any project – particularly for large SoCs. Our upcoming webinar on April 30 confronts the typical challenges of this process and provides a detailed view into how IP centric design can help you solve them. Join us to learn how transforming your design flow can help your team reliably meet integration milestones, quickly debug issues, and enhance work quality and transparency.

In today’s landscape, the ongoing challenges of integrating design blocks into SoCs are clear. Teams are often working with a globally distributed workforce, overwhelmingly complex design data, and a lack of expertise on design blocks that were not developed locally.

As teams become more geographically dispersed, integration is complicated by the inclusion of more externally sourced and reused IPs, multiple design sites, and the difficulties of working across time zones. At the same time, the size, volume, and complexity of design files have also increased. This high volume of larger, more complex files can strain existing infrastructure and processes, causing delays and confusion. Lastly, a lack of local expertise on design blocks created by geographically distant teams makes it harder to address integration problems as they arise – leading to longer lead times, inefficient, spreadsheet-based debugging, and repeatedly missed integration milestones.

These complex, interrelated pain points introduce the need for a new and innovative approach. This is where an IP centric design methodology steps in. An IP, also referred to as a design block or module, is an abstraction of data files that defines an implementation, along with the metadata that defines its state. In IP centric design, each element of the design – from internally reused and externally acquired IPs, to the design environment and the whole platform – is modeled as an IP. This allows the entire project and all related metadata to be modeled as a complete, hierarchical collection of IPs, including all versions and dependencies.

By leveraging an IP centric methodology, along with the use of “IP Aliases” and quality-based integration rules, teams can establish a streamlined, controlled, and transparent integration flow. This automated flow will enable teams to reliably meet key integration milestones, more easily debug integration issues, and improve overall quality.
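A small sketch illustrates the idea of modeling a project as a hierarchical collection of versioned IPs gated by quality-based integration rules. All class names, quality levels, and blocks here are invented for illustration; they do not reflect any particular tool's data model.

```python
# Illustrative sketch of an IP-centric project model: every block is an
# "IP" with versioned metadata and dependencies, and quality-based rules
# gate what may be integrated.

from dataclasses import dataclass, field

@dataclass
class IP:
    name: str
    version: str
    quality: str                 # e.g. "prototype", "verified", "signoff"
    dependencies: list["IP"] = field(default_factory=list)

    def bom(self, depth=0):
        """Hierarchical Bill of Materials: every IP, version, dependency."""
        yield "  " * depth + f"{self.name}@{self.version} [{self.quality}]"
        for dep in self.dependencies:
            yield from dep.bom(depth + 1)

QUALITY_RANK = {"prototype": 0, "verified": 1, "signoff": 2}

def check_integration(ip, minimum="verified"):
    """Quality-based integration rule: flag any IP in the hierarchy
    below the minimum quality level."""
    return [line for line in ip.bom()
            if QUALITY_RANK[line.split("[")[1].rstrip("]")] <
               QUALITY_RANK[minimum]]

uart = IP("uart", "2.1.0", "signoff")
dsp  = IP("dsp_subsys", "0.9.0", "prototype")
soc  = IP("top_soc", "1.0.0", "verified", dependencies=[uart, dsp])

print("\n".join(soc.bom()))
print("Blocked:", check_integration(soc))   # dsp_subsys fails the gate
```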

We have some expert tips and essential best practice guidelines for creating, enforcing, and maintaining an IP centric design flow. The Automating the Integration Workflow presentation will give a more in-depth look at what elements are necessary to establish an effective IP centric model, including annotation with rich metadata and a versioned, hierarchical Bill of Materials (BoM) that all team members can reference. We'll also dive into how to support and hone your integration flow over time, giving examples of common governance rules your team can implement, as well as tips for how to consistently enforce them.

Getting your team onboard with this transformation and new approach can have a dramatic effect on your collaboration, productivity, go-to-market timeline, and quality. It will greatly reduce the SoC integration challenges your team struggles with, plus set a minimum quality requirement across all IP and provide an at-a-glance view into IP status and which blocks are ready for integration. And finally, establishing a secure and traceable IP centric design flow can make future IP reuse and integration easier.

Join us for our upcoming webinar, where we’ll walk through each step of the integration process through the lens of IP centric design. Register now to learn how to boost both efficiency and quality through a streamlined integration flow!

Also Read:

2024 Outlook with Adam Olson of Perforce

The Transformation Model for IP-Centric Design

Chiplets and IP and the Trust Problem


The Data Crisis is Unfolding – Are We Ready?
by Kalar Rajendiran on 04-09-2024 at 6:00 am

The rapid advancement of technology, including generative AI, IoT, and autonomous vehicles, is revolutionizing industries and enhancing efficiency. At the same time, such advances generate huge amounts of data that must be transmitted and processed to yield meaning and value for consumers and society as a whole. In essence, heavy reliance on seamless data movement and processing has become integral to various aspects of modern life, from transportation logistics to healthcare and climate control. While the benefits are many, including enhanced decision making, personalization, improved healthcare, and efficient resource allocation, various potential dangers go hand-in-hand with them. At a broad level, we could call what we are marching toward a potential data crisis in the making.

While this potential data crisis has many aspects to it, the most fundamental concern is the ability to continue to transmit and process data at increasingly higher speeds and very low latencies, without any disruption. Alphawave Semi has published a whitepaper on this specific aspect of the data crisis. Such a data crisis could have far-reaching consequences for individuals, society and the global economy. Businesses make market entry decisions based not only on potential opportunities and risk/reward calculations but also on consequential damages claims exposure. But with a heavy reliance on data and a highly interconnected world, it is difficult to isolate oneself or individual applications from this data crisis.

For example, an autonomous vehicle is expected to process 19 terabytes (TB) of data per hour to operate. At a projected 840,000 autonomous vehicles hitting the streets by 2030, this translates to roughly 16 million terabytes of data per hour. A disruption of even the slightest degree could have fatal and widespread catastrophic consequences. Another example involves the medical industry, which uses digital health records for patient management: as of 2021, 88% of US-based doctors were relying on this data infrastructure to support their usage. Any inability to process large volumes of data could lead to misdiagnosis events with dire consequences.

In essence, the overall global data infrastructure needs to be aggressively upgraded and maintained to meet the ever-growing demand for data connectivity, integrity, safety, and privacy.

Securing our Data Infrastructure

Generative AI, heralded as a transformative force in various industries, relies heavily on data infrastructure to realize its full potential. While it holds promise for improving efficiency and driving innovation, the associated power consumption and computational demands underscore the need for sustainable practices and energy-efficient solutions. And while accommodating the demanding requirements of AI applications, our data infrastructure must continue to support regular workloads like streaming videos and video calls.

Whether it’s facilitating seamless data transmission or enhancing interconnectivity within hyperscale data centers, semiconductor innovation takes center stage to meet the growing demands of data-intensive workloads. Legacy technologies with monolithic chip structures are insufficient for addressing the mounting computational pressure. Chiplets and custom silicon solutions emerge as game-changers in maximizing efficiency, reducing power consumption, and minimizing latency within data centers. Companies like Alphawave Semi and other industry leaders are spearheading efforts to leverage these technologies, pushing the boundaries of connectivity and scientific advancements.

As we navigate the complexities of the unfolding data crisis, collaboration and adaptability are key. Stakeholders across industries must come together to address the challenges and opportunities presented by the data-driven era. By investing in sustainable practices, embracing technological advancements, and fostering an ecosystem of innovation, we can look forward to a resilient, efficient, and interconnected digital future.

Summary

The unfolding data crisis presents both challenges and opportunities for our society. By leveraging connectivity, AI, and semiconductor innovation, we can overcome obstacles, drive progress, and usher in a new era of digital transformation while averting a data crisis.

The Alphawave Semi whitepaper on this topic can be downloaded from here.

Also Read:

Accelerate AI Performance with 9G+ HBM3 System Solutions

Alphawave Semiconductor Powering Progress

Will Chiplet Adoption Mimic IP Adoption?


Simulation World 2024 Virtual Event
by Daniel Nenni on 04-08-2024 at 10:00 am

ANSYS Simulation World is an annual conference hosted by ANSYS, Inc., a leading provider of engineering simulation software. The event typically brings together engineers, designers, researchers, and industry experts from around the world to discuss the latest advancements, best practices, and case studies in engineering simulation and virtual prototyping.

Simulation World 2024 is a free global virtual event

Attendees have the opportunity to participate in keynote presentations, technical sessions, hands-on workshops, and networking events. The conference covers a wide range of topics, including computational fluid dynamics (CFD), finite element analysis (FEA), electromagnetics simulation, Multiphysics simulation, additive manufacturing, and more.

The event provides a platform for users of ANSYS software to learn new skills, exchange ideas, and explore innovative applications of simulation technology across various industries, such as aerospace, automotive, electronics, energy, healthcare, and consumer goods.

Additionally, ANSYS Simulation World often features keynote speakers from industry-leading companies, showcasing how simulation-driven engineering has helped them solve complex engineering challenges, improve product performance, and accelerate time-to-market.

Overall, ANSYS Simulation World serves as a premier gathering for the simulation community, offering valuable insights, practical knowledge, and networking opportunities to help engineers and designers stay at the forefront of simulation technology.

EVENT TRACKS

Inspire: Automotive and Transportation
Simulation is transforming mobility to address unprecedented challenges and deliver cost effective, completely differentiated solutions, from safer, more sustainable designs to the complex electronics and embedded software that define them.

Inspire: Aerospace and Defense
The aerospace and defense industries must operate on the cutting edge to deliver advanced capabilities. Digital engineering helps them increase flexibility, update legacy programs, and speed new technology into service.

Inspire: Energy and Industrials
Industries rely on simulation to streamline production and distribution of safer, cleaner, more reliable energy through fuel-to-power conversions, and to accelerate scaling of low-carbon energy solutions.

FEATURED SPEAKERS

Dr. Ajei Gopal
President and Chief Executive Officer, Ansys
Ajei Gopal’s idea to drive “pervasive simulation,” or the use of engineering simulation throughout the product life cycle, has transformed the industry. Prior to Ansys, he served in various leadership roles where he demonstrated his ability to simultaneously drive organizational growth and improve operational efficiency.

Dr. Prith Banerjee
Chief Technology Officer, Ansys
Prith Banerjee leads the evolution of Ansys technology and champions the company's next phase of innovation and growth. During his 35-year technology career, spanning academia, startups, and managing innovation in enterprise environments, he has actively observed and promoted how organizations can realize open innovation success.

Walt Hearn
Senior Vice President, Worldwide Sales and Customer Excellence, Ansys
As an innovative business leader and simulation expert at Ansys, Walt Hearn leads high-performing teams to develop and execute sales strategy, technical excellence, and mentorship across the organization. He prides himself on ensuring customer success and helping organizations achieve top engineering initiatives to change the future of digital transformation.

Here is a quick video on simulation that I think we can all relate to:

I hope to see you there!

Simulation World 2024 is a free global virtual event

Also Read:

2024 Outlook with John Lee, VP and GM Electronics, Semiconductor and Optics Business Unit at Ansys

Unleash the Power: NVIDIA GPUs, Ansys Simulation

Ansys and Intel Foundry Direct 2024: A Quantum Leap in Innovation


Podcast EP216: Q4 2023 is Another Strong Growth Quarter for EDA as Reviewed by Wally Rhines
by Daniel Nenni on 04-08-2024 at 8:00 am

Dan is joined by Dr. Walden Rhines. Wally is a lot of things: CEO of Cornami, board member, advisor to many, and friend to all. In this session, he is the Executive Sponsor of the SEMI Electronic Design Market Data Report.

Wally reviews the Electronic Design Market Data report that was just released for Q4 2023. Growth continues to be strong at 14% overall. EDA and IP revenue ended 2023 at an incredible $17B, completing 20 consecutive quarters of positive growth.

Wally reviews the details of the numbers with Dan, including purchasing dynamics across the sector and small areas of lower performance.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Synopsys Presents AI-Fueled Innovation at SNUG 2024
by Daniel Nenni on 04-08-2024 at 6:00 am

SNUG is the premier event for Synopsys to showcase its technology and impact on the industry. This year’s SNUG did not disappoint. The two-day event packed many fantastic user presentations along with exciting news of innovation from Synopsys. Jensen Huang and Sassine Ghazi even held a live, interactive Q&A session. Compelling content. The tagline for the event was: Our Technology, Your Innovation, setting a tone of collaboration for the future. You can learn more about the big picture for the event here. As you would expect, AI was a prominent and pervasive topic. There was a special session on Day 1 of SNUG for media and analyst attendees that dug into the impact of AI on chip design. I will explore how Synopsys presents AI-fueled innovation at SNUG 2024.

Framing the Discussion

Sanjay Bali gave an insightful presentation about the opportunities AI presents for new approaches to design. Sanjay is Vice President, Strategy and Product Management for the EDA Group at Synopsys. He's been with the company for almost 16 years, so he's seen a lot of change in the industry. Having spent time at Intel, Actel, Mentor, and Magma before joining Synopsys, Sanjay brings a broad view of chip design to the table.

He presented an excellent overview of the opportunities for AI in the EDA workflow. He explained that in the implementation and verification flows, there are many options to consider. Choosing the right technology and architectural options during RTL design, synthesis, design planning, place & route, and ECO closure has a profound impact on the quality of the final result, as well as the time to get there. Using AI-driven optimization allows all the inter-related decisions to be considered, balanced, and optimized with a level of detail and discovery that is difficult for humans to achieve.

He reported that across image sensor, mobile, automotive, and high-performance computing applications, Synopsys AI technology has delivered noteworthy improvements compared to non-AI-assisted processes. From 5nm to 28nm technologies, results such as 12% better area, 25% lower power, 3X increased productivity, and 15% test coverage improvement have been achieved. This is a small subset of the complete list of accomplishments.

And the story doesn’t stop there. Analog design can also benefit from AI, with a 10X overall improvement in turnaround time for analog optimization and a 3X faster turnaround time for analog IP node migration. The complexity and cost of testing advanced designs can also benefit from Synopsys AI technology, with pattern count reductions ranging from 18% to a whopping 70%, depending on the application.

Sanjay also touched on the emerging field of multi-die package design. Here, autonomous full-system exploration can optimize signal integrity, thermal properties and the power network, delivering improved performance and memory efficiency. A 10X boost in productivity with a better quality of result has been achieved.

Big data analytics are also creating new opportunities and revealing new insights. Process and product analytics can reduce defects and increase yields. The opportunities are eye-opening. Sanjay also talked about the application of generative AI to the design process. Junior engineers are able to ramp up as much as 30% faster without depending on an expert. Generally speaking, AI can increase search and analysis speed and deliver superior results, allowing designers to be more productive.

This was an impressive presentation that covered far more topics than I expected. Sanjay presented a graphic that depicts the breadth of the Synopsys AI solution as shown below.

AI-Powered Full-Stack Synopsys EDA Portfolio

The Demonstration

Stelios Diamantidis, Distinguished Architect and Executive Director, Center for Generative AI, provided a demonstration of some of the tools in the Synopsys.ai arsenal. Stelios has been with Synopsys for almost fifteen years and has played a key role in the infusion of AI into the Synopsys product suite.

It’s difficult to capture the full impact of a live demonstration without the use of video. Let me just say that the capabilities Sanjay described are indeed real. Across many scenarios, Stelios showcased advanced AI capabilities in a variety of Synopsys products.

A common theme for a lot of this work is the ability of AI to examine a vast solution space and find the optimal choices for a new design. The technology can deliver results faster and with higher quality than a team of experienced designers. The graphic at the top of this post presents a view of how this process works.

To Learn More

A lot was covered in this session. The breadth and depth of AI in the Synopsys product line is very impressive. You can get more information on this innovative set of capabilities here. And that’s how Synopsys presents AI-fueled innovation at SNUG 2024.


Podcast EP215: A Tour of the GlobalFoundries Silicon Photonics Platform with Vikas Gupta
by Daniel Nenni on 04-05-2024 at 10:00 am

Dan is joined by Vikas Gupta, senior director of product management at GlobalFoundries focused on Silicon Photonics and ancillary technologies. Vikas has close to 30 years of semiconductor experience with TI, Xilinx, AMD, GlobalFoundries, POET Technologies, and back to GlobalFoundries.

Vikas discusses the growing demands of semiconductor design with a focus on compute and AI. He explains the major hurdles that need to be addressed in both processing speed and network bandwidth. Vikas provides details about the unique platform GlobalFoundries has developed and how it provides a scalable, more design-friendly approach to the use of silicon photonics across multiple applications.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Ninad Huilgol of Innergy Systems
by Daniel Nenni on 04-05-2024 at 6:00 am

Ninad Huilgol, Founder and CEO of Innergy Systems, has extensive experience in design verification of ultra low-power mobile SoCs. Previously, he worked in senior engineering management at various semiconductor companies such as Broadcom and Synopsys. He holds multiple power- and design-related patents and trade secrets, and is the recipient of a Synopsys Inventor award.

Tell us about your company.
Innergy Systems was founded in 2017 with a mission to bring power analysis to the software world, make hardware power analysis faster and provide power feedback in the earliest stages of the design flow.

I am Innergy’s founder and CEO and have extensive experience in design verification of ultra low-power mobile SoCs. I worked in senior engineering management at various semiconductor companies including Broadcom and Synopsys.

Years ago, when I was chip verification lead at Broadcom, I came across a problem designers were facing early in the development of a new ultra-low power mobile SoC. They wanted to explore the power impact of some architectural experiments. Typically, models can estimate the performance of a new architecture, but power information could not be added to them. We were at a design conference, and a leading EDA company was asked if there was any way to get power information into performance models. The answer was a categorical no.

This led me to think about forming a startup that could build technology to bring power analysis earlier (shift left) as well as provide results faster. In the process, it led to the invention of high-performance power models capable of providing power analysis from architecture stage all the way to tape out and even beyond.

Today, Innergy counts some big-name Tier-1 chipmakers among its customers. Our models are used in all phases of hardware development, and perhaps more importantly, for developing more power efficient software.

I am a big believer in sustainability. Compute has become power hungry, driven by AI, crypto and other applications. Innergy’s technology will help build more power-efficient hardware and software systems and reduce the carbon footprint. This also saves money for our customers, which is a big bonus.

What problems are you solving?
We solve quite a few problems that exist in current power analysis solutions.

Speed: Today's large and complex designs need power analysis solutions that run simulations quickly, so that system-level power analysis is available in minutes, not hours or days. Traditional power analysis needs gate-level simulations that take a long time to finish and require high-performance compute.

We create power models that take a fraction of the time to run simulations without compromising accuracy. Our proprietary, patented technology intelligently identifies a few critical events to monitor and track power, and we build higher-abstraction models of power consumption. Simulating at a higher abstraction requires fewer resources, which leads to significant performance gains. In RTL, a typical simulation speed-up of 30x-50x has been demonstrated. In emulation and software environments, our results demonstrated a 200x-500x simulation speedup, produced without the need for high-performance compute resources.
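As a conceptual illustration of the event-based approach, the sketch below estimates average power purely from event counts and calibrated per-event energies. The event names and energy values are invented for illustration and are not Innergy's actual model.

```python
# Minimal sketch of an event-based power model: instead of simulating
# every gate, count a few characteristic events and multiply by per-event
# energy costs calibrated against sign-off tools.

EVENT_ENERGY_PJ = {        # assumed per-event energy (picojoules)
    "cache_miss": 120.0,
    "dram_burst": 950.0,
    "alu_op": 1.8,
}

def estimate_power_mw(event_counts, window_us):
    """Average power over a time window from event counts alone."""
    energy_pj = sum(EVENT_ENERGY_PJ[e] * n for e, n in event_counts.items())
    return energy_pj / window_us * 1e-3   # pJ/us -> mW

# Counts as they might come from an RTL sim, emulator, or software run.
counts = {"cache_miss": 4_200, "dram_burst": 310, "alu_op": 9_800_000}
print(f"{estimate_power_mw(counts, window_us=100):.1f} mW")
```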

High speed does not mean less accuracy. Our models have been consistently benchmarked with typical accuracy of 95% or better when compared to sign-off power solutions.

Root cause analysis: Currently, understanding the root cause of a power issue requires multiple iterations of running RTL simulations, power simulations and waveform analysis.

With Innergy Quarx, detailed power analysis reports show which design instances were involved in a power hotspot, along with what those instances were doing and which actions were being performed by those instances. This simplifies the debug process by not requiring multiple iterations of simulation, power analysis and waveform debug.

Ability to run with software: Designers report difficulty estimating the power cost of software features and subroutines. Traditionally, this problem has been solved only by emulation.

Innergy Quarx enables designers to build models that run directly in a software environment by modeling events that exist in both hardware and software environments. This versatility means Quarx models can be used in RTL, emulation and software environments without requiring modification.

Early “what-if” power analysis: Currently, the only way to perform “what-if” power exploration is by building custom models, using a spreadsheet or simple modeling tools that do not have fine-grained power information.

Innergy Quarx can build power models for existing designs (evolutionary) as well as new designs (revolutionary). Even without RTL, it's possible to build architectural-level models with power estimates added. Our models can do power versus performance analysis at an early stage by creating virtual power simulations with different traffic patterns, voltages and frequencies. This enables designers to start realizing value from the earliest stages of their design project through tape out and beyond.
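Conceptually, early what-if exploration rests on scaling relations such as the classic dynamic-power formula P ≈ C·V²·f. The sketch below applies it to a hypothetical baseline design point; the numbers are assumptions for illustration, not Quarx output.

```python
# Sketch of early "what-if" exploration using the dynamic-power relation
# P ~ C * V^2 * f. Baseline numbers and scenarios are assumed.

BASELINE = {"v": 0.80, "f_ghz": 1.0, "p_mw": 500.0}   # assumed design point

def what_if(v, f_ghz):
    """Scale dynamic power from the baseline operating point."""
    scale = (v / BASELINE["v"]) ** 2 * (f_ghz / BASELINE["f_ghz"])
    return BASELINE["p_mw"] * scale

for v, f in [(0.80, 1.2), (0.72, 1.0), (0.90, 1.5)]:
    print(f"V={v:.2f} V, f={f:.1f} GHz -> ~{what_if(v, f):.0f} mW dynamic")
```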

What application areas are your strongest?
There are three:

System-level power simulations: Innergy Quarx can run subsystem or full-chip power simulations in a fraction of the time currently required. We recently benchmarked our tool against a leading power product: Quarx produced results in 26 minutes, where the other tool would have required a few days' worth of simulation. This is over 500x faster, with 97% accuracy relative to the other tool.

We can handle large simulations and provide potentially 100% power coverage. Because traditional power tools are so slow, some designers run only 1-2% of available stimulus to check for power consumption, which means power bugs could be hiding in the 98-99% of unused stimulus. Our solution obviates this problem.

Ability to profile power consumption of software as well as hardware: Thanks to the booming AI market, power-efficient software design is becoming important. In AI applications, hardware cores tend to be simpler in construction, repeated tens or hundreds of times in a processing system. Hardware-based power analysis might not be effective as power savings tend to be smaller. Software running on hardware tends to be more complex with learning, inferencing and training increasing power consumption. In fact, AI is likely to take the top spot as the most power-hungry area, edging out crypto, according to a Stanford University report by Mack DeGeurin published in April 2023.

Quarx can provide detailed power consumption information with models able to run in a software environment without the need for expensive emulation. This closes the loop and enables power-efficient hardware and software design.

What keeps your customers up at night?
In design verification, the fear of undiscovered hardware bugs keeps designers up at night. The analogy holds for power bugs: undiscovered power bugs can cause thermal runaways, creating catastrophic die failures.

Moreover, power and performance are at the top of any semiconductor engineering manager’s mind. Competition for the best power and performance numbers is strong, and I imagine this is another issue that keeps designers burning the midnight oil.

What was the most exciting high point of 2023 for your company?
We had two significant high points in 2023. The first was receiving the TiE50 Investor award from The Indus Entrepreneurs Group (TiE). TiE is one of the largest angel investment organizations in the world, and Innergy Systems was selected as one of the 50 best startups for investment. We are funded by TiE charter members and angels.

An even more exciting high point was getting more paying customers, including Tier-1 companies, further reinforcing our value proposition –– an early hardware/software power analysis platform for SoC designs.

What was the biggest challenge your company faced in 2023?
We managed to survive the downturn in business and the investment climate during 2023. According to some reports, many startups went bust during the third quarter of 2023 due to a tight funding environment.

What do you think the biggest growth area for 2024 will be, and why?
We agree with all the semiconductor industry experts –– AI is a big growth area in 2024. AI is driving innovation at speeds rarely witnessed before. Every expert in this space tells us that things keep changing weekly, as opposed to months or years. This is driving tremendous growth in this area.

AR/VR is also seeing growth.

How is your company’s work addressing this growth?
Innergy provides high-performance, high-accuracy power modeling for both hardware and software power optimization, especially important in AI-based systems.

Hardware in AI tends to be less complicated. For example, a single processing core can be instantiated hundreds of times to form an inferencing engine. Each core is simpler in design compared to a large CPU. Meaningful power savings by hardware optimization might be harder to find. Software running on top is more complex and learning daily. Understanding how power consumption is affected by software behavior is becoming critical.

We offer a practical, out-of-the box solution to provide power models that can run in software environments, closing the loop, and enabling simultaneous power optimization of hardware and software systems.

What does the competitive landscape look like and how do you differentiate?
We see some competition from other power modeling players and some homegrown solutions. Our differentiation is a simple-to-use, out-of-the-box solution that ticks all the boxes: Ease of use, consistent speed and accuracy results, and versatility.

What new features/technology are you working on?
Our next area of focus is adding intelligence to our models using AI.

Additional questions or final comments?
Innergy Systems will emerge from stealth mode over the next several months. Meanwhile, our credentials speak for us. We are highly experienced semiconductor design specialists who are passionate about power and have first-hand experience wrestling with the challenges of large low-power application processors.

To learn more, visit the Innergy Systems website at www.innergysystems.com, email info@innergysystems.com or call (408) 390-1534.

Also Read:

CEO Interview: Ganesh Verma, Founder and Director of MoogleLabs

CEO Interview: Patrick T. Bowen of Neurophos

CEO Interview: Larry Zu of Sarcina Technology