Simulation World 2024 Virtual Event
by Daniel Nenni on 04-08-2024 at 10:00 am

ANSYS Inc Racecar Simulation

ANSYS Simulation World is an annual conference hosted by ANSYS, Inc., a leading provider of engineering simulation software. The event typically brings together engineers, designers, researchers, and industry experts from around the world to discuss the latest advancements, best practices, and case studies in engineering simulation and virtual prototyping.

Simulation World 2024 is a free global virtual event

Attendees have the opportunity to participate in keynote presentations, technical sessions, hands-on workshops, and networking events. The conference covers a wide range of topics, including computational fluid dynamics (CFD), finite element analysis (FEA), electromagnetics simulation, Multiphysics simulation, additive manufacturing, and more.

The event provides a platform for users of ANSYS software to learn new skills, exchange ideas, and explore innovative applications of simulation technology across various industries, such as aerospace, automotive, electronics, energy, healthcare, and consumer goods.

Additionally, ANSYS Simulation World often features keynote speakers from industry-leading companies, showcasing how simulation-driven engineering has helped them solve complex engineering challenges, improve product performance, and accelerate time-to-market.

Overall, ANSYS Simulation World serves as a premier gathering for the simulation community, offering valuable insights, practical knowledge, and networking opportunities to help engineers and designers stay at the forefront of simulation technology.

EVENT TRACKS

Inspire: Automotive and Transportation
Simulation is transforming mobility to address unprecedented challenges and deliver cost-effective, completely differentiated solutions, from safer, more sustainable designs to the complex electronics and embedded software that define them.

Inspire: Aerospace and Defense
The aerospace and defense industries must operate on the cutting edge to deliver advanced capabilities. Digital engineering helps them increase flexibility, update legacy programs, and speed new technology into service.

Inspire: Energy and Industrials
Industries rely on simulation to streamline the production and distribution of safer, cleaner, more reliable energy through fuel-to-power conversions, and to accelerate the scaling of low-carbon energy solutions.

FEATURED SPEAKERS

Dr. Ajei Gopal
President and Chief Executive Officer, Ansys
Ajei Gopal’s idea to drive “pervasive simulation,” or the use of engineering simulation throughout the product life cycle, has transformed the industry. Prior to Ansys, he served in various leadership roles where he demonstrated his ability to simultaneously drive organizational growth and improve operational efficiency.

Dr. Prith Banerjee
Chief Technology Officer, Ansys
Prith Banerjee leads the evolution of Ansys technology and champions the company’s next phase of innovation and growth. During his 35-year technology career, spanning academia, startups, and innovation management in enterprise environments, he has actively observed and promoted how organizations can realize open innovation success.

Walt Hearn
Senior Vice President, Worldwide Sales and Customer Excellence, Ansys
As an innovative business leader and simulation expert at Ansys, Walt Hearn leads high-performing teams to develop and execute sales strategy, technical excellence, and mentorship across the organization. He prides himself on ensuring customer success and helping organizations achieve top engineering initiatives to change the future of digital transformation.

Here is a quick video on simulation that I think we can all relate to:

I hope to see you there!

Simulation World 2024 is a free global virtual event

Also Read:

2024 Outlook with John Lee, VP and GM Electronics, Semiconductor and Optics Business Unit at Ansys

Unleash the Power: NVIDIA GPUs, Ansys Simulation

Ansys and Intel Foundry Direct 2024: A Quantum Leap in Innovation


Podcast EP216: Q4 2023 is Another Strong Growth Quarter for EDA as Reviewed by Wally Rhines
by Daniel Nenni on 04-08-2024 at 8:00 am

Dan is joined by Dr. Walden Rhines. Wally is many things: CEO of Cornami, board member, advisor to many, and friend to all. In this session, he is the Executive Sponsor of the SEMI Electronic Design Market Data Report.

Wally reviews the Electronic Design Market Data report that was just released for Q4 2023. Growth continues to be strong at 14% overall. EDA and IP revenue ended 2023 at an incredible $17B, completing 20 consecutive quarters of positive growth.

Wally reviews the details of the numbers with Dan, including purchasing dynamics across the sector and small areas of lower performance.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Synopsys Presents AI-Fueled Innovation at SNUG 2024
by Daniel Nenni on 04-08-2024 at 6:00 am


SNUG is the premier event for Synopsys to showcase its technology and impact on the industry. This year’s SNUG did not disappoint. The two-day event packed in many fantastic user presentations along with exciting news of innovation from Synopsys. Jensen Huang and Sassine Ghazi even held a live, interactive Q&A session with compelling content. The tagline for the event, Our Technology, Your Innovation, set a tone of collaboration for the future. You can learn more about the big picture for the event here. As you would expect, AI was a prominent and pervasive topic. There was a special session on Day 1 of SNUG for media and analyst attendees that dug into the impact of AI on chip design. I will explore how Synopsys presents AI-fueled innovation at SNUG 2024.

Framing the Discussion
Sanjay Bali

Sanjay Bali gave an insightful presentation about the opportunities AI presents for new approaches to design. Sanjay is Vice President, Strategy and Product Management for the EDA Group at Synopsys. He’s been with the company for almost 16 years, so he’s seen a lot of change in the industry. Having spent time at Intel, Actel, Mentor and Magma before joining Synopsys, Sanjay brings a broad view of chip design to the table.

He presented an excellent overview of the opportunities for AI in the EDA workflow. He explained that in the implementation and verification flows, there are many options to consider. Choosing the right technology and architectural options during RTL design, synthesis, design planning, place & route and ECO closure has a profound impact on the quality of the final result, as well as the time to get there. Using AI-driven optimization allows all the inter-related decisions to be considered, balanced and optimized with a level of detail and discovery that is difficult for humans to achieve.
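To make the idea concrete, here is a minimal sketch of the kind of search loop that AI-driven flow optimization implies: evaluate candidate combinations of flow settings against a combined power/performance/area score and keep the best. It is purely illustrative; the knob names, scoring function, and random search are placeholders of my own, not how the Synopsys.ai products actually work.

```python
import random

# Hypothetical flow knobs an optimizer might tune; the names are illustrative,
# not actual Synopsys tool parameters.
SEARCH_SPACE = {
    "target_clock_ns": [0.8, 0.9, 1.0, 1.1],
    "placement_density": [0.60, 0.70, 0.80],
    "synthesis_effort": ["medium", "high", "ultra"],
    "useful_skew": [True, False],
}

def run_flow(config):
    """Stand-in for a full synthesis/place-and-route run. A real loop would
    launch the EDA flow with 'config' and parse PPA reports; here we return
    dummy numbers so the sketch runs."""
    return {
        "power_mw": random.uniform(50, 120),
        "wns_ps": random.uniform(-40, 20),    # worst negative slack, ps
        "area_um2": random.uniform(1.0e5, 1.4e5),
    }

def score(ppa):
    """Collapse power/performance/area into one number; lower is better."""
    timing_penalty = max(0.0, -ppa["wns_ps"]) * 10.0   # punish timing violations
    return ppa["power_mw"] + ppa["area_um2"] / 2000.0 + timing_penalty

best_cfg, best_score = None, float("inf")
for _ in range(20):                                     # 20 trial flow runs
    cfg = {knob: random.choice(vals) for knob, vals in SEARCH_SPACE.items()}
    s = score(run_flow(cfg))
    if s < best_score:
        best_cfg, best_score = cfg, s

print("best configuration found:", best_cfg, "score:", round(best_score, 1))
```

Production AI optimizers use far more sophisticated learning strategies than random sampling, but the core loop of exploring a joint decision space against a single quality metric is the same.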

He reported that across image sensor, mobile, automotive, and high-performance computing applications, Synopsys AI technology has delivered noteworthy improvements compared to non-AI-assisted processes. From 5nm to 28nm technologies, results such as 12% better area, 25% lower power, 3X increased productivity and 15% test coverage improvement have been achieved. This is a small subset of the complete list of accomplishments.

And the story doesn’t stop there. Analog design can also benefit from AI, with a 10X overall improvement in turnaround time for analog optimization and a 3X faster turnaround time for analog IP node migration. The complexity and cost of testing advanced designs can also benefit from Synopsys AI technology, with pattern count reductions ranging from 18% to a whopping 70%, depending on the application.

Sanjay also touched on the emerging field of multi-die package design. Here, autonomous full-system exploration can optimize signal integrity, thermal properties and the power network, delivering improved performance and memory efficiency. A 10X boost in productivity with a better quality of result has been achieved.

Big data analytics are also creating new opportunities and revealing new insights. Process and product analytics can reduce defects and increase yields. The opportunities are eye-opening. Sanjay also talked about the application of generative AI to the design process. Junior engineers are able to ramp-up as much as 30% faster without depending on an expert. Generally speaking, AI can increase search and analysis speed and deliver superior results, allowing designers to be more productive.

This was an impressive presentation that covered far more topics than I expected. Sanjay presented a graphic that depicts the breadth of the Synopsys AI solution as shown below.

AI Powered Full Stack Synopsys EDA Portfolio
The Demonstration
Stelios Diamantidis

Stelios Diamantidis, Distinguished Architect and Executive Director, Center for Generative AI, provided a demonstration of some of the tools in the Synopsys.ai arsenal. Stelios has been with Synopsys for almost fifteen years and has played a key role in the infusion of AI into the Synopsys product suite.

It’s difficult to capture the full impact of a live demonstration without the use of video. Let me just say that the capabilities Sanjay described are indeed real. Across many scenarios, Stelios showcased advanced AI capabilities in a variety of Synopsys products.

A common theme for a lot of this work is the ability of AI to examine a vast solution space and find the optimal choices for a new design. The technology can deliver results faster and with higher quality than a team of experienced designers. The graphic at the top of this post presents a view of how this process works.

To Learn More

A lot was covered in this session. The breadth and depth of AI in the Synopsys product line is very impressive. You can get more information on this innovative set of capabilities here. And that’s how Synopsys presents AI-fueled innovation at SNUG 2024.


Podcast EP215: A Tour of the GlobalFoundries Silicon Photonics Platform with Vikas Gupta
by Daniel Nenni on 04-05-2024 at 10:00 am

Dan is joined by Vikas Gupta, senior director of product management at GlobalFoundries focused on Silicon Photonics and ancillary technologies. Vikas has close to 30 years of semiconductor experience with TI, Xilinx, AMD, GlobalFoundries, POET Technologies, and back to GlobalFoundries.

Vikas discusses the growing demands of semiconductor design with a focus on compute and AI. He explains the major hurdles that need to be addressed in both processing speed and network bandwidth. Vikas provides details about the unique platform GlobalFoundries has developed and how it provides a scalable, more design-friendly approach to the use of silicon photonics across multiple applications.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Ninad Huilgol of Innergy Systems
by Daniel Nenni on 04-05-2024 at 6:00 am

Ninad Huilgol

Ninad Huilgol, founder and CEO of Innergy Systems, has extensive experience in design verification of ultra-low-power mobile SoCs. Previously, he worked in senior engineering management at various semiconductor companies such as Broadcom and Synopsys. He holds multiple power- and design-related patents and trade secrets and is the recipient of a Synopsys Inventor award.

Tell us about your company.
Innergy Systems was founded in 2017 with a mission to bring power analysis to the software world, make hardware power analysis faster and provide power feedback in the earliest stages of the design flow.

I am Innergy’s founder and CEO and have extensive experience in design verification of ultra low-power mobile SoCs. I worked in senior engineering management at various semiconductor companies including Broadcom and Synopsys.

Years ago, when I was chip verification lead at Broadcom, I came across a problem designers were facing early in the development of a new ultra-low power mobile SoC. They wanted to explore the power impact of some architectural experiments. Typically, models can estimate the performance of a new architecture, but power information could not be added. We were at a design conference. One leading EDA company was asked if there was any way to get power information in performance models. The answer was a categorical no.

This led me to think about forming a startup that could build technology to bring power analysis earlier (shift left) as well as provide results faster. In the process, it led to the invention of high-performance power models capable of providing power analysis from architecture stage all the way to tape out and even beyond.

Today, Innergy counts some big-name Tier-1 chipmakers among its customers. Our models are used in all phases of hardware development, and perhaps more importantly, for developing more power efficient software.

I am a big believer in sustainability. Compute has become power hungry, driven by AI, crypto and other applications. Innergy’s technology will help build more power-efficient hardware and software systems and reduce the carbon footprint. This also saves money for our customers, which is a big bonus.

What problems are you solving?
We solve quite a few problems that exist in current power analysis solutions.

Speed: Today’s large and complex designs need power analysis solutions that run simulations quickly, so that system-level power analysis is available in minutes, not hours or days. Traditional power analysis solutions need to run gate-level simulations that take a long time to finish and require high-performance compute.

We create power models that take a fraction of the time to run simulations without compromising accuracy. We use our proprietary, patented technology that intelligently identifies a few critical events to monitor and track power. We build higher-abstraction models of power consumption. Simulating at higher abstraction requires fewer resources, which leads to significant performance gains. In RTL, a typical simulation speedup of 30x-50x has been demonstrated. In emulation and software environments, our results demonstrated 200x-500x simulation speedups, achieved without the need for high-performance compute resources.
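To illustrate the general idea of a higher-abstraction, event-based power model (not Innergy's actual model format; the event names and per-event energies below are invented), consider a sketch that accumulates per-event energy from a simulation trace instead of toggling every gate:

```python
# Toy event-based power model: instead of simulating every gate toggle,
# accumulate energy for a handful of monitored events. The event names and
# per-event energies are invented for illustration; a real model would be
# calibrated against sign-off power results.
PER_EVENT_ENERGY_PJ = {
    "cache_read": 5.0,
    "cache_write": 8.0,
    "dram_burst": 120.0,
    "core_active_cycle": 2.5,
    "core_idle_cycle": 0.3,
}

def estimate_power_mw(event_counts, window_ns):
    """Average power over a time window from per-event counts (pJ/ns == mW)."""
    total_pj = sum(PER_EVENT_ENERGY_PJ[evt] * n for evt, n in event_counts.items())
    return total_pj / window_ns

# Example: one 10-microsecond window of activity taken from a simulation trace
window = {
    "cache_read": 40_000,
    "cache_write": 12_000,
    "dram_burst": 300,
    "core_active_cycle": 8_000,
    "core_idle_cycle": 2_000,
}
print(f"estimated average power: {estimate_power_mw(window, 10_000):.2f} mW")
```

Counting a few thousand events per window is vastly cheaper than evaluating every net in a gate-level netlist, which is where the speedup of this style of modeling comes from.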

High speed does not mean less accuracy. Our models have been consistently benchmarked with typical accuracy of 95% or better when compared to sign-off power solutions.

Root cause analysis: Currently, understanding the root cause of a power issue requires multiple iterations of running RTL simulations, power simulations and waveform analysis.

With Innergy Quarx, detailed power analysis reports show which design instances were involved in a power hotspot, along with what those instances were doing and which actions they were performing. This simplifies the debug process by not requiring multiple iterations of simulation, power analysis and waveform debug.

Ability to run with software: Designers report difficulty estimating the power cost of software features and subroutines. Traditionally, this problem has been solved only by emulation.

Innergy Quarx enables designers to build models that run directly in a software environment by modeling events that exist in both hardware and software environments. This versatility means Quarx models can be used in RTL, emulation and software environments without requiring modification.

Early “what-if” power analysis: Currently, the only way to perform “what-if” power exploration is by building custom models, using a spreadsheet or simple modeling tools that do not have fine-grained power information.

Innergy Quarx can build power models for existing designs (evolutionary) as well as new designs (revolutionary). Even without RTL, it’s possible to build architectural-level models with estimates of power added. Our models can do power versus performance analysis at an early stage by creating virtual power simulations with different traffic patterns, voltages and frequencies. This enables designers to start realizing value right from the earliest stages of their design project through tape out and beyond.
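As a rough illustration of what an architectural what-if sweep can look like, the sketch below scales a baseline power number across hypothetical voltage and frequency operating points using the classic CMOS dynamic-power relation. The baseline numbers and scaling assumptions are mine for illustration, not Innergy's methodology:

```python
# What-if sweep over hypothetical operating points. Dynamic power is scaled by
# the classic alpha*C*V^2*f relation and leakage roughly linearly with voltage.
# Baseline numbers are invented for illustration.
BASE = {"voltage_v": 0.80, "freq_ghz": 1.0, "dyn_mw": 40.0, "leak_mw": 5.0}

def estimate_point(voltage_v, freq_ghz, activity=1.0):
    v_scale = (voltage_v / BASE["voltage_v"]) ** 2
    f_scale = freq_ghz / BASE["freq_ghz"]
    dyn = BASE["dyn_mw"] * v_scale * f_scale * activity   # dynamic ~ a*C*V^2*f
    leak = BASE["leak_mw"] * (voltage_v / BASE["voltage_v"])
    return dyn + leak

for v in (0.65, 0.80, 0.90):
    for f in (0.8, 1.0, 1.2):
        print(f"V={v:.2f} V  f={f:.1f} GHz  est. power = {estimate_point(v, f):5.1f} mW")
```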

What application areas are your strongest?
There are three:

System-level power simulations: Innergy Quarx can run subsystem or full-chip power simulations in a fraction of the time currently required. We recently benchmarked our tool against a leading power product. Quarx produced results in 26 minutes. The other tool would have required a few days’ worth of simulation. This is over 500x faster, with 97% accuracy compared to the other tool.

We can handle large simulations and provide potentially 100% power coverage. Because traditional power tools are so slow, some designers run only 1-2% of available stimulus to check power consumption, which means power bugs could be hiding in the 98-99% of stimulus that goes unused. Our solution obviates this problem.

Ability to profile power consumption of software as well as hardware: Thanks to the booming AI market, power-efficient software design is becoming important. In AI applications, hardware cores tend to be simpler in construction, repeated tens or hundreds of times in a processing system. Hardware-based power analysis might not be effective as power savings tend to be smaller. Software running on hardware tends to be more complex with learning, inferencing and training increasing power consumption. In fact, AI is likely to take the top spot as the most power-hungry area, edging out crypto, according to a Stanford University report by Mack DeGeurin published in April 2023.

Quarx can provide detailed power consumption information with models able to run in a software environment without the need for expensive emulation. This closes the loop and enables power-efficient hardware and software design.

What keeps your customers up at night?
In design verification, the fear of undiscovered hardware bugs keeps designers up at night. The same is true of power bugs. Undiscovered power bugs can cause thermal runaways that create catastrophic die failures.

Moreover, power and performance are at the top of any semiconductor engineering manager’s mind. Competition for the best power and performance numbers is strong, and I imagine this is another issue that keeps designers burning the midnight oil.

What was the most exciting high point of 2023 for your company?
We had two significant high points in 2023. The first was receiving the TiE50 Investor award from The Indus Entrepreneurs Group (TiE). TiE is one of the largest angel investment organizations in the world, and Innergy Systems was selected as one of the 50 best startups for investment. We are funded by TiE charter members and angels.

An even more exciting high point was getting more paying customers, including Tier-1 companies, further reinforcing our value proposition –– an early hardware/software power analysis platform for SoC designs.

What was the biggest challenge your company faced in 2023?
We managed to survive the downturn in business and the investment climate during 2023. According to some reports, many startups went bust during the third quarter of 2023 due to a tight funding environment.

What do you think the biggest growth area for 2024 will be, and why?
We agree with all the semiconductor industry experts –– AI is a big growth area in 2024. AI is driving innovation at speeds rarely witnessed before. Every expert in this space tells us that things keep changing weekly, as opposed to months or years. This is driving tremendous growth in this area.

AR/VR is also seeing growth.

How is your company’s work addressing this growth?
Innergy provides high-performance, high-accuracy power modeling for both hardware and software power optimization, especially important in AI-based systems.

Hardware in AI tends to be less complicated. For example, a single processing core can be instantiated hundreds of times to form an inferencing engine. Each core is simpler in design compared to a large CPU. Meaningful power savings by hardware optimization might be harder to find. Software running on top is more complex and learning daily. Understanding how power consumption is affected by software behavior is becoming critical.

We offer a practical, out-of-the box solution to provide power models that can run in software environments, closing the loop, and enabling simultaneous power optimization of hardware and software systems.

What does the competitive landscape look like and how do you differentiate?
We see some competition from other power modeling players and some homegrown solutions. Our differentiation is a simple-to-use, out-of-the-box solution that ticks all the boxes: Ease of use, consistent speed and accuracy results, and versatility.

What new features/technology are you working on?
Our next area of focus is adding intelligence to our models using AI.

Additional questions or final comments?
Innergy Systems will emerge from stealth mode over the next several months. Meanwhile, our credentials speak for us. We are highly experienced semiconductor design specialists who are passionate about power and have first-hand experience wrestling with the challenges of large low-power application processors.

To learn more, visit the Innergy Systems website at www.innergysystems.com, email info@innergysystems.com or call (408) 390-1534.

Also Read:

CEO Interview: Ganesh Verma, Founder and Director of MoogleLabs

CEO Interview: Patrick T. Bowen of Neurophos

CEO Interview: Larry Zu of Sarcina Technology


sureCore Brings 16nm FinFET to Mainstream Use With a New Memory Compiler
by Mike Gianfagna on 04-04-2024 at 10:00 am


Semiconductor processes can have a rather long and interesting life cycle. At first, a new process defines the leading edge. This is cost-no-object territory, where performance is king. The process is new, the equipment to make it is expensive, and its use is reserved for those that have a market (and budget) big enough to justify it. As the technology matures, yields go up, costs go down, access becomes easier, and more mainstream applications start to migrate toward the node as newer technologies take the leading-edge spot. I clearly remember when 16nm FinFET was that leading-edge, much-sought-after technology. That is now the past, and 16nm FinFET is finding application in mainstream products, but there is a catch. As I mentioned, 16nm FinFET was all about speed, often at the expense of power. But mainstream applications can be very power sensitive. The good news is that sureCore is fixing the problem. Read on to see how sureCore brings 16nm FinFET to mainstream use with a new memory compiler.

The 16nm Problem

Applications such as wearables, IoT and medical devices can be good matches for what 16nm FinFET has to offer. The combination of performance, density and yield offered by the technology can be quite appealing. Also, cutting operating voltage generates substantial power savings while still delivering the needed performance. The technology has been in production for over ten years. This means the process is quite stable and yields are high. The fabs involved are largely depreciated as well. All this brings the cost of 16nm FinFET in reach for lower cost, power sensitive devices.
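The reason voltage scaling pays off so strongly is the textbook first-order relation for CMOS dynamic power (a general approximation, not sureCore-specific data):

$$P_{dyn} \approx \alpha \, C \, V_{DD}^{2} \, f$$

Because supply voltage enters quadratically, dropping $V_{DD}$ from, say, 0.8 V to 0.65 V at the same frequency cuts dynamic power to roughly $(0.65/0.8)^2 \approx 66\%$ of its original value (illustrative numbers), before any additional savings from reduced switching activity.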

Can low-power applications currently implemented in 28nm or 22nm bulk or FDSOI cut ASPs and deliver better features and power with 16nm? Things seem to line up well, except for the key point made earlier: 16nm FinFET was focused on performance first, so much of the IP available for the node is built with performance in mind. Power optimization was not central to its design, so there is a mismatch with newer, power-sensitive applications.

The sureCore Solution

sureCore is a company that tends to change the rules and open new markets. A recent post discussed how the company is enabling AI with low power memory IP. sureCore is even working with Agile Analog to enable quantum computing. So, opening 16nm FinFET to a broad range of applications is certainly in the sureCore wheelhouse.

Recently, the company announced the availability of its PowerMiser ultra-low dynamic power memory compiler in 16nm FinFET. This effectively opens new opportunities for application of the technology by allowing demanding power budgets to be met.

Paul Wells, sureCore’s CEO explains the details quite well:

“FinFET was developed to address the increasingly poor leakage characteristics of bulk nodes. In addition, the key driver for the mobile sector was ever greater performance to deliver new features and a better user experience. The power consumption was not deemed a significant issue, as both the radio and the display were the dominant factors in battery life determination. This, in addition to the relatively large form factor of a mobile phone, meant that the batteries had capacities in excess of 3,000-4,000mAh.”

He went on to highlight sureCore’s strategy:

“However, designers of power sensitive applications such as wearables and medical devices with much more constrained form factors and hence smaller batteries need a range of power optimised IP that can exploit the power advantages of FinFET whilst being much less concerned about performance. This has meant a demand for memory solutions that are specifically tailored to deliver much reduced power consumption. By providing the PowerMiser SRAM IP, sureCore is enabling the shift to mature FinFET processes for low power applications and is thus helping to provide clear differentiation for such products based on both cost and battery life. By doing so, the all-important competitive advantage over rivals may be realised.”

You can read the full text of the press release here. And that’s how sureCore brings 16nm FinFET to mainstream use with a new memory compiler.


Are Agentic Workflows the Next Big Thing in AI?
by Bernard Murphy on 04-04-2024 at 6:00 am


AI continues to be a fast-moving space and we’re always looking for the next big thing. There’s a lot of buzz now around something called agentic workflows: an ugly name but a good idea. LLMs had a good run as the state-of-the-AI-art; however, evidence is building that the foundation models behind LLMs alone have limitations, both theoretical and practical. Simply building bigger and bigger models (over a trillion parameters last time I looked) may not deliver any breakthroughs beyond excess cost and power consumption. We need new ideas, and agentic workflows might be an answer.

Image courtesy Mike McKenzie

Limits on transformers/LLMs

First, I should acknowledge a Quanta article that started me down this path. A recent paper looked at theoretical limits on transformers based on complexity analysis. The default use model starts with a prompt to the LLM, which should then return the result you want. Viewing the transformer as a compute machine, the authors prove that the range of problems that can be addressed is quite limited for these or any comparable model architectures.

A later paper generalizes their work to consider chain of thought architectures, in which reasoning proceeds in a chain of steps. The prompt suggests breaking the task down into a series of simpler intermediate goals which are demonstrated in “show your work” results. The authors prove complexity limits increase slightly with a slowly growing number of steps (with respect to the prompt size), more quickly with linear growth in steps, and faster still with polynomial growth. In the last of these cases they prove the class of problems that can be solved is exactly those solvable in polynomial time.

Complexity-based proofs might seem too abstract to be important. After all, the travelling salesman problem is known to be NP-hard, yet chip design routinely depends on heuristic solutions to such problems and works very well. However, limitations in practical applications of LLMs to math reasoning (see my earlier blog) hint that these theoretical analyses may not be too far off-base. Accuracy certainly grows with more intermediate steps in real chain of thought analyses. Time complexity in running multiple steps also grows, and per the theory will grow at corresponding rates. This suggests that while higher accuracy may be possible, the price is likely to be longer run times.

Agentic flows

The name derives from use of “agents” in a flow. There’s a nice description of the concepts in a YouTube video by Andrew Ng who contrasts the one-shot LLM approach (you provide a prompt, it provides an answer in one shot) with the Agentic approach which looks more like the way a human would approach a task. Develop a plan of attack, do some research, write a first pass, consider what areas might need to be improved (perhaps even have another expert review the draft), iterate until satisfied.

Agentic flows in my understanding provide a framework to generalize chain of thought reasoning. At a first level, following the Andrew Ng video, in a prompt you might ask a coder agent LLM to write a piece of code (step 1), and in the same prompt ask it to review the code it generated for possible errors (step 2). If it finds errors, it can refine the code and you could imagine this process continuing through multiple stages of self-refinement. A next step would be to use a second agent to test the code against some series of tests it might generate based on a specification. Together these steps are called “Reflection” for obvious reasons.
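For readers who want to see the shape of such a Reflection loop, here is a minimal sketch in Python. The llm() function is a placeholder for whatever chat-completion API you use (an assumption on my part, not a specific product), and the prompts are purely illustrative:

```python
# Minimal reflection loop: a "coder" pass writes code, a "reviewer" pass
# critiques it, and the coder revises until the reviewer is satisfied or we
# hit an iteration cap. llm() is a placeholder for any chat-completion call.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def reflect_and_refine(task: str, max_rounds: int = 3) -> str:
    draft = llm(f"Write Python code for this task:\n{task}")
    for _ in range(max_rounds):
        review = llm(
            "Review the following code for bugs, missing edge cases, and style "
            f"problems. Reply APPROVED if it is acceptable.\n\nTask: {task}\n\nCode:\n{draft}"
        )
        if "APPROVED" in review:
            break
        draft = llm(
            f"Revise the code to address this review.\n\nReview:\n{review}\n\nCode:\n{draft}"
        )
    return draft
```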

There are additional components in the flow that Andrew suggests: Tool Use, Planning, and Multi-Agent Collaboration. However, the Reflection part is most interesting to me.

What does an Agentic flow buy you?

Agentic flows do not fix the time complexity problem; instead, they suggest an architecture concept for extending accuracy for complex problems through a system of collaborating agents. You could imagine this being very flexible and there are some compelling demonstrations. At the same time, Andrew notes we will have to think of agentic workflows taking minutes or even hours to return a useful result.

A suggestion

I see long run times as an interesting human engineering challenge. We’re OK waiting seconds to get an OK result (like a web search). Waiting possibly hours for anything less than a very good result would be a tough sell.

I get that VCs and the ventures they fund are aiming for moonshots – artificial general intelligence (AGI) as the only thing that might attract enough attention in a white-hot AI market. I wish them well, especially in the intermediate discoveries they make along the way. The big goal I suspect is still a long way off.

However, the agentic concept might deliver practical and near-term value if we are prepared to allow expert human agents in the flow. Let the LLM do the hard work to get to a nearby goal, and perhaps suggest a few alternatives for paths it might follow next. This should take minutes at most. An expert human agent then directs the LLM to follow one of those paths. Repeat as necessary.

I’m thinking particularly of verification debug. In the Innovation in Verification series we’ve covered a few research papers on fault localization. All useful but still challenged to accurately locate a root cause. An agentic workflow alternating between an LLM and an expert human agent might help push accurate localization further and it could progress as quickly as the expert could decide between alternatives.
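A minimal sketch of what that alternation could look like follows. Again, llm() is a placeholder for any LLM API and the prompts are illustrative; the point is simply that the human selects among machine-generated hypotheses at each step:

```python
# Human-in-the-loop debug sketch: the LLM proposes candidate root-cause
# hypotheses, a human expert picks one, and the loop continues with the
# expert's choice folded back into the context. llm() is a placeholder for
# any chat-completion API; the prompts are illustrative.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def guided_debug(failure_log: str, max_rounds: int = 5) -> str:
    context = failure_log
    for _ in range(max_rounds):
        proposals = llm(
            "Given this verification failure context, list three candidate "
            "root-cause hypotheses and the signal or module to inspect for each:\n"
            + context
        )
        print(proposals)
        choice = input("Pick a hypothesis to pursue (or type 'done'): ")
        if choice.strip().lower() == "done":
            break
        findings = llm(f"Investigate hypothesis '{choice}' given:\n{context}")
        context += f"\nExpert selected: {choice}\nFindings:\n{findings}"
    return context
```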

Any thoughts?


Navigating the Complexities of Software Asset Management in Modern Enterprises
by Kalar Rajendiran on 04-03-2024 at 10:00 am

Engineering Environment

In today’s digital age, software has become the backbone of modern enterprises, powering critical operations, driving innovation, and facilitating collaboration. However, with the proliferation of software applications and the complexity of licensing models, organizations are facing significant challenges in managing their software assets effectively.

Altair has published a comprehensive guide that explores the nuances of Software Asset Management (SAM). It uncovers the strategies, challenges, and solutions for optimizing software usage, addresses cost reduction, and helps ensure compliance in modern enterprises, with a particular focus on the acute challenges in Computer-Aided Design (CAD), Computer-Aided Engineering (CAE), and Electronic Design Automation (EDA) environments. You can access the entire guide here. Following is a synopsis of the importance of SAM, the challenges faced, and best practices for successful SAM initiatives.

Understanding Software Asset Management

Software Asset Management (SAM) encompasses the processes, policies, and tools used by organizations to manage and optimize their software assets throughout the software lifecycle. From procurement and deployment to usage tracking and retirement, SAM aims to maximize the value of software investments while minimizing risks and costs associated with non-compliance and underutilization.

The Growing Importance of SAM

Enterprise software spending is on the rise, driven by the increasing reliance on digital technologies for business operations and innovation. According to industry reports, enterprise software spending is expected to reach unprecedented levels in the coming years, highlighting the critical importance of effective SAM practices. In today’s competitive landscape, organizations cannot afford to overlook the strategic value of software asset management in driving efficiency, reducing costs, and mitigating risks.

Challenges in Software Asset Management

While SAM presents challenges across various industries, CAD, CAE and EDA environments face unique hurdles due to the specialized nature of their software tools and computing requirements.

These environments rely on specialized software tools tailored for engineering design, simulation, and analysis. These tools often come with complex licensing models and require high levels of expertise for effective management. Engineering simulations and analyses often require significant computational resources, including high-performance computing (HPC) clusters and specialized hardware accelerators. Managing software licenses across distributed computing environments adds another layer of complexity to SAM efforts. In addition, CAD and CAE environments deal with multidisciplinary engineering problems, involving various software tools and domains. Coordinating software usage and licenses across different engineering teams with diverse requirements poses a significant challenge for SAM initiatives.

Best Practices for Successful SAM Initiatives

To overcome these challenges and maximize the benefits, organizations can adopt the following best practices.

  • Establish clear goals and objectives aligned with business objectives and engineering requirements to guide SAM initiatives effectively.
  • Gain support from senior leadership to prioritize SAM efforts, allocate resources, and overcome organizational barriers.
  • Foster collaboration between IT, engineering, procurement, and finance teams to ensure alignment of SAM efforts with business needs and technical requirements.
  • Choose SAM tools specifically designed for CAD and CAE environments, capable of managing specialized software licenses and integrating with HPC workload management systems.

Partnering with Altair

Altair, a leading provider of engineering and HPC solutions, offers specialized SAM solutions tailored for CAD, CAE and EDA environments. With solutions like Altair® Software Asset Optimization (SAO) and Altair® Monitor™, organizations can leverage advanced analytics, predictive modeling, and visualization tools to optimize their software assets effectively. Altair’s industry expertise, proven track record, and commitment to innovation make it a trusted partner for organizations looking to streamline their SAM initiatives in CAD and CAE environments.

Summary

Software Asset Management (SAM) plays a crucial role in optimizing software usage, reducing costs, and ensuring compliance in modern enterprises, especially in engineering and design environments. By understanding the unique challenges and adopting best practices tailored for these environments, organizations can navigate the complexities of SAM with confidence and achieve success in their software asset optimization endeavors. With Altair as a trusted partner, organizations can unlock significant value, enhance productivity, and drive sustainable growth in the digital age.

To access the eGuide, please visit Make the Most of Software License Spending.

Also Read:

2024 Outlook with Jim Cantele of Altair

Altair’s Jim Cantele Predicts the Future of Chip Design

How to Enable High-Performance VLSI Engineering Environments


yieldHUB Improves Semiconductor Product Quality for All
by Mike Gianfagna on 04-03-2024 at 6:00 am


We all know that building advanced semiconductors is a team sport. Many design parameters and processes must come together in a predictable, accurate and well-orchestrated way to achieve success. The players are diverse and cover the globe. Assembling all the information required to optimize the project in one place, with the right level of analysis and insight, is a particularly vexing problem. My first job out of college was building such a system for internal use at the RCA Solid State Division (RIP). If you happen to have a copy of the 15th Design Automation Conference Proceedings handy, thumb to page 117 to check out my early efforts. That was back in the dawn of time when infrastructure was thin, and automation was non-existent. This early work gave me an appreciation for the tools that followed over the decades. Today, optimizing designs is still difficult, but there are some excellent systems that cut the problem down to size. Let’s examine how yieldHUB improves semiconductor product quality for all.

What’s The Problem?

Let me begin by framing the problem. It’s well known that many companies all over the world are involved in the design and manufacture of advanced chips. Design, verification, wafer fab, packaging, final test, qualification, and in-system validation and bring-up are all highly complex processes that involve many tools and companies around the world.

Ensuring harmonious operation among all these entities requires, first and foremost, visibility into the data produced by each step. Here is the first problem: all sources of data have a particular format and access method. It would be nice if they were all the same, but they are not. So, there is a many-to-many challenge to assemble all the data needed in one place that is reliable and accurate. This is the single source of truth that is the holy grail for many online systems.

Once this is achieved, the next problem is what to do with all the data. What types of analyses are needed to turn data into useful information? The answer to that question depends on what you’re trying to monitor, debug or optimize. Many, many ways of looking at the data to find the needed insight must be supported. And we’re talking about massive amounts of information, making the whole process very challenging.

At the end of the day, product management teams need accessibility (all data in one place), analysis (a holistic view of everything important), coordination (everyone needs to be using the same information around the world), insight (the ability to spot the right trends), and support (to help use what’s available and quickly add something new when it’s needed).

Having been part of an internal team trying to solve this problem, I can tell you it’s too big for any internal team to address adequately. The cost of doing it right is too high for any one company.

An Elegant Solution

For almost twenty years, yieldHUB has been helping people working on yield improvement to enjoy their jobs and be more efficient. The company’s yield management platform and support organization have a worldwide footprint, both on-premise and in the cloud. Its data model is unique and hugely scalable, allowing customers to analyze data without having to download it first. The result is the ability to analyze hundreds of wafers worth of data in seconds. This is game-changing technology.

Kevin Robinson

I recently had the opportunity to get a live tour of some of yieldHUB’s capabilities with Kevin Robinson, yieldHUB’s VP of operations. Based in the UK, Kevin has been building capabilities at yieldHUB to help customers conquer yield challenges for over ten years.

Kevin began by describing the breadth and depth of yieldHUB’s central data server system. Data is automatically consolidated from the foundry (WAT, wafer test, final test as well as in-line data from the fab line), PCB data, and module data for actual products shipping to end customers. This is supplemented with genealogy data and manufacturing execution/ERP data to create a complete view of the enterprise.

Using thin client technology, secure access to all this information for targeted analysis is possible either behind the firewall or in the cloud. A knowledge base allows information and comments to be attached to any part of the process and shared efficiently. For example, lot-level or product-level information in the knowledge base is automatically seen by all those working in that area of the enterprise. This saves a lot of time and dramatically improves collaboration.

Kevin then examined yield data for a half million mixed-signal parts with embedded memory. This represents about 100GB of data. Each part is uniquely identified in the system, so there are many ways to explore the information. Kevin was able to quickly display this data to begin to identify possible trends. One example is shown below, where color-coded bar charts present failure mode distributions for various lots.

Failure mode distribution
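For a concrete feel for the kind of tabulation behind such a chart, here is an illustrative pandas sketch. The column names and toy data are hypothetical and are not yieldHUB's data model or API; a real system would consolidate millions of rows from ATE/STDF sources automatically:

```python
import pandas as pd

# Hypothetical flattened test results, one row per tested die. Column names and
# values are invented; a yield system would consolidate this from ATE/STDF data.
df = pd.DataFrame({
    "lot_id":    ["L01", "L01", "L01", "L02", "L02", "L02", "L02"],
    "bin":       ["pass", "fail", "fail", "pass", "fail", "fail", "fail"],
    "fail_mode": [None, "vdd_leakage", "mem_bist", None, "mem_bist", "mem_bist", "io_cont"],
})

# Yield per lot
yield_by_lot = df.groupby("lot_id")["bin"].apply(lambda b: (b == "pass").mean())
print(yield_by_lot)

# Failure-mode distribution per lot: the data behind a color-coded stacked bar chart
fail_dist = (
    df[df["bin"] == "fail"]
    .groupby(["lot_id", "fail_mode"])
    .size()
    .unstack(fill_value=0)
)
print(fail_dist)
```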

Kevin then began to drill down into this data with many views, generated in real-time. In the interest of time, I’ll show one example – analyzing the lowest yielding lot. Focusing on the highest failing test for that lot, he examined the behavior across test sites. That created a rich view of data relationships as shown in the figure below.

Test site behavior
Wafer map view

The data can also be displayed on a per-lot, per wafer basis to create wafer maps. An example is shown on the right. Kevin provided many more examples of how to explore and analyze this data and other large datasets.

From personal experience, I can testify that the easy-to-use analysis capabilities of this system hide a massive amount of implementation detail. Accurately acquiring data from so many different sources and making the resultant massive data sets accessible and easy to visualize and analyze is no small feat.

The diagram below summarizes the big picture view of yieldHUB and its impact on the enterprise.

YieldHUB functions and impact

To Learn More

If you care about yield and product quality, you need a system like this. If you’re considering building it yourself, let me emphatically suggest you take a different approach. It will take a lot longer than you think to implement, and the ever-changing supply chain dynamics, equipment profiles and analysis requirements will quickly consume way too much time and effort.

I’ve provided a small window into a very detailed demonstration. The good news is that you can set up your own tour of yieldHUB here. You can also view a recent webinar on collaboration and yield improvement here. And that’s the best way to learn how yieldHUB improves semiconductor product quality for all.


Scaling Data Center Infrastructure for the Terabit Era
by Kalar Rajendiran on 04-02-2024 at 10:00 am

Scaling Data Center Infrastructure for the Terabit Era Panel

Earlier this month, SemiWiki wrote about Synopsys’s complete 1.6T Ethernet IP solution to drive AI and Hyperscale Data Center chips. A technology’s success is all about when, where and how it gets adopted within the ecosystem. In the high-speed ethernet ecosystem, the swift adoption of 1.6T Ethernet relies on key roles and coordinated actions. Technology developers, such as semiconductor companies and IP providers, drive innovation in ethernet technologies, while standardization bodies like IEEE set crucial standards for interoperability. Collaboration among industry players ensures seamless integration of components, and interoperability testing validates compatibility. Infrastructure upgrades are essential to support higher speeds, requiring investments in hardware and networking components.

IEEE hasn’t yet ratified a standard based on 224G SerDes for 1.6 Terabit Ethernet. High-speed ethernet is more than just a SerDes, a PCS, and a PMA spec. There are a lot of different pieces to ratifying an ethernet standard. Will 1.6T Ethernet get standardized soon? How will the standard get rolled out into the industry? How does the technology evolve to handle latency requirements, deliver the massive throughput demands, and still keep power at manageable levels? How do SoC designers prepare to support 1.6T ethernet? And what will data center technology look like ten years from now?

The above are the questions that a thought leadership webinar sponsored by Synopsys explored.

The session was hosted by Karl Freund, founder and principal analyst at Cambrian-AI Research. The panelists included John Swanson, HPC IP product line manager, Synopsys; Kent Lusted, principal engineer Ethernet PHY standards advisor, Intel; Steve Alleston, director of business development, OpenLight Photonics; and John Calvin, senior wireline IP planner, Keysight Technologies. Those involved in planning for, implementing, and supporting high-speed ethernet solutions will find the webinar very informative.

The following is a synthesis of the key points from the webinar.

At the heart of the matter lies the standardization process. While IEEE has yet to ratify a standard based on 224G SerDes for 1.6T Ethernet, the urgency for adoption is palpable. With the rise of artificial intelligence (AI) and machine learning (ML) applications driving the demand for enhanced data processing capabilities, the industry cannot afford to wait. The first wave of adoption is expected to emanate from data centers housing AI processors, where the need for massive data training and learning capabilities is paramount. Subsequently, switch providers like Broadcom and Marvell are poised to facilitate the second wave of adoption by furnishing the infrastructure necessary to support the burgeoning demands.

Amidst this backdrop, standardization bodies such as IEEE play a pivotal role in shaping the future of ethernet technology. IEEE P802.3dj draft specifications are instrumental in defining the parameters for 1.6T Ethernet, encompassing a myriad of physical layer types ranging from backplanes to single-mode fiber optics. However, the ecosystem of ethernet extends beyond IEEE, encompassing various industry bodies that develop specifications for different applications such as InfiniBand and Fibre Channel. Collaboration among these entities is imperative to ensure a robust ecosystem that meets the diverse needs of end-users and operators.

The proliferation of AI and ML applications has accelerated the pace of standardization efforts, compelling bodies like IEEE and OIF to expedite the development of specifications. While the quality of standards necessitates thorough review, the industry’s pressing needs mandate a balance between quality and expediency. This urgency is underscored by the advent of captive interfaces, where AI players who own both ends of the network are forging ahead with proprietary solutions to meet their immediate requirements, necessitating subsequent convergence with industry standards.

As part of this technology evolution, power efficiency emerges as a paramount concern. As the transition to 1.6T Ethernet entails a doubling of power consumption, innovative solutions are imperative to mitigate energy demands. Strategies such as co-packaged optics and silicon photonics hold promise in reducing power consumption while optimizing performance. However, achieving optimal solutions necessitates exploring a landscape of competing architectures and approaches.

As industry players gear up for the advent of 1.6T Ethernet, the role of system-on-chip (SoC) designers becomes pivotal. Despite facing monumental challenges, early access to IP facilitates progress amidst evolving standards. Moreover, power efficiency emerges as a cornerstone of data center evolution, with advancements in signaling efficiency poised to redefine the power landscape. As the march towards 1.6T Ethernet continues, collaboration, innovation, and a keen focus on efficiency will pave the way for a new era of connectivity.

Looking ten years ahead, the data center promises a paradigm shift toward optical connectivity and enhanced power efficiency. With optics poised to play an increasingly central role, the industry must adapt to a landscape where latency and power consumption are paramount concerns.

Also Read:

TSMC and Synopsys Bring Breakthrough NVIDIA Computational Lithography Platform to Production

Synopsys SNUG Silicon Valley Conference 2024: Powering Innovation in the Era of Pervasive Intelligence

2024 DVCon US Panel: Overcoming the challenges of multi-die systems verification