LIVE WEBINAR: Automating the Integration Workflow with IP Centric Design
by Daniel Nenni on 04-09-2024 at 10:00 am

Subsystem and full-chip integration plays a crucial role in any project – particularly for large SoCs. Our upcoming webinar on April 30 confronts the typical challenges of this process and provides a detailed view into how IP centric design can help you solve them. Join us to learn how transforming your design flow can help your team reliably meet integration milestones, quickly debug issues, and enhance work quality and transparency.

In today’s landscape, the ongoing challenges of integrating design blocks into SoCs are clear. Teams are often working with a globally distributed workforce, overwhelmingly complex design data, and a lack of expertise on design blocks that were not developed locally.

As teams become more geographically dispersed, integration is complicated by the inclusion of more externally sourced and reused IPs, multiple design sites, and the difficulties of working across time zones. At the same time, the size, volume, and complexity of design files have also increased. This high volume of larger, more complex files can strain existing infrastructure and processes, causing delays and confusion. Lastly, a lack of local expertise on design blocks created by geographically distant teams makes it harder to address integration problems as they arise – leading to longer lead times, inefficient, spreadsheet-based debugging, and repeatedly missed integration milestones.

These complex, interrelated pain points introduce the need for a new and innovative approach. This is where an IP centric design methodology steps in. An IP, also referred to as a design block or module, is an abstraction of data files that defines an implementation, along with the metadata that defines its state. In IP centric design, each element of the design – from internally reused and externally acquired IPs, to the design environment and the whole platform – is modeled as an IP. This allows the entire project and all related metadata to be modeled as a complete, hierarchical collection of IPs, including all versions and dependencies.

By leveraging an IP centric methodology, along with the use of “IP Aliases” and quality-based integration rules, teams can establish a streamlined, controlled, and transparent integration flow. This automated flow will enable teams to reliably meet key integration milestones, more easily debug integration issues, and improve overall quality.

We have some expert tips and essential best practice guidelines for creating, enforcing, and maintaining an IP centric design flow. The Automating the Integration Workflow presentation will give a more in-depth look at what elements are necessary to establish an effective IP centric model, including annotation with rich metadata and a versioned, hierarchical Bill of Materials (BoM) that all team members can reference. We’ll also dive into how to support and hone your integration flow over time, giving examples of common governance rules your team can implement, as well as tips for how to consistently enforce them.
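
To make the model concrete, here is a minimal sketch, in Python, of what a versioned, hierarchical BoM with a quality-based integration rule might look like. The class, field, and quality-level names are hypothetical illustrations, not the API of any particular IP lifecycle management tool:

```python
from dataclasses import dataclass, field

@dataclass
class IP:
    """One design block: an abstraction of its data files plus the metadata
    that defines its state (illustrative fields only)."""
    name: str
    version: str
    quality: str                       # e.g. "draft", "verified", "signoff"
    dependencies: list["IP"] = field(default_factory=list)

    def bom(self, indent: int = 0) -> str:
        """Render the hierarchical Bill of Materials, versions included."""
        lines = [f"{'  ' * indent}{self.name}@{self.version} [{self.quality}]"]
        lines += [dep.bom(indent + 1) for dep in self.dependencies]
        return "\n".join(lines)

    def ready_for_integration(self, minimum: str = "verified") -> bool:
        """A quality-based integration rule: every IP in the hierarchy must
        meet the minimum quality level before the parent can pull it in."""
        ranks = {"draft": 0, "verified": 1, "signoff": 2}
        return (ranks[self.quality] >= ranks[minimum]
                and all(d.ready_for_integration(minimum) for d in self.dependencies))

# A toy SoC: the top level is itself an IP that aggregates its children.
uart = IP("uart", "2.1.0", "signoff")
ddr = IP("ddr_phy", "1.4.2", "verified")
soc = IP("soc_top", "0.9.0", "verified", [uart, ddr])

print(soc.bom())
print("Ready to integrate:", soc.ready_for_integration())
```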

Getting your team on board with this transformation and new approach can have a dramatic effect on your collaboration, productivity, go-to-market timeline, and quality. It will greatly reduce the SoC integration challenges your team struggles with, plus set a minimum quality requirement across all IP and provide an at-a-glance view into IP status and which blocks are ready for integration. And finally, establishing a secure and traceable IP centric design flow can make future IP reuse and integration easier.

Join us for our upcoming webinar, where we’ll walk through each step of the integration process through the lens of IP centric design. Register now to learn how to boost both efficiency and quality through a streamlined integration flow!

View Video

Also Read:

2024 Outlook with Adam Olson of Perforce

The Transformation Model for IP-Centric Design

Chiplets and IP and the Trust Problem


The Data Crisis is Unfolding – Are We Ready?
by Kalar Rajendiran on 04-09-2024 at 6:00 am

The rapid advancement of technology, including generative AI, IoT, and autonomous vehicles, is revolutionizing industries and enhancing efficiency. At the same time, such advances also generate huge amounts of data that must be transmitted and processed to provide value to consumers and society as a whole. In essence, heavy reliance on seamless data movement and processing has become integral to various aspects of modern life, from transportation logistics to healthcare and climate control. While the benefits are great and many, and include enhanced decision making, personalization, improved healthcare, and efficient resource allocation, various types of potential dangers go hand-in-hand with them. At a broad level, we could call what we are marching toward a potential data crisis in the making.

While this potential data crisis has many aspects to it, the most fundamental concern is the ability to continue to transmit and process data at increasingly higher speeds and very low latencies, without any disruption. Alphawave Semi has published a whitepaper on this specific aspect of the data crisis. Such a data crisis could have far-reaching consequences for individuals, society and the global economy. Businesses make market entry decisions based not only on potential opportunities and risk/reward calculations but also on consequential damages claims exposure. But with a heavy reliance on data and a highly interconnected world, it is difficult to isolate oneself or individual applications from this data crisis.

For example, an autonomous vehicle is expected to process 19 terabytes (TB) of data per hour to operate. With a projected 840,000 autonomous vehicles hitting the streets by 2030, that translates to roughly 16 million terabytes of data per hour (840,000 × 19 TB). A disruption of even the slightest degree could have fatal and widespread catastrophic consequences.

Another example involves the medical industry, which uses digital health records for patient management. As of 2021, 88% of US-based doctors were relying on a robust data infrastructure to support those records. Any inability to process large volumes of data could lead to misdiagnosis events with dire consequences.

In essence, the overall global data infrastructure needs to be aggressively updated and maintained to meet the ever-growing demand for data connectivity, integrity, safety, and privacy.

Securing our Data Infrastructure

Generative AI, heralded as a transformative force in various industries, relies heavily on data infrastructure to realize its full potential. While it holds promise in improving efficiency and driving innovation, the associated power consumption and computational demands underscore the need for sustainable practices and energy-efficient solutions. Even while meeting the demanding requirements of AI applications, our data infrastructure must continue to handle everyday workloads like streaming video and video calls.

Whether it’s facilitating seamless data transmission or enhancing interconnectivity within hyperscale data centers, semiconductor innovation takes center stage to meet the growing demands of data-intensive workloads. Legacy technologies with monolithic chip structures are insufficient for addressing the mounting computational pressure. Chiplets and custom silicon solutions emerge as game-changers in maximizing efficiency, reducing power consumption, and minimizing latency within data centers. Companies like Alphawave Semi and other industry leaders are spearheading efforts to leverage these technologies, pushing the boundaries of connectivity and scientific advancements.

As we navigate the complexities of the unfolding data crisis, collaboration and adaptability are key. Stakeholders across industries must come together to address the challenges and opportunities presented by the data-driven era. By investing in sustainable practices, embracing technological advancements, and fostering an ecosystem of innovation, we can look forward to a resilient, efficient, and interconnected digital future.

Summary

The unfolding data crisis presents both challenges and opportunities for our society. By leveraging connectivity, AI, and semiconductor innovation, we can overcome obstacles, drive progress, avert a full-blown data crisis, and usher in a new era of digital transformation.

The Alphawave Semi whitepaper on this topic can be downloaded from here.

Also Read:

Accelerate AI Performance with 9G+ HBM3 System Solutions

Alphawave Semiconductor Powering Progress

Will Chiplet Adoption Mimic IP Adoption?


Simulation World 2024 Virtual Event
by Daniel Nenni on 04-08-2024 at 10:00 am

ANSYS Simulation World is an annual conference hosted by ANSYS, Inc., a leading provider of engineering simulation software. The event typically brings together engineers, designers, researchers, and industry experts from around the world to discuss the latest advancements, best practices, and case studies in engineering simulation and virtual prototyping.

Simulation World 2024 is a free global virtual event

Attendees have the opportunity to participate in keynote presentations, technical sessions, hands-on workshops, and networking events. The conference covers a wide range of topics, including computational fluid dynamics (CFD), finite element analysis (FEA), electromagnetics simulation, multiphysics simulation, additive manufacturing, and more.

The event provides a platform for users of ANSYS software to learn new skills, exchange ideas, and explore innovative applications of simulation technology across various industries, such as aerospace, automotive, electronics, energy, healthcare, and consumer goods.

Additionally, ANSYS Simulation World often features keynote speakers from industry-leading companies, showcasing how simulation-driven engineering has helped them solve complex engineering challenges, improve product performance, and accelerate time-to-market.

Overall, ANSYS Simulation World serves as a premier gathering for the simulation community, offering valuable insights, practical knowledge, and networking opportunities to help engineers and designers stay at the forefront of simulation technology.

EVENT TRACKS

Inspire: Automotive and Transportation
Simulation is transforming mobility to address unprecedented challenges and deliver cost-effective, completely differentiated solutions, from safer, more sustainable designs to the complex electronics and embedded software that define them.

Inspire: Aerospace and Defense
The aerospace and defense industries must operate on the cutting edge to deliver advanced capabilities. Digital engineering helps them increase flexibility, update legacy programs, and speed new technology into service.

Inspire: Energy and Industrials
Industries rely on simulation to streamline production and distribution of safer, cleaner, more reliable energy through fuel-to-power conversions, and to accelerate the scaling of low-carbon energy solutions.

FEATURED SPEAKERS

Dr. Ajei Gopal
President and Chief Executive Officer, Ansys
Ajei Gopal’s idea to drive “pervasive simulation,” or the use of engineering simulation throughout the product life cycle, has transformed the industry. Prior to Ansys, he served in various leadership roles where he demonstrated his ability to simultaneously drive organizational growth and improve operational efficiency.

Dr. Prith Banerjee
Chief Technology Officer, Ansys
Prith Banerjee leads the evolution of Ansys technology and champions the company’s next phase of innovation and growth. During his 35-year technology career, spanning academia, startups, and innovation management in enterprise environments, he has actively observed and promoted how organizations can realize open innovation success.

Walt Hearn
Senior Vice President, Worldwide Sales and Customer Excellence, Ansys
As an innovative business leader and simulation expert at Ansys, Walt Hearn leads high-performing teams to develop and execute sales strategy, technical excellence, and mentorship across the organization. He prides himself on ensuring customer success and helping organizations achieve top engineering initiatives to change the future of digital transformation.

Here is a quick video on simulation that I think we can all relate to:

I hope to see you there!

Simulation World 2024 is a free global virtual event

Also Read:

2024 Outlook with John Lee, VP and GM Electronics, Semiconductor and Optics Business Unit at Ansys

Unleash the Power: NVIDIA GPUs, Ansys Simulation

Ansys and Intel Foundry Direct 2024: A Quantum Leap in Innovation


Podcast EP216: Q4 2023 is Another Strong Growth Quarter for EDA as Reviewed by Wally Rhines
by Daniel Nenni on 04-08-2024 at 8:00 am

Dan is joined by Dr. Walden Rhines. Wally is a lot of things: CEO of Cornami, board member, advisor to many, and friend to all. In this session, he is the Executive Sponsor of the SEMI Electronic Design Market Data Report.

Wally reviews the Electronic Design Market Data report that was just released for Q4 2023. Growth continues to be strong at 14% overall. EDA and IP revenue ended 2023 at an incredible $17B, completing 20 consecutive quarters of positive growth.

Wally reviews the details of the numbers with Dan, including purchasing dynamics across the sector and small areas of lower performance.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Synopsys Presents AI-Fueled Innovation at SNUG 2024
by Daniel Nenni on 04-08-2024 at 6:00 am

SNUG is the premier event for Synopsys to showcase its technology and impact on the industry. This year’s SNUG did not disappoint. The two-day event was packed with fantastic user presentations along with exciting news of innovation from Synopsys. Jensen Huang and Sassine Ghazi even held a live, interactive Q&A session with compelling content. The tagline for the event was: Our Technology, Your Innovation, setting a tone of collaboration for the future. You can learn more about the big picture for the event here. As you would expect, AI was a prominent and pervasive topic. There was a special session on Day 1 of SNUG for media and analyst attendees that dug into the impact of AI on chip design. I will explore how Synopsys presents AI-fueled innovation at SNUG 2024.

Framing the Discussion

Sanjay Bali gave an insightful presentation about the opportunities AI presents for new approaches to design. Sanjay is Vice President, Strategy and Product Management for the EDA Group at Synopsys. He’s been with the company for almost 16 years, so he’s seen a lot of change in the industry. Having spent time at Intel, Actel, Mentor and Magma before joining Synopsys, Sanjay brings a broad view of chip design to the table.

He presented an excellent overview of the opportunities for AI in the EDA workflow. He explained that in the implementation and verification flows, there are many options to consider. Choosing the right technology and architectural options during RTL design, synthesis, design planning, place & route and ECO closure has a profound impact on the quality of the final result, as well as the time to get there. Using AI-driven optimization allows all the inter-related decisions to be considered, balanced and optimized with a level of detail and discovery that is difficult for humans to achieve.
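
As a generic illustration of the principle (a toy random-search sketch with made-up knob names, not how the Synopsys AI technology actually works internally), an optimizer can explore combinations of flow settings and keep the best combined power/performance/area score:

```python
import random

# Toy design-space exploration: try combinations of flow settings and keep
# the one with the best combined PPA cost. Knob names are illustrative.
KNOBS = {
    "target_freq_ghz": [1.0, 1.2, 1.4],
    "placement_effort": ["medium", "high"],
    "vt_mix": ["lvt_heavy", "balanced", "hvt_heavy"],
}

def evaluate(cfg: dict) -> float:
    """Stand-in for a real flow run; returns a combined PPA cost
    (lower is better). Here it is just a random placeholder."""
    return random.random()

best_cfg, best_cost = None, float("inf")
for _ in range(50):                                # trial budget
    cfg = {knob: random.choice(values) for knob, values in KNOBS.items()}
    cost = evaluate(cfg)
    if cost < best_cost:
        best_cfg, best_cost = cfg, cost

print(best_cfg, round(best_cost, 3))
```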

He reported that across image sensor, mobile, automotive, and high-performance computing applications, Synopsys AI technology has delivered noteworthy improvements compared to non-AI-assisted processes. From 5nm to 28nm technologies, results such as 12% better area, 25% lower power, 3X increased productivity and 15% test coverage improvement have been achieved. This is a small subset of the complete list of accomplishments.

And the story doesn’t stop there. Analog design can also benefit from AI, with a 10X overall improvement in turnaround time for analog optimization and a 3X faster turnaround time for analog IP node migration. The complexity and cost of testing advanced designs can also be reduced with Synopsys AI technology, with pattern count reductions ranging from 18% to a whopping 70%, depending on the application.

Sanjay also touched on the emerging field of multi-die package design. Here, autonomous full-system exploration can optimize signal integrity, thermal properties and the power network, delivering improved performance and memory efficiency. A 10X boost in productivity with a better quality of result has been achieved.

Big data analytics are also creating new opportunities and revealing new insights. Process and product analytics can reduce defects and increase yields. The opportunities are eye-opening. Sanjay also talked about the application of generative AI to the design process. Junior engineers are able to ramp up as much as 30% faster without depending on an expert. Generally speaking, AI can increase search and analysis speed and deliver superior results, allowing designers to be more productive.

This was an impressive presentation that covered far more topics than I expected. Sanjay presented a graphic that depicts the breadth of the Synopsys AI solution as shown below.

AI Powered Full Stack Synopsys EDA Portfolio
The Demonstration

Stelios Diamantidis, Distinguished Architect, and Executive Director, Center for Generative AI provided a demonstration of some of the tools in the Synopsys.ai arsenal. Stelios has been with Synopsys for almost fifteen years and has played a key role in the infusion of AI into the Synopsys product suite.

It’s difficult to capture the full impact of a live demonstration without the use of video. Let me just say that the capabilities Sanjay described are indeed real. Across many scenarios, Stelios showcased advanced AI capabilities in a variety of Synopsys products.

A common theme for a lot of this work is the ability of AI to examine a vast solution space and find the optimal choices for a new design. The technology can deliver results faster and with higher quality than a team of experienced designers. The graphic at the top of this post presents a view of how this process works.

To Learn More

A lot was covered in this session. The breadth and depth of AI in the Synopsys product line is very impressive. You can get more information on this innovative set of capabilities here. And that’s how Synopsys presents AI-fueled innovation at SNUG 2024.


Podcast EP215: A Tour of the GlobalFoundries Silicon Photonics Platform with Vikas Gupta
by Daniel Nenni on 04-05-2024 at 10:00 am

Dan is joined by Vikas Gupta, senior director of product management at GlobalFoundries focused on Silicon Photonics and ancillary technologies. Vikas has close to 30 years of semiconductor experience with TI, Xilinx, AMD, GlobalFoundries, POET Technologies, and back to GlobalFoundries.

Vikas discusses the growing demands of semiconductor design with a focus on compute and AI. He explains the major hurdles that need to be addressed in both processing speed and network bandwidth. Vikas provides details about the unique platform GlobalFoundries has developed and how it provides a scalable, more design-friendly approach to the use of silicon photonics across multiple applications.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Ninad Huilgol of Innergy Systems
by Daniel Nenni on 04-05-2024 at 6:00 am

Ninad Huilgol is the Founder and CEO of Innergy Systems and has extensive experience in design verification of ultra low-power mobile SoCs. Previously, he worked in senior engineering management at various semiconductor companies such as Broadcom and Synopsys. He holds multiple power- and design-related patents and trade secrets, and is the recipient of a Synopsys Inventor award.

Tell us about your company.
Innergy Systems was founded in 2017 with a mission to bring power analysis to the software world, make hardware power analysis faster and provide power feedback in the earliest stages of the design flow.

I am Innergy’s founder and CEO and have extensive experience in design verification of ultra low-power mobile SoCs. I worked in senior engineering management at various semiconductor companies including Broadcom and Synopsys.

Years ago, when I was chip verification lead at Broadcom, I came across a problem designers were facing early in the development of a new ultra-low power mobile SoC. They wanted to explore the power impact of some architectural experiments. Typically, models can estimate the performance of a new architecture, but power information could not be added. We were at a design conference, and one leading EDA company was asked if there was any way to get power information into performance models. The answer was a categorical no.

This led me to think about forming a startup that could build technology to bring power analysis earlier (shift left) as well as provide results faster. In the process, it led to the invention of high-performance power models capable of providing power analysis from the architecture stage all the way to tape-out and even beyond.

Today, Innergy counts some big-name Tier-1 chipmakers among its customers. Our models are used in all phases of hardware development, and perhaps more importantly, for developing more power efficient software.

I am a big believer in sustainability. Compute has become power hungry, driven by AI, crypto and other applications. Innergy’s technology will help build more power-efficient hardware and software systems and reduce the carbon footprint. This also saves money for our customers, which is a big bonus.

What problems are you solving?
We solve quite a few problems that exist in current power analysis solutions.

Speed: Today’s large and complex designs need power analysis solutions that run simulations quickly, so that system-level power analysis is available in minutes, not hours or days. Traditional power analysis flows need to run gate-level simulations that take a long time to finish and require high-performance compute.

We create power models that take a fraction of the time to run simulations without compromising accuracy. We use our proprietary, patented technology that intelligently identifies a few critical events to monitor and track power. We build higher abstraction models of power consumption. Simulating at a higher level of abstraction requires fewer resources, which leads to significant performance gains. In RTL, a typical simulation speedup of 30x-50x has been demonstrated. In emulation and software environments, our results demonstrated a 200x-500x simulation speedup, produced without the need for high-performance compute resources.
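
Innergy has not published its model internals, but the general shape of event-based power modeling can be sketched as follows (a generic illustration with made-up event names and energy numbers, not Innergy's patented technology): assign a characterized energy cost to each critical event, count events during simulation, and convert energy per time window into power.

```python
# Generic event-based power estimation sketch: each critical event type
# carries a characterized energy cost; average power over a window is the
# summed event energy divided by the window length.

EVENT_ENERGY_PJ = {        # hypothetical per-event energies, in picojoules
    "cache_miss": 45.0,
    "fifo_push": 3.2,
    "bus_burst": 120.0,
}

def window_power_mw(event_counts: dict[str, int], window_ns: float) -> float:
    """Average power for one window; pJ divided by ns yields mW directly."""
    total_pj = sum(EVENT_ENERGY_PJ[e] * n for e, n in event_counts.items())
    return total_pj / window_ns

# Events counted in a 1000 ns window, e.g. by monitors in an RTL simulation:
counts = {"cache_miss": 12, "fifo_push": 850, "bus_burst": 4}
print(f"{window_power_mw(counts, 1000.0):.2f} mW")   # -> 3.74 mW
```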

High speed does not mean less accuracy. Our models have been consistently benchmarked with typical accuracy of 95% or better when compared to sign-off power solutions.

Root cause analysis: Currently, understanding the root cause of a power issue requires multiple iterations of running RTL simulations, power simulations and waveform analysis.

With Innergy Quarx, detailed power analysis reports show which design instances were involved in a power hotspot, along with what those instances were doing at the time. This simplifies the debug process by not requiring multiple iterations of simulation, power analysis and waveform debug.

Ability to run with software: Designers report difficulty estimating the power cost of software features and subroutines. Traditionally, this problem has been solved only by emulation.

Innergy Quarx enables designers to build models that run directly in a software environment by modeling events that exist in both hardware and software environments. This versatility means Quarx models can be used in RTL, emulation and software environments without requiring modification.

Early “what-if” power analysis: Currently, the only way to perform “what-if” power exploration is by building custom models, using a spreadsheet or simple modeling tools that do not have fine-grained power information.

Innergy Quarx can build power models for existing designs (evolutionary) as well as new designs (revolutionary). Even without RTL, it’s possible to build architectural-level models with estimates of power added. Our models can do power versus performance analysis at an early stage by creating virtual power simulations with different traffic patterns, voltages and frequencies. This enables designers to start realizing value right from the earliest stages of their design project through tape out and beyond.
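
As one hedged illustration of what an early what-if sweep can look like (using the textbook dynamic power relation P ≈ α·C·V²·f rather than anything specific to Quarx), voltage and frequency corners can be compared before any RTL exists:

```python
# What-if sweep using the textbook dynamic power relation
# P_dyn ≈ alpha * C * V^2 * f (activity factor, switched capacitance,
# supply voltage, clock frequency). All numbers below are illustrative.

def dynamic_power_w(alpha: float, cap_f: float, volts: float, freq_hz: float) -> float:
    """Switching-activity-scaled dynamic power in watts."""
    return alpha * cap_f * volts**2 * freq_hz

baseline = dynamic_power_w(0.15, 2e-9, 0.9, 1.2e9)
for volts, freq_hz in [(0.9, 1.2e9), (0.8, 1.0e9), (0.7, 0.8e9)]:
    p = dynamic_power_w(0.15, 2e-9, volts, freq_hz)
    print(f"V={volts} V, f={freq_hz/1e9:.1f} GHz -> {p:.3f} W "
          f"({p/baseline:.0%} of baseline)")
```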

What application areas are your strongest?
There are three:

System-level power simulations: Innergy Quarx can run subsystem or full-chip power simulations in a fraction of the time currently required. We recently benchmarked our tool against a leading power product. Quarx produced results in 26 minutes; the other tool would have required a few days’ worth of simulation. That is over 500x faster, with 97% accuracy relative to the other tool.

We can handle large simulations and provide potentially 100% power coverage. Because traditional power tools are so slow, some designers run only 1-2% of available stimulus to check power consumption, which means power bugs could be hiding in the 98-99% of stimulus left unused. Our solution obviates this problem.

Ability to profile power consumption of software as well as hardware: Thanks to the booming AI market, power-efficient software design is becoming important. In AI applications, hardware cores tend to be simpler in construction, repeated tens or hundreds of times in a processing system. Hardware-based power analysis might not be effective as power savings tend to be smaller. Software running on hardware tends to be more complex with learning, inferencing and training increasing power consumption. In fact, AI is likely to take the top spot as the most power-hungry area, edging out crypto, according to a Stanford University report by Mack DeGeurin published in April 2023.

Quarx can provide detailed power consumption information with models able to run in a software environment without the need for expensive emulation. This closes the loop and enables power-efficient hardware and software design.

What keeps your customers up at night?
In design verification, the fear of undiscovered hardware bugs keeps designers up at night. The analogy holds for power bugs: undiscovered power bugs can cause thermal runaways, creating catastrophic die failures.

Moreover, power and performance are at the top of any semiconductor engineering manager’s mind. Competition for the best power and performance numbers is strong, and I imagine this is another issue that keeps designers burning the midnight oil.

What was the most exciting high point of 2023 for your company?
We had two significant high points in 2023. The first was receiving the TiE50 Investor award from The Indus Entrepreneurs Group (TiE). TiE is one of the largest angel investment organizations in the world, and Innergy Systems was selected as one of the 50 best startups for investment. We are funded by TiE charter members and angels.

An even more exciting high point was getting more paying customers, including Tier-1 companies, further reinforcing our value proposition: an early hardware/software power analysis platform for SoC designs.

What was the biggest challenge your company faced in 2023?
We managed to survive the downturn in business and the investment climate during 2023. According to some reports, many startups went bust during the third quarter of 2023 due to a tight funding environment.

What do you think the biggest growth area for 2024 will be, and why?
We agree with the semiconductor industry experts: AI is a big growth area in 2024. AI is driving innovation at speeds rarely witnessed before. Every expert in this space tells us that things keep changing weekly, as opposed to monthly or yearly. This is driving tremendous growth in this area.

AR/VR is also seeing growth.

How is your company’s work addressing this growth?
Innergy provides high-performance, high-accuracy power modeling for both hardware and software power optimization, especially important in AI-based systems.

Hardware in AI tends to be less complicated. For example, a single processing core can be instantiated hundreds of times to form an inferencing engine. Each core is simpler in design compared to a large CPU. Meaningful power savings by hardware optimization might be harder to find. Software running on top is more complex and learning daily. Understanding how power consumption is affected by software behavior is becoming critical.

We offer a practical, out-of-the-box solution to provide power models that can run in software environments, closing the loop and enabling simultaneous power optimization of hardware and software systems.

What does the competitive landscape look like and how do you differentiate?
We see some competition from other power modeling players and some homegrown solutions. Our differentiation is a simple-to-use, out-of-the-box solution that ticks all the boxes: ease of use, consistent speed and accuracy results, and versatility.

What new features/technology are you working on?
Our next area of focus is adding intelligence to our models using AI.

Additional questions or final comments?
Innergy Systems will emerge from stealth mode over the next several months. Meanwhile, our credentials speak for us. We are highly experienced semiconductor design specialists who are passionate about power and have first-hand experience wrestling with the challenges of large low-power application processors.

To learn more, visit the Innergy Systems website at www.innergysystems.com, email info@innergysystems.com or call (408) 390-1534.

Also Read:

CEO Interview: Ganesh Verma, Founder and Director of MoogleLabs

CEO Interview: Patrick T. Bowen of Neurophos

CEO Interview: Larry Zu of Sarcina Technology


sureCore Brings 16nm FinFET to Mainstream Use With a New Memory Compiler
by Mike Gianfagna on 04-04-2024 at 10:00 am

Semiconductor processes can have a rather long and interesting life cycle. At first, a new process defines the leading edge. This is cost-no-object territory, where performance is king. The process is new, the equipment to make it is expensive, and its use is reserved for those with a market (and budget) big enough to justify it. As the technology matures, yields go up, costs go down, access becomes easier, and more mainstream applications start to migrate toward the node as newer technologies take the leading-edge spot. I clearly remember when 16nm FinFET was that leading-edge, much-sought-after technology. That is now the past, and 16nm FinFET is finding application in mainstream products, but there is a catch. As I mentioned, 16nm FinFET was all about speed, often at the expense of power. But mainstream applications can be very power sensitive. The good news is that sureCore is fixing the problem. Read on to see how sureCore brings 16nm FinFET to mainstream use with a new memory compiler.

The 16nm Problem

Applications such as wearables, IoT and medical devices can be good matches for what 16nm FinFET has to offer. The combination of performance, density and yield offered by the technology can be quite appealing. Also, cutting operating voltage generates substantial power savings while still delivering the needed performance. The technology has been in production for nearly a decade. This means the process is quite stable and yields are high. The fabs involved are largely depreciated as well. All this brings the cost of 16nm FinFET within reach of lower-cost, power-sensitive devices.

Can low-power applications implemented in 28nm or 22nm bulk or FD-SOI cut ASPs and deliver better features and power by moving to 16nm? Things seem to line up well, except for the key point made earlier: 16nm FinFET was focused on performance first, so much of the IP available for the node is built with performance in mind. Power optimization was not central to its design, so a mismatch with newer, power-sensitive applications exists.

The sureCore Solution

sureCore is a company that tends to change the rules and open new markets. A recent post discussed how the company is enabling AI with low power memory IP. sureCore is even working with Agile Analog to enable quantum computing. So, opening 16nm FinFET to a broad range of applications is certainly in the sureCore wheelhouse.

Recently, the company announced the availability of its PowerMiser ultra-low dynamic power memory compiler in 16nm FinFET. This effectively opens new opportunities for application of the technology by allowing demanding power budgets to be met.

Paul Wells, sureCore’s CEO explains the details quite well:

“FinFET was developed to address the increasingly poor leakage characteristics of bulk nodes. In addition, the key driver for the mobile sector was ever greater performance to deliver new features and a better user experience. The power consumption was not deemed a significant issue, as both the radio and the display were the dominant factors in battery life determination. This, in addition to the relatively large form factor of a mobile phone, meant that the batteries had capacities in excess of 3,000-4,000mAh.”

He went on to highlight sureCore’s strategy:

“However, designers of power sensitive applications such as wearables and medical devices with much more constrained form factors and hence smaller batteries need a range of power optimised IP that can exploit the power advantages of FinFET whilst being much less concerned about performance. This has meant a demand for memory solutions that are specifically tailored to deliver much reduced power consumption. By providing the PowerMiser SRAM IP, sureCore is enabling the shift to mature FinFET processes for low power applications and is thus helping to provide clear differentiation for such products based on both cost and battery life. By doing so, the all-important competitive advantage over rivals may be realised.”

You can read the full text of the press release here. And that’s how sureCore brings 16nm FinFET to mainstream use with a new memory compiler.


Are Agentic Workflows the Next Big Thing in AI?
by Bernard Murphy on 04-04-2024 at 6:00 am

AI continues to be a fast-moving space and we’re always looking for the next big thing. There’s a lot of buzz now around something called agentic workflows – ugly name but a good idea. LLMs had a good run as the state-of-the-AI-art; however, evidence is building that the foundation model behind LLMs alone has limitations, both theoretically and in practical applications. Simply building bigger and bigger models (over a trillion parameters last time I looked) may not deliver any breakthroughs beyond excess cost and power consumption. We need new ideas, and agentic workflows might be an answer.

Image courtesy Mike McKenzie

Limits on transformers/LLMs

First, I should acknowledge a Quanta article that started me down this path. A recent paper looked at theoretical limits on transformers based on complexity analysis. The default use model starts with a prompt to the LLM, which should then return the result you want. Viewing the transformer as a compute machine, the authors prove that the range of problems that can be addressed is quite limited for these or any comparable model architectures.

A later paper generalizes their work to consider chain of thought architectures, in which reasoning proceeds in a chain of steps. The prompt suggests breaking the task down into a series of simpler intermediate goals which are demonstrated in “show your work” results. The authors prove complexity limits increase slightly with a slowly growing number of steps (with respect to the prompt size), more quickly with linear growth in steps, and faster still with polynomial growth. In the last of these cases they prove the class of problems that can be solved is exactly those solvable in polynomial time.

Complexity-based proofs might seem too abstract to be important. After all, the travelling salesman problem is known to be NP-hard, yet chip design routinely depends on heuristic solutions to such problems and works very well. However, limitations in practical applications of LLMs to math reasoning (see my earlier blog) hint that these theoretical analyses may not be too far off-base. Accuracy certainly grows with more intermediate steps in real chain of thought analyses. Time complexity in running multiple steps also grows, and per the theory will grow at corresponding rates. This suggests that while higher accuracy may be possible, the price is likely to be longer run times.

Agentic flows

The name derives from use of “agents” in a flow. There’s a nice description of the concepts in a YouTube video by Andrew Ng who contrasts the one-shot LLM approach (you provide a prompt, it provides an answer in one shot) with the Agentic approach which looks more like the way a human would approach a task. Develop a plan of attack, do some research, write a first pass, consider what areas might need to be improved (perhaps even have another expert review the draft), iterate until satisfied.

Agentic flows in my understanding provide a framework to generalize chain of thought reasoning. At a first level, following the Andrew Ng video, in a prompt you might ask a coder agent LLM to write a piece of code (step 1), and in the same prompt ask it to review the code it generated for possible errors (step 2). If it finds errors, it can refine the code and you could imagine this process continuing through multiple stages of self-refinement. A next step would be to use a second agent to test the code against some series of tests it might generate based on a specification. Together these steps are called “Reflection” for obvious reasons.
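
A minimal sketch of that Reflection loop, assuming a generic chat-completion call behind the llm() stub (the stub and its canned replies are placeholders, not any real API; the second, test-generating agent is omitted for brevity):

```python
# A minimal sketch of the Reflection loop: generate, self-review, refine.
# llm() is a stub standing in for any chat-completion call, with canned
# replies so the sketch runs; swap in a real model client to use it.

def llm(prompt: str) -> str:
    if prompt.startswith("Review"):
        return "no issues found"             # canned reviewer reply
    return "def solution(): ..."             # canned draft

def reflect_and_refine(task: str, max_rounds: int = 3) -> str:
    """Step 1: draft. Step 2: self-review. Refine until clean or budget spent."""
    draft = llm(f"Write code for this task:\n{task}")
    for _ in range(max_rounds):
        review = llm(f"Review this code for possible errors:\n{draft}")
        if "no issues" in review.lower():    # naive stopping criterion
            break
        draft = llm(f"Task: {task}\nCode:\n{draft}\n"
                    f"Reviewer feedback:\n{review}\nReturn a corrected version.")
    return draft

print(reflect_and_refine("parse a CSV file and sum column 2"))
```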

There are additional components in the flow that Andrew suggests: Tool Use, Planning and Multi-Agent Collaboration. However, the Reflection part is most interesting to me.

What does an Agentic flow buy you?

Agentic flows do not fix the time complexity problem; instead, they suggest an architecture concept for extending accuracy for complex problems through a system of collaborating agents. You could imagine this being very flexible and there are some compelling demonstrations. At the same time, Andrew notes we will have to think of agentic workflows taking minutes or even hours to return a useful result.

A suggestion

I see long run times as an interesting human engineering challenge. We’re OK waiting seconds to get an OK result (like a web search). Waiting possibly hours for anything less than a very good result would be a tough sell.

I get that VCs and the ventures they fund are aiming for moonshots – artificial general intelligence (AGI) as the only thing that might attract enough attention in a white-hot AI market. I wish them well, especially in the intermediate discoveries they make along the way. The big goal I suspect is still a long way off.

However the agentic concept might deliver practical and near-term value if we are prepared to allow expert human agents in the flow. Let the LLM do the hard work to get to a nearby goal, and perhaps suggest a few alternatives for paths it might follow next. This should take minutes at most. An expert human agent then directs the LLM to follow one of those paths. Repeat as necessary.

I’m thinking particularly of verification debug. In the Innovation in Verification series we’ve covered a few research papers on fault localization. All useful but still challenged to accurately locate a root cause. An agentic workflow alternating between an LLM and an expert human agent might help push accurate localization further and it could progress as quickly as the expert could decide between alternatives.
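
Sketching that interaction, under the same placeholder llm() assumption as above: the model proposes a few candidate next debug steps, the expert picks one, and the loop repeats.

```python
# A sketch of the human-in-the-loop agentic flow suggested above: the LLM
# analyzes the failure, proposes a few next debug steps, and an expert
# picks the path to follow. llm() is a placeholder, not a real API.

def llm(prompt: str) -> str:
    # Canned response so the sketch runs; a real call returns model output.
    return ("1. Narrow the failure window with a focused rerun\n"
            "2. Trace the failing signal back through its fanin cone\n"
            "3. Diff against the last passing revision")

def guided_debug(observation: str, max_steps: int = 5) -> str:
    state = observation
    for _ in range(max_steps):
        proposals = llm(f"Failure state:\n{state}\n"
                        "Propose up to 3 next debug steps, numbered.")
        print(proposals)
        choice = input("Pick a step number (or 'done'): ").strip().lower()
        if choice == "done":
            break
        state = llm(f"State:\n{state}\nCarry out step {choice}; report the new state.")
    return state

# guided_debug("assertion fired at 1.2us in axi_monitor")  # interactive loop
```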

Any thoughts?


Navigating the Complexities of Software Asset Management in Modern Enterprises
by Kalar Rajendiran on 04-03-2024 at 10:00 am

In today’s digital age, software has become the backbone of modern enterprises, powering critical operations, driving innovation, and facilitating collaboration. However, with the proliferation of software applications and the complexity of licensing models, organizations are facing significant challenges in managing their software assets effectively.

Altair has published a comprehensive guide that explores the nuances of Software Asset Management (SAM); uncovers strategies, challenges, and solutions for optimizing software usage; addresses cost reduction; and helps ensure compliance in modern enterprises, with a particular focus on the acute challenges in Computer-Aided Design (CAD), Computer-Aided Engineering (CAE), and Electronic Design Automation (EDA) environments. You can access the entire guide from here. Following is a synopsis of the importance of SAM, the challenges faced, and best practices for successful SAM initiatives.

Understanding Software Asset Management

Software Asset Management (SAM) encompasses the processes, policies, and tools used by organizations to manage and optimize their software assets throughout the software lifecycle. From procurement and deployment to usage tracking and retirement, SAM aims to maximize the value of software investments while minimizing risks and costs associated with non-compliance and underutilization.

The Growing Importance of SAM

Enterprise software spending is on the rise, driven by the increasing reliance on digital technologies for business operations and innovation. According to industry reports, enterprise software spending is expected to reach unprecedented levels in the coming years, highlighting the critical importance of effective SAM practices. In today’s competitive landscape, organizations cannot afford to overlook the strategic value of software asset management in driving efficiency, reducing costs, and mitigating risks.

Challenges in Software Asset Management

While SAM presents challenges across various industries, CAD, CAE and EDA environments face unique hurdles due to the specialized nature of their software tools and computing requirements.

These environments rely on specialized software tools tailored for engineering design, simulation, and analysis. These tools often come with complex licensing models and require high levels of expertise for effective management. Engineering simulations and analyses often require significant computational resources, including high-performance computing (HPC) clusters and specialized hardware accelerators. Managing software licenses across distributed computing environments adds another layer of complexity to SAM efforts. In addition, CAD and CAE environments deal with multidisciplinary engineering problems, involving various software tools and domains. Coordinating software usage and licenses across different engineering teams with diverse requirements poses a significant challenge for SAM initiatives.
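
One concrete computation that sits underneath much of this is peak concurrent usage of a license feature, which tells you whether you own too few or too many seats. A small sketch follows (field names and times are illustrative, not a specific license manager's log format):

```python
# Peak concurrent usage of one license feature via a sweep over
# (checkout_time, checkin_time) intervals drawn from usage logs.

def peak_concurrent(checkouts: list[tuple[float, float]]) -> int:
    """Sweep-line: +1 at each checkout, -1 at each checkin, track the max."""
    events = []
    for start, end in checkouts:
        events.append((start, +1))
        events.append((end, -1))
    events.sort()                 # ties resolve checkins before checkouts
    peak = live = 0
    for _, delta in events:
        live += delta
        peak = max(peak, live)
    return peak

# Three overlapping simulator-license checkouts (hours since midnight):
print(peak_concurrent([(9.0, 12.5), (10.0, 11.0), (10.5, 15.0)]))  # -> 3
```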

Best Practices for Successful SAM Initiatives

To overcome these challenges and maximize the benefits, organizations can adopt the following best practices.

  • Establish clear goals and objectives aligned with business objectives and engineering requirements to guide SAM initiatives effectively.
  • Gain support from senior leadership to prioritize SAM efforts, allocate resources, and overcome organizational barriers.
  • Foster collaboration between IT, engineering, procurement, and finance teams to ensure alignment of SAM efforts with business needs and technical requirements.
  • Choose SAM tools specifically designed for CAD and CAE environments, capable of managing specialized software licenses and integrating with HPC workload management systems.

Partnering with Altair

Altair, a leading provider of engineering and HPC solutions, offers specialized SAM solutions tailored for CAD, CAE and EDA environments. With solutions like Altair® Software Asset Optimization (SAO) and Altair® Monitor™, organizations can leverage advanced analytics, predictive modeling, and visualization tools to optimize their software assets effectively. Altair’s industry expertise, proven track record, and commitment to innovation make it a trusted partner for organizations looking to streamline their SAM initiatives in CAD and CAE environments.

Summary

Software Asset Management (SAM) plays a crucial role in optimizing software usage, reducing costs, and ensuring compliance in modern enterprises, especially in engineering and design environments. By understanding the unique challenges and adopting best practices tailored for these environments, organizations can navigate the complexities of SAM with confidence and achieve success in their software asset optimization endeavors. With Altair as a trusted partner, organizations can unlock significant value, enhance productivity, and drive sustainable growth in the digital age.

To access the eGuide, please visit Make the Most of Software License Spending.

Also Read:

2024 Outlook with Jim Cantele of Altair

Altair’s Jim Cantele Predicts the Future of Chip Design

How to Enable High-Performance VLSI Engineering Environments