
Multi-Die Systems: The Biggest Disruption in Computing for Years

by Daniel Nenni on 04-14-2023 at 6:00 am


At the recent Synopsys Users Group Meeting (SNUG) I had the honor of leading a panel of experts on the topic of chiplets. The discussion was based on a report published by the MIT Technology Review Insights in cooperation with Synopsys. This is a very comprehensive report (12 pages) that is available online HERE.

Here is the preface of the MIT paper:

“Multi-die systems define the future of semiconductors” is an MIT Technology Review Insights report sponsored by Synopsys. The report was produced through interviews with technologists, industry analysts, and experts worldwide, as well as a cross-industry poll of executives. Stephanie Walden was the writer for this report, Teresa Elsey was the editor, and Nicola Crepaldi was the publisher. The research is editorially independent, and the views expressed are those of MIT Technology Review Insights. This report draws on a poll of the MIT Technology Review Global Insights Panel, as well as a series of interviews with experts specializing in the semiconductor industry and chip design and manufacturing. Interviews occurred between December 2022 and February 2023.

The focus of a panel like this is the participants. During our lunch prior to the panel I learned quite a bit:

Simon Burke, AMD Senior Fellow, has 25+ years of experience building chips, starting with HPC vendor Silicon Graphics. He then moved to AMD, then to Xilinx, and back to AMD through the acquisition. An amazing journey, with both depth of knowledge and a great sense of humor. Simon and AMD are leaders in the chiplet race, so there was a lot to be learned here.

John Lee, Head of Electronics, Semiconductors and Optics at Ansys, is a serial entrepreneur. I met John when Avant! bought his signal integrity company in 1994. Synopsys then bought Avant! in 2002 and John became R&D Director. John then co-founded Mojave Design, which was acquired by Magma in 2004. John left Magma after it was acquired by Synopsys and later founded Gear Design, a big data platform for chip design, which was acquired by Ansys in 2015. John is one of my favorite panelists; he says it like it is.

Javier DeLaCruz has 25+ years of experience, including a long stint with one of my favorite companies, eSilicon. Javier works for Arm handling advanced packaging technology development and architecture adaptation, including 2.xD and 3D systems. Arm is everywhere, so this is a big job.

Francois Piednoel is Chief Architect at Mercedes-Benz, but prior to that he spent 20 years at Intel, so he knows stuff. Francois is also a member of UCIe and a jet pilot. He actually owns a jet, and as a pilot myself I could not be more impressed. Francois was part of the MIT chiplets paper mentioned above as well, so he is a great resource.

Dr. Henry Sheng is Group Director of R&D in the EDA Group at Synopsys, where he currently leads engineering for 3DIC, advanced technology, and visualization. He has over 25 years of R&D experience in EDA, where he has led development across the spectrum of digital implementation, including placement, routing, optimization, timing, signal integrity, and electromigration. He has previously led efforts on EDA enablement and collaborations for emerging silicon technology nodes. Henry knows EDA.

Dan Kochpatcharin is the Head of Design Infrastructure Management Division at TSMC. Dan is a 30+ year semiconductor professional with 25 years at foundries. For the past 15 years Dan has been instrumental in the creation of the TSMC OIP. Today he leads the OIP Ecosystem Partnerships: 3DFabric Alliance, IP Alliance, EDA Alliance, DCA, Cloud Alliance, and VCA. Dan K, as we call him, knows the foundry business inside and out. I always talk to Dan whenever I can.

Here is the abstract for the panel:

The new era of multi-die systems is an exciting inflection point in the semiconductor industry. From high-performance and hyper-disaggregated compute systems to fully autonomous cars and ultra-high-definition vision systems, multi-die chip designs will transform computing possibilities, driving many new innovations, expanding existing markets and paving the way for new ones. Critical to fueling this momentum is the coherent convergence of innovations across the semiconductor industry by EDA, IP, chiplet, foundry and OSAT leaders. But what’s really happening inside the companies driving what may be one of the biggest impacts on system design and performance in a very long time?

Join this panel of industry leaders who are at the forefront of shaping the multi-die system era. Many have already made the move or are making key contributions to help designers achieve multi-die system success. Listen to their insights, their views on how multi-die system approaches are evolving, and what they see as best practice. Hear about the near, medium, and long-term future for multi-die innovation.

Here are the questions I asked:

Why Multi-Die System and Why Now?
  1. Mercedes: What is driving the change, and what is multi-die system offering you?
  2. AMD: How do you see the trend to multi-die at AMD and what is the key driver?
  3. Synopsys: Are we seeing other markets move in this direction?
  4. TSMC: How are you seeing the overall market developing?
It Takes a Village?
  1. Arm: How are companies like Arm viewing the multi-die opportunity and how does something like multi-die impact the day-to-day work for designers and system architects working with Arm?
  2. Ansys: How is the signoff flow evolving and what is being done to help mitigate the growing signoff complexity challenge?
  3. Synopsys: What other industry collaborations, IP, and methodologies are required to address the system-level complexity challenge?
It’s Just the Beginning?
  1. TSMC: Which technologies are driving the multi-die growth trend and how do you see these technologies evolving over time?
  2. AMD: When do you foresee true 3D – logic-on-logic – entering the arena for AMD, and what kind of uplift would it offer compared to Infinity Fabric-style connectivity solutions?
  3. Synopsys: How are the EDA design flows and the associated IP evolving and where do customers want to see them go?
Designing Multi-Die Systems?
  1. Mercedes: How is the multi-die design challenge being handled at Mercedes and is it evolving in lock-step – true HW/SW co-design – with these ongoing software advancements?
  2. AMD: What methodology advancements would you like to see across the industry to make system development more efficient? And what kind of impact does multi-die system design have on block designers?
  3. Ansys: How is the increased learning curve for these multi-physics effects being addressed?
  4. Arm: How is the Arm core design flow evolving to absorb these new degrees of freedom?
  5. Synopsys: How is EDA ensuring that designers can achieve the entitlement promised in their move to multi-die?
  6. TSMC: How is TSMC working with EDA and OSAT partners to simplify the move to multi-die design?
The Long Term?
  1. Mercedes: How is Mercedes approaching the long-term reliability challenge?
  2. TSMC: How is TSMC dealing with process reliability and longevity for these expanding use cases?
  3. Ansys: What is the customer view of the reliability challenge?
  4. Synopsys: Do you see multi-die system as a significant driver for this technology moving forward?

(I can cover the answers to these questions in a second blog)

Summary

The answers to most of the questions are covered in the MIT paper, but here are a couple of points that rang true to me:

Chiplets are truly all about the ecosystem. So many companies could be involved, especially for chip designers that are using commercial chiplets, so where is the accountability? Dan K. made a great point about working with TSMC because they are the ecosystem experts and the buck stops with the wafer manufacturer. The TSMC ecosystem really is like the semiconductor version of Disneyland, the happiest place on earth.

Another point that was made, which was a good reminder for me, is that we are at the beginning of the chiplet era and the semiconductor industry is very fast moving. Either you harness the power of chiplets or the power of chiplets will harness you, my opinion.

Also Read:

Taking the Risk out of Developing Your Own RISC-V Processor with Fast, Architecture-Driven, PPA Optimization

Feeding the Growing Hunger for Bandwidth with High-Speed Ethernet

Takeaways from SNUG 2023

Intel Keynote on Formal a Mind-Stretcher


Hardware Root of Trust for Automotive Safety

by Daniel Payne on 04-13-2023 at 10:00 am


Traveling by car is something that I take for granted, and I just expect that my trips will be safe, yet our cars increasingly use dozens of ECUs, SoCs, and millions of lines of software code that, combined, present a target for hackers or system failures. The Automotive Safety Integrity Levels (ASIL) are known by the letters A, B, C, and D, where the ISO 26262 standard defines ASIL D as the highest degree of automotive hazard. Reliability metrics for an automotive system are the Single Point Fault Metric (SPFM) and the Latent Fault Metric (LFM).

Siemens EDA worked together with Rambus on a functional safety evaluation for automotive using the RT-640 Embedded Hardware Security Module with about 3 million faults, reaching ISO 26262 ASIL-B certification by achieving an SPFM > 90% and an LFM > 60%. The two Siemens tools used for functional safety evaluations were:

  • SafetyScope
    • Failures In Time (FIT)
    • Failure Mode Effect and Diagnostic Analysis (FMEDA) – permanent and transient faults, fault list
  • KaleidoScope
    • Fault simulation on the fault list
    • Fault detected, not detected, not observed
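The two metrics can be sketched as simple coverage ratios over fault counts. This is an illustrative simplification (ISO 26262 actually defines SPFM and LFM over failure rates), and all fault counts below are invented:

```python
# Simplified sketch: SPFM and LFM as coverage ratios over fault counts.
# ISO 26262 defines these metrics over failure rates; the counts here
# are invented purely for illustration.

def spfm(dangerous_faults: int, covered_dangerous: int) -> float:
    """Single Point Fault Metric: fraction of dangerous faults covered
    by a safety mechanism (i.e., not residual single-point faults)."""
    return covered_dangerous / dangerous_faults

def lfm(latent_candidates: int, detected_latent: int) -> float:
    """Latent Fault Metric: fraction of potentially latent multi-point
    faults that are detected or perceived."""
    return detected_latent / latent_candidates

# ASIL-B targets quoted in this article: SPFM > 90%, LFM > 60%
print(spfm(1000, 920))  # 0.92 -> clears the 90% bar
print(lfm(400, 300))    # 0.75 -> clears the 60% bar
```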

The Rambus RT-640 is a hardware security co-processor for automotive use, providing the root of trust, meeting the ISO 26262 ASIL-B requirements. Architectural blocks for the RT-640 include a RISC-V secure co-processor, secure memories and cryptographic accelerators.

Rambus RT-640 Root of Trust

Your automotive SoC would add an RT-640 to provide secure execution of user apps that are authenticated, stop tampering, provide secure storage, and thwart side-channel attacks. Software cannot even reach the critical tasks like key derivation done in hardware. All of the major processor architectures are supported: Intel, RISC-V, Arm.

Security warranties, and hardware cryptographic accelerators are supported, plus there’s protection against glitching and over-clocking.

For the functional safety evaluation there was a manually defined fault list for signals covered by the provided safety mechanism. SafetyScope then reported the estimated FMEDA metrics, giving an initial idea of the core’s safety level. Modules that didn’t affect the core safety or were not safety critical were pruned from the fault list.

The Fault Tolerant Time Interval (FTTI) tells the tool how long to look for a fault to propagate before an alarm is set. FTTI impacts fault simulation run times, so a balance is required. The maximum number of concurrent faults was set between 600 and 1,000 based on experimentation. A two-step fault campaign approach was used to get the best results in the least amount of time.
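As a rough sanity check on campaign size, assuming the ~3 million faults and the 600-1,000 concurrent-fault bound quoted here, a naive single-pass campaign would need on the order of:

```python
import math

# Figures quoted in the article; the one-batch-per-run model is a
# simplifying assumption -- real campaigns prune and overlap runs.
TOTAL_FAULTS = 3_000_000
MAX_CONCURRENT = 1_000

runs = math.ceil(TOTAL_FAULTS / MAX_CONCURRENT)
print(runs)  # 3000 fault-simulation runs, before any fault-list reduction
```

which is why the reduction methods described below matter so much for turnaround time.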

Unclassified faults were faults not injected and not observed, so to reduce the number of non-injected faults they used two reduction methods:

  • Bus-simplification – when faults on one or more bits of a bus are detected, the safety mechanism is shown to work, so faults on the remaining bits of the bus are also considered detected.
  • Duplication-simplification – all faults not injected or observed that are part of a duplicated module are classified as detected.
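A toy illustration of these two reduction rules, with invented fault names and statuses (the real tools operate on netlist signals):

```python
# Sketch of the two fault-list reduction rules described above.
# Fault names, statuses, and groupings are invented for illustration.

def reduce_faults(faults: dict, bus_of: dict, duplicated: set) -> dict:
    """faults: fault -> status string; bus_of: fault -> bus name;
    duplicated: faults belonging to a duplicated (redundant) module."""
    out = dict(faults)

    # Bus-simplification: if any bit of a bus is detected, credit the
    # remaining bits of that bus as detected too.
    detected_buses = {bus_of[f] for f, s in faults.items()
                      if s == "detected" and f in bus_of}
    for f in faults:
        if bus_of.get(f) in detected_buses:
            out[f] = "detected"

    # Duplication-simplification: non-injected/non-observed faults in a
    # duplicated module are classified as detected.
    for f, s in faults.items():
        if f in duplicated and s in ("not_injected", "not_observed"):
            out[f] = "detected"
    return out

faults = {"bus_a[0]": "detected", "bus_a[1]": "not_injected",
          "dup_u1.x": "not_observed", "core.y": "not_detected"}
bus_of = {"bus_a[0]": "bus_a", "bus_a[1]": "bus_a"}
print(reduce_faults(faults, bus_of, duplicated={"dup_u1.x"}))
```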

Both permanent and transient fault campaigns were run on the RT-640 co-processor, taking some 12 days to complete when run on an IBM LSF HPC environment with parallel execution. The estimated SPFM numbers came from the first run of SafetyScope.

RT-640 fault campaign results

These fault campaign results exceed the ISO 26262 requirements of SPFM > 90% and LFM > 60% for ASIL-B certification.

Summary

Siemens and Rambus showed a methodology to evaluate the RT-640 co-processor with nearly 3 million faults, reaching a total SPFM value of 91.9% plus an LFM of 75%, exceeding the requirements of the ASIL-B safety level for automotive applications. This is good news for electronic systems used in cars, assuring drivers that their travels are safer, drama-free, and resistant to hacking efforts. Using a hardware root of trust like the Rambus RT-640 makes sense for safety-critical automotive applications, and the fault campaign results confirm it.

Read the complete 11 page white paper on the Siemens site.

Related Blogs

 


Siemens EDA on Managing Verification Complexity

by Bernard Murphy on 04-13-2023 at 6:00 am

2023 DVCon Harry Foster

Harry Foster is Chief Scientist in Verification at Siemens EDA and has held roles in the DAC Executive Committee over multiple years. He gave a lunchtime talk at DVCon on the verification complexity topic. He is an accomplished speaker and always has a lot of interesting data to share, especially his takeaways from the Wilson Research Group reports on FPGA and ASIC trends in verification. He segued that analysis into a further appeal for his W. Edwards Deming-inspired pitch that managing complexity demands eliminating bugs early. Not just shift-left but a mindset shift.

Statistics/analysis

I won’t dive into the full Wilson Report, just some interesting takeaways that Harry shared. One question included in the most recent survey was on the use of data mining or AI. 17% of reporting FPGA projects and 21% of ASIC projects said they had used some form of solution around their tool flows (not counting vendor-supplied tool features). Most of these capabilities, he said, are home-grown. He anticipates opportunity to expand such flow-based solutions as we do more to enable interoperability (think log files and other collateral data which are often input to such analyses).

Another thought-provoking analysis was on first-silicon success. Over 20 years of reports this is now at its lowest level, 24%, where typical responses over that period have hovered around 30%. Needing respins is of course expensive, especially for designs in 3nm. Digging deeper into the stats, about 50% of failures were attributable to logic/functional flaws. Harry said he found a big spike in analog flaws – almost doubling – in 2020. Semiconductor Engineering held a panel to debate the topic; panelists agreed with the conclusion. Even more interesting, the same conclusion held across multiple geometries; it wasn’t just a small-feature-size problem. Harry believes this issue is more attributable to increasing integration of analog into digital, for which design processes are not yet fully mature. The current Wilson report does not break down analog failure root causes. Harry said that subsequent reports will dig deeper here, also into safety and security issues.

Staffing

This is a perennial hot topic, also covered in the Wilson report. From 2014 to 2022, survey respondents report a 50% increase in design engineers but a 144% increase in verification engineers, now showing as many verification engineers on a project as design engineers. This is just for self-identified roles; the design engineers report they spend about 50% of their time on verification-related tasks. Whatever budget numbers you subscribe to for verification, those numbers are growing much faster than design as a percentage of total staffing budget.

Wilson notes that the verification-to-design ratio is even higher in some market segments, more like 5 to 1 for processor designs. Harry added that they are starting to see similar ratios in automotive design, which he finds surprising. Perhaps attributable to complexity added by AI subsystems and safety?

The cost of quality

Wilson updates where time is spent in verification, now attributing 47% to debug: nearly half of the verification budget is going into debug. Our first reaction to such a large amount of time being spent in debug is to improve debug tools. I have written about this elsewhere, especially on using AI to improve debug throughput. This is indeed an important area for focus. However, Harry suggests we should also turn to lessons from W. Edwards Deming, the father of the quality movement. An equally important way to reduce the amount of time spent in debug is to reduce the number of bugs created. Well duh!

Deming’s central thesis was that quality can’t be inspected into a product. It must be built in by reducing the number of bugs you create in the first place. This is common practice in fabs, OSATs and frankly any large-scale manufacturing operation. Design out and weed out bugs before they even get into the mainstream flow. We think of this as shift left but it is actually more than that. Trapping bugs not just early in the design flow but at RTL checkin through static and formal tests applied as a pre-checkin signoff. The same tests should also be run in regression, but for confirmation, not to find bugs that could have been caught before starting a regression.

A larger point is that a very effective way to reduce bugs is to switch to a higher-level programming language. Industry wisdom generally holds that new code will commonly contain between 15 and 50 bugs per 1,000 lines of code. This rate seems to hold independent of the language or level of abstraction. Here’s another mindset shift. In 1,000 lines of new RTL you can expect 15 to 50 bugs. Replace that with 100 lines of a higher-level language implementing the same functionality and you can expect ~2 to 5 bugs. That represents a significant reduction in the time you will need to spend in debug.
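The arithmetic behind that claim is a straightforward scaling of the 15-50 bugs per 1,000 lines rule of thumb:

```python
# Back-of-the-envelope check of the bug-count claim above, using the
# 15-50 bugs per 1,000 lines-of-code rule of thumb quoted in the text.

def expected_bugs(lines: int, rate_per_kloc: float) -> float:
    return lines * rate_per_kloc / 1000

# 1,000 lines of new RTL:
print(expected_bugs(1000, 15), expected_bugs(1000, 50))  # 15.0 50.0
# Same functionality in ~100 lines of a higher-level language:
print(expected_bugs(100, 15), expected_bugs(100, 50))    # 1.5 5.0
```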

Back to Wilson on this topic. 40% of FPGA projects and 30% of ASIC projects claim they are using high level languages. Harry attributes this relatively high adoption to signal processing applications, perhaps because HLS has had enthusiastic adoption in signal processing, an increasingly important domain these days (video, audio, sensing). But SystemC isn’t the only option. Halide is popular at least in academia for GPU design. As architecture becomes more important for modern SoCs I can see the trend extending to other domains through other domain specific languages.

Here are a collection of relevant links: Harry’s analysis of the 2022 verification study, the Wilson 2022 report on IC/ASIC verification trends, the Wilson 2022 report on FPGA verification trends and a podcast series on the results.



Podcast EP153: Suk Lee’s Journey to Intel Foundry Services, with a Look to the Future

by Daniel Nenni on 04-12-2023 at 12:00 pm

Dan is joined by Suk Lee, Vice President of Design Ecosystem Development at Intel Foundry Services. He has over 35 years of experience in the semiconductor industry, with engineering, marketing, and general management positions at LSI Logic, Cadence, TI, Magma Design Automation and TSMC. At TSMC, he was responsible for managing the third party partners making up the OIP Ecosystem, and created the OIP Ecosystem Forum, the premier Ecosystem event in the Foundry Industry.

Suk discusses his journey through semiconductors, EDA and ultimately the foundry business. Dan explores the reasons Suk joined Intel Foundry Services, their focus and what the future holds for the organization in the changing semiconductor landscape.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Taking the Risk out of Developing Your Own RISC-V Processor with Fast, Architecture-Driven, PPA Optimization

by Daniel Nenni on 04-12-2023 at 10:00 am


Are you developing or thinking about developing your own RISC-V processor? You’re not alone. The use of the RISC-V ISA to develop processors for SoCs is a growing trend. RISC-V offers a lot of flexibility with the ability to customize or create ISA and microarchitectural extensions to differentiate your design no matter your application area: AI, machine learning, automotive, data center, mobile, or consumer. Proprietary cores with custom extensions are often highly complex and have traditionally required an equally elevated level of expertise to design them. Only those with deep knowledge and skills have successfully met the challenges associated with evaluating the impact of design decisions on power, performance, and area (PPA). But even experts can admit that this process can take an exceptionally long time and full optimization of these parameters may not be obtained.

Join us for a webinar replay and learn how to overcome these challenges and take the risk out of developing your own RISC-V processor.

Synopsys is dedicated to addressing the challenges facing RISC-V-based design with a portfolio of industry leading solutions that can bring RISC-V designs to life faster and easier. This Synopsys webinar will cover two tools:  Synopsys ASIP Designer and Synopsys RTL Architect. These tools can help chip designers create highly customized processors faster while meeting the desired PPA targets with confidence. We will also show these solutions in action with a real-world case study that will highlight their interoperability and the results that can be achieved.

Synopsys ASIP Designer is the leading tool-suite to design and program Application Specific Instruction-set Processors (ASIPs).  From a user-defined processor model, capturing the ASIP’s instruction-set and micro-architecture in the architecture description language nML, ASIP Designer automatically creates both a complete software development kit with an efficient C/C++ compiler, and a synthesizable RTL implementation of the ASIP.

The complexity of RISC-V chips and restrictive advanced node rules have made it more difficult for implementation tools to achieve power, performance, and area (PPA) targets. Synopsys RTL Architect is the industry’s first physical aware, RTL analysis, exploration, and optimization environment. The solution enables designers to “shift left” and predict the implementation impact of RTL significantly reducing RTL development time and creating better RTL.

This online seminar will present a new interoperability solution that facilitates a “Synthesis-in-the-Loop” design approach, both during earlier architectural design stages with processor model modifications and during RTL implementation.  Synopsys ASIP Designer’s RTL generation tool has been extended with an “ASIP RTL Explorer” utility that can systematically generate multiple RTL implementations of the ASIP with different design options.  Then, using Synopsys RTL Architect’s parallel exploration capabilities, designers can perform a comparative analysis of these RTL variants with respect to performance, power, and area (PPA).

We will illustrate the effectiveness of the new interoperability solution with a case study of a RISC-V ISA extended ASIP design for an AI-optimized MobileNet v3 inference, for which we want to find an energy-efficient implementation.  We will show how Synopsys ASIP Designer’s RTL Explorer generated 7 RTL variants for this ASIP and how Synopsys RTL Architect was used to compare and analyze the power consumption of these alternatives quickly and accurately.

The new interoperability solution reinforces Synopsys ASIP Designer’s Synthesis-in-the-Loop methodology and brings another productivity gain in the design of SoCs with programmable accelerators.

Join us for our Synopsys webinar to remove risk from your RISC-V processor development. Register today and watch the replay!

Also Read:

Feeding the Growing Hunger for Bandwidth with High-Speed Ethernet

Takeaways from SNUG 2023

Full-Stack, AI-driven EDA Suite for Chipmakers

Power Delivery Network Analysis in DRAM Design


Optimizing Return on Investment (ROI) of Emulator Resources

by Kalar Rajendiran on 04-12-2023 at 6:00 am

Verification Options SW vs HAV

Modern-day chips are increasingly complex, with stringent quality requirements, very demanding performance requirements, and very low power consumption requirements. Verification of these chips is very time consuming and accounts for approximately 70% of the simulation workload on EDA server farms. As software-based simulators are too slow for many requirements, hardware-assisted verification (HAV) technologies are finding increased use for many different purposes. Emulators are used for pre-silicon software development, hardware/software co-verification, debugging, and in-circuit emulation (ICE). While emulators improve throughput, they are expensive; even a single emulator can cost millions of dollars. Add to this the cost of a dedicated team to support the specialized workflows, and we are talking about a very expensive proposition.

Given the large investment involved with using emulators, it is natural to expect to maximize the return on investment (ROI), which means the emulators must be used very efficiently. If only running ICE jobs, the emulators may remain idle overnight and during weekends, translating to very low utilization of this expensive resource. Utilization can be increased by running both ICE jobs and overnight batch jobs. For this, designs must be compiled from RTL code and emulation boards must be allocated and programmed. And virtual target devices such as PCI, USB and video controllers must be soft-assigned before jobs can run.
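To see how much batch jobs can raise utilization, here is a back-of-the-envelope sketch; the hour counts and the 80% batch-fill figure are my assumptions, not numbers from the article:

```python
# Rough utilization sketch: an emulator running only interactive ICE jobs
# during business hours vs. one that also runs overnight/weekend batch
# jobs. All hour counts and fill rates are illustrative assumptions.

HOURS_PER_WEEK = 7 * 24      # 168
ICE_HOURS = 5 * 10           # 10 business hours a day, 5 days a week

ice_only = ICE_HOURS / HOURS_PER_WEEK
# Assume batch jobs fill 80% of the remaining off-hours capacity:
with_batch = (ICE_HOURS + (HOURS_PER_WEEK - ICE_HOURS) * 0.8) / HOURS_PER_WEEK

print(f"ICE only:   {ice_only:.0%}")    # ~30%
print(f"with batch: {with_batch:.0%}")  # ~86%
```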

According to Global Market Insights, the HAV market is expected to exceed $15 billion by 2027, representing a CAGR of over 15%. Investments in HAV tools are on the rise, and investment in hardware emulation now exceeds that in software-based verification, growing to $718 million in 2020. With this growth trend, optimizing the use of HAV resources takes on added importance.

HAV Optimization Challenges

Organizations often use hard-partitioning strategies to allocate emulator resources among teams. However, these allocations may be incompatible with the needs of simulation acceleration (SA) jobs. Emulation users refer to the challenge of efficiently packing workloads as the Tetris problem. There is also the need to manage the time required to cut over between workloads. And often the emulation environment involves hardware from multiple emulation vendors. This compounds the scheduling challenge as different emulators have different topology characteristics.

Schedulers need to:

  • Account for existing utilization and interactive jobs when placing new workloads
  • Schedule and share a limited number of emulated peripheral devices
  • Consider long lead times and workflows required to compile designs and load them into the emulator
  • Accommodate design teams requesting hard usage windows and future resource reservations
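The "Tetris problem" above is essentially bin packing. A toy first-fit-decreasing placement, ignoring the topology, reservations, and preemption that a real scheduler like Hero handles, might look like:

```python
# Toy first-fit-decreasing scheduler for the "Tetris problem": packing
# batch jobs (each needing some number of emulator boards) onto emulators
# with boards left free by interactive jobs. Names are invented; real
# schedulers also model topology, reservations, and preemption.

def first_fit(jobs, free_boards):
    """jobs: list of (name, boards_needed); free_boards: emulator -> free
    board count (mutated). Returns job -> emulator; unplaced jobs omitted."""
    placement = {}
    for name, need in sorted(jobs, key=lambda j: -j[1]):  # biggest first
        for emu, free in free_boards.items():
            if free >= need:
                placement[name] = emu
                free_boards[emu] -= need
                break
    return placement

jobs = [("regressA", 4), ("regressB", 2), ("smoke", 1)]
boards = {"emu1": 3, "emu2": 5}
print(first_fit(jobs, boards))
# {'regressA': 'emu2', 'regressB': 'emu1', 'smoke': 'emu1'}
```

First-fit-decreasing is a classic greedy heuristic for bin packing; placing the largest jobs first reduces the chance that fragmented free boards strand a big regression.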

Because of all these challenges, emulation workloads are still managed manually in many environments. When multiple groups compete for the same emulation resources, manual management becomes an issue. The allocation of hardware emulation resources should be automated for greater efficiency and throughput.

Altair’s Hardware Emulator Resource Optimizer (Hero)

Altair® Hero™ is an end-to-end solution designed specifically for hardware emulation environments, addressing all aspects of emulation flow including design compilation, emulator selection, and software and regression tests. Hero’s vendor-independent architecture and comprehensive policy management features provide organizations with flexibility and control.

Hero supports a variety of hardware-assisted verification platforms, including traditional software-based tools as well as hardware emulators based on custom processors and FPGAs. It is designed to be emulator-agnostic, providing a generic scheduling model that treats boards and modules as “leaf” resources, making it adaptable to most commercially available emulation platforms.

Key features of Hero 2.0 include policy management including FairShare and preemption, soft reservations enabling users to reserve blocks of time on an emulator in advance, visibility to emulator-specific metrics for hardware asset optimization and organizational planning, a rich GUI to simplify monitoring and determine the root cause of failing jobs, and support for emulation and prototyping platforms from multiple vendors.

Hero enables emulator users to benefit from the same kinds of policy-based controls common in software verification environments, such as attaching different priorities to different emulation jobs, applying FairShare policies to manage sharing of emulator resources, and preemption ensuring the resources are available during business hours for interactive ICE activities. Hero also provides granular, real-time visibility to emulator resources, including visibility to runtime host allocations and the boards and modules used across the various emulators.

Summary

As per Altair, Hero is the only scheduler to optimize use of resources across multiple emulators and hardware-assisted verification platforms. Altair has published two whitepapers around Hero to address the topic of optimizing the use of emulation resources. Those involved with running workloads on emulators would find these two whitepapers very informative.

The first whitepaper covers the details of how Hero helps maximize the ROI of customers’ emulator resources. This whitepaper goes into many intricate details as they relate to hardware emulation. For example, it identifies a number of tangible business metrics to track to evaluate the utilization efficiency of emulation resources, rather than the simplistic metric of percentage of emulator gates used. The second whitepaper is more like an application note on how users can apply the features of Hero to their emulation jobs.

For more details, visit the Altair Hero product page.

Also Read:

Measuring Success in Semiconductor Design Optimization: What Metrics Matter?

Load-Managing Verification Hardware Acceleration in the Cloud

Altair at #59DAC with the Concept Engineering Acquisition


LIVE WEBINAR – The ROI of User Experience Design: Increase Sales and Minimize Costs

by Daniel Nenni on 04-11-2023 at 10:00 am


The semiconductor industry has seen a significant shift towards vertical integration of products, expanding from chips to generalized or purpose-built integrated solutions. As software becomes an increasingly critical component of these solutions, leveraging modern software development processes with User Experience (UX) design is integral to successful launches.

It should be no surprise that intuitive, simple-to-use software drives sales, customer satisfaction and retention, and brand loyalty.  However, beyond these benefits, User Experience design accelerates your time-to-market while reducing your internal product development costs. In light of this, we organized a webinar to explore the role of UX design in software-hardware solutions, with its influence spanning from product development to sales wins.

With an intended audience of sales and software engineering managers responsible for product development, this live webinar will discuss how the User Experience design practice achieves the aforementioned business goals simultaneously.  Participants will learn how UX is practically implemented in IoT and integrated solution project teams, review real world case studies, and discuss questions in a live Q&A.

The webinar, “The ROI of User Experience Design: Increase Sales and Minimize Costs,” will be hosted by Matt Genovese, founder and CEO of Planorama Design, himself a seasoned engineer whose career spans semiconductors and software product development.  Don’t miss out on this opportunity to learn from an industry expert and take your product development process to the next level.  Watch the replay.

Abstract:
In today’s competitive landscape for IoT, edge, and cloud solutions, User Experience (UX) design has become more crucial than ever in achieving customer and business goals. During this live webinar, we will explore how UX design affects everything from sales, customer retention, and time-to-market to internal support and development costs. We’ll delve into key principles of user-centered design and discuss how they drive more complete, purpose-built solutions that differentiate you from competitors and help your customers move through engineering and ramp into production more quickly.

Speaker:
Matt founded Planorama Design, a user experience design professional services company that designs complex, technical software and systems to be simple and intuitive to use while reducing internal development and support costs. Staffed with seasoned engineers and UX designers, the company is headquartered in Austin, Texas, USA.

Watch the replay

Also Read:

CEO Interview: Matt Genovese of Planorama Design


WEBINAR: Design Cost Reduction – How to track and predict server resources for complex chip design projects?

WEBINAR: Design Cost Reduction – How to track and predict server resources for complex chip design projects?
by Daniel Nenni on 04-11-2023 at 6:00 am


During the design of complex chips, cost reduction is becoming a real challenge for small, medium, and large companies alike. Resource management is key to containing design costs.

The chip design market expects automated solutions to help with resource prediction, planning, and analysis, and AI-based technologies show real promise for emerging resource management solutions.

Modern chip design is relying more and more on cloud-based computing servers, but the key question is: can the right time to switch from on-premise to cloud-based servers be predicted?

INNOVA provides a clear ML-based answer to this question. Its tool PDM (Project Design Manager) simplifies the three following key steps:

  1. Model training
  2. Model selection
  3. Time prediction of server resources

all based on precise CPU, memory, and disk information.
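The three steps above can be sketched in miniature. The toy Python example below (PDM’s actual models and interfaces are proprietary, so everything here is hypothetical) trains two candidate models on a history of weekly CPU-hours, selects the one with the lower error on held-out weeks, and predicts the next four weeks:

```python
# Step 1 (training): fit candidate models to historical usage.
def fit_linear(history):
    """Least-squares line through (week, cpu_hours) points."""
    n = len(history)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(history) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, history)) / \
            sum((x - mx) ** 2 for x in xs)
    return lambda week: my + slope * (week - mx)

def fit_mean(history):
    """Naive baseline: predict the historical average."""
    avg = sum(history) / len(history)
    return lambda week: avg

# Step 2 (selection): keep the model with the lowest holdout error.
def select_model(history, candidates, holdout=3):
    train, test = history[:-holdout], history[-holdout:]
    def err(fit):
        model = fit(train)
        return sum(abs(model(len(train) + i) - y) for i, y in enumerate(test))
    return min(candidates, key=err)(history)

# Step 3 (prediction): forecast future server resource needs.
cpu_hours = [100, 112, 118, 131, 140, 149, 162, 170]    # toy weekly usage
model = select_model(cpu_hours, [fit_linear, fit_mean])
forecast = [round(model(len(cpu_hours) + w)) for w in range(4)]
print(forecast)
```

A production tool would of course use richer models and per-resource (CPU, memory, disk) features, but the train/select/predict loop is the same shape.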

It is worth noting that INNOVA’s PDM provides the infrastructure to track the usage of different resources (EDA tools, servers, engineering resources, libraries, …).  It can be securely plugged into any existing IT environment and interoperates with standard project, license, and server management tools.

Once the tool is installed, executing a resource prediction is straightforward, including selecting the ML model best suited to the user and company context.  Generating reports, including comparisons between real and predicted data, is also made easy, even for non-AI experts.

Figure: Predicted vs. Real Data

There is no need to train the tool for months: for a 3-month prediction, PDM generates first training results, along with correlation reports between real and predicted data, in less than 24 hours.

To make design managers, CAD teams, and procurement teams comfortable with this kind of technology, the tool provides several interfaces, including a web-based GUI and high-level scripting APIs in Python and GraphQL. Report generation and alarm creation are also made easy.

A dedicated webinar, entitled “AI-Based Resource Tracking & Prediction of Computing Servers For Chip Design Projects,” covers the tool in more detail. See the replay HERE.

About INNOVA:

INNOVA Advanced Technologies was founded in 2020 by seasoned veterans of the semiconductor industry. Its solution is intended for designers and design managers of complex, multi-domain projects, ranging from microelectronics to computer science, and helps them manage projects and resources in one place.

For more information about INNOVA Advanced Technologies you can visit their website here: https://www.innova-advancedtech.com/

Also Read:

Defacto’s SoC Compiler 10.0 is Making the SoC Building Process So Easy

Using IP-XACT, RTL and UPF for Efficient SoC Design

Working with the Unified Power Format


Podcast EP152: An Informal Conversation with Aart de Geus on AI and Multi-Die at SNUG

Podcast EP152: An Informal Conversation with Aart de Geus on AI and Multi-Die at SNUG
by Daniel Nenni on 04-10-2023 at 2:00 pm

This is a special edition of our podcast series. At the recent Synopsys SNUG User Group, SemiWiki staff writer Kalar Rajendiran got the opportunity to conduct an informal interview with Aart de Geus, Chairman and CEO of Synopsys.

What follows are some of Aart’s thoughts on the deployment of AI across the semiconductor ecosystem. Technology and business implications are discussed as well as the impact of multi-die design on the overall landscape.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Multiphysics Analysis from Chip to System

Multiphysics Analysis from Chip to System
by akanksha soni on 04-10-2023 at 10:00 am


Multiphysics simulation uses computational methods to model and analyze a system’s response to different physical interactions such as heat transfer, electromagnetic fields, and mechanical forces. Using this technique, designers can build physics-based models and analyze the behavior of the system as a whole.

Multiphysics phenomena play a key role when designing any electronic device. Most devices we use in our day-to-day lives contain electronics: chips, wires, antennas, casings, and many other components that together deliver the product’s final function. These physical phenomena act not just within an electronic device but also on nearby devices. Therefore, it’s important to consider the effects of physical interactions from chip to system and out to the surrounding environment.

Alternative methods and their shortcomings

Understanding the electrical behavior of a device or system isn’t enough. Designers also need to consider Multiphysics aspects such as thermal effects, mechanical stress/warpage, and electromagnetic effects, and they may use different approaches to understand the Multiphysics behavior of the system at different levels.

Engineers can simulate each physical phenomenon separately and integrate the results to understand the cumulative behavior. This approach is time-consuming, prone to errors, and does not allow for a comprehensive analysis of the interactions between different physical fields. For example, temperature variations in a multi-die IC package can induce mechanical stress, and mechanical deformation can in turn affect the electromagnetic behavior of the system. Everything is interrelated; therefore, a comprehensive Multiphysics solution is required to simulate the physics of the entire system.
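The feedback loop described above can be seen in even the simplest coupled problem. The following sketch (with made-up component values, purely for illustration) models a resistor whose dissipated power raises its temperature, which raises its resistance, which changes the power again; iterating to a self-consistent answer is the essence of coupled Multiphysics, and a one-pass uncoupled analysis gets the temperature noticeably wrong.

```python
V = 5.0          # applied voltage (V)
R0 = 10.0        # resistance at ambient (ohm)
ALPHA = 0.004    # temperature coefficient of resistance (1/K)
T_AMB = 25.0     # ambient temperature (C)
R_TH = 40.0      # thermal resistance to ambient (K/W)

def solve_coupled(tol=1e-9, max_iter=100):
    """Fixed-point iteration between the electrical and thermal models."""
    temp = T_AMB
    for _ in range(max_iter):
        res = R0 * (1 + ALPHA * (temp - T_AMB))  # electrical model
        power = V * V / res                      # dissipated power (W)
        new_temp = T_AMB + R_TH * power          # thermal model
        if abs(new_temp - temp) < tol:
            break
        temp = new_temp
    return res, power, temp

res, power, temp = solve_coupled()
# An uncoupled, one-pass analysis at R0 would predict
# 25 + 40 * (5.0**2 / 10) = 125 C; the coupled solution settles
# lower because the hotter resistor draws less power.
print(round(temp, 1))
```

Real chip-package-system problems couple many more fields over millions of unknowns, but the need to iterate the physics together rather than solve each in isolation is the same.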

To achieve high performance and speed goals, chip designers are embracing multi-die systems such as 2.5D/3D-IC architectures. The number of vectors to be simulated in these systems has reached into the millions. Conventional IC design tools cannot handle this explosion of data, so chip designers have had to analyze the Multiphysics behavior of the system from a limited set of data. This approach might work if the system is not high-speed and not used in critical conditions, but it is definitely not adequate for today’s high-speed systems, where reliability and robustness are major requirements.

Ansys provides a comprehensive Multiphysics solution that can solve millions of vectors to thoroughly analyze the Multiphysics of the entire chip-package-system.

Advantages of Multiphysics Simulation from Chip to System

Comprehensive Multiphysics simulation is a powerful method that enables designers to accurately predict and optimize the behavior of complex systems at all levels: chip, package, and system. Among its many advantages, the most prominent are:

  1. Enhanced Reliability: Comprehensive Multiphysics simulation analyzes the physics of each complex component in the system and also considers the interactions between different physical domains. This technique provides more accurate results, which ensures the reliability of the system. Ansys offers a wide range of Multiphysics solutions enabling designers to analyze the Multiphysics at all levels: chip, package, system, and surrounding environment.
  2. Improved Performance: Multiphysics solutions give insight into the different physics domains, their interactions, and their impact on the integrity of the system. By knowing the design’s response to thermal and mechanical parameters along with its electrical behavior, designers can make an informed decision and modify the design to achieve the desired performance. For a 3D-IC package, the Ansys 3D-IC solution provides clear insight into power delivery, temperature variations, and mechanical stress/warpage around chiplets and the interposer, allowing designers to deliver higher performance.
  3. Design Flexibility: Designers can explore a wide range of design options and tradeoffs, making decisions based on yield, cost, and total design time. For example, in a 3D-IC package, designers can choose chiplets based on functionality, cost, and performance. Multiphysics simulation allows this flexibility without extra cost.
  4. Reduced cost: It allows designers to identify potential design issues early in the development process, reducing the need for physical prototypes and lowering development costs. Using simulation, you can also tradeoff between the BOM costs and expected performance.
  5. Reduced Power Consumption: A system consists of multiple parts, and each part might have different power requirements. With Multiphysics simulation, designers can estimate the power consumption of each part of the system and optimize the power delivery network.

Ansys offers powerful simulation capabilities that help designers optimize their products’ performance, reliability, and efficiency, from the chip to the system level. Using Ansys Multiphysics solutions, designers can make informed decisions throughout the design process.

Learn more about Ansys Multiphysics Simulation tools here:

Ansys Redhawk-SC | IC Electrothermal Simulation Software

High-Tech: Innovation at the Speed of Light | Ansys White Paper

Also Read:

Checklist to Ensure Silicon Interposers Don’t Kill Your Design

HFSS Leads the Way with Exponential Innovation

DesignCon 2023 Panel Photonics future: the vision, the challenge, and the path to infinity & beyond!