
IEDM 2021 – Back to in Person

by Scotten Jones on 10-18-2021 at 6:00 am


Anyone who has read my previous articles about IEDM knows I consider it the premier conference on process technology.

Last year, due to COVID, IEDM was virtual, and although a virtual format offers some advantages, the hallway conversations that are such an important part of the conference were lost. This year IEDM returns as a live event in San Francisco from Dec. 11-15, 2021, with on-demand access to materials starting Dec. 17.

Saturday, December 11 will be the tutorials:

90-Minute Tutorials – Saturday, Dec. 11

The 90-minute Saturday tutorial sessions on emerging technologies have become a popular and growing part of the IEEE IEDM. They are presented by experts in the field to bridge the gap between textbook-level knowledge and leading-edge current research, and to introduce attendees to new fields of interest:

2:45 p.m. – 4:15 p.m.

4:30 p.m. – 6:00 p.m.

Sunday, December 12 will be the short courses:

IEDM Short Courses – Sunday, Dec. 12

In contrast to the Tutorials, the full-day Short Courses are focused on a single technical topic. Early registration is recommended, as they are often sold out. They offer the opportunity to learn about important areas and developments, and to network with global experts.

  • Future Scaling and Integration Technology, organized by Dechao Guo, IBM Research
  • Practical Implementation of Wireless Power Transfer, organized by Hubregt Visser, IMEC

The conference begins Monday with the plenary talks:

Plenary Presentations – Monday, Dec. 13

  • The Smallest Engine Transforming Our Future: Our Journey Into Eternity Has Only Begun, Kinam Kim, Vice Chairman & CEO, Samsung Electronics Device Solutions Division
  • Creating the Future: Augmented Reality, the Next Human-Machine Interface, Michael Abrash, Chief Scientist, Facebook Reality Labs
  • Quantum Computing Technology, Heike Riel, Head of Science & Technology, IBM Research and IBM Fellow

The full technical conference follows, running Monday through Thursday.

As every year, IEEE IEDM 2021 will offer Special Focus Sessions on emerging topics with invited talks from world experts to highlight the latest developments.

Monday, December 13, 1:35 PM

Session 3 – Advanced Logic Technology – Focus Session – Stacking of devices, circuits, chips: design, fabrication, metrology – challenges and opportunities

Tuesday, December 14, 9:05 AM

Session 14 – Emerging Device and Compute Technology – Focus Session – Device Technology for Quantum Computing

Wednesday, December 15, 9:05 AM

Session 25 – Memory Technology/Advanced Logic Technology – Focus Session – STCO for memory-centric computing and 3D integration

Wednesday, December 15, 1:35 PM

Session 35 – Sensors, MEMS, and Bioelectronics/Optoelectronics, Displays, and Imaging Systems – Focus Session – Technologies for VR and Intelligence Sensors

Session 38 – Emerging Device and Compute Technology/Optoelectronics, Displays, and Imaging Systems – Focus Session – Topological Materials, Devices, and Systems

For more information and to access the full program as it becomes available, please go to: https://www.ieee-iedm.org/

IEEE International Electron Devices Meeting (IEDM) is the world’s preeminent forum for reporting technological breakthroughs in the areas of semiconductor and electronic device technology, design, manufacturing, physics, and modeling. IEDM is the flagship conference for nanometer-scale CMOS transistor technology, advanced memory, displays, sensors, MEMS devices, novel quantum and nano-scale devices and phenomenology, optoelectronics, devices for power and energy harvesting, high-speed devices, as well as process technology and device modeling and simulation.


Silicon Startups, Arm Yourself and Catalyze Your Success…. Spotlight: Semiconductor Conferences

by Kalar Rajendiran on 10-17-2021 at 10:00 am


The arrival of fall typically seems to raise the number of conferences hosted by semiconductor ecosystem companies. The conferences may go by different names, but whether called a forum, summit, conference, or some other creative name, the purpose is the same: to bring technologists and business people together to share ideas. More specifically, to discuss industry opportunities and challenges and, of course, to tout the respective ecosystem partners’ accomplishments. Samsung Foundry Forum and GlobalFoundries Technology Summit were held in September. October has some interesting ones too.

As important as it is for everyone within the industry to participate in these conferences, the value for smaller companies is potentially much higher. Why? Because where else and how else would a company with smaller financial resources get access to all of this, compressed into a few days and happening in one location? Of course, nowadays that one location is often a virtual one. That is even better.

Just because something is free and you can attend without even leaving your home or office does not mean you could or should attend all these conferences. So, how do you go about choosing? Well, to some extent it depends on who you are, where you are in terms of your product idea, and what kind of assistance and insights you need at that stage.

Leveraging Conference Opportunities

Arm Dev Summit

One that is in the immediate offing is the Arm Dev Summit, scheduled as a virtual event for Oct 19-21, 2021. There, you will get a chance to network with Arm’s global community of hardware designers and software developers, and an opportunity to hone your ideas, whether in AI, IoT, 5G, wired communications, or super-computing. It’s a 3-day virtual event and the agenda is extensive. Whether you are just starting up, simply crazy about autonomous vehicles, a myth-buster type, or for that matter a 5G Campfire kind of person, there is something of value for everyone. Arm’s entire ethos is about creating a diverse, multi-participant ecosystem to unlock new possibilities.

Register and attend all sessions that are relevant to your areas of interest. If you are a startup, you would certainly want to check out the following back-to-back sessions. Attend a live panel session that will focus on helping startups succeed, followed by a networking session with the same theme.

Of course, navigating a startup to market success demands a lot more than just attending developer conferences. Building investor confidence to secure early rounds of funding is key. Innovating while keeping costs down is essential. Flexibility to test and iterate without overrunning the budget is a big advantage. The importance of reducing risk in project and product schedules cannot be overstated. Getting a product to market faster than the competition is crucial. A series of future blogs will tackle these topics and how Arm and Silicon Catalyst could help.

TSMC OIP Forum

If you’re looking at the siliconization aspects of your products, you would want to register for the TSMC Open Innovation Platform (OIP) Forum. This one is scheduled for Oct 26, 2021 for the Americas Time Zone and for Oct 27, 2021 for Europe and Asia Time Zones. You can register for TSMC OIP Forum using this link.

Snapshot of some Sessions: TSMC OIP Forum, Oct 26, 2021

Catalyzing Your Business

Attending the right conferences certainly is helpful. But if you’re an emerging startup, you could benefit from more help on an ongoing basis, at least until you gain escape velocity.

Silicon Valley is known all over the world, and semiconductors are the life force behind all electronics. All the modern conveniences we take for granted are powered by advanced and complex chips, and designing, manufacturing, and producing these chips in high volumes are challenging tasks. Yet there is only one incubator focused on semiconductors. Although hard to believe, it is true.

Silicon Catalyst is the world’s only incubator focused on semiconductor solutions, including MEMS, sensors and intellectual property. Silicon Catalyst’s mission is to help semiconductor startups succeed. Through a coalition of in-kind and strategic partners, investors and advisors, Silicon Catalyst helps startups accelerate their ideas through prototypes, and onto a path to volume production.

As a strategic and in-kind partner, Arm participates in the incubation selection process and actively looks for opportunities to partner with these startups. As an in-kind partner, TSMC provides MPW shuttles for companies in the incubator. The Silicon Catalyst coalition provides everything startups need to design, fabricate, and market semiconductor solutions. The startups gain millions of dollars’ worth of EDA tools, IP, PDKs, prototypes, design and test services, packaging and business solutions and expert guidance from accomplished advisors. For more details, check out the full list of Silicon Catalyst partners.

Upcoming Blog Series

This blog is the first piece in a series of blogs to follow. The future blogs will cover challenges and opportunities that silicon startups commonly face and how the Silicon Catalyst ecosystem can be of significant help in accelerating their growth. The primary goal of the blog series is to identify tried-and-true solutions to the many problems to be faced, ultimately helping accelerate a silicon startup’s transformation from the initial conceptual stage to business success in the market.

Also Read:

WEBINAR: Maximizing Exit Valuations for Technology Companies

Silicon Catalyst and Cornell University Are Expanding Opportunities for Startups Like Geegah

Silicon Catalyst is Bringing Its Unique Startup Platform to the UK


Chiplet: Are You Ready For Next Semiconductor Revolution?

by Eric Esteve on 10-17-2021 at 6:00 am

D2D IP market forecast, 2020-2025

During the 2010s, the benefits of Moore’s law began to fall apart. Moore’s law held that as transistor density doubled every two years, the cost of compute would shrink by a corresponding 50%. The change is due to increased design complexity and the evolution of transistor structure from planar devices to FinFETs, which require multiple-patterning lithography to achieve device dimensions at nodes below 20nm.

At the beginning of this decade, computing needs exploded, mostly due to the proliferation of datacenters and the amount of data being generated and processed. In fact, Artificial Intelligence (AI) and techniques like Machine Learning (ML) are now used to process this ever-increasing data, which has led servers to significantly increase their compute capacity.

Servers have added many more CPU cores, have integrated larger GPUs used exclusively for ML rather than graphics, and have embedded either custom ASIC AI accelerators or complementary FPGA-based AI processing. Early AI chip designs were implemented as large monolithic SoCs, some of them reaching the size limit imposed by the reticle, about 700mm2.

At this point, disaggregation into a smaller SoC plus various compute and IO chiplets appears to be the right solution. Several chip makers, like Intel, AMD, and Xilinx, have selected this option for products going into production. The excellent white paper from The Linley Group, “Chiplets Gain Rapid Adoption: Why Big Chips Are Getting Small”, showed that this option leads to better costs compared to monolithic SoCs, due to the yield impact of larger dies.
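The yield argument can be illustrated with a simple defect-density model. The sketch below is illustrative Python with an assumed defect density, not figures from the white paper; it compares a 700mm2 monolithic die against four 175mm2 chiplets using a Poisson yield model:

```python
import math

def poisson_yield(area_mm2, defect_density_per_cm2):
    """Fraction of good dies under a simple Poisson defect model."""
    return math.exp(-defect_density_per_cm2 * area_mm2 / 100.0)

D0 = 0.1  # assumed defects per cm^2, typical order of magnitude for a mature node

mono_yield = poisson_yield(700, D0)      # one reticle-limited die
chiplet_yield = poisson_yield(175, D0)   # one of four smaller dies

# Silicon cost per good system, ignoring packaging and D2D overhead:
# four small dies vs one large die of the same total area.
relative_cost = (4 * 175 / chiplet_yield) / (700 / mono_yield)
print(f"monolithic yield: {mono_yield:.2f}")
print(f"chiplet yield:    {chiplet_yield:.2f}")
print(f"chiplet silicon cost relative to monolithic: {relative_cost:.2f}")
```

Even in this toy model the disaggregated version needs roughly 40% less good-silicon cost; a real analysis must add back packaging, test, and D2D interface costs, which is exactly the trade-off the white paper quantifies.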

The major impact of this trend on IP vendors is mostly on the interconnect functions used to link SoCs and chiplets. At this point (Q3 2021), there are several protocols being used, with the industry trying to build formalized standards for many of them.

Current leading D2D standards include: i) the Advanced Interface Bus (AIB, AIB2), initially defined by Intel and now offered for royalty-free usage; ii) High Bandwidth Memory (HBM), where DRAM dies are stacked on each other on top of a silicon interposer and connected using TSVs; and iii) Bunch of Wires (BoW) and OpenHBI, two interfaces defined by the Open Domain-Specific Architecture (ODSA) subgroup, an industry group.

Heterogeneous chiplet design allows us to target different applications or market segments by modifying or adding just the relevant chiplets while keeping the rest of the system unchanged. New developments could be launched quicker to the market, with significantly lower investment, as redesign will only impact the package substrate used to house the chiplets.

For example, the compute chiplet can be redesigned from TSMC 5nm to TSMC 3nm to integrate larger L1 cache or higher performing CPU cores, while keeping the rest of the system unchanged. At the opposite end of the spectrum, only the chiplet integrating SerDes can be redesigned for faster rates on new process nodes offering more IO bandwidth for better market positioning.

Intel PVC is a perfect example of heterogeneous integration (various functional chiplets: CPU, switch, etc.) that we could call vertical integration, since the same chip maker owns the various chiplet components (except for memory devices).

Chip makers developing SoCs for high-end applications, such as HPC, datacenter, AI, or networking, are likely to be early adopters of chiplet architectures. Specific functions, like SRAM for larger L3 cache or AI accelerators, along with interfaces based on the Ethernet, PCIe, or CXL standards, should be the first candidates for chiplet designs.

When these early adopters have demonstrated the validity of heterogeneous chiplets leveraging multiple different business models, and obviously the manufacturing feasibility for test and packaging, an ecosystem critical to supporting this new technology will have been created. At that point, we can expect wider market adoption, not only for high-performance applications.

We can imagine heterogeneous products going further, with a chip maker launching a system made of various chiplets targeting compute and IO functionality. This approach makes convergence on a D2D protocol mandatory, as an IP vendor offering chiplets with an in-house D2D protocol is not attractive to the industry.

An analogy is SoC building in the 2000s, when semiconductor companies transitioned to integrating various design IPs coming from different sources. The IP vendors of the 2000s will inevitably become the chiplet vendors of the 2020s. For certain functions, such as advanced SerDes, or complex protocols, like PCIe, Ethernet, or CXL, IP vendors have the best know-how to implement them on silicon.

For complex design IP, even if simulation verification has been run before shipping to customers, vendors have to validate the IP in silicon to guarantee performance. For digital IP, the function can be implemented in an FPGA because it’s faster and far less expensive than making a test chip. For mixed-signal IP, like a SerDes-based PHY, vendors select the test chip (TC) option, enabling them to characterize the IP in silicon before shipping to customers.

Even though a chiplet is not simply a TC, because it will be extensively tested and qualified before being used in the field, the incremental work the vendor must do to develop a production chiplet is far less. In other words, the IP vendor is best positioned to quickly release a chiplet built from its own IP, offering the best possible TTM with minimized risk.

The business model for heterogeneous integration favors the various chiplets being made by the relevant IP vendor (e.g., Arm for Arm-based CPU chiplets, SiFive for RISC-V-based compute chiplets, and Alphawave for high-speed SerDes chiplets), since they own the design IP.

None of this prevents chip makers from designing their own chiplets and sourcing complex design IPs to protect their unique architectures or implement in-house interconnects. As with SoC design IP in the 2000s, the buy-or-make decision for chiplets will weigh core-competency protection against sourcing of non-differentiating functions.

We have seen that design IP business growth since the 2000s has been sustained by continuous adoption of external sourcing. Both models will coexist (chiplets designed in-house or by an IP vendor), but history has shown that the buy decision eventually overtakes the make.

There is now consensus in the industry that a maniacal focus on following Moore’s law is no longer valid for advanced technology nodes, e.g., 7nm and below. Chip integration is still happening, with more transistors being added per sq. mm at every new technology node. However, the cost per transistor is also growing with every new node.

Chiplet technology is a key initiative to drive increased integration for the main SoC while using older nodes for other functionality. This hybrid strategy decreases both the cost and the design risk associated with integration of other Design IP directly onto the main SoC.

IPnest believes this trend will have two main effects on the interface IP business: one is the strong growth of D2D IP revenues in the near term (2021-2025), and the other is the creation of the heterogeneous chiplet market to augment the high-end silicon IP market.

This market is expected to consist of complex protocol functions like PCIe, CXL, or Ethernet. IP vendors delivering interface IP integrated in I/O SoCs (USB, HDMI, DP, MIPI, etc.) may decide to deliver I/O chiplets instead.

Another IP category impacted by this revolution will be SRAM memory compiler IP vendors, for L3 cache. By nature, cache size is expected to vary depending on the processor. Nevertheless, designing an L3 cache chiplet can be a way for an IP vendor to increase design IP revenues by offering a new product type.

The NVM IP category can also be positively impacted, as NVM IP is no longer integrated in SoCs designed on advanced process nodes. Offering chiplets would be a way for NVM IP vendors to generate new business.

We think that FPGA and AI accelerator chiplets will be a new source of revenue for ASSP chip makers, but we don’t think they can strictly be classified as IP vendors.

While interface IP vendors will be major actors in this silicon revolution, the silicon foundries addressing the most advanced nodes, like TSMC and Samsung, will also play a key role. We don’t think foundries will design chiplets, but they could decide to support IP vendors and push them to design chiplets to be used with SoCs in 3nm, as they do today when supporting advanced IP vendors marketing their high-end SerDes as hard IP in 7nm and 5nm.

Intel’s recent transition to 3rd-party foundries is expected to also leverage third-party IPs, as well as heterogeneous chiplet adoption by semiconductor heavyweights. In this case, there is no doubt that hyperscalers like Microsoft, Amazon, and Google will also adopt chiplet architectures… if they don’t precede Intel in chiplet adoption.

By Eric Esteve (PhD.) Analyst, Owner IPnest

Also Read:

IPnest Forecast Interface IP Category Growth to $2.5B in 2025

Design IP Sales Grew 16.7% in 2020, Best Growth Rate Ever!

How SerDes Became Key IP for Semiconductor Systems


Podcast EP43: Navigating the Architecture Exploration Jargons and What Do They Mean to a Chip Architect?

by Daniel Nenni on 10-15-2021 at 10:00 am

Dan is joined by Deepak Shankar, founder of Mirabilis Design. Dan explores the application and impact of architectural exploration on chip and system design.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Rick Seger of SigmaSense

by Daniel Nenni on 10-15-2021 at 6:00 am


Rick Seger is a pioneer of the PCAP touch and pen industry. Seger formerly was the President of N-trig, which incorporated PCAP technology with active pen solutions into devices for nearly all major PC OEMs. Microsoft acquired N-trig in 2015, and Seger is now the CEO of SigmaSense, a global leader in touch sensing performance. SigmaSense brings the best user experiences to products ranging from mobile phones and laptops to large monitors and digital signage. Their revolutionary new approach to sensing delivers 100 to 1,000 times improved Signal-to-Noise Ratio (SNR) performance in many instances.

We are speaking to Rick as SigmaSense is embarking on an exciting next step in their Company’s growth: they have just concluded a Series B round totaling $24 million, which will help the Company begin mass production to deliver its innovations and features to a range of applications, starting with touch and improved user experiences.

Q: Can you provide a brief background on the technology behind SigmaSense®?

Troy Gray, one of our founders, is the inventor who originally conceived the idea to use the concurrent bi-directional drive and sense for detecting changes in electric fields instead of scanning voltage thresholds. Troy is an inventor at his core and has nearly 30 years in the touch industry. Several years ago, he came up with this exciting concept based on his innate understanding of how to detect electron movement: the ability to manipulate and sense electron fields concurrently and adaptively. From this idea of concurrent driving and sensing on the same pin, SigmaSense was born.

Q: How does this technology fundamentally revolutionize the semiconductor industry? 

Human factors decide the winners in almost every market. Digital devices are becoming more intuitive by leveraging AI capabilities that delight customers. For this reason, the future of semiconductors is becoming as much about sensing high fidelity analog interactions as it is about processor performance.  As sensing becomes a top priority, we are shifting the industry from mature voltage mode Analog-to-Digital Converters (ADCs) to current and frequency-based ADCs. This shift moves much of what is done in analog today to 90% digital silicon.

The analog voltage-based sensing industry has been stuck for the past 40 years fighting RC time constraints to detect voltage thresholds above the electrical noise using analog processes that are difficult to scale.  The technology injects latency, uses too much battery power, is limited in canceling noise, and cannot rapidly adapt to changing conditions.  By contrast, current mode ADCs take advantage of scalable digital semiconductor processes that are faster, lower power, higher SNR, and less expensive to manufacture.
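The RC limitation described above can be quantified with a first-order charging model. This is a generic illustration with assumed panel values, not SigmaSense data: the time to charge a sense line to a detection threshold is t = -RC ln(1 - Vth/Vdrive), so large panels (large R and C) directly bound the scan rate.

```python
import math

def settle_time(r_ohm, c_farad, v_threshold, v_drive):
    """Time for a first-order RC node to charge to v_threshold."""
    return -r_ohm * c_farad * math.log(1.0 - v_threshold / v_drive)

# Assumed large-format panel row: 100 kohm ITO trace, 1 nF capacitance
t = settle_time(100e3, 1e-9, 0.9, 1.0)     # charge to 90% of drive voltage
print(f"per-row settling time: {t * 1e6:.0f} us")

# Scanning 100 rows sequentially bounds the touch report rate:
report_rate_hz = 1.0 / (100 * t)
print(f"max report rate: {report_rate_hz:.0f} Hz")
```

With these assumed values the panel settles in roughly 230 us per row, capping the report rate near 43 Hz, which is exactly the kind of latency and lag the voltage-mode approach fights against.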

At the hardware level, each pin on a semiconductor IC is typically dedicated to transmitting, receiving, power, OR other communications purposes. Troy’s breakthrough can reduce multiple pins to a single pin for many applications.   We apply current mode ADCs to enable a single pin to transmit, receive, deliver power, and encode communications concurrently and without muxing.  Combined with AI or edge processors, rather than transmitting massive amounts of data up to the cloud, designers can be selective on the data they want to deliver, capturing the highest fidelity data from any target analog system with far higher efficiency.

We are not the only ones to see SigmaSense revolutionizing the analog-to-digital interface. With our Series B, we are adding Aurelio Fernandez to our Board, who served as the first VP of Worldwide Sales for Broadcom and is exceptionally well connected in the semiconductor industry. Aurelio joins another semiconductor veteran, David French, who was added to our Board in our previous funding round and has a long history in the semiconductor industry with NXP, Cirrus Logic, ADI, and TI.

Q: How does the SigmaSense technology work? 

Our SigmaDrive® current-mode ADC technology can be applied to any impedance-based sensing problem, but let’s use the PCAP touch industry as a good example. With SigmaSense, all channels (rows and columns) are programmable and can sense temperature, pressure, capacitance, voltage, current, resistance, and impedance changes. Now, the presence of a hand or the movement of an object, whether it’s a conductive object or a dielectric, can be detected using a unique ability to see and sense all changes in the electric fields.

Q: What are the problems you are solving? 

Our first product, a software-defined touch-sensing solution we call SigmaVision®, is faster and more robust than current systems.  Touch defines the Human Machine Interactions (HMI), which defines the experience, which ultimately defines the brand. In the last 12 months, leading phones have struggled with firmware upgrades and failures in the field due to voltage mode touch solutions that are pushed to their limit.  The touch solutions have been found to be highly sensitive to noise, slowing report rates, increasing lag, and generally compromising customers’ experiences.  HMI is the wrong place to make compromises.  Imagine a touch response speed that is faster and smoother for high-speed gaming or one that readily works through gloves or can even perform gesture recognition above the screen without touching. Now, picture yourself in front of a 100-inch screen that you touch with gloves through a storefront window while it’s raining.  This technology enables a new class of current mode ADCs for use within an entirely new generation of faster, more responsive, and interactive devices.

Q: Why are Signal-to-Noise ratios and higher-quality data collection essential?

Data starts at the sensor, at the conversion point from analog to digital, and ends with the desired output or expected response. Our silicon systems need better, faster data capture, especially as AI becomes more prevalent. We are watching now as AI systems are at the mercy of the data we load into them.  The data determines the experience.

Are we surprised by garbage in, garbage out? We have nearly unlimited sensing data everywhere: flowing through our bodies, coming off a touchscreen, or inside our vehicles, including all the changes and movements of electrons through various disruptions and interactions. Identifying which data should be processed, which has the highest value, and which provides the best results will require high-fidelity data provided by software-defined sensing systems that are adaptive and flexible. Analog systems are chaotic: changes are continuous and happen in real time, and cloud processing is not efficient for them, so we see significant silicon investments to improve processing performance at the edge.

Q: Why is SigmaSense’s ultra-low voltage a breakthrough in the semiconductor space? 

We have developed a single pin on a semiconductor device that can concurrently transmit, receive, communicate and provide power using ultra-low voltages, up to a thousand times lower voltages than what our competitors need for sensing in that same environment. The breakthrough means we no longer need high voltage signals to get above a noise floor. The benefits of ultra-low-voltage sensing are lower power consumption, longer battery life, lower-cost materials, better display optics, improved sensor reliability, and lower emissions.

Q: What trends do you see in the semiconductor space?

Recent semiconductor shortages have driven an increased focus on semiconductors’ importance in all our lives. Markets will drive semiconductor designs to higher efficiency, specifically a renewed focus on more efficient processing at the edge. Better sensing data is critical for our devices’ “end-to-end” processing performance and ultimately determines the human factors we want.

Many industry leaders are beginning to prioritize end-to-end processing performance. The focus on semiconductors delivering raw processing performance will not end. Still, many of the most significant gains in mixed-signal performance and efficiency come from silicon that enables better data capture. Silicon enablement of adaptive sensing systems is sure to win the new end-to-end processing challenge.

Our recent Series B fundraise and the addition of two semiconductor veterans to our board in the past year make us very well positioned to make substantial impacts in a range of mixed-signal markets. While we’re initially focused on capturing sizable market share in the touch and HMI (Human-Machine Interface) markets, we will then extend our technology into wearables, bio-sensing, IoT, and automotive applications.

Rick Seger’s Bio:

Rick Seger is a Pioneer of the PCAP touch and pen industry. As President of N-trig Inc. from 2006 to 2015, he helped define the first customer products to incorporate PCAP technology, enabling pen solutions into devices from nearly all major PC OEMs. Since 2006 he has driven the adoption of modern touch and pen-based input methods, influencing hardware and software design decisions. Laptop Magazine named him as one of the “25 Most Influential People in Mobile Technology” for his evangelism in this area.

Mr. Seger is a leading advocate for the advancement of interactive displays and is passionate about helping manufacturers deliver products that will drive broad adoption. Most exciting to Mr. Seger is the impact interactive touch and pen-based displays can have on education markets. From Healthcare to Education, from Business to the Arts, the promise of interactivity is strongly sought after, driving design decisions. He has been integral to defining some of the best touch solutions, pens, applications, and sensing devices poised for rapid adoption.

Mr. Seger started his career at Intel Corporation and further developed his leadership experience in the Semiconductor Industry as VP Sales Motorola SPS Consumer Group. During his 13 years at Motorola, Mr. Seger consistently grew the Business, managing more than 80 IC design wins with major OEMs and generating more than $500M in annual revenues.


Webinar – Comparing ARM and RISC-V Cores

by Daniel Payne on 10-14-2021 at 10:00 am

Mirabilis Webinar, October 21

Operating systems and Instruction Set Architectures (ISA) can have long lifespans, and I’ve been an engineering user of many ISAs since the 1970s. For mobile devices I’ve followed the rise to popularity of the ARM architecture, and more recently the RISC-V ISA, which has successfully made the leap from university project to commercialization with a widening ecosystem of support. Naturally the question arises as to which ISA best fits a specific workload, measured by metrics like MIPS, latency, and instruction count.

One company that has expertise in answering these questions is Mirabilis Design, and they’re hosting a webinar about how to model and measure the efficiency of three popular cores:

  • ARM Cortex A53
  • ARM Cortex A77
  • SiFive U74

Mirabilis Design will show their models of these processors, and how to configure each processor model with settings for:

  • Clock Speed
  • Caches: L1, L2, DSU
  • AXI Speed
  • DRAM Speed
  • Custom switches

The same C code will be used across each processor, compiled with each processor’s specific C compiler. Simulations with the compiled code are run in the VisualSim tool, and the results are then compared across metrics like:

  • # of Instructions
  • Latency
  • Maximum MIPS
  • Cache hit-ratio
  • Memory bandwidth
  • Power

You will find out which ISA has the smallest # of instructions, ARM or RISC-V, meaning the best compiler efficiency, along with latency and MIPS numbers. With the Mirabilis Design approach it only takes minutes to run a simulation on your own C code, then collect all of the efficiency numbers for an ISA that you have configured. This information helps a system architect to detect any bottlenecks, and then optimize the architecture for best performance.
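The comparison the webinar describes reduces to a few simple ratios over the simulator’s raw counters. The sketch below uses made-up counter values for two hypothetical cores, not Mirabilis results, just to show how the reported metrics relate:

```python
def mips(instructions, elapsed_s):
    """Millions of instructions retired per second."""
    return instructions / elapsed_s / 1e6

def cache_hit_ratio(hits, misses):
    """Fraction of cache accesses served without a miss."""
    return hits / (hits + misses)

# Hypothetical counters from two simulated runs of the same C workload
runs = {
    "core_A": {"instructions": 1_200_000, "elapsed_s": 0.0012,
               "hits": 950_000, "misses": 50_000},
    "core_B": {"instructions": 1_000_000, "elapsed_s": 0.0011,
               "hits": 920_000, "misses": 80_000},
}

for name, c in runs.items():
    print(f"{name}: {mips(c['instructions'], c['elapsed_s']):.0f} MIPS, "
          f"hit ratio {cache_hit_ratio(c['hits'], c['misses']):.2%}")
```

Note that a lower instruction count for the same source code suggests better compiler/ISA efficiency, but only the combination with latency and MIPS tells you which core actually finishes the workload sooner.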

Summary

System architects and SoC design teams trying to decide which ISA to go with on their next project should be interested in this webinar. You can see the replay HERE.

Compare Performance-power of Arm Cortex vs RISC-V for AI applications.

Abstract:
In the webinar, we will show you how to construct, simulate, analyze, validate, and optimize an architecture model using pre-built components. We will compare micro and application benchmarks on system SoC models containing clusters of ARM Cortex A53/A77/A65AE/N1, SiFive U74, and other vendor cores.

Aside from the processor resources such as cache and memory, the system will contain custom switches, Ingress/Egress buffers, credit flow control, DMA AI accelerators, NoC and AMBA AXI buses.

The evaluation and optimization criteria will be task latency, dCache hit-ratio, power consumed/task and memory bandwidth.

The parameters to be modified are bus topology, cache size, processor clock speed, custom arbiters, task thread allocation and changing the processor pipeline.

Key Takeaways:
1. Validate architecture models using mathematical calculus and hardware traces
2. Construct custom policies and arbitrations, and configure processor cores
3. Select the right combination of statistics to detect bottlenecks and optimize the architecture
4. Identify the right mix of stochastic, transaction, cycle-accurate and trace-based modeling to construct the model

Speaker Bios:
Alex Su is an FPGA solution architect at E-Elements Technology, Hsinchu, Taiwan. Prior to that, Mr. Su worked at ARM Ltd. for 5 years providing technical support for Arm CPU and System IP.

Deepak Shankar is the Founder of Mirabilis Design and has been involved in the architecture exploration of over 250 SoC and processors. Deepak has published over 50 articles and presented at over 30 conferences in EDA, semiconductors, and embedded computing.

About Mirabilis Design
Mirabilis Design, a Silicon Valley company, designs cutting-edge software solutions that identify and eliminate risks in product performance. Its flagship product, VisualSim Architect, is a system-level modeling, simulation, and analysis environment that relies on libraries and application templates to vastly reduce model construction and analysis time. The seamless design framework enables designers to work on a design together, cohesively, to meet intertwined timing and power requirements. For maximum benefit it is typically used early in the design stage, in parallel with the development of the product’s written specification. It precedes the implementation stages – RTL, software code, or schematic – affording greater design flexibility.



SeaScape: EDA Platform for a Distributed Future

SeaScape: EDA Platform for a Distributed Future
by Daniel Nenni on 10-14-2021 at 6:00 am

EDA Platform for a Distributed Future

The electronic design community is well aware that it faces a daunting challenge to analyze and sign off the next generation of huge multi-die 3D-IC systems. Most of today’s EDA tools require extraordinary resources in specialized computers with terabytes of RAM and hundreds of processors. Customers don’t want to keep buying more of these expensive systems. It was interesting then to see an alternative approach discussed at the Ansys IDEAS Digital Forum that promises a more scalable approach to dealing with huge design sizes.

The session titled “SeaScape Analysis Platform – What’s Up and What’s Coming” was presented by Scott Johnson, senior engineer in the Ansys R&D team. Scott summed up the challenge as the need for a way to make use of the thousands of cheap, generic computers made available by commercial cloud providers. The proposed answer is called SeaScape, and it shares many similarities with the open-source Spark analytics engine from Apache. But SeaScape was created and designed specifically for EDA applications, and it greatly simplifies the application of big-data techniques to electronic design. Users don’t need to worry about process messaging or resource scheduling or any of that. SeaScape’s internal scripting and user interface are based on Python – the world’s most popular coding language, which comes with a huge open-source ecosystem.

SeaScape is pre-built for the cloud and designed to require minimal setup. Scott stated that one of its primary benefits is the elastic compute resource allocation that allows every job to start as soon as even a single CPU is ready, and more CPUs will be conscripted as they become available and as required by the tool.

Instant start-up and easy cloud deployment are just two of the major benefits offered by SeaScape’s big-data distributed data processing technology.

Following this introduction to SeaScape, Scott turned his focus to practical deployment. The first product implementation was in Ansys RedHawk-SC. RedHawk is the EDA industry’s golden signoff tool for chip power integrity analysis. RedHawk has been ported onto the SeaScape data platform and is now RedHawk-SC, which is currently in widespread production use at most leading semiconductor houses.

One truly unique feature relative to traditional EDA tools, made possible in RedHawk-SC thanks to SeaScape, is that a single session can simultaneously analyze multiple views, scenarios, and PVT corners. That means a single RedHawk-SC session will generate multiple extraction views in the physical space, multiple transient analyses, multiple signal integrity views, and so forth.

The consequence of this massive parallelism is that additional analytics become possible that were never available before. This includes predictive analytics derived when things like switching activity, physical location, and timing criticality are combined in true multi-variable analysis to create an avoidance score that tells designers early on what things probably will work well and what won’t. As Scott points out, “Having this breadth of data available at your fingertips is a game changer!”. Customers can also customize RedHawk-SC’s analyses and tailor them to their signoff needs through the Python scripting interface.
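The multi-view idea is easy to picture as a fan-out of independent analysis jobs across whatever workers are available. The sketch below is a concept illustration only, built with Python’s standard library rather than the actual SeaScape or RedHawk-SC API; the view and corner names are invented:

```python
# Concept sketch: fan one design out across many view/corner combinations
# in parallel, the way a single RedHawk-SC "session" is described as doing.
from concurrent.futures import ThreadPoolExecutor
import itertools

views = ["extraction", "transient", "signal_integrity"]
corners = ["ss_0.72V_125C", "tt_0.80V_25C", "ff_0.88V_m40C"]

def analyze(job):
    view, corner = job
    # A real worker would launch the actual analysis; here we just tag it.
    return f"{view}@{corner}: done"

# One "session" dispatches every view/corner combination to the pool.
jobs = list(itertools.product(views, corners))
with ThreadPoolExecutor() as pool:
    results = list(pool.map(analyze, jobs))
print(len(results))  # 9 view/corner analyses from a single session
```

The platform’s selling point is precisely that users get this fan-out, plus elastic scheduling across cloud machines, without writing any of the orchestration themselves.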

SeaScape is in production use in RedHawk-SC for power integrity signoff of some of the world’s largest digital chips. Its ability to analyze multiple operational corners at the same time is a huge advance in speed and analytics.

The last section of Scott’s presentation described some of the advanced capabilities on offer in RedHawk-SC that were made possible by SeaScape. These include:

  • Reliable analysis of dynamic voltage drop (DvD) by simultaneously analyzing many thousands of possible switching scenarios to give extremely high coverage.
  • DvD diagnosis capabilities that untangle which cells are the ultimate root causes of observed voltage drops and focus debugging effort in the right places.
  • Very fast ‘what-if’ queries on the analysis data that return answers in a matter of seconds.
  • Hierarchical reduced order models (ROMs) that capture the essentials of the interactions between multiple components with much faster runtimes.
  • Separate analysis of low-frequency power noise in the package so its impact can be decoupled from the analysis of high-frequency power noise in each chip, speeding up analysis where slow and fast signals interact.

Scott summarized SeaScape as a better way forward to handle today’s huge design sizes and rising complexity by harnessing the power of many small machines in the cloud, completing even the biggest tasks in a single day, and bringing together disparate data sources to improve the quality and power of information delivered to the user.

More technical sessions and designer case studies are available at the Ansys IDEAS Digital Forum at www.ansys.com/ideas.

Also Read

Ansys Talks About HFSS EM Solver Breakthroughs

Ansys IDEAS Digital Forum 2021 Offers an Expanded Scope on the Future of Electronic Design

Have STA and SPICE Run Out of Steam for Clock Analysis?


Webinar – SoC Planning for a Modern, Component-Based Approach

Webinar – SoC Planning for a Modern, Component-Based Approach
by Mike Gianfagna on 10-13-2021 at 10:00 am

Webinar – SoC Planning for a Modern Component Based Approach

We all know that project planning and tracking are critical for any complex undertaking, especially a complex SoC design project. We also know that IP management is critical for these same kinds of projects – there is lots of IP from many sources being integrated into any SoC these days. If you don’t keep track of what you’re using and how it’s used, there will be chaos. What isn’t discussed as much is how these two disciplines interact – what are the benefits of a holistic approach? This was the focus of a recent webinar from Perforce. The synergies and benefits of a comprehensive approach are substantial. Read on to learn about SoC planning for a modern, component-based approach.

Johan Karlsson

First up is Johan Karlsson, senior consultant and Agile expert at Perforce. Johan elaborates on planning strategies that are useful for SoC design. He points out that SoC projects are becoming more software-centric, and this creates opportunities. Johan reviews the various strategies available for planning complex projects.

He begins with a discussion of the traditional “deadline tracking” type of approach. The mindset here includes:

  • Visualizing fixed deadlines and what leads up to them
  • Handling hard dependencies between different work activities
  • Rolling wave planning details

Managing the dependencies and the impact of changes is key for this approach. Approaches for implementation include:

  • Work breakdown structure
  • Gantt scheduling

Another approach Johan discusses is something called the lean approach. This technique focuses on delivering customer value in a just-in-time way. Quality is key here, with lots of root cause analysis to find and improve process steps. The customer, in this context, could be the end customer or an internal team involved in the project. The approach focuses on flow and looks for areas where waste can be reduced. The principles here include:

  • Value: satisfy customer needs just in time
  • Flow: locating waste generated by the way a process is organized
  • Quality: is built-in

Approaches for implementation can include:

  • Whiteboard
  • Post-it notes of different colors
  • Pens

The final approach uses adaptive, agile techniques. The methodology is very similar to what is used in software development, and it can be applied to management of IC design as well. The driving philosophy of this approach is the Agile Manifesto, summarized as follows:

  • Individuals and interactions over process and tools
  • Working software (or hardware) over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following the plan

A SCRUM framework can be used for implementation:

  • Roles (product owner, SCRUM master)
  • Events/meetings (sprint planning, daily stand-ups, sprint reviews)
  • Artifacts (product backlog, sprint backlog, etc.)

Johan then discusses the reality of real projects, where a hybrid, or mixed-use, approach combining all three methods will typically work best. There are excellent insights offered here about what will work best in real projects and how the various approaches can be implemented. I highly recommend you get these insights directly from Johan; a webinar replay link is coming. Spoiler alert: Hansoft from Perforce provides an excellent backbone to implement a customized, targeted planning approach.

Simon Butler

The next presenter is Simon Butler, general manager, Methodics IPLM. IP Lifecycle Management (IPLM) has been covered quite a bit by SemiWiki. You can get a good overview of IPLM here. Simon begins with a good overview of the fundamentals of IPLM. It’s worth repeating here:

  • The fundamental use model in IPLM is hierarchical configuration management of the project IP (component) list
    • Some of these IPs will be outsourced or off the shelf, others internally developed
    • IPLM enables a robust release flow managing internal and external component versions
  • The IPLM release flow can be integrated directly into your verification flow and enforce quality control on your release candidates
    • Releases of the required quality can be automatically inserted into the overall hierarchy, with versioning to ensure traceability

Simon goes on to explain the various parts of the design that can be tracked – both the data and metadata. A great explanation of how to implement these concepts in a design flow is also presented, complete with a discussion of the bill of materials and how it is managed. A methodology to unify the management and tracking of IP and its impact on the overall project plan is presented by Simon, along with an example.

At this point, I started to see the benefits of unifying these two disciplines. IP affects the design project and vice versa. Keeping track of all of it in one unified environment is quite appealing. During the webinar, a convenient Semiconductor Starter Pack is offered. This package contains all the tools needed to implement a complete IC and IP tracking/management flow. It’s a great way to experience the benefits of a unified approach. If some of the items discussed seem relevant to your design projects, you can check out the webinar here. It also includes a very relevant Q&A section. Now you can find out about SoC planning for a modern, component-based approach.

Also Read

You Get What You Measure – How to Design Impossible SoCs with Perforce

Achieving Scalability Means No More Silos

Future of Semiconductor Design: 2022 Predictions and Trends


TSMC Arizona Fab Cost Revisited

TSMC Arizona Fab Cost Revisited
by Scotten Jones on 10-13-2021 at 8:00 am

TSMC North America Fabs

Back in May of 2020 I published some comparisons of the cost to run a TSMC fab in Arizona versus their fabs in Taiwan. I found the fab operating cost based on the country-to-country difference to only be 3.4% higher in the US and then I found an additional 3.8% because of the smaller fab scale. Since that time, I have continued to encounter reports that the US fab costs are approximately 30% higher than countries in Asia. In the studies I have found, most of the cost difference is attributed to “incentives” without a clear explanation of what the incentives are. My calculation does not include incentives but still the size of the difference led me to completely reexamine my assumptions and look into incentives, what they could be and how they would impact the costs I calculate.

Profit and Loss

At the highest level, companies are judged by their Profit and Loss (P&L), and I decided to go through a simple P&L line by line and look at every country-to-country difference that could impact bottom-line profitability.

A P&L is summarized on an income statement; a simple income statement is:

  1. Revenue – the money received from selling the product
  2. Cost of Goods Sold (COGS) – the direct costs to produce the product being sold. This is what our Models calculate.
  3. Gross Margin = Revenue – COGS. For wafer sale prices we estimate gross margin and apply it to the wafer cost.
  4. Period expenses – Research and Development expenses (R&D), Selling, General and Administration expenses (SG&A) and other expenses.
  5. Operating Income = Gross Margin – Period Expenses
  6. Income Before Tax = Operating Income – Interest and Other
  7. Net Income = Income Before Tax – Tax (tax is based on Income Before Tax)
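The seven lines above reduce to simple arithmetic. As a sketch, here is that walk-down coded directly from the definitions, using hypothetical dollar amounts chosen only to show the flow:

```python
def income_statement(revenue, cogs, period_expenses, interest_and_other, tax_rate):
    """Walk the simple P&L line by line (lines 3, 5, 6 and 7 above)."""
    gross_margin = revenue - cogs
    operating_income = gross_margin - period_expenses
    income_before_tax = operating_income - interest_and_other
    net_income = income_before_tax * (1 - tax_rate)  # tax applied to income before tax
    return {"gross_margin": gross_margin,
            "operating_income": operating_income,
            "income_before_tax": income_before_tax,
            "net_income": net_income}

# Hypothetical figures (in $M), purely illustrative:
pnl = income_statement(revenue=100, cogs=60, period_expenses=15,
                       interest_and_other=5, tax_rate=0.20)
print(pnl["net_income"])  # 16.0
```

Note how any country-to-country difference must enter through one of these function arguments; the rest of the article traces exactly which arguments actually differ.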

We can then go through this line by line to look at country by country differences. These line numbers will be referenced below in bold/italics.

For a cost evaluation, line 1 is irrelevant.

Line 2 (COGS) is a key differentiator.

Cost of Goods Sold

In our Models we break out wafer cost categories as follows:

  • Starting Wafer
  • Direct labor
  • Depreciation
  • Equipment Maintenance
  • Indirect Labor
  • Facilities
  • Consumables

Starting wafers – our belief is that starting wafers are globally sourced and the country where they are purchased does not impact the price. This has been confirmed in multiple expert interviews, including with wafer suppliers.

Direct Labor (DL) – all our Models have DL rates by country and year for 24 countries. In 2021 the difference in labor rate from the least expensive to the most expensive country was 21x! For each wafer size and product type we have estimates of labor hours required, and we calculate the direct labor cost. We believe this calculation accurately reflects cost differences between countries in all our Models. It should be noted here that leading edge 300mm wafer fabs are so highly automated that there are very few labor hours in the process, and even with a huge labor rate difference, the percentage impact on wafer cost is small.
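The direct labor point is worth quantifying. A toy calculation with invented inputs (the Models’ actual rates and hours are proprietary) shows why even a large rate multiple barely moves the wafer cost:

```python
# Direct labor cost per wafer = labor hours per wafer x loaded hourly rate.
def dl_cost_per_wafer(labor_hours, hourly_rate):
    return labor_hours * hourly_rate

# Invented inputs: a highly automated 300mm fab needs very few touch hours.
taiwan = dl_cost_per_wafer(labor_hours=0.05, hourly_rate=20.0)
us = dl_cost_per_wafer(labor_hours=0.05, hourly_rate=65.0)
print(us / taiwan)  # the >3x rate ratio carries straight through...
# ...but since DL is a fraction of a percent of wafer cost, 3.25x of
# almost nothing is still almost nothing.
```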

Depreciation – this is the most complex category. The capital cost to build a wafer fab is depreciated over time, with the depreciation amount charged off to the P&L.

We break out the capital cost to build a facility into:

  1. Equipment – we believe equipment is globally sourced and the cost is basically the same in any country. We did get one input that US costs are slightly higher due to import costs, but we don’t believe this is significant.
  2. Equipment Installation – install costs in our Models are based on equipment type, with different costs assigned to inspection and metrology equipment, lithography equipment, and other equipment types (ALD, CVD, PVD, etc.). What we have found in our interviews is that costs vary by country, with the variation differing across categories. For example, inspection and metrology equipment installation is heavily weighted toward electrical work that varies in cost between countries. Other equipment is more heavily weighted toward process hookups that are less country dependent. Lithography equipment is intermediate between the two.
  3. Automation – we believe automation is globally sourced and does not change in cost between countries although we are still checking on this assumption.
  4. Building – in the past we assumed that building costs were the same by country, believing the major components were globally sourced. In our expert interviews we found there is a significant difference in cost per country. Revisiting the fab construction costs in our databases, we also found differences after accounting for product types. Our latest Strategic Cost and Price Model fully accounts for these differences.
  5. Building Systems – as with the building, we assumed building systems were globally sourced and the cost didn’t vary by country, but this is only partially true. Our latest Strategic Cost and Price Model fully accounts for these differences.
  6. Capital Incentives – if a company receives government grants to help pay for the investments to build a wafer fab, they will reduce the actual capital outlay for the company building the fab. In the past we have not accounted for this; we now allow capital incentives to be entered into the model.

Our models all calculate the capital investment by fab using a detailed bottoms-up calculation. The equipment, equipment installs, and automation are then depreciated over five years, the building systems over ten years, and the building over fifteen years. We use these default values because most companies use these lifetimes for reporting purposes. There are country-to-country lifetime differences for tax purposes, but taxes and reporting values are typically calculated separately. Some companies don’t use five years for equipment, but to enable consistent comparison between fabs we use five years as a default, although the ability to change the lifetimes is built into many of our Models.
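Those default lifetimes define a straight-line depreciation schedule. A minimal sketch, with the lifetimes from the text and purely hypothetical capital amounts:

```python
def annual_depreciation(capital, lifetimes):
    """Straight-line annual depreciation per capital category."""
    return {cat: cost / lifetimes[cat] for cat, cost in capital.items()}

# Default lifetimes from the text: 5 years for equipment, installs and
# automation, 10 years for building systems, 15 years for the building.
lifetimes = {"equipment": 5, "installation": 5, "automation": 5,
             "building_systems": 10, "building": 15}

# Hypothetical capital outlay (in $M), purely illustrative:
capex = {"equipment": 10000, "installation": 1000, "automation": 500,
         "building_systems": 1500, "building": 1500}

dep = annual_depreciation(capex, lifetimes)
print(sum(dep.values()))  # 2550.0 - first-year depreciation charged to the P&L
```

A capital grant (item 6 above) simply shrinks the relevant `capex` entry before this calculation runs, which is exactly why it flows through to COGS.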

Equipment Maintenance – equipment maintenance costs include consumable items, repair parts and service contracts. The technicians and engineers that maintain equipment at a company are accounted for in the Indirect Labor Cost described below.

In our Strategic Cost and Price Model the country differences are accounted for as follows:

  1. Consumables – we continue to believe this is the same by country but there are company to company differences. For example, an etcher has quartz rings in the etch chamber that some companies source from the Original Equipment Manufacturers and other companies may source in the secondary market at lower cost.
  2. Repair Parts – repair parts are distinct from consumables in that they aren’t expected to normally wear out during operation. We believe these are globally sourced and don’t vary in cost by country.
  3. Service Contracts – we believe there is some difference in service contract costs due to labor rate differences.

Our latest Strategic Cost and Price Model fully accounts for these differences.

Indirect Labor (IDL) – IDL is made up of engineers, technicians, supervisors and managers. Our Models have engineer salaries by country and year for twenty-four countries, and ratios are used to calculate the technician, supervisor, and manager salaries. Engineer salaries vary by 12x between the lowest cost and highest cost countries. For each process/fab being modeled we look at the IDL hours required for the process and break out the IDL hours between the four IDL categories. We believe all our Models currently reflect country-to-country differences correctly. As with DL costs, IDL costs have less impact on wafer cost than you might expect, but they are more significant than DL costs.

Facilities – we break out facilities into Ultrapure Water, Water and Sewer, Electric, Natural Gas, Communications, Building Systems Maintenance, Facility Occupancy, and Insurance. The main costs are Electric, Natural Gas, Building Systems Maintenance, and Insurance. Our Models all account for electric and natural gas rates for twenty-four countries. Electric rates vary by 2.8x by country and natural gas by 7.6x, and both are fully accounted for in the models. Facility system maintenance and facility occupancy also vary by country. Our latest Strategic Cost and Price Model fully accounts for these differences.

Consumables – all our Models calculate consumables in varying degrees of detail. We believe materials are sourced globally and do not vary in price by country. There are some country-to-country tariff differences, but their implementation is so complex and constantly changing that we do not model it. We do not believe the impact is significant.

Profit and Loss – Continued

 Line 3 – Gross Margin

Gross Margin isn’t part of a COGS discussion but many of our customers buy wafers from foundries. Foundry wafer prices are Wafer Cost + Foundry Margin and we have put significant effort into providing Foundry Margin guidance in our models. Foundry Margins in our Models vary company to company and within a company by year and quarter, purchasing volume and process node. They are not country dependent.

Line 4 – Period Expenses

 Not relevant to a wafer cost discussion

Line 5 Operating Income

Not relevant to a wafer cost discussion

There are two other places in the P&L where we may see country-to-country impact.

Line 6 Income Before Taxes

If a government offers a company a low-cost loan, this would reduce interest expenses in the interest line. In my opinion low-cost loans are incentives.

Line 7 Net Income

Tax – there are two pieces to the tax line: one is country-to-country tax rate differences and the other is preferential tax rates. In my opinion, tax rate differences are a structural difference, whereas a preferential tax rate is an incentive. For example, the corporate tax rate in the US is 25.8% while in Taiwan it is 20%. These tax rates are normally applied to Income Before Taxes.

In summary, we see country-to-country operating cost differences, and the current release of our Strategic Cost and Price Model captures these differences accurately and in detail.

There are also country-to-country tax rate differences that we don’t model because they are below the COGS line.

Finally, there are incentives, which we see as having three parts:

  1. Capital grants that would reduce capital cost and therefore depreciation in COGS.
  2. Low-rate loans that would impact interest expenses.
  3. Tax incentives, investment, R&D and other tax reductions.

TSMC Arizona Fab

Having reviewed all the elements of wafer cost difference we can now investigate how TSMC’s cost in Arizona will match up to their cost in Taiwan.

TSMC currently produces 5nm wafers in Fab 18 – phases 1, 2, and 3 – in Taiwan. We believe each phase is currently running 40,000 wafers per month (wpm), with plans to ramp to 80,000 wpm per phase over the next two years. In contrast, the Arizona fab is planned to produce 20,000 wpm (at least initially). This will lead to three differences in costs:

  1. Country to country operating cost difference – after accounting for all the operating cost differences, we now find a 7% increase in cost to operate in Arizona versus Taiwan. We find a higher difference than we did previously because we now include some factors we had previously missed. Having reviewed a P&L line by line and consulted with a wide range of experts, we do not believe there are any missing parts to this analysis. An interesting note here is that direct labor costs in the US are over 3x the rate in Taiwan, but they have only minimal impact: in Taiwan direct labor is only 0.1% of the wafer cost, and even after tripling or quadrupling the labor rate it is still less than 1% of the wafer cost. Utility costs, on the other hand, are lower in the US.
  2. Fab size differences – accounting for a 20,000 wpm fab in the US versus 80,000 wpm in Taiwan, plus the efficiency of clustering multiple fabs together in Taiwan, adds 10% to the country-to-country difference found in 1, for a total 17% difference. We want to highlight that the 10% additional cost is due to TSMC’s decision to build a small fab in the US. We expect the initial Arizona cleanroom to have room to ramp up to more than 20,000 wpm and the site to have room for additional cleanrooms. Over time, if TSMC ramps up and expands the site, the 10% difference can be reduced or eliminated.
  3. Incentives – to the best of my knowledge Taiwan does not offer direct capital grants or low-cost loans. In the past Taiwan offered tax rebates for capital investment in fabs, but my understanding is that this program has ended. There are R&D tax rebates available, and Taiwan has a lower corporate tax rate than the US (although this isn’t an “incentive” in my view). To investigate the tax advantage for TSMC in Taiwan versus the US, I have compared TSMC’s effective tax rate over the last three years to Intel’s effective tax rate over the same period. Surprisingly, they aren’t that different. I know there is a lot of complex financial engineering in taxes, but it is the best comparison I can find. TSMC’s tax rate for 2018, 2019 and 2020 was 11.7%, 11.4% and 11.4% respectively. Over the same period Intel’s tax rate was 9.7% (one-time benefits) in 2018, 12.5% in 2019, and 16.7% (NAND sale) in 2020. So over three years TSMC paid 11.5% and Intel paid 13.1% as a tax rate, which isn’t that different.
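The arithmetic behind these conclusions is easy to reproduce from the figures quoted above. A quick check (note that a simple unweighted average of the quoted Intel rates gives ~13.0%; the 13.1% above presumably reflects income weighting across the three years):

```python
country_diff = 0.07    # Arizona vs. Taiwan operating cost difference
scale_penalty = 0.10   # 20K wpm fab vs. clustered 80K wpm fabs in Taiwan
print(f"{country_diff + scale_penalty:.0%}")  # 17% total

tsmc = [0.117, 0.114, 0.114]    # quoted effective tax rates, 2018-2020
intel = [0.097, 0.125, 0.167]
# Unweighted averages of the quoted rates:
print(f"TSMC {sum(tsmc)/3:.1%} vs Intel {sum(intel)/3:.1%}")
```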

Conclusion

The bottom line to all this is that the cost for TSMC to make wafers in the US would be only 7% higher than in Taiwan if they built the same size fab complex in the US as they have in Taiwan. Because they are building a smaller fab complex, the cost will be 17% higher, but that is due to TSMC’s decision to build a smaller fab, at least initially.

I do want to point out this doesn’t mean the US couldn’t be at a bigger cost disadvantage versus other countries. India has reportedly discussed providing 50% of the cost of a fab as part of an attempt to get Taiwanese companies to set up a fab in India. At least in the past, the national and regional governments in China have offered large incentives. Israel has also provided significant incentives to Intel in the past. But under current conditions a US fab is only 7% more expensive than a fab in Taiwan if all factors other than the location are the same.

Also Read:

Intel Accelerated

VLSI Technology Symposium – Imec Alternate 3D NAND Word Line Materials

VLSI Technology Symposium – Imec Forksheet


AI and ML for Sanity Regressions

AI and ML for Sanity Regressions
by Bernard Murphy on 10-13-2021 at 6:00 am

machine learning for regressions min

You probably know the value proposition for using AI and ML (machine learning) in simulation regressions. There are lots of knobs you can tweak on a simulator, all there to help you squeeze more seconds or minutes out of a run – if you know how to use those options. But often it’s easier to talk to your friendly AE, get a reasonable default setup and stick with that. Consider that a sort of one-step learning.

However, what works well in one case may not be optimal in others. Learning must evolve as designs and test cases change. You can’t reasonably call the AE in for every run, and you shouldn’t have to. ML can automate the learning. Which makes sense, but what I had not realized is that one of the big impact areas for this technology is for sanity regressions. Vishwanath (Vish) Gunge of Microsoft elaborated at Synopsys Verification Day 2021.

Why short regressions are such a good fit

Sanity tests are those tests you run to make sure you (or someone else) didn’t do something stupid, like accidentally checking in code that you hadn’t finished fixing, or leaving a high-verbosity debug switch turned on. When you want to integrate all the code in a big subsystem of the whole SoC, the probabilities of a basic mistake add up quickly. We design sanity tests to smoke these problems out quickly, because the last thing you want is to launch overnight regressions and then come back in the morning to garbage results. Sanity tests are designed to run quickly, maybe a few minutes, at most say 30 minutes, in parallel across many machines.

Seems like that wouldn’t be where you would find a big win in ML optimization. But you’d be wrong. It’s not the test run-time that matters, it’s the frequency of those tests. Vish said that in their environment, sanity regressions consume huge compute resources, running many times per day. Which I read as them using those regressions in the best possible way – flushing out basic mistakes at a per-designer level, a per-subsystem level and a full integration level. When a mistake is found, a sanity test (or tests) must be re-run. That’s a lot of checking before time is invested in expensive full regressions, which is why ML can have an important impact.

VCS DPO

Synopsys VCS® offers a dynamic performance optimization (DPO) option based on both proprietary and ML methods. I don’t know the internal details, but it is interesting that they use other methods in addition to ML. ML is the hot topic these days but it’s not always the most efficient way to get to a good result. Rule-based systems can be more semantically aware and converge quicker to an approximate solution, from which ML can then further optimize. At least that’s my guess.

That said, this is AI/ML so there is a “training” phase and an “application” phase. All packaged for ease of use, no AI skills required by the end user.

Dynamic performance optimization in action

Vish presented analysis comparing the non-AI (base-level) run-time with learning phase and the application phase on the same set of sanity runs. For DPO they used all optimization apps available as a starting point, for example FGP (fine-grained parallelism) with multiple cores. Naturally learning phase runs were slower than the base level, perhaps by ~30%. However, application runs were on average 25% faster, allowing them to do ~30% more of these regressions per day.

Vish stressed that some thought is required to get maximum benefit in these flows since learning takes more time than base runs. He suggested running learning once every few days as the design is evolving, to keep optimizations reasonably current as design and tests change. Learning can run less frequently as the project is nearing signoff since optimum settings shouldn’t be expected to change as often.
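The reported numbers are self-consistent: runs that finish 25% faster free up capacity for roughly a third more runs in the same compute budget, in line with the ~30% figure above. A quick check:

```python
base_runtime = 1.0
app_runtime = base_runtime * (1 - 0.25)  # application-phase runs ~25% faster
extra = base_runtime / app_runtime - 1   # extra regressions per day
print(f"{extra:.0%}")  # ~33%, matching the reported ~30% more regressions
```

The same arithmetic explains why the ~30% slower learning runs are worth scheduling only every few days: their one-time cost is quickly amortized across the many faster application runs that follow.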

A very interesting and practical review. You can learn more from the recorded session. Vish’s talk is one of the early sessions on Day 2.

Also Read:

IBM and HPE Keynotes at Synopsys Verification Day

Reliability Analysis for Mission-Critical IC design

Why Optimizing 3DIC Designs Calls for a New Approach