
Arm Inches Up the Infrastructure Value Chain
by Bernard Murphy on 08-30-2023 at 6:00 am

Arm just revealed at HotChips their compute subsystems (CSS) direction, led by CSS N2. The intent behind CSS is to provide pre-integrated, optimized and validated subsystems to accelerate time to market for infrastructure system builders. Think HPC servers, wireless infrastructure, big edge systems for industry, city and enterprise automation. This for me answers how Arm can add more value to system developers without becoming a chip company. They know their technology better than anyone else; by providing pre-designed, optimized and validated subsystems – cores, coherent interconnect, interrupt, memory management and I/O interfaces, together with SystemReady validation – they can chop a big chunk out of the total system development cycle.

Accelerating Custom Silicon

A completely custom design around core, interconnect, and other IPs obviously provides maximum flexibility and room to differentiate, but at a cost. That cost isn’t only in development but also in time to deployment. Time is becoming a critical factor in fast-moving markets – just look at AI and the changes it is driving in hyperscaler datacenters. I have to believe current economic uncertainties compound these concerns.

Those pressures are likely forcing an emphasis on differentiating only where essential and standardizing everywhere else, especially when proven experts can take care of a big core component. CSS provides a very standard yet configurable subsystem for many-core compute, including N2 cores (in this case), the coherent mesh network between those cores, together with interrupt and memory management, cache hierarchy, chiplet support through UCIe or custom interfaces, DDR5/LPDDR5 external memory interfaces, PCIe/CXL Gen5 for fast and/or coherent I/O, expansion I/O, and system management.

All of this is PPA-optimized for an advanced TSMC 5nm process and proven SystemReady® with a reference software stack. The system developer still has plenty of scope for differentiation through added accelerators, specialized compute, their own power management, and so on.

Neoverse V2

Arm also announced the next step in the Neoverse V-series, unsurprisingly improved over V1 with better integer performance and a reduction in system-level cache misses. There is improvement on a variety of other benchmarks as well.

Also noteworthy is its performance in the NVIDIA Grace-Hopper combo (based on Neoverse V2). NVIDIA shared real hardware data with Arm on performance versus Intel Sapphire Rapids and AMD Genoa. In raw performance the Grace CPU was mostly on par with AMD and generally faster than Sapphire Rapids by 30-40%.

Most striking for me was their calculation for a datacenter limited to 5MW, important because all datacenters are ultimately power limited. In this case Grace bested AMD in performance by between 70% and 150% and was far ahead of Intel.
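
To see why perf/W matters more than peak performance under a fixed facility budget, here is a back-of-the-envelope sketch in Python. The node power and performance figures are made-up placeholders, not Arm, NVIDIA, AMD or Intel numbers; only the arithmetic is the point.

```python
# Illustrative only: how a fixed facility power budget turns perf/W into
# aggregate datacenter performance. The per-node power and performance values
# below are hypothetical placeholders, not vendor figures.

BUDGET_W = 5_000_000  # 5 MW facility limit

nodes = {
    # name: (power per node in watts, relative performance per node)
    "cpu_a": (500, 1.00),   # hypothetical baseline node
    "cpu_b": (300, 0.90),   # hypothetical node: slightly lower peak, much lower power
}

for name, (watts, rel_perf) in nodes.items():
    count = BUDGET_W // watts              # how many nodes fit in the budget
    total = count * rel_perf               # aggregate relative throughput
    print(f"{name}: {count} nodes, aggregate relative performance {total:,.0f}")
```

Even with lower peak performance per node, the lower-power part wins at the facility level, which is exactly what a power-capped comparison like the 5MW case measures.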

Net value

First, on Neoverse’s contribution to Grace-Hopper – wow. That system is at the center of the tech universe right now, thanks to AI in general and large language models in particular. This is an incredible reference. Second, while I’m sure that Intel and AMD can deliver better peak performance than Arm-based systems, and Grace-Hopper workloads are somewhat specialized, (a) most workloads don’t need high-end performance and (b) AI is getting into everything now. It is becoming increasingly difficult to argue that, for cost and sustainability across a complete datacenter, Arm-based systems shouldn’t play a much bigger role, especially as expense budgets tighten.

For CSS-N2, based on their own analysis Arm estimates up to 80 engineering years of effort to develop the CSS N2 level of integration, a number that existing customers confirm is in the right ballpark. In an engineer-constrained environment, this is 80 engineering years they can drop from their program cost and schedule without compromising whatever secret differentiation they want to add around the compute core.

These look like very logical next steps for Arm in their Neoverse product line: faster performance in the V-series, and letting customers take advantage of Arm’s own experience and expertise in building N2-based compute systems while leaving plenty of room for adding their own special sauce. You can read the press release HERE.


Visit with Easy-Logic at #60DAC
by Daniel Payne on 08-29-2023 at 10:00 am


I had read a little about Easy-Logic before #60DAC, so this meeting on Wednesday in Moscone West was my first in-person meeting with Jimmy Chen and Kager Tsai to learn about their EDA tools and where they fit into the overall IC design flow. A Functional Engineering Change Order (ECO) is a way to revise an IC design by updating the smallest portion of the circuit, avoiding a complete re-design. An ECO can happen quite late in the design stage, causing project delays or even failures, so minimizing this risk and reducing the time for an ECO is an important goal, one that Easy-Logic has productized in a tool called EasylogicECO.

Easy-Logic at #60DAC

This EDA tool flow diagram shows each place where EasylogicECO fits in with logic synthesis, DFT, low power insertion, Place & Route, IC layout and tape-out.

EasylogicECO tool flow

Let’s say that your engineering team is coding RTL and finds a bug late in the design cycle. They could make an RTL change, use the EasylogicECO tool to compare the differences between the two RTL versions, and then implement the ECO changes; the output is an ECO netlist plus the commands to control the Place & Route tools from Cadence or Synopsys.
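
The essence of a functional ECO is finding exactly where the revised behavior differs from the original so that only that difference needs to be patched. A toy sketch of that comparison follows; the two functions are hypothetical stand-ins for the two RTL versions, and this is only a conceptual illustration, not how EasylogicECO computes its patches.

```python
from itertools import product

# Toy illustration of the idea behind a functional ECO: compare original and
# revised behavior and identify only the cases that changed. Conceptual sketch
# only; not a model of EasylogicECO's internals.

def original(a, b, c):
    # original RTL behavior (hypothetical)
    return (a & b) | c

def revised(a, b, c):
    # revised RTL behavior after a late bug fix (hypothetical)
    return (a & b) | (c & ~a & 1)

diffs = [bits for bits in product((0, 1), repeat=3)
         if original(*bits) != revised(*bits)]

print(f"{len(diffs)} of 8 input combinations change:", diffs)
# A small functional difference suggests a small patch netlist, which is what
# makes an ECO cheaper and faster than re-running full synthesis and P&R.
```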

Another usage example for EasylogicECO is post tape-out where a bug is found or the spec changes, and then you want to do a metal-only ECO change in order to keep mask costs lower.

Easy-Logic is a 10-year-old company based in Hong Kong, and their EasylogicECO tool came out about 5-6 years ago. Most of their customers are in Asia and the names have been kept private, although there are quotes from several companies, like: Sitronix, Phytium, Chipone, Loongson Technology, ASPEED and Erisedtek. Users have designed products for cell phones, HPC, networking, AI, servers, and other high-end segments.

EasylogicECO is being used mostly on the advanced nodes, such as 7nm and 10nm, where design sizes can be 5 million instances per block, and functional ECOs are used at the module and block levels. Their tool isn’t really replacing other EDA tools, rather it fits neatly into existing EDA tool flows as shown above. Both Unix and Linux boxes run EasylogicECO, and the run times really depend on the complexity of the design changes. With a traditional methodology it could take 5 days to update a block with 5 million instances, but now with the Easy-Logic approach it can take only 12 hours. This methodology aims to make the smallest patch in the shortest amount of time.

Easy-Logic works at the RTL level. After logic synthesis you basically lose the design hierarchy, which makes it hard to do an ECO. Patents have been issued for the unique approach that EasylogicECO takes by staying at the RTL level.

Engineering teams can evaluate this approach from Easy-Logic quickly, within a day or two. They’ve made the tool quite easy to use, so there’s a quick learning curve, as your inputs are just the original RTL, the revised RTL, the original netlist, the synthesized netlist of the revised RTL, and a library.

The company has 50 people, with offices in Hong Kong, San Jose, Beijing and Taiwan, and 2023 was its first year at DAC. Engineers can use this new ECO approach in four use cases:

  • Functional ECO
  • Low power ECO
  • Scan chain ECO
  • Metal ECO

Summary

SoC design is a very challenging approach to product development where time is money, and making last-minute changes like ECOs can make or break the success of a project. Easy-Logic has created a methodology to drastically shorten the time it takes for an ECO, while staying at the RTL level. I expect to see high interest in their EasylogicECO tool this year, and more customer success stories by next DAC in 2024.


Key MAC Considerations for the Road to 1.6T Ethernet Success
by Kalar Rajendiran on 08-29-2023 at 6:00 am

The World of Ethernet is Gigantic and Growing

Ethernet’s continual adaptation to meet the demands of a data-rich, interconnected world can be credited to the two axes along which its evolution has been propelled. The first axis emphasizes Ethernet’s role in enabling precise and reliable control over interconnected systems. As industries embrace automation and IoT, Ethernet facilitates real-time monitoring, seamless communication, and deterministic behavior, fostering a new era of industrial and infrastructure advancements. The second axis underscores Ethernet’s capacity to handle the burgeoning volumes of data generated by modern applications. From cloud computing to AI-driven analytics, Ethernet serves as the backbone for data movement, storage, and deep analysis, accelerating insights and innovation across diverse domains. The next speed milestone in Ethernet’s evolution is 1.6T, and this transformative leap requires a meticulous approach to meet the requirements along both of the above axes.

The advent of 1.6T Ethernet heralds a new era of connectivity, one where data-intensive applications will seamlessly coexist with latency-sensitive demands. Through the convergence of 224G SerDes technology, flexible and configurable MAC and PCS IP developments, and optimized silicon architectures, the networking industry can deliver solutions that not only meet but exceed the requirements of 1.6T Ethernet systems. This is the context of a Synopsys-sponsored webinar where Jon Ames and John Swanson spotlighted the focus areas of design for achieving efficiency and delivering performance.

Key Considerations for 1.6T Ethernet Success

At the heart of the Ethernet subsystem are the application and transmit/receive (Tx/Rx) queues. Application queues handle data coming from applications and services running on network-connected devices. These queues manage the flow of data into the Ethernet subsystem for transmission. The Tx/Rx queues manage the movement of packets between the Media Access Control (MAC) layer and the PHY layer for transmission and reception, respectively. Efficient queue management ensures optimal data flow and minimizes latency. Scalability, flexibility, efficient packet handling, streamlined error handling, low latency, support for emerging protocols, energy efficiency, forward error correction (FEC) optimization, security and data integrity, interoperability and compliance are all key considerations in an Ethernet subsystem.

The MAC layer is responsible for frame formatting, addressing, error handling, and flow control. It manages the transmission and reception of Ethernet frames and interacts with the PHY layer to control frame transmission timings. Timing considerations are crucial to ensure proper communication between the PHY and MAC layers, especially at high speeds.
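
As a rough illustration of the MAC’s frame-formatting job, the sketch below assembles a basic Ethernet II frame and appends the CRC-32 frame check sequence. The addresses and payload are placeholders, and real MAC hardware also handles the preamble/SFD, minimum-size padding, the inter-frame gap and flow control, all omitted here.

```python
import zlib

def build_ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble a basic Ethernet II frame: DST(6) | SRC(6) | TYPE(2) | payload | FCS(4).

    Preamble/SFD, minimum-size padding and inter-frame gap handling are left out;
    a real MAC also enforces those.
    """
    header = dst_mac + src_mac + ethertype.to_bytes(2, "big")
    body = header + payload
    # CRC-32 FCS, appended in the byte order seen in typical frame captures
    fcs = zlib.crc32(body).to_bytes(4, "little")
    return body + fcs

frame = build_ethernet_frame(
    dst_mac=bytes.fromhex("ffffffffffff"),   # broadcast (placeholder)
    src_mac=bytes.fromhex("020000000001"),   # locally administered address (placeholder)
    ethertype=0x0800,                        # IPv4
    payload=b"hello, 1.6T world",
)
print(len(frame), "bytes before padding and preamble")
```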

The Physical Coding Sublayer (PCS) is responsible for encoding and decoding data for transmission and reception. It interfaces between the MAC layer and the PMA/PMD layer. The PCS manages functions like data scrambling, error detection, and link synchronization. It prepares data from the MAC layer for transmission through the PMA/PMD layer.
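
One of those PCS functions, scrambling, can be sketched in a few lines. 64b/66b PCS layers use a self-synchronous scrambler with polynomial x^58 + x^39 + 1; the bit-serial version below is for illustration only, since real PCS hardware processes 64-bit blocks in parallel and leaves the 2-bit sync headers (01 for data, 10 for control) unscrambled.

```python
def scramble_bits(bits, state=0):
    """Self-synchronous scrambler used by 64b/66b PCS layers: G(x) = x^58 + x^39 + 1.

    Bit-serial for clarity; PCS hardware computes 64 bits per clock in parallel.
    """
    out = []
    for b in bits:
        s = b ^ ((state >> 38) & 1) ^ ((state >> 57) & 1)  # taps at delays 39 and 58
        state = ((state << 1) | s) & ((1 << 58) - 1)        # 58-bit history of scrambled output
        out.append(s)
    return out, state

def descramble_bits(bits, state=0):
    """Descrambler: same taps, but the shift register is fed with received (scrambled) bits."""
    out = []
    for s in bits:
        b = s ^ ((state >> 38) & 1) ^ ((state >> 57) & 1)
        state = ((state << 1) | s) & ((1 << 58) - 1)
        out.append(b)
    return out, state

data = [1, 0, 1, 1, 0, 0, 1, 0] * 8            # 64 payload bits (placeholder pattern)
scrambled, _ = scramble_bits(data)
recovered, _ = descramble_bits(scrambled)
assert recovered == data                        # self-synchronous: receiver recovers the data
```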

The PMA (Physical Medium Attachment), PMD (Physical Medium Dependent), and PHY (Physical Layer) collectively handle the physical transmission of data over the network medium, be it copper cables or optical fibers. The PMA/PMD layer performs functions like clock and data recovery, signal conditioning, and modulation. The PHY layer manages signal transmission, equalization, and error correction to ensure reliable data transfer at high speeds.

The synergy between cutting-edge 224G SerDes technology and the development of innovative MAC and PCS IP is poised to redefine the accessibility and scalability of 1.6T Ethernet. These components play a pivotal role in the realization of off-the-shelf solutions that seamlessly align with forthcoming 1.6T Ethernet standards. The 224G SerDes technology offers the crucial physical layer connectivity required to sustain the high data rates demanded by 1.6T Ethernet. Achieving successful communication at high data rates requires close coordination between the PHY and MAC layers, accurate timing synchronization, and the implementation of effective error correction techniques. These factors will collectively contribute to the reliability, efficiency, and performance of 1.6T Ethernet networks.

Synopsys Solutions

Synopsys MAC, PCS, and 224G SerDes IP solutions come with pre-verified and optimized designs. This means that the IP has already undergone rigorous testing and validation, reducing the need for extensive in-house verification efforts. This accelerates the development process by providing a reliable foundation to build upon. The IP solutions are designed to comply with IEEE 802.3 Ethernet standards and ensure interoperability and compatibility with a wide range of devices and network configurations. Designers can rely on the IP’s adherence to these standards, saving time that would otherwise be spent on custom protocol implementation. The solutions often come with configurability options. This enables designers to tailor the IP to their specific application requirements without having to build everything from scratch. This configurability streamlines the design process and reduces the need for extensive manual modifications.

Summary

As the race toward 1.6T Ethernet intensifies, the development of silicon solutions capable of delivering optimized power efficiency and minimal silicon footprint becomes paramount. To harness the capabilities of 1.6T Ethernet without compromising on energy consumption and design complexity, engineers must craft architectures that seamlessly merge efficiency with innovation. This involves meticulous digital design, ensuring that the intricate interaction between hardware components and software layers is harmonious, thereby producing networking solutions that are both efficient and robust and help accelerate first pass silicon success.

For more details, visit the Synopsys Ethernet IP Solutions page.

You can watch the entire webinar on-demand from here.

Also Read:

WEBINAR: Why Rigorous Testing is So Important for PCI Express 6.0

Next-Gen AI Engine for Intelligent Vision Applications

VC Formal Enabled QED Proofs on a RISC-V Core


Systematic RISC-V architecture analysis and optimization
by Don Dingee on 08-28-2023 at 10:00 am

RISC-V architecture analysis and optimization chain

The RISC-V movement has taken off so quickly because of the wide range of choices it offers designers. However, massive flexibility creates its own challenges. One is how to analyze, optimize, and verify an unproven RISC-V core design with potential microarchitecture changes allowed within the bounds of the specification. S2C, best known for its FPGA-based prototyping technology, gave an update at #60DAC into its emerging systematic RISC-V architecture analysis and optimization strategy, adding modeling and emulation capability.

Three phases to RISC-V architecture analysis

RISC-V differs from other processor architectures in how much customization is possible – from execution unit and pipeline configurations all the way to adding customized instructions. Developers are exploring the best fits of various RISC-V configurations in many applications, where some definitions are still ambiguous. EDA support has yet to catch up; basic tools exist, but few advanced modeling platforms are available.

These conditions leave teams with a problem: if they extend the RISC-V instruction set for their implementation, they must create new cycle-accurate models for those instructions before assessing performance, simulated or emulated. S2C is working to fill this void with a complete chain for systematic RISC-V architecture analysis and optimization, featuring one familiar technology flanked by two others.

First in the chain is S2C’s new RISC-V “core master” model abstraction platform, Genesis. It provides stochastic modeling, system architecture modeling, and cycle-accurate modeling, with increasing levels of accuracy as models add fidelity. Genesis allows simulation of commercially available RISC-V cores as IP modules, then lets designers update parameters or add custom logic to the microarchitecture. These simulations enable earlier optimization of cores.
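
The payoff of this kind of early modeling is being able to ask “what if” questions before RTL exists. Below is a minimal sketch of the sort of first-order estimate an architecture model can produce, comparing two cache configurations; it is a generic illustration, not S2C Genesis, and every parameter is a hypothetical placeholder.

```python
# First-order core performance estimate of the kind an early architecture model
# can provide before RTL exists. Generic sketch; all parameters are hypothetical.

def estimate_cpi(base_cpi, miss_rate_l1, miss_rate_l2, l2_hit_cycles, mem_cycles,
                 branch_mispredict_rate, mispredict_penalty):
    """Combine a base CPI with simple cache-miss and branch-misprediction penalties."""
    mem_penalty = miss_rate_l1 * (l2_hit_cycles + miss_rate_l2 * mem_cycles)
    branch_penalty = branch_mispredict_rate * mispredict_penalty
    return base_cpi + mem_penalty + branch_penalty

configs = {
    "baseline":  dict(base_cpi=0.6, miss_rate_l1=0.05, miss_rate_l2=0.30,
                      l2_hit_cycles=12, mem_cycles=150,
                      branch_mispredict_rate=0.01, mispredict_penalty=15),
    "bigger_L2": dict(base_cpi=0.6, miss_rate_l1=0.05, miss_rate_l2=0.15,
                      l2_hit_cycles=14, mem_cycles=150,
                      branch_mispredict_rate=0.01, mispredict_penalty=15),
}

for name, params in configs.items():
    print(f"{name}: estimated CPI = {estimate_cpi(**params):.2f}")
```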

Holding the middle of the analysis chain is the S2C Prodigy prototyping family, facilitating FPGA-based prototypes for hardware logic debugging, basic performance assessment, and early software development. Prodigy prototyping hardware also accepts off-the-shelf I/O modules developed by S2C for stimulus and consumption of real-world signals around the periphery of the SoC, as well as RISC-V IP performance verification.

 

New emulation capability comes with S2C’s OmniArk hybrid emulation system, capable of hyper-scale verification of RISC-V SoCs. OmniArk specializes in compiling automotive SoCs and boasts powerful debugging capabilities for an efficient verification environment. It scales up to 1 billion gates for large designs and supports verification modes like QEMU, TBA, and ICE.

An example: collaboration on the XiangShan RISC-V core project

Accurate behavioral models of RISC-V cores carry through early modeling, FPGA-based prototyping, and hardware emulation processes. Giving designers better control of both IP and models enables tasks once only possible in hardware prototypes to shift into virtual analysis activities earlier in the design cycle, creating more opportunities for optimization.

An example of systematic RISC-V architecture analysis and optimization is in S2C’s collaboration with the XiangShan project team based at the Chinese Academy of Sciences. XiangShan is a superscalar, six-wide, out-of-order RISC-V implementation targeting a Linux variant for its operating system.

The XiangShan team used S2C products to create a core verification platform integrated with an external GPU and other peripherals. The hyperscale core is partitioned across an S2C FPGA-based prototyping platform, with peripherals added via PCIe and other interfaces.

“As RISC-V technology has penetrated various fields, its open-source, conciseness, and high scalability are redefining the future of computing,” says Ying J. Chen, Vice President at S2C. “S2C’s three major product lines can provide various solutions like software performance evaluation for microarchitecture analysis, system integration, and specification compliance testing based on RISC-V.”

We expect more details soon from S2C on how the systematic RISC-V architecture analysis and optimization chain comes together with upcoming US product announcements – for now, S2C’s Chinese-language site has some information on Genesis. More details on the XiangShan RISC-V project are available from tutorials given at ASPLOS’23.

Also Read:

Sirius Wireless Partners with S2C on Wi-Fi6/BT RF IP Verification System for Finer Chip Design

S2C Accelerates Development Timeline of Bluetooth LE Audio SoC

S2C Helps Client to Achieve High-Performance Secure GPU Chip Verification


AMD Puts Synopsys AI Verification Tools to the Test
by Mike Gianfagna on 08-28-2023 at 6:00 am


The various algorithms that comprise artificial intelligence (AI) are finding their way into the chip design flow. What is driving a lot of this work is the complexity explosion of new chip designs required to accelerate advanced AI algorithms. It turns out AI is both the problem and the solution in this case. AI can be used to cut the AI chip design problem down to size. Synopsys has been developing AI-assisted design capabilities for quite a while, beginning with the release of a design space optimization capability (DSO.ai) in 2020. Since then, several new capabilities have been announced, significantly expanding its AI-assisted footprint. You can get a good overview of what Synopsys is working on here. One of the capabilities in the Synopsys portfolio focuses on verification space optimization (VSO.ai). The real test of any new capability is its use by a real customer on a real design, and that is the topic of this post. Read on to see how AMD puts Synopsys AI verification tools to the test.

VSO.ai – What it Does

Test coverage of a design is the core issue in semiconductor verification. The battle cry is, “if you haven’t exercised it, you haven’t verified it.” Stimulus vectors are generated using a variety of techniques, with constrained random being a popular approach. Those vectors are then used in simulation runs on the design, looking for test results that don’t match expected results.

By exercising more of the circuit, the chance of finding functional design flaws is increased.

Verification teams choose structural code coverage metrics (line, expression, block, etc.) of interest and automatically add them to simulation runs. As each test iteration generates constrained-random stimulus conforming to the rules, the simulator collects metrics for all the forms of coverage included. The results are monitored, with the goal of tweaking the constraints to try to improve the coverage. At some point, the team decides that they have done the best that they can within the schedule and resource constraints of the project, and they tape out.
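
In miniature, that loop looks something like the Python sketch below: generate stimulus under constraints, record which coverage bins each test hits, and watch the rate of new coverage slow down. The transaction fields, constraints and bins are invented for illustration (and are functional-style bins for simplicity); a real flow uses SystemVerilog constraints, covergroups and a simulator.

```python
import random

# Toy model of a constrained-random coverage loop: generate stimulus under
# constraints, record which bins each test hits, and watch new coverage
# flatten out over time. Real flows use SystemVerilog and a simulator.

random.seed(1)

LENGTH_BINS = ["short", "medium", "long"]
KIND_BINS = ["read", "write", "atomic"]
all_bins = {(l, k) for l in LENGTH_BINS for k in KIND_BINS}

def random_transaction():
    # "constraints": atomics are rare, long bursts are rare (placeholder weights)
    kind = random.choices(KIND_BINS, weights=[45, 45, 10])[0]
    length = random.choices(LENGTH_BINS, weights=[60, 30, 10])[0]
    return (length, kind)

hit = set()
for test in range(1, 51):
    new = {random_transaction() for _ in range(20)} - hit  # bins hit for the first time
    hit |= new
    if new:
        print(f"test {test:2d}: +{len(new)} new bins, total {len(hit)}/{len(all_bins)}")

print("uncovered bins:", sorted(all_bins - hit))
```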

Code coverage does not reflect the intended functionality of the design, so user-defined coverage is important. This is typically a manual effort, spanning only a limited percentage of the design’s behavior. Closing coverage and achieving verification goals is quite difficult.

A typical chip project runs many thousands of constrained-random simulation tests with a great deal of repetitive activity in the design. So the rate of new coverage slows, and the benefit of each new test diminishes over time.

At some point, the curve flattens out, often before goals are met. The team must try to figure out what is going on and improve coverage as much as possible within time and resource constraints. This “last mile” of the process is quite challenging. The amount of data collected is overwhelming, and trying to analyze it to determine the root cause of a coverage hole is difficult and labor-intensive. Is it an illegal bin for this configuration or a true coverage hole?

The design of complex chips contains many problems that look like this – the requirement to analyze vast amounts of data and identify the best path forward. The good news is that AI techniques can be applied to this class of problems quite successfully.

For coverage definition, Synopsys VSO.ai infers some types of coverage beyond traditional code coverage to complement user-specified coverage. Machine learning (ML) can learn from experience and intelligently reuse coverage when appropriate. Even during a single project, learnings from earlier coverage results can help to improve coverage models.

VSO.ai works at the coarse-grained test level and provides automated, adaptive test optimization that learns as the results change. Running the tests with highest ROI first while eliminating redundant tests accelerates coverage closure and saves compute resources.
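
At its core, this test-level optimization is a ranking problem: run the tests that add the most new coverage first and drop the ones that add nothing. The greedy sketch below illustrates the basic idea with invented test and coverage data; it is not the VSO.ai algorithm, which is adaptive and learns as results change.

```python
# Greedy "highest ROI first" ordering of tests by marginal coverage gain.
# The tests and the coverage points they hit are hypothetical; this only
# illustrates the concept, not VSO.ai's adaptive, ML-driven approach.

tests = {
    "smoke":       {"p1", "p2", "p3"},
    "rand_cfg_a":  {"p2", "p3", "p4", "p5", "p6"},
    "rand_cfg_b":  {"p4", "p5", "p6"},            # fully redundant with rand_cfg_a
    "err_inject":  {"p7", "p8"},
    "corner_case": {"p6", "p9"},
}

covered, order = set(), []
remaining = dict(tests)
while remaining:
    # pick the test that adds the most not-yet-covered points
    name, gain = max(((n, len(pts - covered)) for n, pts in remaining.items()),
                     key=lambda x: x[1])
    if gain == 0:
        break                       # everything left is redundant; skip it
    order.append((name, gain))
    covered |= remaining.pop(name)

print("run order:", order)
print("skipped as redundant:", sorted(remaining))
```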

The tool also works at the fine-grained level within the simulator to improve the test quality of results (QoR) by adapting the constrained-random stimulus to better target unexercised coverage points. This not only accelerates coverage closure, but also drives convergence to a higher percentage value.

The last mile closure challenge is addressed by automated, AI-driven analysis of coverage results. VSO.ai performs root cause analysis (RCA) to determine why specific coverage points are not being reached. If the tool can resolve the situation itself, it will. Otherwise, it presents the team with actionable results, such as identifying conflicting constraints.

The figure below summarizes the benefits VSO.ai can deliver. A top-level benefit of these approaches is the achievement of superior results in less time with less designer effort. We will re-visit this statement in a moment.

The Benefits of VSO.ai

What AMD Found

At the recent Synopsys Users Group (SNUG) held in Silicon Valley, AMD presented a paper entitled, “Drop the Blindfold: Coverage-Regression Optimization in Constrained-Random Simulations using VSO.ai (Verification Space Optimization).”  The paper detailed AMD’s experiences using VSO.ai on several designs. AMD had substantial goals and expectations for this work:

Reach 100% coverage consistently with small RTL changes and design variants, but in an optimized, automated way.

AMD applied a well-documented methodology using VSO.ai across regression samples for four different designs. The figure below summarizes these four experiments.

Regression Characteristics Across Four Designs

AMD then presented a detailed overview of these designs, their challenges and the results achieved by using VSO.ai, compared to the original effort without VSO.ai. Recall one of the hallmark benefits of applying AI to the design process:

Achievement of superior results in less time with less designer effort

In its SNUG presentation, awarded one of the Top 10 Best Presentations at the event, AMD summarized the observed benefits as follows:

  • 1.5 – 16X reduction in the number of tests being run across the four designs to achieve the same coverage
  • Quick, on-demand regression qualifier
    • Can be used to gauge how good the test distribution of a regression is when the user is uncertain how many iterations are needed
  • Potentially target more bins under same budget
    • If default regression(s) do not achieve 100% coverage, VSO.ai can potentially exceed this (i.e., experiment #1)
  • Testcase(s) removal in coverage regressions if not contributing
  • More reliable test grading for constrained random tests
    • URG (Unified Report Generator) grading: seed-based
    • VSO.ai grading: probability-based
  • Debug
    • Uncover coverage items that have a lower probability of being hit than expected

This presentation put VSO.ai to the test and the positive impact of the tool was documented.  As mentioned, this kind of user application to real designs is the real test of a new technology. And that’s how AMD puts Synopsys AI verification tools to the test.

Also Read:

WEBINAR: Why Rigorous Testing is So Important for PCI Express 6.0

Next-Gen AI Engine for Intelligent Vision Applications

VC Formal Enabled QED Proofs on a RISC-V Core


Podcast EP178: An Overview of Advanced Power Optimization at Synopsys with William Ruby
by Daniel Nenni on 08-25-2023 at 10:00 am

Dan is joined by William Ruby, director of product management for Synopsys Power Analysis products. He has extensive experience in the area of low-power IC design and design methodology, and has held senior engineering and product marketing positions with Cadence, ANSYS, Intel, and Siemens. He also has a patent in high-speed cache memory design.

Dan explores new approaches to power analysis and power optimization with William, who explains strategies for increasing accuracy of early power analysis, when there is more opportunity to optimize the design. Enhanced modeling techniques and new approaches to computing power are discussed. The benefits of emulation for workload-based power analysis are also explored.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The First TSMC CEO James E. Dykes
by Daniel Nenni on 08-25-2023 at 6:00 am

James E. Dykes, TSMC CEO

Most people (including ChatGPT) think Morris Chang was the first TSMC CEO, but it was in fact Jim Dykes, a very interesting character in the semiconductor industry.

According to his eulogy: Jim came from the humblest of beginnings, easily sharing that he grew up in a house without running water and never had a bed of his own. But because of his own drive, coupled with compassion, leadership, and intelligence, he was indeed a genuine “success story.” He was honored in his profession with awards too numerous to list. During his long career he held leadership positions in several companies, including Radiation, Harris, General Electric, Philips North America and TSMC in Taiwan. His work took him to locales in Florida, California, North Carolina and Texas as well as overseas, but he returned to his Florida roots to retire, living both in Fort McCoy and St. Augustine.

Jim was known around the semiconductor industry as a friendly, funny, approachable person. I did not know him but some of my inner circle did. According to semiconductor lore, Jim Dykes was forced on Morris Chang by the TSMC Board of Directors due to his GE Semiconductor experience and Philips connections. Unfortunately, Jim and Morris were polar opposites and didn’t get along. Jim left TSMC inside the two-year mark and was replaced by Morris himself. Morris didn’t like Philips looking over his shoulder and stated that the TSMC CEO must be Taiwanese, and he was not wrong in my opinion. Morris then hired Don Brooks as President of TSMC. I will write more about Don Brooks next because he had a lasting influence on TSMC that is not generally known.

One thing Jim left behind that is searchable is industry presentations. My good friend and co-author Paul McLellan covered Jim’s “Four Little Dragons of the Orient and an Emerging Role Model for Semiconductor Companies” presentation quite nicely HERE. This presentation was made in January of 1988 while Jim was just starting as CEO of TSMC. I have a PDF copy in case you are interested.

“I maintain we are no less than a precursor of an entirely new way of doing business in semiconductors. We are a value-added manufacturer with a unique charter… We can have no designs or product of our own. T-S-M-C was established to bridge the gap between what our customers can design and what they can market.”

“We consider ourselves to be a strategic manufacturing resource, not an opportunistic one. We exist because today’s semiconductor companies and users need a manufacturing partner they can trust and our approach, where we and our customers in effect spread costs among many users, yet achieve the economics each seeks, makes it a win-win for everyone.”

So from the very beginning TSMC’s goal was to be the Trusted Foundry Partner which still stands today. From the current TSMC vision and mission statement:

“Our mission is to be the trusted technology and capacity provider of the global logic IC industry for years to come.”

Another interesting Jim Dykes presentation, “TSMC Outlook May 1988”, is on SemiWiki. It is more about Taiwan than TSMC but interesting just the same.

“Taiwan, by comparison, is more like Silicon Valley. You find in Taiwan the same entrepreneurial spirit the same willingness to trade hard work for business success and the opportunities to make it happen, that you find in Santa Clara County, and here in the Valley of the Sun. Even Taiwan’s version of Wall Street will seem familiar to many of you. There’s a red-hot stock market where an entrepreneur can take a company public and become rich overnight.”

I agree with this statement 100% and experienced it first hand in the 1990s through today, absolutely.

I was also able to dig up a Jim Dykes presentation “TO BE OR NOT TO BE” from 1982 when he was VP of the Semiconductor Division at GE. In this paper Jim talks about the pros and cons of being a captive semiconductor manufacturer. Captive is what we now call system fabless companies or companies that make their own chips for complete systems they sell (Apple). Remember, at the time, computer system companies were driving the semiconductor industry and had their own fabs: IBM, HP, DEC, DG, etc… so we have come full circle with systems companies making their own chips again.

Speaking of DG (Data General), I read Soul of a New Machine by Tracy Kidder during my undergraduate studies and absolutely fell in love with the technology. In fact, after graduating, I went to work for DG which was featured in the book.

I have a PDF copy of Jim’s “TO BE OR NOT TO BE” presentation in case you are interested.

Also read:

How Philips Saved TSMC

Morris Chang’s Journey to Taiwan and TSMC

How Taiwan Saved the Semiconductor Industry


Empyrean visit at #60DAC
by Daniel Payne on 08-24-2023 at 6:00 am


I arrived for my #60DAC booth appointment at Empyrean and was able to watch a customer presentation from Jason Guo of Diodes. Jason was talking about how his company used the Patron tool for EM/IR analysis on their automotive chips. Diodes was founded back in 1959 in Plano, Texas, and has since grown into 32 locations around the globe, offering chips for logic, analog, power management, precision timing and interconnect.

Diodes has also used the Empyrean ALPS tool for AMS simulation. For an EM analysis flow they use ALPS for circuit simulation plus Patron to see the EM, IR and pin voltages. They can quickly view the EM layout violations, then make fixes to the layout. Mr. Guo said that they’ve used Empyrean tools for about two years now, and that they are easy to learn and use.

Patron EM/IR design flow

The layout viewer is called Skipper, and the colors displayed represent voltage drops (IR), where red is a violation.
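
Conceptually, what an EM/IR check reports is simple: how far each net’s voltage has sagged relative to the supply, and whether the current density in each wire exceeds the technology’s electromigration limit. The sketch below illustrates those two checks with placeholder numbers; it says nothing about how Patron is actually implemented.

```python
# Conceptual illustration of what EM/IR sign-off checks: voltage (IR) drop as a
# percentage of supply, and current density against a technology limit.
# All numbers are placeholders; this is not how Empyrean Patron works internally.

VDD = 1.8                 # volts (placeholder supply)
IR_LIMIT_PCT = 5.0        # allow at most 5% drop (placeholder sign-off criterion)
J_LIMIT = 2.0e9           # max current density in A/m^2 (placeholder EM limit)

nets = [
    # name, node voltage (V), current (A), wire width (m), metal thickness (m)
    ("vdd_core", 1.75, 0.0005, 2.0e-6, 0.5e-6),
    ("vdd_pmu",  1.68, 0.0020, 0.6e-6, 0.5e-6),
]

for name, v, i, width, thickness in nets:
    drop_pct = (VDD - v) / VDD * 100
    j = i / (width * thickness)          # current density through the wire cross-section
    ir_flag = "VIOLATION" if drop_pct > IR_LIMIT_PCT else "ok"
    em_flag = "VIOLATION" if j > J_LIMIT else "ok"
    print(f"{name}: IR drop {drop_pct:.1f}% ({ir_flag}), J = {j:.2e} A/m^2 ({em_flag})")
```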

After the customer presentation I talked with Jason Xing of Empyrean to get an update on what’s new in the last 12 months. Mr. Xing said that Empyrean now has a complete custom AMS design and verification tool flow, consisting of tools for:

  • Schematic Capture
  • Custom IC layout
  • SPICE circuit simulation
  • DRC and LVS checking
  • EM/IR analysis

Designers of Power Management ICs (PMIC) can use Empyrean tools for both design and verification.

Something new for 2023 is standard cell and memory characterization, with a tool called Empyrean Liberal. Their approach to characterization uses a Static Timing Analysis (STA) method to measure delays with exhaustive searching and no missed timing arcs. These tools are cloud-ready to speed up characterization run times, and they support LVF, an extension to the Liberty format that adds statistical timing variation to the measurements.

RF circuit designers can use the Empyrean ALPS-RF circuit simulator for both frequency and time-domain simulations, supporting large signal, small signal and noise analysis.

The company has about 900 people now, and they went public in July 2022 on the Shenzhen exchange in China. Some 600 customers are using Empyrean EDA tools, and even the foundries are using them. Their headquarters are in Beijing, and R&D is done in Nanjing, Chengdu, Shanghai and Shenzhen.

Happy customers also include Willsemi, using the Empyrean Polas tool for reliability analysis of PMICs by measuring Rdson and performing EM analysis. Monolithic Power Systems (MPS) also uses the Polas power layout analysis tool. Renesas does SoC designs with complex clocking structures, and the Empyrean ClockExplorer tool helped improve the quality of their clock structures. O2Micro used the Empyrean AMS flow with the TowerJazz iPDK for their Power IC and analog design projects.

Summary

Empyrean has been blogged about here on SemiWiki since 2019, and I thoroughly enjoyed visiting their booth in July to see their new products and growth in the EDA industry. Their point tools have grown into tool flows supporting custom IC design and even flat panel design. I look forward to visiting them again in 2024 at DAC to report on new developments.



Using Linting to Write Error-Free Testbench Code
by Daniel Nenni on 08-23-2023 at 10:00 am


In my job, I have the privilege to talk to hundreds of interesting companies in many areas of semiconductor development. One of the most fun things for me is interviewing customers—hands-on users—of specific electronic design automation (EDA) tools and chip technologies. Cristian Amitroaie, CEO of AMIQ EDA, has been very helpful in introducing me to both commercial and academic users of his company’s Design and Verification Tools (DVT) Integrated Development Environment (IDE) family of products.

Recently, Cristian connected me with Lars Wolf, Harald Widiger, Daniel Oprica, and Christian Boigs from Siemens. They kindly shared their time with me to talk about their experiences with AMIQ’s Verissimo SystemVerilog Linter.

SemiWiki: Can you please tell us a bit about your group and what you do?

Siemens: We are members of a 10-15 person verification team at Siemens and part of a department that does turnkey development of application-specific integrated circuits (ASICs) for factory automation products within the company. Our team of experts focuses on verification IP (VIP), developing new VIP components and also reusing and adapting existing VIP.

SemiWiki: What are your biggest challenges?

Siemens: We have all the usual issues of any project, such as limited resources, tight schedules, and increasing complexity. But there are two specific challenges that led us to look at Verissimo as a possible solution.

First, since our VIP can be used by many projects, we have a very high standard of quality. We don’t want our ASIC design teams debugging problems that turn out to be issues in our VIP, so we must provide them with error-free models, testbenches, and tests. Of course, the better the verification environment, the better the ASICs that we provide to the product teams.

The second challenge involves the extension of our development landscape to incorporate SystemVerilog and the Universal Verification Methodology (UVM) for our projects. At the time, many of our engineers were not yet experts in this domain, so we were looking for tools that would help them learn and help them write the best possible code.

SemiWiki: So, you thought that a SystemVerilog/UVM linting tool would help?

Siemens: Yes, we were looking specifically for such a solution. The whole point of linting is to identify and fix errors so that the resulting code is correct. We believed that the engineers would learn over time to avoid many of these errors and make code development faster and smoother. We considered several options and ended up choosing Verissimo from AMIQ EDA.

SemiWiki: What was the process for getting the team up and running with the tool?

Siemens: It’s built on an IDE, so it’s easy to use and it provides all sorts of aids in navigating through code and fixing errors. Most engineers used it successfully after minimal training. We spent much of our effort refining the linting rules checked. Verissimo has more than 800 out-of-the-box rules, and some were more important to us than others. We started with the default ruleset and then turned off the checks that we didn’t need for one reason or another. We ended up with about 510 rules enabled. Every rule must be explainable and understandable by every verification engineer.

SemiWiki: Is this ruleset static?

Siemens: No; we meet regularly to review the rules and to consider adding new ones since AMIQ EDA is always offering more. On average, we add four or five rules every month. We try to keep up with new rules and new features in Verissimo so we’re always getting the maximum benefit for our team.

SemiWiki: Are there any particular rules that impressed you?

Siemens: We know that a lot of the rules were added due to user demand, and in general we also find these rules very useful. There are some rules that cover aspects of SystemVerilog that we hadn’t previously considered, such as detecting dead code, identifying copy-and-paste code, and pointing out coding styles that may reduce simulation performance. We were especially intrigued by the random stability checks. Initially we took reproduction of random stimulus for granted, but we learned that it doesn’t happen without proper coding style.

SemiWiki: How is Verissimo run in your verification flow?

Siemens: We encourage our engineers to run linting checks as they write their code, but we do not require them to do so. We considered making a linting run a requirement for code commit, but we didn’t want engineers to consider waiving possible errors just to get through the check-in gate. We require the flexibility to commit code that may not yet be perfect but is needed to get the testbench to compile and run regressions.

We decided instead to make Verissimo part of our daily regression run. Using a common ruleset ensures consistency in coding style and adoption of best practices across the entire team. Verissimo results are included in our regression dashboard and tracked over time, along with code coverage and pass/fail results from regression tests. Any linting errors and error waivers are discussed during code reviews as part of making the VIP as clean and reusable as possible.

SemiWiki: Do you see any resistance to linting among your engineers?

Siemens: We honestly didn’t know what to expect in this regard, and we have been pleasantly surprised. We have a small, cohesive team and there is no debate over using linting as part of our process. There is also no abuse of error waivers, which are reviewed carefully and used only as a last resort.

SemiWiki: Has Verissimo lived up to your expectations?

Siemens: It certainly has addressed the two challenges that led to us looking for a linting solution: high quality and coding guidance. We now have confidence that our VIP is lint error-free, with no syntax or semantic errors, and compliant to our coding rules. Our VIP is more reusable, maintainable, and manageable. Verissimo has also proven to be a very good learning tool. As we discuss rules and debug linting errors, we understand both SystemVerilog and UVM better, and we think more deeply about our code.

SemiWiki: How has your experience been working with AMIQ EDA?

Siemens: It’s more of a partnership than a pure vendor-customer relationship. Early in our engagement, we compared our coding guidelines with the Verissimo rules, and asked AMIQ EDA to add some new rules plus adjustments and new parameters for some existing rules. Of course, as with any piece of software, we’ve found a few bugs in the tool itself. In all cases, we have found them to be responsive and supportive.

SemiWiki: Do you plan to change the way that you use Verissimo in the future?

Siemens: Since we have been successful so far, we plan to continue everything we are doing now on all new VIP projects. There are two areas where we would like to improve a bit. Our goal was to meet every two weeks to discuss linting rules, errors, and waivers, but we haven’t always done that. We would like to make those meetings more regular. We would also like to update Verissimo releases more often throughout the project so that we can take advantage of new rules that require new capabilities in the tool.

SemiWiki: Gentlemen, thank you very much for your time. It is great that you have had so much success with adding linting to your testbench and VIP development flow.

Siemens: It has been our pleasure.

Also Read:

A Hardware IDE for VS Code Fans

Using an IDE to Accelerate Hardware Language Learning

AMIQ EDA Adds Support for Visual Studio Code to DVT IDE Family


Predictive Maintenance in the Context of Automotive Functional Safety
by Kalar Rajendiran on 08-23-2023 at 6:00 am

Margin Agent to Measure Performance and Degradation

The automotive industry is undergoing a major transformation. The convergence of electrification, connectivity, driver-assistance technologies, and software-defined vehicles has led to the rise of advanced System-on-Chips (SoCs) that drive unprecedented levels of functionality and performance. However, this transformation also raises concerns about the safety and reliability of these complex systems. This shift presents unique challenges and opportunities in ensuring the performance, reliability, and safety of automotive systems. As automotive technology advances, the traditional approaches to functional safety face significant disruptions. The traditional reactive approach to functional safety, which involves addressing issues after they occur, is insufficient for the complex and interconnected automotive systems of today. Proactive approaches are needed for predicting failures, anticipating risks, and actively mitigating them before they cause significant disruptions. Maintaining the utmost resilience and operational efficiency of automotive systems calls for continuous monitoring and predictive insights.

proteanTecs has published a whitepaper that explores the shifting landscape of automotive functional safety and the required methodologies to ensure the safe and reliable operation of these intricate automotive systems. The whitepaper delves into the implications of data-driven advancements in artificial intelligence, machine learning, and data analytics for automotive functional safety and useful life extension. It explores proactive approaches that transcend traditional reactive measures, enabling stakeholders to anticipate failures and proactively mitigate risks.

This whitepaper is an excellent read for everyone involved in the development and deployment of automotive functional safety systems. Following are some excerpts from that whitepaper.

Hardware Failure Anticipation through Prognostic Techniques

One of the key challenges in ensuring functional safety is anticipating hardware failures. With the application of prognostic techniques, which involve analyzing data to predict the future reliability of components or systems, automotive stakeholders can anticipate potential hardware failures. This allows them to take preventive measures before they occur, thus enhancing the safety and longevity of automotive systems.

Understanding the Impact of Defects on System Lifetime

Defects innate to manufacturing or caused by operating environment and usage patterns can significantly affect the lifetime of automotive systems. Analyzing defect-induced failures and understanding their occurrence can help in estimating the Failure Rate (FR) of electronic devices. Equipped with this insight, automotive manufacturers can focus on improving the reliability of electronic components, thereby ensuring the longevity of the systems and enhancing functional safety.
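
Failure rates for electronics are usually quoted in FIT, failures per billion (10^9) device-hours. The short example below shows how observed field failures translate into a FIT figure and into expected failures across a fleet; all of the numbers are illustrative.

```python
# FIT (Failures In Time) = failures per 1e9 device-hours. Illustrative numbers only.

failures = 12                    # failures observed in the field (hypothetical)
devices = 2_000_000              # devices being monitored (hypothetical)
hours = 8_760                    # one year of continuous operation per device

fit = failures / (devices * hours) * 1e9
print(f"observed failure rate ~ {fit:.1f} FIT")

# Expected failures per million vehicles over an 8,000-hour operating life (hypothetical)
vehicle_hours = 8_000
expected = fit * 1e-9 * vehicle_hours * 1_000_000
print(f"~ {expected:.0f} failures per million vehicles over {vehicle_hours} operating hours")
```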

Time-To-Failure (TTF) Predictions and Reliability Improvement

Estimating the TTF is another critical aspect in understanding the reliability of automotive components and systems. TTF predictions involve monitoring device performance to estimate when a device will fail. By combining observed (empirical field data) and predicted failures, reliability parameters can be estimated more quickly. TTF predictions can be leveraged to gain insights into potential failure scenarios and take preemptive actions. These preemptive actions enhance the overall functional safety of automotive systems.

Leveraging Deep Data On-Chip Monitors and Degradation Modeling

Monitoring the margin degradation of Integrated Circuits (ICs) is essential to estimating FR and TTF of automotive systems. The implementation of deep data on-chip monitors and degradation modeling based on Physics-of-Failure principles are essential for continuous monitoring of ICs. This methodology provides real-time data for proactive decision-making and preemptive actions, thereby enhancing functional safety. The method involves using IC-embedded circuits called “Agents” strategically placed within the device to monitor degradation over time without interrupting normal operations. The Agents provide high-resolution data on chip parameters and degradation, allowing for the prediction of TTF for individual devices based on manufacturing parameters and mission history.

The Figure below shows how the Agents are connected to the monitored logic to measure timing margin of the logical paths.

During normal IC operations, the worst-case margin of the monitored logical paths is stored in the Agent and the data can be read at any time.

Estimating Remaining Useful Life (RUL) and Preventing Future Failures

By reading Agent data during reliability stress tests, the primary degradation mechanism can be determined, and TTF prediction algorithms can be used to estimate the remaining lifetime of devices. The ability to estimate RUL of automotive components is crucial for prescriptive measures and risk mitigation. Machine learning algorithms and predictive analytics can be applied to estimate RUL and prevent future failures. By identifying potential points of failure in advance, automotive manufacturers can implement preventive measures to ensure safety and operational efficiency.
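
At its simplest, estimating RUL from monitor data is an extrapolation problem: fit the observed degradation trend and ask when it crosses the failure threshold. The sketch below does that with a plain linear fit on synthetic margin readings; proteanTecs’ actual approach combines physics-of-failure degradation models with machine learning, so treat this only as an illustration of the concept.

```python
# Minimal RUL estimation sketch: fit the observed timing-margin degradation
# trend and extrapolate to the point where margin would be exhausted.
# Synthetic data and a simple linear fit, for illustration only.

def linear_fit(xs, ys):
    """Least-squares slope and intercept without external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Worst-case timing margin (ps) reported by an on-chip monitor over operating hours
hours  = [0, 2000, 4000, 6000, 8000, 10000]
margin = [120, 114, 109, 103, 98, 92]          # synthetic, slowly degrading

FAIL_MARGIN = 20.0                              # margin (ps) below which failure is assumed
slope, intercept = linear_fit(hours, margin)
hours_to_fail = (FAIL_MARGIN - intercept) / slope
rul = hours_to_fail - hours[-1]

print(f"degradation rate ~ {-slope * 1000:.1f} ps per 1000 h")
print(f"estimated remaining useful life ~ {rul:,.0f} operating hours")
```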

Extending Useful Life

Prescriptive maintenance, a related concept, recommends actions to change the future outcome by adapting the operational conditions of the device. While some intrinsic failures are difficult to predict, device aging can be modeled and operational workload can be reduced to minimize stress. This may involve restricting software processes, adjusting voltage and frequency, or employing “limp-home mode” strategies. By reducing operational stress, the occurrence of wear-out faults can be delayed, leading to an extended useful lifetime for the device.

The Figure below shows how a noticeable extension of useful life time can be achieved through a combination of predictive and prescriptive maintenance as the FR remains lower than 100 FIT for a longer period.

Summary

The ever-changing landscape of advanced SoCs in the automotive industry demands a fresh perspective on functional safety. Embracing proactive approaches, harnessing data-driven insights, and leveraging advanced techniques such as prognostics, degradation modeling, and predictive analytics are essential to support the transforming auto industry. By embracing continuous monitoring and predictive insights, automotive manufacturers and OEMs can achieve unprecedented levels of resilience, robustness, and operational efficiency from ICs to ECUs.

You can download the entire whitepaper from here. To learn more about proteanTecs technology and solutions, visit www.proteanTecs.com.

By ensuring the utmost performance, reliability, and safety of automotive systems, stakeholders can unlock the full potential of electrification, connectivity, and driver-assistance technologies while addressing the functional safety challenges of the modern automotive landscape.

Also Read:

Semico Research Quantifies the Business Impact of Deep Data Analytics, Concludes It Accelerates SoC TTM by Six Months

Maintaining Vehicles of the Future Using Deep Data Analytics

Webinar: The Data Revolution of Semiconductor Production