
Podcast EP111: How sureCore is Fueling the AI Revolution With Tony Stansfield

by Daniel Nenni on 10-07-2022 at 10:00 am

Dan is joined by Tony Stansfield, sureCore’s CTO. Tony has over 35 years of semiconductor industry experience in a variety of technical roles. He is cited as an inventor on 23 patents covering SRAM, CAM, low-power electronics, and programmable logic.

Improving the Efficiency of AI Applications Using In-Memory Computation

Tony explores the unique, low power capabilities of sureCore’s standard and custom memory products. The specific ways this technology is used to optimize AI applications are covered in some detail, including general and specific examples of approaches such as in-memory compute.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


A Memorable Samsung Event

by Daniel Nenni on 10-07-2022 at 6:00 am

Samsung DRAM Roadmap 2022

Samsung hosted its first-ever Samsung Tech Week Oct 3-5 with some insightful keynotes and great food. The week led off with Samsung Foundry Forum and a keynote from Foundry president, Si-Young Choi. Attendees at Samsung Foundry’s SAFE Forum were welcomed by Ryan Lee, head of the business’ Design Enablement team. John In-Young Park, President of Samsung’s System LSI, welcomed guests to System LSI Tech Day. Finally, Memory President, Jung-Bae Lee, gave an industry keynote at the Memory Tech Day to round out the week. The most memorable (pun intended) presentation from Samsung this year, in my opinion, was from the Memory group.

“One trillion gigabytes is the total amount of memory Samsung has made since its beginning over 40 years ago. About half of that trillion was produced in the last three years alone, indicating just how fast digital transformation is progressing,” said Jung-bae Lee, President and Head of Memory Business at Samsung Electronics. “As advances in memory bandwidth, capacity and power efficiency enable new platforms and these, in turn, stimulate more semiconductor innovations, we will increasingly push for a higher level of integration on the journey toward digital coevolution.”

Jim Elliott has been in the memory business for 25 years, 20 of those with Samsung. This alone is an impressive feat in Silicon Valley. Jim was engaging and presented a nice landscape for the memory business moving forward.

Jim highlighted the industry drivers: first the PC era, then phones, and now the data-driven (availability and reliability) era we are in today. According to reports, 90% of the world’s data was created in the last two years, and that hypergrowth will continue.

For growth trends, Jim mentioned the metaverse, automotive, and robotics with AI. I would argue that, for both memory and logic, AI will be the underlying growth driver across most semiconductor market segments and will require massive amounts of leading-edge memory and logic. To be clear, AI will touch an enormous number of chips that will never have enough logic or memory performance and density.

The seven-hundred-billion-dollar question is: “Can memory technology keep up with the data explosion demand?” The answer, of course, is yes, and Jim explained why.

According to Jim, the memory node transition time has stretched from 7-9 quarters at 90nm to 26 quarters at 10nm. To address the coming challenges, Jim talked in more detail about Samsung memory.

Samsung unveiled its fifth-generation 10nm-class DRAM as well as eighth- and ninth-generation Vertical NAND (V-NAND). Today Samsung has 567 DRAM engagements and 617 NAND engagements. Let’s face it, Samsung is the #1 semiconductor company for a reason and memory is the driver behind the Samsung semiconductor dynasty so I don’t see that changing anytime soon.

Jim’s presentation covered partnerships, alternative business models, and the coming Open Innovation Samsung Memory Research Centers. He also presented the roadmaps for DRAM and NAND:

Jim concluded with application notes on mobile, server, cloud, and, moving forward, automotive (a server on wheels). Samsung currently has 400 automotive projects underway and is in mass production with 60+ automotive customers.

The other presentation that caught my interest was from Synopsys. Sassine Ghazi, the president and COO of Synopsys, kicked off Samsung SAFE with an engaging keynote on unlocking innovation potential. Sassine began his presentation with the statement that semiconductor chips and software have been the most uplifting phenomena in the history of humankind. Quite a bold statement. He went on to focus on the pivotal role hardware (chips) has played, and he observed three fundamental obstacles that must be overcome to reach the next level of innovation.

  • Balance complexity and energy
  • Scale for workload-optimized chips
  • Optimize talent and productivity

He expanded on each item and provided concrete examples of solutions developed through collaboration between Synopsys and Samsung. He concluded with some eye-opening information about the deployment of AI technology to design chips at Samsung, where the impact appears to be quite significant. Sassine stated that AI is the only way forward. This is an area to watch. Absolutely.

Also Read:

Webinar: Semifore Offers Three Perspectives on System Design Challenges

WEBINAR: Taking eFPGA Security to the Next Level

WEBINAR: How to Accelerate Ansys RedHawk-SC in the Cloud


DFT Moves up to 2.5D and 3D IC

by Daniel Payne on 10-06-2022 at 10:00 am


The annual ITC event was held the last week of September, and I kept reading the news highlights from the EDA vendors, because time spent on the tester can be a major cost and the value of catching defective chips before they reach production is so critical. Chiplets and 2.5D/3D IC design have caught the attention of the test world, so I learned what Siemens EDA just announced to address the new test demands with their DFT approach. Vidya Neerkundar is a Product Manager for the Tessent family of DFT products, and she presented an update.

DFT Challenges

For most of the history of IC design we’ve had one die in one package, along with multi-chip modules (MCMs). For 2.5D and 3D ICs with multiple dies, how do you take the individual die tests and make them work for the final package?

What if the DFT architectures for each internal die are different from each other?

Is there an optimal way to schedule the die tests while in a package to reduce test times?

2.5D and 3D chiplets

Tessent Multi-die

Siemens’ development team extended their technology to support 2.5D and 3D IC packaging with Tessent Multi-die. At SemiWiki we blogged last year about the Tessent Streaming Scan Network, which used 2D hierarchical scan test. That same approach now extends 2D hierarchical DFT into 2.5D and 3D ICs. Here’s what that looks like for three chiplets in a 2.5D device:

The IEEE created a standard for the test access architecture of 3D stacked ICs, known as IEEE 1838-2019. IEEE 1687 defines access to and control of instrumentation embedded inside an IC, building on the test access port defined by yet another standard, IEEE 1149.1. Tessent Multi-die supports all of these standards.

Each die in a chiplet design has a Boundary Scan Description Language (BSDL) file, and then Tessent Multi-die creates the package level BSDL for you.

IEEE 1838

This die-centric test standard was board-approved in November 2019 and allows testing of a die as part of a multi-die stack. A 3D stack of dies is connected for test purposes using a Flexible Parallel Port (FPP), along with Die Wrapper Registers (DWR) and Test Access Ports (TAP):

3D Stack for Testing

IEEE 1687 – Internal JTAG

This 2014 standard helps streamline the use of instruments embedded inside each die. There’s an Instrument Connectivity Language (ICL) and a Procedure Description Language (PDL) to define the instrumentation. The flow between an ATE system and internal JTAG is shown below:

IEEE 1687 flow

IEEE 1149.1 JTAG

The boundary scan standard with a Test Access Port goes back to 1990, and the Boundary Scan Description Language (BSDL) came along in 2001. This standard defines how instructions and test data flow inside a chip.

IEEE 1149.1 JTAG

Bringing all of these test standards together, we can see how Tessent Multi-die connects to each chiplet inside a 3D stack. Test patterns for the cores within each die, and the test scheduling, are handled by the Tessent Streaming Scan Network (SSN).

Tessent Streaming Scan Network

SSN basically packetizes test data delivery, which decouples the core DFT from the chip DFT, allowing independent shifting of concurrently tested cores. Practical benefits are time savings in DFT planning, easier routing and timing closure, and up to a 4X reduction in test time and data volume.
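To get an intuition for why packetized delivery helps, here is a deliberately simplified back-of-envelope model (my own sketch, not the actual SSN architecture): testing cores one at a time wastes the tester interface while each core shifts alone, whereas streaming packets over a shared bus keeps the full bus width busy. The core names, pattern counts, and chain lengths below are invented for illustration.

```python
# Toy model of sequential vs. packetized scan-data delivery.
# Assumes one scan channel per core and a shared bus of bus_bits wires.

def sequential_cycles(cores):
    """Each core shifts its own patterns while the other cores sit idle."""
    return sum(c["patterns"] * c["chain_length"] for c in cores)

def packetized_cycles(cores, bus_bits):
    """All cores shift concurrently; the shared bus delivers bus_bits of
    payload per cycle, so total cycles ~ total payload / bus width."""
    payload = sum(c["patterns"] * c["chain_length"] for c in cores)
    return -(-payload // bus_bits)  # ceiling division

cores = [
    {"name": "cpu", "patterns": 2000, "chain_length": 600},
    {"name": "gpu", "patterns": 1500, "chain_length": 900},
    {"name": "dsp", "patterns": 1000, "chain_length": 400},
]

seq = sequential_cycles(cores)
pkt = packetized_cycles(cores, bus_bits=32)
print(f"sequential: {seq} cycles, packetized: {pkt} cycles")
```

A real scan network must also account for per-core shift rates, packet overhead, and compare data, which is why the measured benefit is closer to the 4X figure quoted above than to this model's idealized gain.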

Tessent SSN

Summary

Close collaboration between foundries, design, test, and the IEEE has created a vibrant 2.5D and 3D ecosystem, with all of the technology in place to advance semiconductor innovation. Siemens EDA has extended its Tessent software to embrace the new test challenges while using IEEE standards. Tessent Multi-die is integrated with the rest of the Tessent products and platform, so you don’t have to cobble tools and flows together.

Related Blogs


U.S. Automakers Broadening Search for Talent and R&D As Electronics Take Over Vehicles

by Tony Hayes on 10-06-2022 at 6:00 am


The auto industry isn’t for the faint of heart in late 2022. As Deloitte recently explained, chip shortages, supply chain bottlenecks, unpredictable consumer demand and the industry overhaul mandated by the rise of EVs are all creating unprecedented turmoil in this key sector. One particularly pressing challenge is the ongoing dearth of technically oriented employees in this era when cars make decisions on their own.

The automotive sector is predicted to face a global shortage of 2.3 million skilled workers by 2025 and 4.3 million by 2030, according to a research project from global executive recruiting firm Ennis & Co, which specializes in the sector. This underscores the rapidly growing amount of modern technology in vehicles today. According to Edward Jones, Professor in Electrical and Electronic Engineering at the University of Galway, “The mechanical components are just one part of the equation and there’s probably as much if not more emphasis now on the software, the electronics, the sensors and the user experience.”

He and his research colleagues have had a front-row seat to the quickly evolving auto industry because so many manufacturers have come to his region in search of new solutions to hiring, development, testing and other key aspects of building a modern, in-demand vehicle and the key electronics that make it run. Jaguar Land Rover (JLR), General Motors, Intel, Analog Devices, Valeo and many other players have heavily focused on the West of Ireland in recent years to tap into the resources available there. For example, JLR opened a technology development center in 2017 and keeps adding to its staff, while GM has increasingly grown and evolved its Irish operations to encompass key priority areas such as AI, data management and cybersecurity.

As Jones describes it, Ireland offers companies “many of the high-value pieces of the technology stack, software, sensing, AI, machine learning, as well as the test infrastructure.” One company paying attention is Valeo, a top supplier of automotive cameras that sells products to vehicle manufacturers worldwide. University of Galway has several intriguing government-funded projects underway with Valeo that include examining the effect of inclement weather on autonomous vehicle sensors, multi-sensor fusion to improve the perception of a car’s vision system, and modeling the behavior of road users at junctions so that a vehicle’s intelligent systems can plan ahead rather than taking reactive actions.

According to Martin Glavin, an autonomous vehicle expert and Professor of Electronic and Computer Engineering at the University of Galway, “The auto belt in the west of Ireland conducts research on auto sensors and systems in ways that maybe aren’t possible in a lot of the United States. The weather in Ireland is known to be very variable and changeable.” Both Glavin and Jones are also researchers at Lero (the Science Foundation Ireland Research Center for Software), where they do extensive research and testing of vehicle sensors and perception systems.

“In Ireland, our road network length is equivalent to France,” reports Glavin. “We have everything from small country lanes all the way up to high-end motorways.  R&D in Ireland is ideal in that the country is compact and very dynamic in terms of the climate and conditions. Within 10 or 15 minutes, you can go from winter to summer or be on a small country road or a busy motorway.”

The University of Galway team and other research groups are also conducting testing of intelligent technology in the farm equipment arena. Projects are underway with McHale Engineering, part of a leading agricultural equipment dealer headquartered in Ireland.  One ag-tech project is focusing on data capture and algorithm development so that equipment operators and the devices they use are more efficient. Meanwhile, the team is also performing “analysis based on the behavior of the machine and how the machines are being used, ultimately with the aim of predicting failures on those machines and offering a more robust machine that can diagnose its own problems,” notes Glavin.

On the testing side, the Irish government funded Future Mobility Campus Ireland (FMCI), a facility that lets manufacturers put their technologies through their paces via heavily monitored test tracks. Partners include GM, JLR, Cisco, Analog Devices, Seagate and Red Hat. Noted FMCI CEO Russell Vickers: “There are two main reasons why companies come to Ireland. One is probably European localization; there’s also the areas of data management, data processing, AI and machine learning. That’s why Jaguar Land Rover set up in Ireland, because they could get access to software developers that have those skills. You have to follow the people.”

Underpinning the talent search is the growing demand for EVs due to emission concerns and costs at the pump. Proof of this important direction was seen recently in America’s Inflation Reduction Act of 2022, which offers new or expanded tax incentives for buying EVs, as well as the recent mandate in California, America’s environmental pacesetter, that all new vehicles sold by 2035 must be EVs.  The Paris Accord, which includes 196 of the world’s nations, was the forerunner – aligning with a vision of zero-emission vehicles, fewer crashes and reduced congestion.

Pursuing new technologies reflects the public/private partnership that has long characterized Ireland, with government organizations funding and collaborating with universities, research operations and companies. For example, the Science Foundation Ireland (SFI) centers work with companies in the areas of lithium batteries for EVs as well as breakthrough non-metal batteries and vehicle parts.

Noted Lorraine Byrne, executive director of AMBER, the SFI-funded materials science center headquartered at Trinity College Dublin, “We offer companies multidisciplinary scientific expertise to address specific research questions associated with their technology roadmaps. We help to accelerate early-stage research that can reduce the time to market for our industry partners. The materials science work we do at AMBER has relevance in multiple sectors but for automotive, we focus on materials challenges associated with batteries, optical components and the increasing use of sustainable or recycled materials in molded polymer or fabrics.  The SFI centers have a cost‑share model that allows us to co‑fund projects, which is attractive for companies who want to invest in higher-risk early-stage research.”

For example, AMBER has worked with Merck Millipore in the membrane area, where AMBER and Merck have collaborated on molding of polymers and material selection, particularly in the area of new membranes for filtration, whether for air filters or oil filters.

However, moving research forward isn’t the only lure for companies in the auto sector coming to Ireland. In an era when talent is in short supply, the availability of trained technical staff coming out of the universities and research institutes is particularly attractive. Says Byrne: “At the moment, over 50% of our post-doctoral researchers are ending up in the industry as their first destination. A lot of companies are working with AMBER, not just for the research but also for access to the talent pipeline.”

Tony Hayes, VP Engineering, Industrial & Clean Technologies, IDA Ireland

Also Read:

Super Cruise Saves OnStar, Industry

Arm and Arteris Partner on Automotive

The Truly Terrifying Truth about Tesla


Podcast EP110: The Real Story Behind Cerebras Systems – What It Does and Why It Matters

by Daniel Nenni on 10-05-2022 at 10:00 am

Dan is joined by Rebecca Lewington, Technology Evangelist at Cerebras Systems. Before Cerebras she held similar roles at Micron Technology, Hewlett Packard Labs and Applied Materials. Rebecca has a master’s degree in mechanical and electrical engineering from the University of London and holds 15 patents.

Rebecca explains the one-of-a-kind architecture behind Cerebras technology and the unique approaches it facilitates. Details of customer applications are also discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


How Deep Data Analytics Accelerates SoC Product Development

by Kalar Rajendiran on 10-05-2022 at 8:00 am

Continuous Monitoring and Improvement Loop

Ever since the birth of the semiconductor industry, advances have come at a fast pace. The complexity of SoCs has grown along the way, driven by the demanding computational and communication needs of various market applications. Over the last decade, the growth in complexity has accelerated at unforeseen rates, fueled by AI/ML processing, 5G communications and related applications. This, of course, has strained SoC product development cycles and time-to-market schedules.

Semico Research recently published a detailed report titled “Deep Data Analytics Accelerating SoC Product Development.” The report explains how deep data analytics can help accelerate all phases of SoC product development, including test and post-silicon management. proteanTecs’ deep data analytics technology and solution are spotlighted, and the resulting benefits are quantified and presented in a whitepaper. This post will cover some of the salient points from that whitepaper.

The proteanTecs Approach to Deep Data Analytics

The proteanTecs approach is to include monitoring IP in SoC designs and leverage machine learning algorithms to analyze the collected data for actionable analytics. The monitoring IP blocks, referred to as on-chip Agents, fall into four categories.

Classification and Profiling

These Agents collect information related to the chip’s basic transistors and standard cells. They are constructed to be sensitive to the different device parameters and can map a chip or a sub-chip to the closest matching process corner, PDK simulation point and RC model.

Performance and Performance Degradation Monitoring

These Agents are placed at the end of many timing paths and continuously track the remaining timing margin to the target capture clock frequency. They can be used to pinpoint critical path timing issues as well as track their degradation over time.
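As a rough illustration of how such margin readouts might be consumed (this is my own sketch, not the proteanTecs API; the path names, readout format, and linear-degradation assumption are all hypothetical), one can fit a trend to periodic margin samples and flag paths projected to run out of margin:

```python
# Fit a least-squares linear trend to per-path timing-margin readouts
# (time in hours, margin in ps) and project the margin to a future time.

def project_margin(readouts, horizon):
    """readouts: list of (time_hours, margin_ps); returns projected margin."""
    n = len(readouts)
    mean_t = sum(t for t, _ in readouts) / n
    mean_m = sum(m for _, m in readouts) / n
    slope = (sum((t - mean_t) * (m - mean_m) for t, m in readouts)
             / sum((t - mean_t) ** 2 for t, _ in readouts))
    latest_t, latest_m = readouts[-1]
    return latest_m + slope * (horizon - latest_t)

paths = {
    "alu_bypass": [(0, 42.0), (1000, 40.1), (2000, 38.0)],
    "fpu_norm":   [(0, 15.0), (1000, 11.9), (2000, 9.1)],
}

for name, data in paths.items():
    m = project_margin(data, horizon=8000)
    status = "OK" if m > 0 else "AT RISK"
    print(f"{name}: projected margin at 8000h = {m:.1f} ps ({status})")
```

The value of continuous monitoring is exactly this kind of early warning: a path that still passes today can be flagged long before its margin actually reaches zero.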

Interconnect and Performance Monitoring

These Agents are located inside a high bandwidth die-to-die interface and are capable of continuously detecting the signal integrity and performance of the critical chip interfaces.

Operational Sensing

These Agents turn the SoC into a system sensor by sensing the effects of the application, board or environment on the chip. They track changes in the DC voltage and temperature across the die as well as information related to the clock jitter, power supply noise, software and workload. The information gathered can be used to explain timing issues detected by the Performance and Degradation Agents. The collected information helps understand the system environment, for fast debug and root cause analysis.

The proteanTecs Deep Data Analytics Software Platform

The proteanTecs platform is a one-stop software platform that generates analytics from the data created by the on-chip Agents. It performs intelligent integration of the Agents and applies machine learning techniques on the Agent readouts to provide actionable analytics. The platform is centered on the principle of continuous monitoring and improvements and implements a continuous feedback loop as shown in the Figure below.

The platform feeds relevant real-time analytics to the appropriate teams, who are responsible for taking corrective action. Depending on the type of analytics feedback, the recipients may be the marketing group, the SoC hardware and software groups, the manufacturing team, or the field deployment and support team.
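The routing step can be pictured as a simple dispatch table. The sketch below is purely illustrative of the feedback loop described above; the team names and analytics categories are my own assumptions, not the platform's actual taxonomy:

```python
# Toy dispatcher: group incoming analytics alerts by the owning team.

ROUTING = {
    "usage_profile":     "marketing",
    "timing_margin":     "hw_design",
    "sw_workload":       "software",
    "test_outlier":      "manufacturing",
    "field_degradation": "field_support",
}

def route(alerts):
    """Return a dict mapping team -> list of alert ids it should act on."""
    by_team = {}
    for alert in alerts:
        team = ROUTING.get(alert["category"], "triage")
        by_team.setdefault(team, []).append(alert["id"])
    return by_team

alerts = [
    {"id": "A-1", "category": "timing_margin"},
    {"id": "A-2", "category": "field_degradation"},
    {"id": "A-3", "category": "timing_margin"},
]
print(route(alerts))
```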

Benefits of Adopting the proteanTecs Approach

Design teams can use the data to understand how the different chip parameters are affected by various applications and environmental conditions over time. With this type of insight from the current product, the next product can be better planned.

With the in-field monitoring, predictive maintenance can be performed and when something does fail unexpectedly, debugging becomes easier and quicker. The conditions leading to the failure can be easily recreated right in the field and the fix accomplished in a much shorter time.

Analytics shared with the software team can be used to identify and fix bottlenecks between the silicon and the software during different operations.

A further benefit could be the monetization of the data stream between the system developer and the end customers. For example, auto manufacturers could provide data to their customers on how a vehicle is operating under different road conditions, so that performance could be optimized. Data centers could provide insights to their customers on how different loading factors impact response times and latencies.

There are multiple possibilities for monetization of the data streams established via the proteanTecs approach. This could open up an additional revenue stream to the owners of such a platform.

The Quantifiable Business Impact Results

In the report, Semico includes a head-to-head comparison of two companies designing a similar multicore data center accelerator SoC on a 5nm technology node. This assessment is used to understand the quantifiable benefits of using the proteanTecs approach. The design profile and metrics of this sample SoC are presented in the Table below.

The following Table shows the quantifiable benefits of using the proteanTecs approach as it pertains to market metrics and sales results.

Summary

The proteanTecs chip analytics platform helps drive the process of SoC design, manufacturing, testing, bring-up and deployment for a significant market advantage. It performs deep dive analytics on data captured from silicon and systems to identify potential problems in all phases of the lifecycle of an SoC. The emergence of such deep data analytics solutions will benefit the electronics industry as problems can now be avoided during the development stage and in-field issues corrected rapidly.

For more details about the proteanTecs platform, visit https://www.proteantecs.com/solutions.

You can download the Semico Research whitepaper from the proteanTecs website.

Also Read:

proteanTecs Technology Helps GUC Characterize Its GLink™ High-Speed Interface

Elevating Production Testing with proteanTecs and Advantest’s ACS Edge™ Platforms

CEO Interview: Shai Cohen of proteanTecs


Siemens EDA Discuss Permanent and Transient Faults

by Bernard Murphy on 10-05-2022 at 6:00 am


This is a topic worth covering for those of us who aim to know more about safety. There are devils in the details of how ISO 26262 quantifies fault metrics, where I consider my understanding, probably like that of other non-experts, to be light. All in all, a nice summary of the topic.

Permanent and transient faults 101

The authors kick off with a section on “what are they and where do they come from.” They describe the behavior well enough, along with a mechanism to model permanent faults (stuck-at) and the general root causes (EMI, radiation, vibration, etc.).

The rest of the opening section is valuable, talking about base failure rates and the three metrics most important to ISO 26262. These are single point fault metric (SPFM), latent fault metric (LFM) and the probabilistic metric for hardware failure (PMHF). These quantify FIT rates (failures in time). Permanent faults affect all three and can be estimated or measured in accelerated life testing.

How do these relate to FMEDA analysis? FMEDA estimates the effectiveness of safety mitigations against transient faults, providing a transient fault component to the SPFM and PMHF metrics. It has nothing to do with permanent faults or the LFM metric. Got that?
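For readers who, like me, find the metric definitions slippery, here is a back-of-envelope sketch of how SPFM and LFM are computed from failure rates (the FIT numbers are made up for illustration; consult the standard for the authoritative definitions and the per-ASIL targets):

```python
# ISO 26262 hardware architectural metrics, simplified.
# 1 FIT = 1 failure per 1e9 device-hours.

def spfm(l_spf, l_rf, l_total):
    """Single-point fault metric: fraction of the total safety-related
    failure rate NOT attributable to single-point or residual faults."""
    return 1.0 - (l_spf + l_rf) / l_total

def lfm(l_mpf_latent, l_spf, l_rf, l_total):
    """Latent fault metric, computed over the multiple-point remainder."""
    return 1.0 - l_mpf_latent / (l_total - l_spf - l_rf)

l_total = 100.0          # total safety-related FIT for the element
l_spf, l_rf = 0.5, 1.5   # single-point and residual FIT
l_mpf_latent = 3.0       # latent multiple-point FIT

print(f"SPFM = {spfm(l_spf, l_rf, l_total):.1%}")
print(f"LFM  = {lfm(l_mpf_latent, l_spf, l_rf, l_total):.2%}")
```

With these numbers, SPFM comes out at 98% and LFM just under 97%, which shows how a design can clear one metric's target while sitting close to the edge on another.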

Safety mechanisms

There’s a nice discussion of safety mechanisms and their effectiveness in detecting different types of fault. One example they show uses software test libraries (STLs), a new concept to me. They note STLs are unlikely to help in detecting transient faults, given the fault may vanish during execution of the test. However, there are multiple mechanisms to help here; triple modular redundancy, lockstep compute, and ECC are examples.

There is an introduction to periodic hardware self-test, which is becoming more important in ASIL-D compliance for in-flight block validation. They suggest that during such testing the configuration registers could be scrubbed, eliminating transient-induced configuration errors. An interesting idea, but I suspect this would need some care to avoid serious overkill in requiring a function to be reconfigured from scratch on each retest. It might be interesting if all the configuration registers had protected restore registers, allowing recovery from a known good and recent state.

More on transient faults

The paper has a good discussion on transient faults in relation to FIT rates. They point out that storage elements are most important, noting that failure rates on combinational elements rarely rise to statistical significance. Transients are about bit flips rather than signal glitches; the effect must persist for some time, if only a clock cycle. Glitches can also cause bad behavior, but the statistical significance of such problems is apparently low.

They extend this argument to the need to pay more attention to registers which are infrequently updated (e.g. configuration registers) versus registers which are frequently updated, on the grounds that a fault in a long-lived value may have more damaging consequences. I understand the reasoning with respect to FIT rate: a long-lived error may cause more faults. But the argument seems a bit loose; an error in a frequently updated register can propagate to memory, where it may also live for a long time.

I didn’t learn about fault detection time intervals (FDTI) until relatively recently. The paper has a good discussion of this, and also of fault tolerant time intervals (FTTI): how long do you have after a fault occurs to detect it and do something about it? Useful information for those planning safety mitigations.
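The practical question these intervals answer can be sketched as a simple timing budget (the mitigation names and all the numbers below are invented for illustration, not taken from the paper): detection time plus reaction time must fit inside the FTTI for the safety goal.

```python
# Timing-budget check implied by FTTI/FDTI: a fault must be detected
# and the system brought to a safe state before the FTTI expires.

def budget_ok(fdti_ms, reaction_ms, ftti_ms):
    """True if detection plus reaction fits inside the FTTI."""
    return fdti_ms + reaction_ms <= ftti_ms

ftti_ms = 100.0   # assumed FTTI for the safety goal
mitigations = [
    ("lockstep compare",       0.001, 1.0),  # detects within ~1 us
    ("periodic STL self-test", 50.0,  1.0),  # worst case: one full period
]

for name, fdti, reaction in mitigations:
    verdict = "fits" if budget_ok(fdti, reaction, ftti_ms) else "too slow"
    print(f"{name}: {fdti} ms + {reaction} ms vs FTTI {ftti_ms} ms -> {verdict}")
```

Note the asymmetry this exposes: a continuous mechanism like lockstep has a tiny FDTI, while a periodic self-test's worst-case FDTI is its full test period, which directly constrains how infrequently you can afford to run it.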

You can read the white paper HERE.


Analyzing Clocks at 7nm and Smaller Nodes

by Daniel Payne on 10-04-2022 at 10:00 am

Aging Clock

In the good old days the clock signal looked like a square wave and had a voltage swing of 5 volts. With 7nm technology, however, clock signals can look more like a sawtooth and may not actually reach the full Vdd value of 0.65V inside the core of a chip. I’ll cover some of the semiconductor market trends, and then the challenges of analyzing high performance clocks at 7nm and smaller process nodes.

Market Trends

Foundries like TSMC, Samsung and Intel are offering 7nm technology to designers working on a wide array of SoC devices that are used in: AI, robotics, autonomous vehicles, avionics, medical electronics, data centers, 5G networks and mobile devices. These designs demand high integration in the billion transistor range, and low power to operate on batteries or within a strict power budget.

7nm Design Challenges

There are plenty of design challenges with advanced nodes, like:

  • Transistor aging effects
  • Higher design costs, in the range of $120-$420 million per 7nm design
  • Reduced design margins with lower Vdd levels
  • Power consumption rising with clock frequency
  • Process variation effects
  • Larger delay variations
  • Interconnect RC variation increases
  • Higher resistance interconnect causing signal distortions
  • Larger power transients from faster transistor switching times
  • Many more clocks with multi-voltage power domains
  • An increase in power density and chip temperatures related to switching
  • Dramatic increase in the DRC rule deck complexity

Aging Effects

As transistor devices switch on and off there are two main physical effects that impact the reliability:

  • Negative Bias Temperature Instability (NBTI)
  • Hot Carrier Injection (HCI)

Circuit designers know that these aging effects change the Vt of devices, which in turn slows the rise and fall times of the clock signals. Over time, aging distorts the duty cycle of the clock and can actually cause the clock circuitry to fail. Shown below are two charts where the clock Insertion Delay and Duty Cycle eventually fail due to aging effects. Increases in clock jitter and rail-to-rail (R2R) violations also appear as aging effects.

Aging Clock
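To see how unequal edge degradation skews the duty cycle, here is a deliberately simplified model (my own sketch, not Infinisim's algorithm; the delay-shift numbers are invented): NBTI mostly slows PMOS pull-ups, delaying rising edges, while HCI mostly degrades NMOS pull-downs, delaying falling edges, so the two shifts rarely cancel.

```python
# Toy duty-cycle model: the high phase starts later by the rise-edge
# delay shift and ends later by the fall-edge delay shift.

def duty_cycle(period_ps, high_ps, rise_shift_ps, fall_shift_ps):
    """Duty cycle after aging, as a fraction of the clock period."""
    aged_high = high_ps - rise_shift_ps + fall_shift_ps
    return aged_high / period_ps

period = 500.0   # 2 GHz clock, in ps
high = 250.0     # nominally 50% duty cycle
for years, rise, fall in [(0, 0.0, 0.0), (5, 18.0, 6.0), (10, 30.0, 9.0)]:
    dc = duty_cycle(period, high, rise, fall)
    print(f"year {years:2d}: duty cycle = {dc:.1%}")
```

Even a few tens of picoseconds of asymmetric edge shift visibly erodes the duty cycle at GHz frequencies, which is why duty cycle distortion is tracked as a first-class aging failure mode.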

Static Timing Analysis (STA) 

For many years, EDA users have relied upon STA tools; however, these tools make simplifying assumptions about aging effects by applying a blanket timing derating instead of applying aging based upon actual switching activity. The interconnect delay model in STA will miss duty-cycle distortion errors in long signal nets due to resistive shielding. An STA tool also doesn’t catch rail-to-rail failures directly, although it does measure insertion delays and slew rates. Jitter isn’t simulated as part of an STA tool, so the designer doesn’t know which areas have the highest noise and require fixing.
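The blanket-derate problem can be sketched numerically (the 10% derate, the activity model, and its coefficient are all my own assumptions for illustration, not any tool's actual aging model): a flat margin is pessimistic for rarely switching paths and can be optimistic for heavily switched ones.

```python
# Contrast a blanket STA aging derate with a toy activity-aware model.

BLANKET_DERATE = 1.10   # STA: every path slowed by a flat 10%

def activity_aged_delay(nominal_ps, toggle_rate, years, k=0.015):
    """Toy model: aging delay grows with switching activity and time."""
    return nominal_ps * (1.0 + k * toggle_rate * years)

paths = [("idle_cfg_path", 400.0, 0.01), ("hot_clock_path", 400.0, 0.9)]
for name, nominal, toggle in paths:
    sta = nominal * BLANKET_DERATE
    aged = activity_aged_delay(nominal, toggle, years=10)
    print(f"{name}: STA assumes {sta:.0f} ps, activity model {aged:.0f} ps")
```

In this toy example the idle path barely ages (the 10% margin is wasted), while the constantly toggling clock path ages past the blanket margin, which is exactly the gap that activity-aware aging simulation is meant to close.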

Overcoming Analysis Limitations

An ideal clock analysis methodology would provide SPICE-level accuracy of an entire clock domain, even with millions of devices. It would allow an engineer to measure R2R and jitter at every node along the entire clock path, both with and without aging. Multiple clocks could be analyzed across many process corners and Vdd combinations, working from within the current EDA tool flow, and produce results overnight.

Infinisim Approach

Infinisim is an EDA vendor that has focused on clock analysis, and their tool is called ClockEdge. Here are two analysis examples of clock domain rise slew rate, and clock domain aged insertion delay from their tool:

CAD developers at Infinisim figured out how to simulate the entire clock domain, producing full analog results with SPICE accuracy, allowing SoC teams to actually measure the clock duty cycle under aging, measure R2R, and even measure noise-induced jitter. The ClockEdge tool runs in a distributed fashion across multiple servers in order to produce results overnight.

Clock duty cycle degradation
Rail-to-rail failure detection
Aging effects
Jitter

ClockEdge complements STA, so continue to use both tools, with ClockEdge becoming your clock sign-off tool. All of the device aging models are supplied by your foundry. As an example of ClockEdge's performance, it has been run on a clock circuit with 4.5 million gates containing billions of transistors; tracing required 4.5 hours and simulation took 12 hours of total time, running on 250 CPUs.

Summary

Designing an SoC at 7nm and smaller process nodes is a big task, requiring specialized knowledge of clock analysis to ensure first-pass silicon success. Adding a tool like ClockEdge to your EDA tool flow is a smart step toward mitigating device aging and other effects.



CEVA Accelerates 5G Infrastructure Rollout with Industry’s First Baseband Platform IP for 5G RAN ASICs

by Kalar Rajendiran on 10-04-2022 at 6:00 am

PentaG RAN Massive MIMO Radio Platform

The 5G technology market is huge, with incredible growth opportunities for players across the ecosystem. As a leading cellular IP provider, CEVA has stayed on top of the opportunity by offering solutions that enable customers to bring differentiated products to the marketplace. Earlier this year, SemiWiki posted a blog about CEVA's PentaG2 5G NR IP Platform, a follow-on to CEVA's successful first-generation PentaG IP Platform. The PentaG2 platform targets the 5G user equipment segment of the market and has been enabling customers to develop products rapidly and cost effectively.

But what about the infrastructure segment of the 5G market opportunity? This segment is also huge and lucrative, and it is attracting a slew of established and new players seeking a slice of the opportunity pie. CEVA recently launched its PentaG-RAN Platform IP to address the infrastructure segment of the 5G market. This industry-first scalable and flexible platform combines powerful DSPs, 5G hardware accelerators and other specialized components required for optimizing modem processing chains. The PentaG-RAN platform also lowers the barriers to entry for new players who want to serve the Open-RAN base station and equipment markets.

If system companies had their druthers, they would implement an ASIC for optimal differentiation of their solutions. Of course, developing an ASIC has gotten challenging in many ways due to design complexity, supply chain disaggregation, costs, availability of technical talent, and more. And if you are a new player, you may not have seasoned in-house ASIC implementation teams to count on.

What if the PentaG-RAN Platform IP could help overcome these challenges? Would system companies take the ASIC route and free themselves from the captivity of chipset suppliers? Would chipset suppliers take advantage of the platform to quickly implement product variants for different segments and customers? What if, along with the Platform IP, integration services were available to implement the ASIC? This is the backdrop and context for this blog about the PentaG-RAN announcement from CEVA.

5G RAN Market Opportunity and the Challenge

The infrastructure market covers base stations and radio configurations from small cells to Massive MIMO deployments. According to Gartner, the RAN semiconductor market is expected to grow from $5.5B in 2022 to $7.4B in 2026. Significant Open-RAN architecture penetration is expected around 2025, with Massive MIMO radio being the biggest volume opportunity. New OEMs attracted by the Open-RAN architecture want to replace cost- and power-inefficient FPGA-based and COTS-platform-based implementations. At the same time, they are intimidated by the ASIC development challenges, given that the PHY and radio domains are highly complex to begin with. The scarcity of design and architecture expertise for 5G baseband processing justifiably adds to this apprehension. On top of these concerns, the diverse workloads in the 5G physical layer require complex, heterogeneous L1 subsystems and optimal hardware/software partitioning.
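As a quick sanity check on those Gartner figures, the implied compound annual growth rate works out to roughly 7-8%. The dollar values are from the article; the helper function is just an illustrative one-liner:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

# Gartner: RAN semiconductor market, $5.5B (2022) -> $7.4B (2026)
growth = cagr(5.5, 7.4, 2026 - 2022)
print(f"Implied CAGR: {growth:.1%}")
```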

CEVA’s PentaG-RAN Offering

PentaG-RAN is a heterogeneous baseband compute platform that provides a complete licensable L1 PHY solution with optimal hardware/software partitioning. It addresses the requirements of both the Radio end and the DU/baseband end of the 5G market. The PentaG-RAN platform can also be used as an add-on to COTS CPU-based solutions to run vRAN inline acceleration tasks.

The platform includes high-performance DSPs and 5G hardware accelerators and delivers up to 10x reduction in power and area compared to FPGA and COTS CPU-based alternative solutions.

The Figure below highlights the various CEVA IP blocks that make up a Massive MIMO Beamformer Tile.

The PentaG-RAN platform makes it easier for customers to implement a MIMO beamformer SoC by integrating the included baseband beamformer tile with their own front-haul design.

The Figure below shows how the platform supports the DU end for Small Cell and vRAN designs.

Productivity Tool/Virtual Platform Simulator

As with the PentaG2 platform, the PentaG-RAN deliverables include a SystemC simulation environment for architecture definition, modeling, debugging and fast prototyping. The PentaG-RAN SoC simulator supports all CEVA IP and interfaces with the MATLAB platform for algorithm development. A PentaG-RAN based system can also be emulated on an FPGA platform for final verification.

To learn more details, visit the PentaG-RAN product page.

CEVA’s 5G Co-Creation Offering

Through its Intrinsix team, CEVA offers SoC design services for those customers who would like to customize the platform IP to build a highly differentiated SoC. The Intrinsix team is well versed in mapping customer use cases to the PentaG-RAN platform. The team can work on hardware architecture spec, solution dimensioning, interconnect definition, process node selection, and software architecture. In essence, customers can engage CEVA for full-service ASIC engagement from architecture to GDS.

PentaG-RAN Availability

PentaG-RAN will be available for general licensing in 4Q 2022.

Also Read:

5G for IoT Gets Closer

LIDAR-based SLAM, What’s New in Autonomous Navigation

Spatial Audio: Overcoming Its Unique Challenges to Provide A Complete Solution


Micron and Memory – Slamming on brakes after going off the cliff without skidmarks

by Robert Maire on 10-03-2022 at 10:00 am

Wiley Coyote Semiconductor Crash 2022 1

-Micron slams on the brakes of capacity & capex-
-But memory market is already over the cliff without skid marks
-It will likely take at least a year to sop up excess capacity
-Collateral impact on Samsung & others even more important

Micron hitting the brakes after memory market already impacts

Micron capped off an otherwise very good year with what appears to be a very bad outlook for the upcoming year. Micron reported revenues of $6.6B and EPS of $1.45 versus the street of $6.6B and $1.30.

However, the outlook for the next quarter, Q1 of fiscal 2023…not so much. Guidance of $4.25B ±$250M and EPS of 4 cents plus or minus 10 cents, versus street expectations of $5.6B and EPS of $0.64…a huge miss, even after numbers had already been cut.

A good old fashioned down cycle

It looks like we will be having a good old-fashioned down cycle, in which companies fall to at- or below-breakeven numbers and cut costs quickly to try to stave off red ink.

At least this is the case in the memory business, which is usually the first to see the down cycle and tends to suffer much more as it is largely a commodity market which results in a race to the bottom between competitors trying to win a bigger piece of a reduced pie.

Will foundries and logic follow memory down the rabbit hole?

While we don’t expect as negative a reaction on the logic side of the semiconductor industry, reduced demand will impact pricing of foundry capacity and bring down lead times. There will certainly be a lot more price pressure on CPUs as competitive pricing heats up. TSMC will likely drop pricing to take back overflow business it let go, and second-tier foundry players will suffer more.
The simple reality is that if manufacturers are buying less memory, they are buying less of other semiconductor types; it’s just that simple.

Technology versus capacity spending

For many, many years we have said that there are two types of spending in the semiconductor industry. Technology spending keeps pushing down the Moore’s Law curve to stay competitive. Capacity spending, usually the larger of the two (obviously mostly in an up cycle), is where the next generation of technology is put into high-volume production.

Micron is obviously cutting off all capacity related spend and is just spending on keeping up its lead in technology, which they can never stop given that they are in competition with Samsung.

There is obviously some bricks and mortar spending to build the new fab in Idaho that will continue, but will only be filled with equipment and people when the down cycle in memory is over.

Micron did talk about announcing a second new fab in the US, but that is likely to be very far behind the already announced Boise fab and may never get built within the five-year CHIPS for America window. The new Boise fab is 3-5 years away and will likely be on the slow side given the current down cycle.

Capex cut in half – We told you so, 3 months ago.

When you are in a hole, stop digging

We are surprised that everyone, including so-called analysts, is shocked by the capex cuts. It doesn’t take Elon Musk (a rocket scientist) to tell you to stop making more memory when there is a glut and prices have collapsed.
Maybe Micron’s comments about holding product off the market last quarter should have been a clue and gotten more people’s attention as a warning sign (it got our attention).

Back when Micron reported their last quarter, three months ago, we said, “We would not at all be surprised to see next year’s capex cut down to half or less of 2022’s.”

Our June 30th Micron note

In case some readers didn’t get the memo we repeated our prediction of a 50% Micron capex cut a month ago “Micron will likely cut capex in half and Intel has already announced a likely slowing of Ohio and other projects”

Our August 30th note

Semi equipment companies more negatively impacted than Micron

When the semiconductor industry sneezes the equipment companies catch a cold

Obviously cutting Micron’s WFE capex in half is a big deal for the equipment companies as their revenues can drop faster than their customers.

While Micron cutting capex in half is a big deal, Samsung following suit with a capex cut would be a disaster. It’s not like it hasn’t happened before…a few years ago Samsung stopped spending for a few quarters virtually overnight.
We are certain Samsung will slow along with Micron; the only questions are how much, and whether they also slow the foundry side of the business.

Could China be the wild card in Memory?

While Micron, Samsung and other long-term memory makers have behaved more rationally in recent years, moderating their spend to reduce cyclicality, we are more concerned about new entrants, such as China, that want to gain market share. It’s unlikely that they will slow their feverish spending, as they are not yet full-fledged members of the memory cartel.
Even if the established memory makers slow, China will not, which will likely extend both the glut and the down cycle.

Technology will help protect Micron in the down cycle

As long as Micron keeps up its technology and R&D spending to stay ahead of the pack, or at least with the pack, it will be fine in the longer run when we come out the other side of the down cycle.
Micron has a very long history of being a very good spender and very good at technology, and if it keeps that up it will be fine. We highly doubt they will do anything stupid.

The stocks

We see no reason to buy Micron any time soon at near current levels.
As we have said recently, we would avoid value traps like the plague.
Semi equipment stocks should see a more negative reaction as they are the ones to see the negative impact of the capex cuts.

Lam (LRCX) is obviously the poster child for memory industry equipment suppliers and is a big supplier to Micron and, more importantly, Samsung.
We also see no reason to go near Samsung, and Samsung may be a short, as investors may not fully understand the linkage to the weakness in the memory industry. Semiconductors are the lifeblood of Samsung, and memory is their wheelhouse, whereas foundry is their foster child.

We warned people months ago “to buckle up, this could get ugly” and so it continues.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.
We have been covering the space longer and been involved with more transactions than any other financial professional in the space.
We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors.
We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

The Semiconductor Cycle Snowballs Down the Food Chain – Gravitational Cognizance

KLAC same triple threat headwinds Supply, Economy & China

LRCX – Great QTR and guide but gathering China storm