
DFT Moves up to 2.5D and 3D IC

by Daniel Payne on 10-06-2022 at 10:00 am


The annual ITC event was held the last week of September, and I kept reading the news highlights from the EDA vendors, because time spent on the tester can be a major cost and catching defective chips before they ship is so critical. Chiplet, 2.5D and 3D IC design have caught the attention of the test world, so I learned what Siemens EDA just announced to address the new test demands with their DFT approach. Vidya Neerkundar, a Product Manager for the Tessent family of DFT products, presented an update.

DFT Challenges

For most of the history of IC design we’ve had one die in one package, along with multi-chip modules (MCM). For 2.5D and 3D ICs with multiple dies, how do you take the individual die tests and make them work for the final package?

What if the DFT architectures for each internal die are different from each other?

Is there an optimal way to schedule the die tests while in a package to reduce test times?

2.5D and 3D chiplets

Tessent Multi-die

Siemens’ development team extended their technology to support 2.5D and 3D IC packaging with Tessent Multi-die. At SemiWiki we blogged last year about the Tessent Streaming Scan Network, which uses 2D hierarchical scan test. That same approach now extends 2D hierarchical DFT into 2.5D and 3D ICs. Here’s what that looks like for three chiplets in a 2.5D device:

The IEEE created a standard for the test access architecture of 3D stacked ICs, known as IEEE 1838-2019. IEEE 1687 defines the access and control of instrumentation embedded inside an IC, building on the test access port defined by yet another standard, IEEE 1149.1. Tessent Multi-die supports all of these standards.

Each die in a chiplet design has a Boundary Scan Description Language (BSDL) file, and then Tessent Multi-die creates the package level BSDL for you.
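To give a feel for what package-level generation involves, here is a hypothetical sketch of the aggregation idea. Real BSDL is a VHDL subset and Tessent’s generation flow is far more involved; this only shows that in a serially concatenated package-level JTAG chain, the per-die register lengths compose additively. All die names and numbers below are invented.

```python
# Hypothetical sketch: composing package-level scan-chain parameters from
# per-die BSDL-style data. In a serial JTAG chain, instruction register
# and boundary register lengths simply add up across the dies.

def package_chain(dies):
    """Derive package-level chain parameters from per-die descriptions."""
    return {
        "instruction_length": sum(d["instruction_length"] for d in dies),
        "boundary_length": sum(d["boundary_length"] for d in dies),
        "chain_order": [d["name"] for d in dies],
    }

# Invented example dies (not from any real design):
dies = [
    {"name": "cpu_die", "instruction_length": 8, "boundary_length": 400},
    {"name": "hbm_die", "instruction_length": 4, "boundary_length": 120},
    {"name": "io_die",  "instruction_length": 6, "boundary_length": 250},
]

pkg = package_chain(dies)
print(pkg["instruction_length"])  # 18
print(pkg["boundary_length"])     # 770
```

The value of a tool doing this for you is less the arithmetic than the bookkeeping: keeping port maps, instruction opcodes and chain order consistent across every die revision.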

IEEE 1838

This die-centric test standard was board-approved in November 2019, and it allows testing of a die as part of a multi-die stack. A 3D stack of dies is connected for test purposes using a Flexible Parallel Port (FPP), along with Die Wrapper Registers (DWR) and Test Access Ports (TAP):

3D Stack for Testing

IEEE 1687 – Internal JTAG

This 2014 standard helps streamline the use of instruments embedded inside each die. An Instrument Connectivity Language (ICL) and a Procedure Description Language (PDL) describe and operate the instrumentation. The flow between an ATE system and internal JTAG is shown below:

IEEE 1687 flow

IEEE 1149.1 JTAG

The boundary scan standard with a Test Access Port goes back to 1990, and the Boundary Scan Description Language (BSDL) came along in 2001. This standard defines how instructions and test data flow inside a chip.

IEEE 1149.1 JTAG
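The heart of 1149.1 is its 16-state TAP controller, driven only by the TMS pin on each TCK edge; that small state machine is how instructions and test data get steered. The sketch below encodes the standard’s public state diagram (not any vendor’s implementation) so you can trace the sequencing yourself:

```python
# IEEE 1149.1 TAP controller state table: for each state, the next state
# when TMS=0 and when TMS=1, sampled on each TCK rising edge.

TAP = {
    "Test-Logic-Reset": ("Run-Test/Idle", "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR", "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR", "Exit1-DR"),
    "Shift-DR":         ("Shift-DR", "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR", "Update-DR"),
    "Pause-DR":         ("Pause-DR", "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR", "Update-DR"),
    "Update-DR":        ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR", "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR", "Exit1-IR"),
    "Shift-IR":         ("Shift-IR", "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR", "Update-IR"),
    "Pause-IR":         ("Pause-IR", "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR", "Update-IR"),
    "Update-IR":        ("Run-Test/Idle", "Select-DR-Scan"),
}

def step(state, tms_bits):
    """Walk the TAP through a sequence of TMS values."""
    for tms in tms_bits:
        state = TAP[state][tms]
    return state

# Five TCK cycles with TMS held high reach Test-Logic-Reset from any
# state, which is why testers use this sequence to synchronize the TAP.
assert step("Shift-DR", [1, 1, 1, 1, 1]) == "Test-Logic-Reset"

# From Run-Test/Idle, TMS = 1,0,0 enters Shift-DR for a data scan.
print(step("Run-Test/Idle", [1, 0, 0]))  # Shift-DR
```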

Bringing all of these test standards together, we can see how Tessent Multi-die connects to each chiplet inside of a 3D stack. Test pattern delivery to cores within each die, as well as test scheduling, is accomplished with the Tessent Streaming Scan Network (SSN).

Tessent Streaming Scan Network

SSN packetizes test data delivery, which decouples core-level DFT from chip-level DFT and allows concurrently tested cores to shift independently. Practical benefits include time savings in DFT planning, easier routing and timing closure, and up to a 4X reduction in test time and test data volume.

Tessent SSN
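A toy model (not the actual SSN packet protocol) shows why packetized, concurrent delivery saves test time: serial core-by-core testing costs the sum of the per-core scan times, while concurrent testing approaches the maximum, bounded by the bandwidth of the shared bus. All core names and numbers below are invented.

```python
# Illustrative scheduling model: compare serial vs concurrent scan test.
# Each core needs patterns * chain_length shift cycles on its own.

def sequential_cycles(cores):
    """Cores tested one after another: times add up."""
    return sum(c["patterns"] * c["chain_length"] for c in cores)

def concurrent_cycles(cores, bus_bits_per_cycle):
    """Cores shift concurrently; a shared bus delivers packetized data.
    Limited by the slowest core and by total bus bandwidth."""
    total_bits = sum(c["patterns"] * c["chain_length"] for c in cores)
    slowest = max(c["patterns"] * c["chain_length"] for c in cores)
    return max(slowest, total_bits // bus_bits_per_cycle)

cores = [
    {"name": "core_a", "patterns": 1000, "chain_length": 200},
    {"name": "core_b", "patterns": 1000, "chain_length": 200},
    {"name": "core_c", "patterns": 500,  "chain_length": 100},
]

seq = sequential_cycles(cores)     # 450_000 cycles
con = concurrent_cycles(cores, 8)  # 8-bit-wide shared bus
print(seq / con)  # 2.25
```

The real gains depend on how well packet scheduling keeps every core’s chain busy; the 4X figure quoted above comes from Siemens, not from this model.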

Summary

Close collaboration between foundries, designers, test engineers and the IEEE has created a vibrant 2.5D and 3D ecosystem, with all of the technology in place to advance semiconductor innovation. Siemens EDA has extended their Tessent software to embrace the new test challenges while using IEEE standards. Tessent Multi-die is integrated with the other Tessent products and the Tessent platform, so you don’t have to cobble tools and flows together.

Related Blogs


U.S. Automakers Broadening Search for Talent and R&D As Electronics Take Over Vehicles

by Tony Hayes on 10-06-2022 at 6:00 am


The auto industry isn’t for the faint of heart in late 2022. As Deloitte recently explained, chip shortages, supply chain bottlenecks, unpredictable consumer demand and the industry overhaul mandated by the rise of EVs are all creating unprecedented turmoil in this key sector. One particularly pressing challenge is the ongoing dearth of technically oriented employees in this era when cars make decisions on their own.

The automotive sector is predicted to face a global shortage of 2.3 million skilled workers by 2025 and 4.3 million by 2030, according to a research project from global executive recruiting firm Ennis & Co, which specializes in the sector. The shortfall reflects the rapidly growing amount of technology in vehicles today. According to Edward Jones, Professor in Electrical and Electronic Engineering at the University of Galway, “The mechanical components are just one part of the equation and there’s probably as much if not more emphasis now on the software, the electronics, the sensors and the user experience.”

He and his research colleagues have had a front-row seat to the quickly evolving auto industry because so many manufacturers have come to his region in search of new solutions to hiring, development, testing and other key aspects of building a modern, in-demand vehicle and the key electronics that make it run. Jaguar Land Rover (JLR), General Motors, Intel, Analog Devices, Valeo and many other players have focused heavily on the West of Ireland in recent years to tap into the resources available there. For example, JLR opened a technology development center in 2017 and keeps adding to its staff, while GM has grown and evolved its Irish operations to encompass key priority areas such as AI, data management and cybersecurity.

As Jones describes it, Ireland offers companies “many of the high-value pieces of the technology stack, software, sensing, AI, machine learning, as well as the test infrastructure.” One company paying attention is Valeo, a top supplier of automotive cameras that sells products to vehicle manufacturers worldwide. University of Galway has several intriguing government-funded projects underway with Valeo that include examining the effect of inclement weather on autonomous vehicle sensors, multi-sensor fusion to improve the perception of a car’s vision system, and modeling the behavior of road users at junctions so that a vehicle’s intelligent systems can plan ahead rather than taking reactive actions.

According to Martin Glavin, an autonomous vehicle expert and Professor of Electronic and Computer Engineering at the University of Galway, “The auto belt in the west of Ireland conducts research on auto sensors and systems in ways that maybe aren’t possible in a lot of the United States. The weather in Ireland is known to be very variable and changeable.” Both Glavin and Jones are also researchers at Lero (the Science Foundation Ireland Research Center for Software), where they do extensive research and testing of vehicle sensors and perception systems.

“In Ireland, our road network length is equivalent to France’s,” reports Glavin. “We have everything from small country lanes all the way up to high-end motorways. R&D in Ireland is ideal in that the country is compact and very dynamic in terms of the climate and conditions. Within 10 or 15 minutes, you can go from winter to summer or be on a small country road or a busy motorway.”

The University of Galway team and other research groups are also conducting testing of intelligent technology in the farm equipment arena. Projects are underway with McHale Engineering, part of a leading agricultural equipment dealer headquartered in Ireland.  One ag-tech project is focusing on data capture and algorithm development so that equipment operators and the devices they use are more efficient. Meanwhile, the team is also performing “analysis based on the behavior of the machine and how the machines are being used, ultimately with the aim of predicting failures on those machines and offering a more robust machine that can diagnose its own problems,” notes Glavin.

On the testing side, the Irish government funded Future Mobility Campus Ireland (FMCI), a facility that lets manufacturers put their technologies through their paces via heavily monitored test tracks. Partners include GM, JLR, Cisco, Analog Devices, Seagate and Red Hat. Noted FMCI CEO Russell Vickers: “There are two main reasons why companies come to Ireland. One is probably European localization; there’s also the areas of data management, data processing, AI and machine learning. That’s why Jaguar Land Rover set up in Ireland, because they could get access to software developers that have those skills. You have to follow the people.”

Underpinning the talent search is the growing demand for EVs due to emission concerns and costs at the pump. Proof of this important direction was seen recently in America’s Inflation Reduction Act of 2022, which offers new or expanded tax incentives for buying EVs, as well as the recent mandate in California, America’s environmental pacesetter, that all new vehicles sold in the state be zero-emission by 2035. The Paris Accord, which includes 196 of the world’s nations, was the forerunner, aligning with a vision of zero-emission vehicles, fewer crashes and reduced congestion.

Pursuing new technologies reflects the public/private partnership that has long characterized Ireland, with government organizations funding and collaborating with universities, research operations and companies. For example, the Science Foundation Ireland (SFI) centers work with companies in the areas of lithium batteries for EVs as well as breakthrough non-metal batteries and vehicle parts.

Noted Lorraine Byrne, executive director of AMBER, the SFI-funded materials science center headquartered at Trinity College Dublin, “We offer companies multidisciplinary scientific expertise to address specific research questions associated with their technology roadmaps. We help to accelerate early-stage research that can reduce the time to market for our industry partners. The materials science work we do at AMBER has relevance in multiple sectors but for automotive, we focus on materials challenges associated with batteries, optical components and the increasing use of sustainable or recycled materials in molded polymer or fabrics.  The SFI centers have a cost‑share model that allows us to co‑fund projects, which is attractive for companies who want to invest in higher-risk early-stage research.”

For example, AMBER has worked with Merck Millipore in the membrane area, where AMBER and Merck have collaborated on molding of polymers and material selection, particularly in the area of new membranes for filtration, whether for air filters or oil filters.

However, moving research forward isn’t the only lure for companies in the auto sector coming to Ireland. In an era when talent is in short supply, the availability of trained technical staff coming out of the universities and research institutes is particularly attractive. Says Byrne: “At the moment, over 50% of our post-doctoral researchers are ending up in the industry as their first destination. A lot of companies are working with AMBER, not just for the research but also for access to the talent pipeline.”

Tony Hayes, VP Engineering, Industrial & Clean Technologies, IDA Ireland

Also Read:

Super Cruise Saves OnStar, Industry

Arm and Arteris Partner on Automotive

The Truly Terrifying Truth about Tesla


Podcast EP110: The Real Story Behind Cerebras Systems – What It Does and Why It Matters

by Daniel Nenni on 10-05-2022 at 10:00 am

Dan is joined by Rebecca Lewington, Technology Evangelist at Cerebras Systems. Before Cerebras she held similar roles at Micron Technology, Hewlett Packard Labs and Applied Materials. Rebecca has a master’s degree in mechanical and electrical engineering from the University of London and holds 15 patents.

Rebecca explains the one-of-a-kind architecture behind Cerebras technology and the unique approaches it facilitates. Details of customer applications are also discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


How Deep Data Analytics Accelerates SoC Product Development

by Kalar Rajendiran on 10-05-2022 at 8:00 am

Continuous Monitoring and Improvement Loop

Ever since the birth of the semiconductor industry, advances have come at a fast pace. The complexity of SoCs has grown along the way, driven by the demanding computational and communication needs of various market applications. Over the last decade, the growth in complexity has accelerated at unforeseen rates, fueled by AI/ML processing, 5G communications and related applications. This, of course, has strained SoC product development cycles and time-to-market schedules.

Semico Research recently published a detailed report titled “Deep Data Analytics Accelerating SoC Product Development.” The report explains how deep data analytics can help accelerate all phases of SoC product development, including test and post-silicon management. proteanTecs’ deep data analytics technology and solution are spotlighted, and the resulting benefits are quantified and presented in a whitepaper. This post covers some of the salient points from that whitepaper.

The proteanTecs Approach to Deep Data Analytics

The proteanTecs approach is to embed monitoring IP in SoC designs and leverage machine learning algorithms to analyze the collected data for actionable analytics. The monitoring IP blocks, referred to as on-chip Agents, fall into four categories.

Classification and Profiling

These Agents collect information related to the chip’s basic transistors and standard cells. They are constructed to be sensitive to the different device parameters and can map a chip or a sub-chip to the closest matching process corner, PDK simulation point and RC model.

Performance and Performance Degradation Monitoring

These Agents are placed at the end of many timing paths and continuously track the remaining timing margin to the target capture clock frequency. They can be used to pinpoint critical path timing issues as well as track their degradation over time.

Interconnect and Performance Monitoring

These Agents are located inside a high-bandwidth die-to-die interface and continuously monitor the signal integrity and performance of the critical chip interfaces.

Operational Sensing

These Agents turn the SoC into a system sensor by sensing the effects of the application, board or environment on the chip. They track changes in the DC voltage and temperature across the die, as well as information related to clock jitter, power supply noise, software and workload. The information gathered can be used to explain timing issues detected by the Performance and Degradation Agents, and it helps teams understand the system environment for fast debug and root-cause analysis.
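As a hypothetical illustration of what analytics on margin-Agent readouts could look like, the sketch below fits a linear trend to remaining timing margin over time and estimates when it would cross zero. The actual proteanTecs algorithms are ML-based and proprietary; the model and all numbers here are invented.

```python
# Toy degradation analysis: least-squares line through margin readouts,
# then extrapolate to the zero-margin point (a predictive-maintenance cue).

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def months_to_zero_margin(months, margins_ps):
    """Estimated month at which timing margin reaches zero, or None if
    no downward trend is present."""
    slope, intercept = fit_line(months, margins_ps)
    if slope >= 0:
        return None
    return -intercept / slope

# Invented readouts: margin shrinking ~2 ps/month from a 120 ps start.
months = [0, 6, 12, 18, 24]
margins = [120, 108, 96, 84, 72]
print(months_to_zero_margin(months, margins))  # 60.0
```

In practice the analysis would account for voltage/temperature conditions at each readout (the Operational Sensing data above) before trusting any trend.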

The proteanTecs Deep Data Analytics Software Platform

The proteanTecs platform is a one-stop software platform that generates analytics from the data created by the on-chip Agents. It performs intelligent integration of the Agents and applies machine learning techniques to the Agent readouts to provide actionable analytics. The platform is centered on the principle of continuous monitoring and improvement, implementing a continuous feedback loop as shown in the Figure below.

The platform feeds relevant real-time analytics to the appropriate teams, who are responsible for taking corrective action. Depending on the type of analytics feedback, the recipients could be the marketing group, the SoC hardware and software groups, the manufacturing team, or the field deployment and support team.

Benefits of Adopting the proteanTecs Approach

Design teams can use the data to understand how the different chip parameters are affected by various applications and environmental conditions over time. With this type of insight from the current product, the next product can be better planned.

With the in-field monitoring, predictive maintenance can be performed and when something does fail unexpectedly, debugging becomes easier and quicker. The conditions leading to the failure can be easily recreated right in the field and the fix accomplished in a much shorter time.

Analytics shared with the software team can be used to identify and fix bottlenecks between the silicon and the software during different operations.

A further benefit could be the monetization of the data stream between the system developer and the end customers. For example, auto manufacturers could provide data to their customers on how a vehicle is operating under different road conditions, so that performance could be optimized. Data centers could provide insights to their customers on how different loading factors impact response times and latencies.

There are multiple possibilities for monetization of the data streams established via the proteanTecs approach. This could open up an additional revenue stream to the owners of such a platform.

The Quantifiable Business Impact Results

In the report, Semico includes a head-to-head comparison of two companies designing a similar multicore data center accelerator SoC on a 5nm technology node. This assessment is used to understand the quantifiable benefits of the proteanTecs approach. The design profile and metrics of this sample SoC are presented in the Table below.

The following Table shows the quantifiable benefits of using the proteanTecs approach as it pertains to market metrics and sales results.

Summary

The proteanTecs chip analytics platform helps drive the process of SoC design, manufacturing, testing, bring-up and deployment for a significant market advantage. It performs deep dive analytics on data captured from silicon and systems to identify potential problems in all phases of the lifecycle of an SoC. The emergence of such deep data analytics solutions will benefit the electronics industry as problems can now be avoided during the development stage and in-field issues corrected rapidly.

For more details about the proteanTecs platform, visit https://www.proteantecs.com/solutions.

You can download the Semico Research whitepaper from the proteanTecs website.

Also Read:

proteanTecs Technology Helps GUC Characterize Its GLink™ High-Speed Interface

Elevating Production Testing with proteanTecs and Advantest’s ACS Edge™ Platforms

CEO Interview: Shai Cohen of proteanTecs


Siemens EDA Discuss Permanent and Transient Faults

by Bernard Murphy on 10-05-2022 at 6:00 am


This is a topic worth covering for those of us who aim to know more about safety. There are devils in the details of how ISO 26262 quantifies fault metrics, where I consider my understanding, probably like that of other non-experts, to be light. All in all, the paper is a nice summary of the topic.

Permanent and transient faults 101

The authors kick off with a section on “what are they and where do they come from.” They describe the behavior well enough, a mechanism to model permanent faults (stuck-at), and the general root causes (EMI, radiation, vibration, etc.).

The rest of the opening section is valuable, talking about base failure rates and the three metrics most important to ISO 26262. These are single point fault metric (SPFM), latent fault metric (LFM) and the probabilistic metric for hardware failure (PMHF). These quantify FIT rates (failures in time). Permanent faults affect all three and can be estimated or measured in accelerated life testing.

How do these relate to FMEDA analysis? FMEDA estimates the effectiveness of safety mitigations against transient faults, providing a transient fault component to the SPFM and PMHF metrics. It has nothing to do with permanent faults or LFM metrics. Got that?
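To make the metrics concrete, here is a simplified rendering of the ISO 26262 Part 5 formulas. The FIT numbers are invented and the code omits the standard’s refinements (safe-fault partitioning, per-safety-goal accounting), so treat it as a sketch of the arithmetic only:

```python
# Simplified ISO 26262 hardware metric formulas.
# FIT = failures per 1e9 device-hours.

def spfm(total_fit, spf_rf_fit):
    """Single Point Fault Metric: the fraction of the total failure rate
    that is NOT single-point or residual faults."""
    return 1.0 - spf_rf_fit / total_fit

def lfm(total_fit, spf_rf_fit, latent_mpf_fit):
    """Latent Fault Metric: of the faults that are not single-point or
    residual, the fraction that is NOT latent multiple-point faults."""
    return 1.0 - latent_mpf_fit / (total_fit - spf_rf_fit)

# Invented example: 1000 FIT total, 20 FIT single-point/residual,
# 50 FIT latent multiple-point.
print(f"SPFM = {spfm(1000, 20):.1%}")    # 98.0% (ASIL D target >= 99%)
print(f"LFM  = {lfm(1000, 20, 50):.1%}")  # 94.9% (ASIL D target >= 90%)
```

The example design would pass the ASIL D LFM target but miss SPFM, i.e. it needs better coverage of single-point faults by safety mechanisms.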

Safety mechanisms

There’s a nice discussion on safety mechanisms and their effectiveness in detecting different types of fault. One example they show uses software test libraries (STL), a new concept to me. They note STLs are unlikely to be helpful in detecting transient faults given the fault may vanish during the execution of the test. However, there are multiple mechanisms to help here. Triple modular redundancy and lockstep compute and ECC are examples.

There is an introduction to periodic hardware self-test, which is becoming more important in ASIL-D compliance for in-flight block validation. They suggest that during such testing configuration registers could be scrubbed, eliminating transient-induced configuration errors. An interesting idea, but I suspect this would need some care to avoid serious overkill in requiring a function to be reconfigured from scratch on each retest. Might it be interesting if all the configuration registers had protected restore registers, allowing recovery from a known good and recent state?

More on transient faults

The paper has a good discussion on transient faults in relation to FIT rates. They point out that storage elements are most important, noting that failure rates on combinational elements rarely rise to statistical significance. Transients are about bit flips rather than signal glitches; the effect must persist for some time, if only a clock cycle. Glitches can also cause bad behavior, but the statistical significance of such problems is apparently low.

They extend this argument to the need to pay more attention to registers which are infrequently updated (e.g. configuration registers) versus registers which are frequently updated, on the grounds that a fault in a long-lived value may have more damaging consequences. I understand the reasoning with respect to FIT rate: a long-lived error may cause more faults. But the argument seems a bit loose; an error in a frequently updated register can propagate to memory, where it may also live for a long time.
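One way to tighten the argument (my framing, not the paper’s) is to weight each register’s raw upset rate by the fraction of time its stored value will still be consumed, roughly the architectural vulnerability factor (AVF) idea from the soft-error literature:

```python
# Toy exposure-weighted transient FIT. live_fraction approximates the
# fraction of time a flipped bit in this register would still be read
# before being overwritten (an AVF-style weighting). Numbers invented.

def effective_fit(raw_fit, live_fraction):
    """Scale a raw upset rate by the value's live fraction."""
    return raw_fit * live_fraction

# Same raw upset rate, very different exposure:
config_reg  = effective_fit(0.001, 1.0)   # written rarely, read always
scratch_reg = effective_fit(0.001, 0.01)  # overwritten almost immediately
print(config_reg, scratch_reg)
```

Under this weighting the configuration register contributes two orders of magnitude more effective FIT, which matches the paper’s intuition while making explicit what is being assumed: that the frequently updated register’s corrupted value is usually dead before it propagates.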

I didn’t learn about fault detection time intervals (FDTI) until relatively recently. The paper has a good discussion on this. Also on fault tolerant time intervals (FTTI). How long do you have after a fault occurs to detect it and do something about it? Useful information for those planning safety mitigations.

You can read the white paper HERE.


Analyzing Clocks at 7nm and Smaller Nodes

by Daniel Payne on 10-04-2022 at 10:00 am


In the good old days the clock signal looked like a square wave and had a voltage swing of 5 volts. With 7nm technology, however, clock signals can look more like a sawtooth and may not actually reach the full Vdd value of 0.65V inside the core of a chip. I’ll cover some of the semiconductor market trends, and then the challenges of analyzing high-performance clocks at 7nm and smaller process nodes.

Market Trends

Foundries like TSMC, Samsung and Intel are offering 7nm technology to designers working on a wide array of SoC devices that are used in: AI, robotics, autonomous vehicles, avionics, medical electronics, data centers, 5G networks and mobile devices. These designs demand high integration in the billion transistor range, and low power to operate on batteries or within a strict power budget.

7nm Design Challenges

There are plenty of design challenges with advanced nodes, like:

  • Transistor aging effects
  • Higher design costs, in the range of $120-$420 million per 7nm design
  • Reduced design margins with lower Vdd levels
  • Power consumption rising with clock frequency
  • Process variation effects
  • Larger delay variations
  • Interconnect RC variation increases
  • Higher resistance interconnect causing signal distortions
  • Larger power transients from faster transistor switching times
  • Many more clocks with multi-voltage power domains
  • An increase in power density and chip temperatures related to switching
  • Dramatic increase in the DRC rule deck complexity

Aging Effects

As transistor devices switch on and off there are two main physical effects that impact the reliability:

  • Negative Bias Temperature Instability (NBTI)
  • Hot Carrier Injection (HCI)

Circuit designers know that these aging effects shift the Vt of devices, which in turn slows the rise and fall times of the clock signals. Over time these aging effects distort the duty cycle of the clock and can actually cause the clock circuitry to fail. Shown below are two charts where the clock insertion delay and duty cycle eventually fail due to aging. Increased clock jitter and rail-to-rail (R2R) violations also appear as aging effects.

Aging Clock
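A back-of-envelope sketch shows the mechanism behind those charts. It combines two textbook approximations: a power-law Vt drift for NBTI-style stress and an alpha-power-law gate delay. The coefficients are invented for illustration, not foundry aging data:

```python
# Toy model of how an NBTI-style Vt shift distorts a clock edge.

def vt_shift(t_years, A=0.03, n=0.16):
    """NBTI-like Vt drift (volts) after t_years of stress: dVt = A * t^n.
    A and n are invented illustrative values."""
    return A * (t_years ** n)

def stage_delay(vdd, vt, alpha=1.3, k=1e-12):
    """Alpha-power-law gate delay: grows as overdrive (Vdd - Vt) shrinks."""
    return k * vdd / (vdd - vt) ** alpha

VDD, VT0 = 0.65, 0.25
fresh = stage_delay(VDD, VT0)

# NBTI mainly stresses the PMOS pull-up, so primarily the rising edge
# slows down; the asymmetry between rise and fall is what distorts the
# duty cycle along a long clock buffer chain.
aged_rise = stage_delay(VDD, VT0 + vt_shift(5))  # after ~5 years
print(f"rise delay growth: {aged_rise / fresh:.2f}x")
```

Even a ~40 mV drift eats noticeably into the 0.4 V of overdrive available at 0.65 V Vdd, which is why aging matters so much more at these nodes than it did at 5 V.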

Static Timing Analysis (STA) 

For many years, EDA users have relied on STA tools; however, these tools make simplifying assumptions about aging by applying a blanket timing derate, instead of applying aging based on actual switching activity. The interconnect delay model in STA will miss duty cycle distortion errors in long signal nets caused by resistive shielding. An STA tool also doesn’t catch rail-to-rail failures directly, although it does measure insertion delays and slew rates. Jitter isn’t simulated as part of an STA tool, so the designer doesn’t know which areas have the highest noise and require fixing.
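The blanket-derate problem is easy to see with a toy comparison: a flat STA derate applies the same penalty everywhere, while activity-aware aging penalizes paths in proportion to their stress. The derate value, the linear stress model and the path numbers below are all invented for illustration:

```python
# Toy comparison: blanket STA derate vs activity-aware aging penalty.

BLANKET_DERATE = 1.10  # flat +10% applied to every path

def blanket_aged(delay_ps):
    """STA-style aged delay: same multiplier regardless of activity."""
    return delay_ps * BLANKET_DERATE

def activity_aged(delay_ps, duty_stress):
    """Invented linear model: up to +15% delay at full stress,
    where duty_stress in [0, 1] is the fraction of time under stress."""
    return delay_ps * (1.0 + 0.15 * duty_stress)

paths = {"idle_path": (500, 0.05), "busy_clock_path": (500, 0.95)}
for name, (delay, stress) in paths.items():
    print(f"{name}: blanket {blanket_aged(delay):.0f} ps, "
          f"activity-aware {activity_aged(delay, stress):.0f} ps")
# The blanket derate over-margins the lightly used path and
# under-margins the heavily stressed one.
```

That double error, wasted margin on quiet paths and optimism on busy ones, is the gap that activity-based aging analysis is meant to close.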

Overcoming Analysis Limitations

An ideal clock analysis methodology would provide SPICE-level accuracy of an entire clock domain, even with millions of devices. It would allow an engineer to measure R2R and jitter at every node along the entire clock path, both with and without aging. Multiple clocks could be analyzed across many process corners and Vdd combinations, working from within the current EDA tool flow, and produce results overnight.

Infinisim Approach

Infinisim is an EDA vendor that has focused on clock analysis, and their tool is called ClockEdge. Here are two analysis examples of clock domain rise slew rate, and clock domain aged insertion delay from their tool:

CAD developers at Infinisim figured out how to simulate the entire clock domain, producing full analog results with SPICE accuracy, allowing SoC teams to measure the clock duty cycle under aging, measure R2R, and even measure noise-induced jitter. The ClockEdge tool runs in a distributed fashion across multiple servers in order to produce results overnight. Analyses include:

  • Clock duty cycle degradation
  • Rail-to-rail failure detection
  • Aging effects
  • Jitter

ClockEdge really complements STA, so continue to use both tools, with ClockEdge becoming your clock sign-off tool. All of the device aging models are supplied by your foundry. As an example of ClockEdge performance, it has been run on a clock circuit with 4.5 million gates from a design containing billions of transistors; the trace required 4.5 hours and the simulation 12 hours of total time, running on 250 CPUs.

Summary

Designing an SoC at 7nm and smaller process nodes is a big task, requiring specialized knowledge of clock analysis to ensure first-pass silicon success. Adding a tool like ClockEdge to your EDA tool flow is a smart step to mitigate device aging and the other effects described above.

Related Blogs


CEVA Accelerates 5G Infrastructure Rollout with Industry’s First Baseband Platform IP for 5G RAN ASICs

by Kalar Rajendiran on 10-04-2022 at 6:00 am

PentaG RAN Massive MIMO Radio Platform

The 5G technology market is huge with incredible growth opportunities for various players within the ecosystem. As a leading cellular IP provider, CEVA has been staying on top of the opportunity by offering solutions that enable customers to bring differentiated products to the marketplace. Earlier this year, SemiWiki posted a blog about CEVA’s PentaG2 5G NR IP Platform, the PentaG2 being a follow-on offering to CEVA’s successful first generation PentaG IP Platform. The PentaG2 platform’s target is the 5G user equipment segment of the 5G market and PentaG2 has been enabling customers to develop products rapidly and cost effectively.

But what about the infrastructure segment of the 5G market opportunity? This segment is also huge and lucrative, and it is attracting a slew of established and new players seeking a slice of the opportunity pie. CEVA recently launched their PentaG-RAN Platform IP to address the infrastructure segment of the 5G market. This industry-first scalable and flexible platform combines powerful DSPs, 5G hardware accelerators and other specialized components required for optimizing modem processing chains. The PentaG-RAN platform also lowers the barriers to entry for new players who want to serve the Open-RAN base station and equipment markets.

If system companies had their druthers, they would implement an ASIC for optimal differentiation of their solutions. Of course, developing an ASIC has gotten challenging in many ways due to design complexities, supply chain disaggregation, costs, availability of technical talent, etc. And if you are a new player, you may not have seasoned in-house ASIC implementation teams to count on.

What if the PentaG-RAN Platform IP could help overcome the above challenges? Would system companies take the ASIC route and get freedom from the captivity of chipset suppliers? Would chipset suppliers take advantage of the platform to quickly implement product variants for different segments and customers? What if, along with the Platform IP, integration services were available to implement the ASIC? This is the backdrop and context for this blog about the PentaG-RAN announcement from CEVA.

5G RAN Market Opportunity and the Challenge

The infrastructure market covers base stations and radio configurations from small cells to Massive MIMO deployments. According to Gartner, the RAN semiconductor market is expected to grow from $5.5B in 2022 to $7.4B in 2026. Significant Open-RAN architecture penetration is expected around 2025, with Massive MIMO radio being the biggest volume opportunity. New OEMs attracted by the Open-RAN architecture are looking to replace cost- and power-inefficient FPGA-based and COTS-platform-based implementations. At the same time, they are intimidated just thinking of the ASIC development challenges, given that the PHY and radio domains are highly complex to begin with. The scarcity of design and architecture expertise for 5G baseband processing justifiably adds to this apprehension. On top of these concerns, the diverse workloads in the 5G physical layer require complex, heterogeneous L1 subsystems and optimal hardware/software partitioning.

CEVA’s PentaG-RAN Offering

PentaG-RAN is a heterogeneous baseband compute platform that provides a complete licensable L1 PHY solution with optimal hardware/software partitioning. It addresses the requirements of both the Radio end and the DU/baseband end of the 5G market. The PentaG-RAN platform can also be used as an add-on to COTS CPU-based solutions to run vRAN inline acceleration tasks.

The platform includes high-performance DSPs and 5G hardware accelerators and delivers up to 10x reduction in power and area compared to FPGA and COTS CPU-based alternative solutions.

The Figure below highlights the various CEVA IP blocks that make up a Massive MIMO Beamformer Tile.

The PentaG-RAN platform makes it easier for customers to implement a MIMO beamformer SoC by integrating the included baseband beamformer tile with their own front-haul design.

The Figure below shows how the platform supports the DU end for Small Cell and vRAN designs.

Productivity Tool/Virtual Platform Simulator

As with the PentaG2 platform, the PentaG-RAN deliverables include a System-C simulation environment for architecture definition, modeling, debugging and fast prototyping. The PentaG-RAN SoC simulator supports all CEVA IP and interfaces with MATLAB platform for algorithmic development. A PentaG-RAN based system can also be emulated on a FPGA platform for final verification.

To learn more details, visit the PentaG-RAN product page.

CEVA’s 5G Co-Creation Offering

Through its Intrinsix team, CEVA offers SoC design services for those customers who would like to customize the platform IP to build a highly differentiated SoC. The Intrinsix team is well versed in mapping customer use cases to the PentaG-RAN platform. The team can work on hardware architecture spec, solution dimensioning, interconnect definition, process node selection, and software architecture. In essence, customers can engage CEVA for full-service ASIC engagement from architecture to GDS.

PentaG-RAN Availability

PentaG-RAN will be available for general licensing in 4Q 2022.

Also Read:

5G for IoT Gets Closer

LIDAR-based SLAM, What’s New in Autonomous Navigation

Spatial Audio: Overcoming Its Unique Challenges to Provide A Complete Solution


Micron and Memory – Slamming on brakes after going off the cliff without skidmarks

Micron and Memory – Slamming on brakes after going off the cliff without skidmarks
by Robert Maire on 10-03-2022 at 10:00 am

Wiley Coyote Semiconductor Crash 2022 1

-Micron slams on the brakes of capacity & capex-
-But memory market is already over the cliff without skid marks
-It will likely take at least a year to sop up excess capacity
-Collateral impact on Samsung & others even more important

Micron hitting the brakes after memory market already impacts

Micron capped off an otherwise very good year with what appears to be a very bad outlook for the upcoming year. Micron reported revenues of $6.6B and EPS of $1.45 versus the street of $6.6B and $1.30.

However, the outlook for the next quarter, Q1 of 2023… not so much. Guidance of $4.25B ±$250M and EPS of 4 cents plus or minus 10 cents, versus street expectations of $5.6B and EPS of $0.64… a huge miss even after numbers had already been cut.

A good old fashioned down cycle

It looks like we will be having a good old fashioned down cycle, in which companies fall to at or below break-even and cut costs quickly to try to stave off red ink.

At least this is the case in the memory business, which is usually the first to see the down cycle and tends to suffer much more, as it is largely a commodity market; the result is a race to the bottom between competitors trying to win a bigger piece of a reduced pie.

Will foundries and logic follow memory down the rabbit hole?

While we don’t expect as negative a reaction on the logic side of the semi industry, reduced demand will impact pricing of foundry capacity and bring down lead times. There will certainly be a lot more price pressure on CPUs as competitive pricing will heat up quite a bit. TSMC will likely drop pricing to take back overflow business it let go and we will see second tier foundry players suffer more.
The simple reality is that if manufacturers are buying less memory, they are buying less of other semiconductor types; it’s just that simple.

Technology versus capacity spending

For many, many years we have said that there are two types of spend in the semiconductor industry. Technology spending keeps pushing down the Moore’s Law curve to stay competitive. Capacity spending, usually the larger of the two and concentrated in up cycles, puts the next generation of technology into high-volume production.

Micron is obviously cutting off all capacity-related spend and spending only on keeping up its lead in technology, which it can never stop doing given its competition with Samsung.

There is obviously some bricks-and-mortar spending on the new fab in Idaho that will continue, but the fab will only be filled with equipment and people when the down cycle in memory is over.

Micron did talk about announcing a second new fab in the US, but that is likely to be far behind the already-announced Boise fab and may never get built within the 5-year CHIPS for America window. The new Boise fab is 3-5 years away and will likely be on the slow side given the current down cycle.

Capex cut in half – We told you so, 3 months ago.

When you are in a hole, stop digging

We are surprised that everyone, including so-called analysts, is shocked by the capex cuts. It doesn’t take Elon Musk (a rocket scientist) to tell you to stop making more memory when there is a glut and prices have collapsed.
Maybe Micron’s comments about holding product off the market last quarter should have been a clue and gotten more people’s attention as a warning sign (it got our attention).

Back when Micron reported their last quarter, three months ago, we said, “We would not at all be surprised to see next year’s capex cut down to half or less of 2022’s.”

Our June 30th Micron note

In case some readers didn’t get the memo, we repeated our prediction of a 50% Micron capex cut a month ago: “Micron will likely cut capex in half and Intel has already announced a likely slowing of Ohio and other projects.”

Our August 30th note

Semi equipment companies more negatively impacted than Micron

When the semiconductor industry sneezes the equipment companies catch a cold

Obviously cutting Micron’s WFE capex in half is a big deal for the equipment companies, as their revenues can drop faster than their customers’.

While Micron cutting capex in half is a big deal, Samsung following suit with a capex cut would be a disaster. It’s not like it hasn’t happened before… a few years ago Samsung stopped spending for a few quarters virtually overnight.
We are certain Samsung will slow along with Micron; the only questions are how much, and whether they also slow the foundry side of the business.

Could China be the wild card in Memory?

While Micron, Samsung, and the other long-term memory makers have behaved more rationally in recent years and moderated their spend to reduce cyclicality, we are more concerned about new entrants, such as China, that want to gain market share. It’s unlikely that they will slow their feverish spending, as they are not yet full-fledged members of the memory cartel.
This will likely prolong the glut and extend the down cycle: even if the established memory makers slow, China will not.

Technology will help protect Micron in the down cycle

As long as Micron keeps up its technology and R&D spend to stay ahead of the pack, or at least with the pack, it will be fine in the longer run when we come out the other side of the down cycle.
Micron has a very long history of being a disciplined spender and very good at technology, and if it keeps that up it will be fine. We highly doubt they will do anything stupid.

The stocks

We see no reason to buy Micron any time soon at or near current levels.
As we have said recently, we would avoid value traps like the plague.
Semi equipment stocks should see a more negative reaction, as they are the ones to feel the impact of the capex cuts.

Lam Research (LRCX) is obviously the poster child among memory-industry equipment suppliers and is a big supplier to Micron and, more importantly, Samsung.
We also see no reason to go near Samsung, and Samsung may be a short, as investors may not fully understand its linkage to the weakness in the memory industry. Semiconductors are the lifeblood of Samsung, and memory is their wheelhouse whereas foundry is their foster child.

We warned people months ago “to buckle up, this could get ugly” and so it continues.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.
We have been covering the space longer and been involved with more transactions than any other financial professional in the space.
We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors.
We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

The Semiconductor Cycle Snowballs Down the Food Chain – Gravitational Cognizance

KLAC same triple threat headwinds Supply, Economy & China

LRCX – Great QTR and guide but gathering China storm


WEBINAR: Taking eFPGA Security to the Next Level

WEBINAR: Taking eFPGA Security to the Next Level
by Daniel Nenni on 10-03-2022 at 6:00 am

SemiWiki Flex Logix Intrinsic-ID Webinar

We have written about eFPGA for six years now, and about security even longer, so it is natural to combine these two very important topics. Last month we covered the partnership between Flex Logix and Intrinsic ID, and the related white paper. Both companies are SemiWiki partners, so we were able to provide more depth and color:

In the joint Flex Logix/Intrinsic ID solution, a cryptographic key derived from a chip-unique root key is used to encrypt and authenticate the bitstream of an eFPGA. If the chip is attacked or found in the field, the bitstream of the eFPGA cannot be altered, read, or copied to another chip. That is because the content is protected by a key that is never stored and therefore is invisible and unclonable by an attacker.

Nor is the concern about counterfeit chips being inserted into the supply chain valid any longer. Each QuiddiKey user can generate an unlimited number of chip-unique keys, enabling each user in the supply chain to derive their own chip-unique keys. Each user can protect their respective secrets, as their cryptographic keys will not be known to the manufacturer or to other supply-chain users.
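The per-user key derivation described above can be sketched with a standard keyed hash. This is an illustrative sketch only, not Intrinsic ID’s actual QuiddiKey API: the function names, the labels, and the choice of HMAC-SHA256 as the derivation function are assumptions.

```python
import hashlib
import hmac

def derive_user_key(root_key: bytes, user_label: bytes) -> bytes:
    """Derive a chip-unique key for one supply-chain user from the
    device root key. Different labels yield independent keys, so no
    user's key reveals anything about another user's key."""
    return hmac.new(root_key, user_label, hashlib.sha256).digest()

# Simulated chip-unique root key. On real silicon this is reconstructed
# from the SRAM PUF at each boot and is never stored in non-volatile
# memory, so there is nothing for an attacker to read out.
root_key = bytes(range(32))

k_fab  = derive_user_key(root_key, b"fab-test")
k_oem  = derive_user_key(root_key, b"oem-bitstream-encryption")
k_user = derive_user_key(root_key, b"field-update")

# Each party gets an independent 256-bit key from the same chip
assert len({k_fab, k_oem, k_user}) == 3
```

Because derivation is deterministic, each user can re-derive their own key on demand from the same chip, while the keys of other users remain computationally unrelated.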

To learn even more, we have a live webinar coming up where you can interact with the principals:

REGISTER HERE

Designers in 5G, networking, cloud storage, defense, smart home, automotive, and other markets are looking to embedded FPGAs (eFPGA) to save power and reduce cost. All these applications demand reconfigurability with lower power and cost, but they also require strong security.

  • Are you looking to integrate eFPGA into your devices and need a better understanding of how to secure your design?
  • Do you want to understand how to encrypt the eFPGA data, so it is so secure that it is not known to anyone (not even you)?
  • In that case, this is the webinar for you!

This webinar will teach you:

  • The benefits of eFPGA and how it reduces power and cost.
  • How to integrate eFPGAs into your design.
  • How to secure an SoC, and specifically how to secure the contents of the eFPGA using SRAM PUF technology.

SRAM PUFs create device-unique keys that are never stored on devices, that cannot be copied from one device to the next, and that are not known to anyone. Use of SRAM PUFs guarantees the data used to program the eFPGA can be trusted and that it cannot be reused on malicious or counterfeit devices, which makes them ideally suited for protecting eFPGAs in security-sensitive markets.
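The “never stored” property above can be illustrated with a toy enroll/reconstruct flow. This is a simplified sketch under assumed names, not a real product implementation: actual SRAM-PUF designs add error correction so that the naturally noisy startup readings still reconstruct exactly the same key, which this toy version omits.

```python
import hashlib
import hmac
import os

def enroll(sram_startup: bytes) -> tuple[bytes, bytes]:
    """One-time enrollment: produce public helper data and the root key.
    Only the helper data is stored on the device; the key itself never is."""
    helper = os.urandom(16)  # stand-in for fuzzy-extractor helper data
    key = hmac.new(helper, sram_startup, hashlib.sha256).digest()
    return helper, key

def reconstruct(sram_startup: bytes, helper: bytes) -> bytes:
    """Each boot: re-derive the key from the chip's own startup pattern.
    (Real designs error-correct the noisy SRAM reading first.)"""
    return hmac.new(helper, sram_startup, hashlib.sha256).digest()

chip_a = b"\x5a\x3c" * 16   # simulated device-unique SRAM startup values
chip_b = b"\xa5\xc3" * 16   # a different chip powers up differently

helper, key_a = enroll(chip_a)

# The same chip reconstructs the same key at every boot; a cloned design
# on different silicon cannot, because its SRAM startup pattern differs.
assert reconstruct(chip_a, helper) == key_a
assert reconstruct(chip_b, helper) != key_a
```

This is why a bitstream encrypted under such a key is bound to one physical device: copying the bitstream and the helper data to another chip still yields the wrong key.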

Speakers:

Ralph Grundler is Senior Director of Marketing at Flex Logix, the leading supplier of eFPGA technology. An experienced business development professional, Ralph has a long history in the development and marketing of semiconductors, IP, SoCs, FPGAs, and embedded systems. He has done many videos and live presentations on a wide variety of technical subjects. He has 30 years of computer and semiconductor industry experience.

Vincent van der Leest is Director of Marketing at Intrinsic ID, the leading supplier of security based on SRAM PUF technology. He started at Intrinsic ID 13 years ago working on the research into the company’s core SRAM PUF technology, after which he spent many years in business development and marketing roles when the company started growing.

REGISTER HERE

I hope to see you there!

About Flex Logix
Flex Logix is a reconfigurable computing company providing AI inference and eFPGA solutions based on software, systems and silicon. Its InferX X1 is the industry’s most efficient AI edge inference accelerator and will bring AI to the masses in high-volume applications by providing much higher inference throughput per dollar and per watt. Flex Logix eFPGA enables volume FPGA users to integrate the FPGA into their companion SoC, resulting in a 5-10x reduction in the cost and power of the FPGA and increasing compute density, which is critical for communications, networking, data centers, and other applications. Flex Logix is headquartered in Mountain View, California and has offices in Austin, Texas and Vancouver, Canada. For more information, visit https://flex-logix.com.

About Intrinsic ID
Intrinsic ID is the world’s leading provider of security IP for embedded systems based on PUF technology. The technology provides an additional level of hardware security utilizing the inherent uniqueness in each and every silicon chip. The IP can be delivered in hardware or software and can be applied easily to almost any chip – from tiny microcontrollers to high-performance FPGAs – and at any stage of a product’s lifecycle. It is used as a hardware root of trust to protect sensitive military and government data and systems, validate payment systems, secure connectivity, and authenticate sensors. For more information, visit https://www.intrinsic-id.com/.


Super Cruise Saves OnStar, Industry

Super Cruise Saves OnStar, Industry
by Roger C. Lanctot on 10-02-2022 at 6:00 pm

Super Cruise Saves OnStar Industry

Listen in on any automotive podcast, earnings call, or attend any automotive industry event and you will hear about “software defined” cars and “service oriented architectures.” This euphemistic terminology obscures the reality that cars in most major markets are almost universally connected – even if the owners of those cars are not fully invested in the concept of “connectivity.”

The automotive industry is still languishing in a subscription-adjacent mindset with a customer base that remains largely skeptical of subscription-based models. By and large, consumers want to pay a single price for their automobiles and don’t yet fully perceive the need to pay a separate fee for vehicle connectivity.

This is not to say that all new car buyers and owners refuse to pay the $10-$30/month for a typical telematics service package. Many do – enough, in fact, to make connectivity platforms reasonably profitable or minimally loss-inducing.

What the industry needs, though, is a transformation. General Motors, the originator of OnStar vehicle connectivity a quarter century ago, is pointing the way.

Traditional telematics services such as automatic crash notification, stolen vehicle tracking and recovery, and remote diagnostics seem to have faded in importance. Intrusion detection and over-the-air software updates, meanwhile, have not yet captured consumers’ imaginations.

What has caught the attention of consumers is GM’s Super Cruise semi-automated driving solution. Already embedded in 40,000 GM vehicles currently in circulation, Super Cruise is slated for deployment in 22 GM car models by the end of 2023.

The key to Super Cruise’s industry impact is that it requires an OnStar subscription. If consumers want access to GM’s sexiest driving enhancement in the history of the company, they will have to pay a monthly fee.

It doesn’t matter that the fee covers the expense of connectivity necessary for enhancing situational awareness and positioning accuracy. It doesn’t matter that there are multiple layers of enabling technology providing redundancy and ensuring reliability.

The Super Cruise user can take their hands off the wheel as long as they are paying attention to the driving task. In fact, the driver monitoring system opens the door to driver identification and credentialing, which can be used to support and enhance other connected driving tasks – and access to services.

Super Cruise is the long-sought “killer app” that is already changing the perception of vehicle connectivity. Super Cruise is a gateway to the broader deployment, adoption, and acceptance of software updates. Super Cruise gives car connectivity a reason to exist.

Dealers now have a story they can tell around car connectivity that makes sense to the customer. In fact, dealers have a powerful motivation to demonstrate the technology in action and ensure that the new car buyer is properly provisioned with cellular service before they leave the lot.

Four years into the launch of Super Cruise, GM has yet to experience a high-profile failure of the technology in operation. And unlike Tesla’s Autopilot and Full Self-Driving beta, there is no flood of YouTube videos highlighting its shortcomings.

(I will note here that Tesla’s $10/month connectivity is itself clearly subsidized and discounted due to the value Tesla is extracting from the data it is gleaning from its connected cars – hundreds of millions of dollars in value. Competing auto makers can be expected (or should be expected) to offer a similarly discounted connectivity proposition.)

There have been no National Highway Traffic Safety Administration investigations of Super Cruise. And there have been no fatal crashes reported.

Most telling of all, though, is the fact that GM has begun advertising Super Cruise. Super Cruise is rapidly becoming a brand-defining application with broad consumer appeal and growing consumer awareness.

In sum, Super Cruise has come to the rescue of OnStar’s original mission of promoting the concept of vehicle connectivity. Super Cruise has single-handedly solved the business model of subscription-based vehicle ownership and sped the adoption of over-the-air software updates.

Also Read:

Arm and Arteris Partner on Automotive

The Truly Terrifying Truth about Tesla

GM Should BrightDrop-Kick Cruise

Ultra-efficient heterogeneous SoCs for Level 5 self-driving