
Webinar: The Data Revolution of Semiconductor Production

by Daniel Nenni on 02-27-2023 at 6:00 am


How Advancements in Technology Unlock New Insights

The demand for efficient and scalable chip production has never been greater. The need to scale at volume and adapt to shorter innovation cycles makes machine learning and advanced data analytics essential components of semiconductor production.

Join us on Tuesday, March 7, 2023, for this 1-hour panel discussion with industry experts from Qualcomm, Microsoft Azure and Advantest as we discuss how data analytics and product insights can accelerate time to market and improve performance, yield and quality.

Topics covered: 

  • Overcoming data silos
  • Processing massive amounts of data
  • Using machine learning and advanced data analytics
  • Uncovering data insights and actionable intelligence

The panel will be moderated by Nitza Basoco, VP of Business Development at proteanTecs. Nitza has a broad background in management, test development, product engineering, supply chain and operations. In her current role, she focuses on partnership strategies and ecosystem growth, positioning proteanTecs as the common data language for the full value chain. Before joining proteanTecs, she was VP of Operations at Synaptics and held engineering and leadership positions at Teradyne, Broadcom and MaxLinear. Nitza earned a BSEE and MEEE from MIT.

Register now and save your seat.

Meet Our Panelists

Michael Campbell is Senior Vice President of Engineering for Qualcomm CDMA Technologies, responsible for product and test, failure analysis, test automation and yield. Mike joined Qualcomm in 1996 and has led multiple teams. In his current role, he is working to streamline all processes impacting time-to-market, enable new process nodes, and revolutionize product test engineering (PTE) tasks by driving machine learning as a 21st-century requirement. Prior to joining Qualcomm, Mike worked at Mostek, INMOS and Honeywell. He holds a BSEE/CE from Clarkson University.

Preeth Chengappa is Head of Industry for the EDA and Semiconductor segment at Azure. Since joining Microsoft in 2018, he has worked with customers and partners across the semiconductor ecosystem to leverage cloud capabilities for all aspects of design, manufacturing, testing and lifecycle management. Preeth co-founded SiCAD in 2011, a startup that pioneered the use of cloud computing for chip design. Previously, Preeth held business development and sales management roles at Xilinx, Altran and Falcon Computing. He holds a BS in engineering from NITK, India.

Ira Leventhal is the Vice President of Applied Research & Technology at Advantest America, Inc. He has over 25 years of experience in semiconductor testing, including memory, SoC, wireless device, and system-level test. Ira has led the design and development of multiple generations of ATE systems, and holds patents in a variety of test-related technologies. In his current role, Ira is focusing on how artificial intelligence, cloud, and data analytics technologies can be catalysts for major advances in semiconductor test products and methodologies. Ira is a BSEE graduate of MIT.

Register now and save your seat.

Post-Webinar Q&A: The Data Revolution of Semiconductor Production

About proteanTecs

proteanTecs is the leading provider of deep data analytics for advanced electronics monitoring. Trusted by global leaders in the datacenter, automotive, communications and mobile markets, the company provides system health and performance monitoring, from production to the field.  By applying machine learning to novel data created by on-chip monitors, the company’s deep data analytics solutions deliver unparalleled visibility and actionable insights—leading to new levels of quality and reliability. Founded in 2017 and backed by world-leading investors, the company is headquartered in Israel and has offices in the United States, India and Taiwan. For more information, visit www.proteanTecs.com.

Also Read:

The Era of Chiplets and Heterogeneous Integration: Challenges and Emerging Solutions to Support 2.5D and 3D Advanced Packaging

proteanTecs Technology Helps GUC Characterize Its GLink™ High-Speed Interface

Elevating Production Testing with proteanTecs and Advantest’s ACS Edge™ Platforms

How Deep Data Analytics Accelerates SoC Product Development

CEO Interview: Shai Cohen of proteanTecs


Podcast EP145: How Achronix Drives Industry Innovation with Robert Blake

by Daniel Nenni on 02-24-2023 at 10:00 am

Dan is joined by Robert Blake, Chief Executive Officer of Achronix Semiconductor. He has worked in the semiconductor industry for over 25 years. Prior to Achronix he was the Chief Executive Officer of Octasic Semiconductor based in Montreal, Canada. Robert also worked at Altera, LSI Logic and Fairchild.

Robert explains how Achronix helps their customers innovate, both with dedicated FPGA products and embedded FPGA IP. The embedded FPGA IP has been used to manufacture over 15 million cores.

Areas of focus for Achronix to drive innovation include computation efficiency, data transport, connectivity and interfaces. The current environment that is trending toward heterogeneous compute is also discussed, along with a future assessment of the industry and Achronix.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Maven Silicon’s RISC-V Processor IP Verification Flow

by Sivakumar PR on 02-24-2023 at 6:00 am


RISC-V is a general-purpose, license-free, open Instruction Set Architecture [ISA] with multiple extensions. The ISA is separated into a small base integer ISA, usable as a base for customized accelerators, plus optional standard extensions to support general-purpose software development. RISC-V supports both 32-bit and 64-bit address space variants for applications, operating system kernels, and hardware implementations, so it is suitable for all computing systems, from embedded microcontrollers to cloud servers.
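As a concrete illustration of how compact the base integer ISA is, the Python sketch below decodes the fields of a 32-bit RV32I I-type instruction (ADDI). The field positions follow the published RISC-V encoding; the helper name and the example operands are our own.

```python
def decode_itype(insn: int) -> dict:
    """Decode a 32-bit RV32I I-type instruction (e.g. ADDI) into its fields."""
    imm = insn >> 20                      # bits [31:20], 12-bit immediate
    if imm & 0x800:                       # sign-extend the immediate
        imm -= 0x1000
    return {
        "opcode": insn & 0x7F,            # bits [6:0]
        "rd":     (insn >> 7) & 0x1F,     # bits [11:7], destination register
        "funct3": (insn >> 12) & 0x7,     # bits [14:12]
        "rs1":    (insn >> 15) & 0x1F,    # bits [19:15], source register
        "imm":    imm,
    }

# Encode ADDI x1, x2, -5 by hand: imm=0xFFB (-5), rs1=2, funct3=0, rd=1, opcode=0x13
insn = (0xFFB << 20) | (2 << 15) | (0 << 12) | (1 << 7) | 0x13
fields = decode_itype(insn)
```

The same bit-slicing pattern extends to the other five base instruction formats, which is one reason RISC-V decoders stay small.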

To learn more about RISC-V, refer to the SemiWiki article: Is your Career at Risk without RISC-V?

In this open era of computing, RISC-V community members are eager to create various kinds of RISC processors using the open RISC-V ISA. However, the risk of adopting RISC-V is higher because proven processor verification flows remain the closely guarded property of established fabless processor IP companies and IDMs. So, how can we make the RISC-V verification flow open and empower the RISC-V community?

If you have been wondering, ‘How can I verify my RISC-V processor efficiently without risking time to market [TTM]?’, then you should explore and adopt Maven Silicon’s RISC-V verification flow for your processor IP verification, explained in this technical paper.

I have defined Maven Silicon’s RISC-V verification flow using the correct-by-construction approach. The approach is to build a library of pre-verified, synthesizable RISC-V fundamental IP building blocks and create any kind of multi-stage pipeline RISC-V processor using this library. Finally, the multi-stage pipeline RISC-V processor IP can be verified using Constrained Random Coverage Driven Verification [CRCDV] in the Universal Verification Methodology [UVM] and FPGA prototyping.

Let me explain the verification strategy and walk you through Maven Silicon’s RISC-V verification flow.

2. Verification Strategy

2.1 Block-Level Verification: Generate RISC-V IP Fundamental Blocks Library using formal verification

Verify all the RISC-V IP fundamental building blocks, such as the ALU, decoder, program counter, registers, and instruction and data memories, using formal verification. Design Engineers [DE] who write the RTL code of the RISC-V IP building blocks can embed assertions [SVA/PSL] into the RTL modules. Verification Engineers [VE] will verify the RISC-V RTL IP blocks using a formal verification EDA tool. DEs can synthesize the verified RTL blocks and fix all synthesis-related issues.

Finally, VEs can further verify those synthesizable RTL blocks and create the RISC-V IP Blocks Library.
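To make the block-level idea concrete, here is a minimal Python sketch standing in for the SVA/PSL and formal-tool checks described above: a tiny ALU block is checked exhaustively against a golden reference model. The 8-bit width and the opcode map are illustrative assumptions, not Maven Silicon's actual library blocks.

```python
MASK = 0xFF  # illustrative 8-bit ALU datapath

def alu_rtl(op: int, a: int, b: int) -> int:
    """Stand-in for the RTL block under verification."""
    if op == 0: return (a + b) & MASK
    if op == 1: return (a - b) & MASK
    if op == 2: return a & b
    if op == 3: return a | b
    raise ValueError("unknown opcode")

def alu_golden(op: int, a: int, b: int) -> int:
    """Golden reference model used by the checker."""
    return {0: a + b, 1: a - b, 2: a & b, 3: a | b}[op] & MASK

# Exhaustive check: every opcode over every operand pair, like a formal proof
# of equivalence would cover (feasible here because the block is tiny)
mismatches = [
    (op, a, b)
    for op in range(4)
    for a in range(256)
    for b in range(256)
    if alu_rtl(op, a, b) != alu_golden(op, a, b)
]
```

For real blocks the state space is too large to enumerate, which is exactly why formal tools prove the equivalent properties symbolically instead.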

2.2 IP-Level Verification: Verify RISC-V IP which is built using pre-verified RISC-V IP Blocks Library using Constrained Random Coverage Driven Verification [CRCDV]

DEs can realize any kind of multi-stage pipeline RISC-V processor using the pre-verified RISC-V IP fundamental blocks library. VEs will create the verification environment using UVM and verify the RISC-V multi-stage pipeline processor using CRCDV.

VEs will also create necessary reference models, interface assertions for protocol validation, and functional coverage models. Finally, VEs will sign off the IP level regression testing based on the coverage closure [Code + Functional Coverage].
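The essence of CRCDV can also be sketched outside UVM. The hedged Python example below drives weighted constrained-random transactions, records functional coverage bins, and stops only at coverage closure; the bin definitions, weights, and operand ranges are illustrative assumptions, not the actual coverage model.

```python
import random

random.seed(7)  # reproducible constrained-random run

# Functional coverage model: instruction class x operand sign (illustrative bins)
bins = {(cls, sign) for cls in ("alu", "load", "store", "branch")
                    for sign in ("pos", "neg")}
hit = set()

def random_txn():
    """Constrained-random transaction: weighted instruction class, random operand."""
    cls = random.choices(["alu", "load", "store", "branch"],
                         weights=[4, 2, 2, 1])[0]
    operand = random.randint(-128, 127)
    return cls, ("neg" if operand < 0 else "pos")

txns = 0
while hit != bins:          # run stimulus until functional coverage closure
    hit.add(random_txn())
    txns += 1

coverage = 100.0 * len(hit) / len(bins)
```

The regression sign-off described above is this same loop at scale, with code coverage collected alongside the functional coverage model.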

The verified RISC-V processor IP can be verified further by booting OS using FPGA Prototyping if the IP implements a general-purpose processor that supports a standard Unix-like OS.

3. Maven Silicon’s RISC-V Processor IP Verification Flow

As shown in Figure 1, Maven Silicon’s RISC-V verification flow implements the verification strategy explained above.

Figure 1: Maven Silicon’s RISC-V IP Verification Flow

4. RISC-V IP Verification using UVM

At the IP level, VEs can create the verification environment using the Universal Verification Methodology, as shown in Figure 2.

As our RISC-V IP RTL design uses an AHB interface, we have modeled the instruction and data memories as AHB slave UVM agents, and the RISC-V processor reference model as an AHB master UVM agent. The complete UVM environment, with all the testbench components (scoreboard, subscribers with coverage models, and reset, interrupt, and RAL UVM agents), was validated by connecting the RISC-V AHB agents back-to-back, especially to verify the testbench dataflow and coverage generation. Once the verification environment became stable, one of the reference models was replaced by the RTL. UVM RAL was used extensively to sample the RISC-V IP registers and memories for data comparison in the scoreboard.
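The scoreboard's role in such an environment, reduced to a minimal Python sketch (the transaction fields and the way agents feed it are illustrative, not the actual UVM code): queue the reference model's predictions and compare each DUT observation against them in order.

```python
from collections import deque

class Scoreboard:
    """Minimal scoreboard: reference-model predictions vs. DUT observations."""
    def __init__(self):
        self.expected = deque()
        self.matches = 0
        self.mismatches = 0

    def write_ref(self, txn):          # called by the reference-model agent
        self.expected.append(txn)

    def write_dut(self, txn):          # called by the DUT monitor
        want = self.expected.popleft() # in-order compare, as on a single AHB port
        if txn == want:
            self.matches += 1
        else:
            self.mismatches += 1

# Illustrative transactions: (destination register, value written)
sb = Scoreboard()
for txn in [("x1", 5), ("x2", -3)]:
    sb.write_ref(txn)
sb.write_dut(("x1", 5))                # matches the prediction
sb.write_dut(("x2", 7))                # mismatch against expected ("x2", -3)
```

In the real environment the RAL-sampled register and memory values play the role of these tuples, and a mismatch fails the regression.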

To understand how this verification environment works, refer to this demo video:

Maven Silicon RISC-V UVM Verification Environment

Figure 2: Maven Silicon’s RISC-V IP UVM Verification Environment

To learn more about various verification methodologies, such as formal verification; IP, sub-system and SoC verification flows; and CRCDV using code and functional coverage, refer to:

Semiwiki Article: SoC Verification Flow and Methodologies

One can also consider integrating Google’s Instruction Stream Generator [ISG] for stimulus generation, and an open-source Instruction Set Simulator [ISS] like Spike as a reference model, into the UVM environment to do exhaustive verification efficiently.

Conclusion

Efficient, high-quality RISC-V IP verification can be realized only through an effective combination of verification methodologies (formal verification, CRCDV using UVM, and OS booting using FPGA prototyping) and reuse (a pre-verified RISC-V blocks library and scalable IP-level UVM testbenches). As RISC-V is an open ISA, we can create a reusable, pre-verified library of RISC-V fundamental blocks and contribute it to RISC-V International as an open-source RISC-V library. Using this pre-verified library, RISC-V community members can create any kind of multi-stage pipeline RISC-V processor they prefer and verify it per the flow explained in this technical paper. Isn’t that a more efficient way of verifying your RISC-V processor without risking your TTM?

Download Maven Silicon’s RISC-V Processor IP Verification Flow PDF 

About Maven Silicon
Maven Silicon is a trusted VLSI training partner that helps organizations worldwide build and scale their VLSI teams. We provide outcome-based VLSI training across a variety of learning tracks, i.e., RTL Design, ASIC Verification, DFT, Physical Design, RISC-V, and ARM, delivered through our cloud-based customized training solutions. To know more about us, visit our website.

Also Read:

Is your career at RISK without RISC-V?

SoC Verification Flow and Methodologies

Experts Talk: RISC-V CEO Calista Redmond and Maven Silicon CEO Sivakumar P R on RISC-V Open Era of Computing


Keysight Expands EDA Software Portfolio with Cliosoft Acquisition

by Daniel Nenni on 02-23-2023 at 8:00 am


During the day I do M&A work inside the semiconductor ecosystem, and I have been part of more than a dozen acquisitions during my career, so I know a good one when I see it. With Keysight and Cliosoft, I see a great one, absolutely.

Cliosoft came to SemiWiki 12 years ago, when we first went online, so I know them quite well. With more than 400 customers, the depth of experience that comes with this company is incredible. Additionally, Cliosoft has always been vendor agnostic, working closely with the top three EDA companies, which made the acquisition even more interesting. With Keysight, there will be even deeper partnerships with the top EDA companies through the expanded flow integration of Cliosoft (SOS, HUB, VDD) and Keysight PathWave Advanced Design System. Had one of the other EDA companies acquired Cliosoft, that would not have been the case.

Srinath Anantharaman, Chief Executive Officer of Cliosoft, said: “Handling exponential growth in design data and maximizing IP reuse with interoperability across EDA vendor environments is a major challenge as we approach the time of ‘More than Moore’s law’. Keysight’s broad industry leadership in applications like 5G and 6G communications, automotive, and aerospace and defense, makes Keysight uniquely positioned to realize the promise of connecting design, emulation, and test data in streamlined workflows that speed time-to-market. We are excited to join Keysight in raising engineering productivity to the next level and enabling our customers to digitally transform their development lifecycles and meet the challenges ahead.”

Keysight came to SemiWiki last year and we have written about their tools in great detail. We also did a podcast with Niels Faché, Vice President and General Manager of Keysight EDA. For Keysight, Cliosoft brings strength to the Process and Data Management (PDM) side of the business which is a critical link between the physical systems integration and physical testing to the Digital Twin (design and simulation) side of the business. The result being improved automation and traceability for product implementation and production.

Niels Faché, Vice President and General Manager of Keysight EDA, said: “One of our top business priorities is creating digital, connected workflows from design to test that accelerate customers’ digital transformation. We see a tremendous opportunity in the PDM space to leverage Cliosoft’s current capabilities combined with our design-test solutions expertise. Adding PDM solutions to the portfolio is a natural progression of our open EDA interoperability strategy to deliver best-in-class tools and workflows in support of increasingly complicated product development lifecycles. Cliosoft offers proven software tools that enable product teams to perform data analytics and accelerate time to insight. The result of faster insight and greater reuse is improved productivity in the verification phase and shorter overall development cycles.”

Bottom line: This is one to watch. Cliosoft was already a market leader, and now it has the Keysight breadth of experience and the strength of a worldwide field sales and support channel. This is definitely one of the 1+1=3 acquisitions.

About Keysight Technologies
Keysight delivers advanced design and validation solutions that help accelerate innovation to connect and secure the world. Keysight’s dedication to speed and precision extends to software-driven insights and analytics that bring tomorrow’s technology products to market faster across the development lifecycle, in design simulation, prototype validation, automated software testing, manufacturing analysis, and network performance optimization and visibility in enterprise, service provider and cloud environments. Our customers span the worldwide communications and industrial ecosystems, aerospace and defense, automotive, energy, semiconductor and general electronics markets. Keysight generated revenues of $5.4B in fiscal year 2022. For more information about Keysight Technologies (NYSE: KEYS), visit us at www.keysight.com.

Additional information about Keysight Technologies is available in the newsroom at https://www.keysight.com/go/news and on Facebook, LinkedIn, Twitter and YouTube.

Also Read:

Cliosoft’s Smart Storage Strategy for Better Workspace Management

Designing a ColdADC ASIC For Detecting Neutrinos

Big plans for state-of-the-art RF and microwave EDA

Higher-order QAM and smarter workflows in VSA 2023


Physically Aware NoC Design Arrives With a Big Claim

by Bernard Murphy on 02-23-2023 at 6:00 am


I wrote last month about physically aware NoC design, so you shouldn’t be surprised that Arteris is now offering exactly that capability 😊. First, a quick recap on why physical awareness is important, especially below 16nm. Today, between the top level and subsystems, a state-of-the-art SoC may contain anywhere from five to twenty NoCs, contributing 10-12% of silicon area. Interconnect must thread throughout the floor plan, adding to the complexity of meeting PPA goals in physical design. This requires a balancing act between NoC architecture, logical design and physical design, a task which until now has forced manual iteration, materially extending the implementation schedule. That iteration is a time sink that physically aware NoC design aims to cut dramatically.

(Source: Arteris, Inc.)

The Traditional Interconnect Design Flow

The early stages of NoC design are architecture-centric, figuring out how typical software use cases can best be supported. The network topology should be optimized around quality of service for high-traffic paths while allowing more flexibility for lower frequency and relatively latency-insensitive connections. Topology choices are also influenced by early floor plan estimates, packaging constraints, power management architecture, and safety and security considerations. Architects and NoC design teams iterate between an estimated floor plan and interconnect topology to converge on an approximate architecture. This planning phase consumes time (14-35 days in a production design example illustrated here) that is essential to approximately align architecture, logic and physical constraints.

Then the hard work starts to refine approximations. System verification and logic verification are run to validate bandwidth and latency targets. Synthesis, place and route and timing analysis probe where the design must be further optimized to meet power, area and timing goals. At this stage, designers are working with a real physical floor plan for all the function IP components, into which NoC IPs and connections must fit. Inevitably, early approximations prove not to be quite right, the physical design team can’t meet the timing goals, and the process must iterate. Physical constraints may need to be tuned and pipelines added, driven by informed guesswork. More iterations follow further refining estimates and ultimately leading to closure. Sometimes fixing a problem may require a restart to refactor the NoC topology, putting the whole project at risk.

Those iterations take time. In one production design, closing NoC timing took 10 weeks! Experienced design teams limit iterations by over-provisioning pipelining and by using LVT libraries along timing-critical paths, accepting tradeoffs in area, latency and power in exchange for faster convergence. Per iteration, considering and implementing those decisions, then rerunning the implementation flow and rechecking timing, can altogether take weeks.

Physically Aware NoC Design With FlexNoC 5

That trial-and-error iteration is a consequence of approximations and assumptions in architectural design. That was understandable, but now a much better approach is possible. After the very first pass through implementation, a more accurate floor plan is available. Maybe the NoC topology needs to be tweaked against the floor plan, but now the flow is working with real physical and timing constraints. The NoC generator has enough information to generate more accurate timing and physical constraints for P&R. It can also insert the right level of pipelining, exactly where it is needed, automatically delivering a NoC topology which will close with confidence in just one more implementation pass.

(Source: Arteris, Inc.)

This is what Arteris is claiming for their new generation FlexNoC 5 solution – that a first iteration based on an implementation grade floor plan will provide enough information for FlexNoC 5 to automatically generate timing and physical constraints, together with pipelining, to meet closure on the next pass. Based on the customer design example described earlier, they assert that this automation delivers up to a 5X speedup in iterating to NoC physical closure.

That’s a big jump in productivity and one that seems intuitively reasonable to me. I talked in my earlier blog about the whole process being an optimization problem with all the challenges of such problems. A trial-and-error search for a good starting point suffers from long cycle times through implementation, to determine if a trial missed the mark and how to best adjust for the next trial. More automation in NoC generation based on more accurate estimates should find an effective starting point faster.

Proving the Claim

Arteris cites measured improvements in NoC physical implementation using a FlexNoC 5 flow, in each case using the same number of engineers with and without the new technology. For pipeline insertion, they were able to reduce iterations from 10 to 1, within the same time per iteration. In constraint (re-) definition they were able to reduce the elapsed time from 3 days to 1 day and turns on constraints from 3 to 1. In place and route, they were able to reduce 5 or more iterations to 1 iteration within the same or better iteration time. Overall, 10X faster in pipeline insertion, 9X faster in constraint (re-)definition and 5X faster in P&R.
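Those overall speedups follow directly from iteration count times time per iteration; a quick arithmetic check using the numbers quoted above (the formula is our own framing of the cited figures):

```python
def speedup(old_iters, new_iters, old_time=1.0, new_time=1.0):
    """Overall speedup = (iterations x time per iteration), before vs. after."""
    return (old_iters * old_time) / (new_iters * new_time)

pipeline   = speedup(10, 1)            # 10 -> 1 iterations, same time each: 10X
constraint = speedup(3, 1, 3.0, 1.0)   # 3 -> 1 turns and 3 days -> 1 day each: 9X
pnr        = speedup(5, 1)             # 5+ -> 1 P&R iterations: at least 5X
```

The 9X constraint figure is only reached because both the number of turns and the elapsed time per turn shrink; either alone would give 3X.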

I should add that a customer in this case (Sondrel) has significant experience in ASIC design and has been working with NoC technology for many years. These are experienced SoC designers who know how to optimize their manual flow yet agree that the new automated flow is delivering better results faster.

You can learn more about FlexNoC 5 HERE and about Arteris HERE.


Safe Driving: FSD vs. V2X

by Roger C. Lanctot on 02-22-2023 at 10:00 am


When it comes to competing with Elon Musk’s Tesla, the automotive industry is, in many ways, its own worst enemy. For more than two decades the auto industry – in the U.S. and E.U. – has struggled to bring vehicle-to-vehicle communication technology to market – first as dedicated short range communications (DSRC) and then (actually now) as cellular-based C-V2X.

Auto makers created working groups and standards committees to draw up the specifications and identify the applications along with conducting extensive testing. The industry turned to the U.S. and E.U. governments seeking mandates for the emerging technology that was intended to save lives by helping cars avoid collisions.

Musk steered clear of V2X technologies of any variety turning, instead, to radar and camera sensors to guide his vehicles with Autopilot and, now, full self driving (FSD). In the U.S., the U.S. Department of Transportation failed to initiate the final rule making that would have mandated V2X technology. In Europe, opposition to DSRC technology steadily mounted and ultimately blocked a mandate there as well.

The reason for the failure to launch DSRC had to do with the evolution of cellular networks – through 2G, 3G, 4G/LTE, and, now, 5G – and the onset of lower cost and higher resolution cameras. With superior connectivity and cameras – mainly cameras – advanced driver assist systems proliferated along with adaptive cruise control and parking assist functions. In essence, active safety technologies advanced while the V2X vision languished in the file cabinets and on the dockets of regulators and legislators.

After more than 20 years of testing and arguing, V2X remains a dream for automotive engineers to ponder. The C-V2X variety is still awaiting final rule making from – now – the Federal Communications Commission. In fact, the FCC has failed to fast track long-promised waivers for car makers and state transportation authorities to deploy V2X technology. Final rule making lies somewhere further down the road.

This is a shame as V2X technology was designed to enable collision avoidance technologies ranging from identifying vulnerable road users (pedestrians, bicyclists, emergency responders, workers), to synchronizing with traffic signals, managing intersection interactions, avoiding collisions, and a host of more esoteric use cases such as identifying black ice or seeing around corners. V2X was intended to be the cornerstone of vehicle safety by connecting vehicles to other vehicles and infrastructure with low latency communications.

Meanwhile, Tesla has focused nearly its entire enhanced driving value proposition on cameras and the safer driving they enable. Tesla’s Musk long ago dismissed vehicle-to-vehicle communications (the topic rarely comes up any more) along with hydrogen, LiDAR, and even radar – though he appears to be rethinking radar.

Tesla’s latest and most impressive advances using camera technology have leveraged its full self driving (FSD) technology – for which Tesla owners must pay thousands of dollars – to not only identify signalized intersections but also identify the signal phase and timing of the lights. Teslas equipped with FSD – properly activated and with a vigilant driver – are able to identify red lights and come to a full stop without assistance, then restart and proceed on green. (Multiple-lane left turns from traffic lights can still be a challenge, as are poorly marked roads.)

Car makers in the U.S. and Europe poured hundreds of millions of dollars into V2X development that has yet to save a single life or convey a single cent of added value to any car. The “legacy” auto industry is staring into an abyss of wasted time, money, and effort to bring a life-saving technology to market – a diversion fueled by the delusion of regulatory endorsement.

To be sure, V2X was a big bet – a huge vision. That vision originally foresaw a massive infrastructure build out dedicated to supporting driving safety and likely to cost “someone” (taxpayers?) billions of dollars. What has emerged from 20+ years of effort is a self-contained technology using existing cellular hardware but not dependent upon the cellular network for its direct communications.

The V2X dream (nightmare?) isn’t over. The FCC is expected to approve waivers for car makers and transportation authorities any day. A final rule will come later. But the decades-long debacle is a lesson in how not to advance driving safety.

The Tesla approach is the classic market-based model. The New Car Assessment Program (NCAP) is yet another market-based model based on conferring or withholding five-star ratings. The USDOT’s NHTSA has come to rely on voluntary adoption of guidelines – as in the case of automatic emergency braking – since the rule-making process appears to have ground to a halt.

The Infrastructure Bill passed last year in the U.S. has multiple safety mandates, but enforcement and adoption protocols are ambiguous. If we have learned nothing else from the V2X project it is that the automotive industry needs to find a new path forward for developing, defining, and deploying safety systems – especially but not exclusively in the U.S.

In the absence of effective safety leadership from USDOT/NHTSA, Tesla has emerged as the voice of reason. That can’t be good. As impressive as FSD is, it is still just a Level 2 semi-automated driving system that is both amazing and terrifying. We need to find a path forward that emphasizes the amazing and dials back the terrifying.

Also Read:

ASIL B Certification on an Industry-Class Root of Trust IP

Privacy? What Privacy?

ATSC 3.0: Sleeper Hit of CES 2023


Bleak Year for Semiconductors

by Bill Jewell on 02-22-2023 at 6:00 am


The global semiconductor market in 2022 was $573.5 billion, according to WSTS. 2022 was up 3.2% from 2021, a significant slowdown from 26.2% growth in 2021. We at Semiconductor Intelligence track semiconductor market forecasts and award a virtual prize for the most accurate forecast for the year. The criterion is a forecast publicly released anytime between November of the prior year and the release of January data from WSTS (generally in early March). The winner for 2022 is Objective Analysis with a 6% forecast released in December 2021. IDC was closer with a 4% forecast in September 2021, but this was outside of our contest time range. Within the contest period, WSTS was second closest with an 8.8% forecast in November 2021. Most other forecasts for 2022 made prior to March 2022 were over 10%, including ours at Semiconductor Intelligence.

How is the outlook shaping up for 2023? The year is off to a weak start. The top 15 semiconductor suppliers collectively had a 14% decline in revenue in 4Q 2022 versus 3Q 2022. The largest declines were the memory companies with a 25% decline. The non-memory companies declined 9%. Four of the fifteen companies had slight revenue increases ranging from 0.1% to 2.4%: Nvidia, AMD, STMicroelectronics, and Analog Devices.

The outlook for the top companies in 1Q 2023 is generally bleak. The first quarter of the year is typically weak for the semiconductor industry, but most companies are expecting 1Q 2023 to be weaker than normal. The nine non-memory companies providing revenue guidance for 1Q 2023 had a weighted average decline of 10%, with all nine expecting a decline. Intel was the most pessimistic, with guidance of a 22% decrease. Inventory adjustments were cited by several companies as a key factor for the grim outlook, particularly in the PC and smartphone end markets. Automotive and industrial are the lone bright spots, with five companies seeing strong demand in one or both of these segments.

Memory companies, which saw revenue declines ranging from 13% to 39% in 4Q 2022, may be starting to recover. Micron Technology expects 1Q 2023 revenues to decrease 7% compared to a 39% decline in 4Q 2022. Micron sees inventory levels improving in the current quarter. The other memory companies – Samsung, SK Hynix and Kioxia – cited continuing inventory adjustments and weak end markets but did not provide revenue guidance for 1Q 2023.

For the full year 2023, the semiconductor market will certainly decline, but the extent of the decline depends on when inventories are back in line and on the overall demand for electronic equipment. According to Gartner, shipments of both smartphones and PCs are expected to decrease in 2023, but at a rate significantly less than in 2022. Smartphones should decline 4% in 2023 versus an 11% decline in 2022. PCs are projected to drop 7% in 2023, following a 16% drop in 2022. The longer-term outlook for PCs and smartphones is for low single digit growth. IDC forecasts a 2023 to 2026 compound annual growth rate (CAGR) of 3.1% for smartphones and 2.3% for PCs and tablets combined.
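A quick sketch of what those IDC CAGR figures imply when compounded over the three-year span; the indexed base of 100 is our own illustrative assumption:

```python
def project(base: float, cagr: float, years: int) -> float:
    """Compound a base value at the given CAGR over a number of years."""
    return base * (1 + cagr) ** years

# IDC's 2023-2026 CAGRs applied to an indexed base of 100 units
smartphones = project(100, 0.031, 3)   # 3.1% CAGR for smartphones
pcs_tablets = project(100, 0.023, 3)   # 2.3% CAGR for PCs and tablets combined
```

Even after three years of compounding, low single digit CAGRs move the index by under 10 points, consistent with the article's characterization of a flat longer-term outlook.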

Automotive production will continue to grow, but at a slightly slower rate. S&P Global Mobility projects light vehicle production will grow 3.9% in 2023 versus 6.0% in 2022. S&P expects semiconductor availability to constrain production in the first half of 2023, with demand constraints having more of an impact in the second half of the year.

The global economic outlook has improved slightly over the last few months. The International Monetary Fund (IMF) January 2023 forecast called for 2.9% growth in global GDP in 2023, an improvement from its October 2022 forecast of 2.7%. The advanced economies overall are expected to grow 1.2%, up from 1.1% in October, with the U.S. outlook improving to 1.4% from 1.0%. The forecast for most of the advanced economies has improved in the January forecast except for the UK, which is now expected to decline 0.6%. Overall emerging/developing economies are projected to grow 4.0%, up from 3.7% in October. The biggest change is in China, now forecast to grow 5.2% as its economy fully reopens, up from 4.4% in October.

The risks of recession in 2023 are moderating. Citi Research in January stated the risk of a global recession in 2023 is about 30%, down from their earlier projection of 50%. Earlier this month, Goldman Sachs put the probability of a U.S. recession in 2023 at 25%, down from their previous forecast of 35%.

A decline in the global semiconductor market in 2023 is inevitable after a 10% drop in the 2nd half of 2022 from the 1st half and a likely decline of around 10% in 1Q 2023 from 4Q 2022. The severity of the decrease depends on when in 2023 the recovery begins. Gartner, IC Insights, WSTS and EY expect declines in the 4% to 5% range, which implies a healthy recovery beginning in 2Q 2023. Objective Analysis (our forecast contest winner for 2022) sees a 19% drop in 2023, based on assumptions of a 45% decrease in the DRAM market and slow growth in the end markets.

Our Semiconductor Intelligence forecast for 2023 is a decline of 12%. This assumes a moderate recovery beginning in 2Q 2023 and improving in the second half of the year. Inventory adjustments should be mostly resolved by 2Q. Although PC and smartphone shipments should decrease in 2023, the rate of decline will be significantly less than in 2022. A continuing strong automotive market and growth in the internet of things (IoT) will contribute to the semiconductor recovery, and the risks of a global recession in 2023 are lessening. Our preliminary assumptions for 2024 are a continuing recovery in semiconductors and moderate growth in end markets. We put 2024 semiconductor market growth in the 5% to 10% range.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Also Read:

CES is Back, but is the Market?

Semiconductors Down in 2nd Half 2022

Continued Electronics Decline

Semiconductor Decline in 2023


Exponential Innovation: HFSS

Exponential Innovation: HFSS
by Matt Commens on 02-21-2023 at 10:00 am

evolution of hfss simulation capacity

The old adage “If it ain’t broke, don’t fix it” is as offensive to innovators as it is to grammarians. Just because something works well doesn’t mean it cannot work better. As times change and technology advances, you either move forward or get left behind.

If you haven’t upgraded to the latest Ansys HFSS electromagnetic simulation software, you don’t know what you’re missing. Imagine if you could solve huge, complete electromagnetic designs while preserving the accuracy and reliability HFSS is known for. How would that change your design methodology? How much faster would you get to market? How much better would the products you deliver be?

Electromagnetic Simulation Evolves

The need for speed and capacity continues to increase significantly, and HFSS has kept pace throughout its three-decade-plus history. Today, the exponentially higher performance and design specifications of modern hardware have driven the need to solve staggeringly large and complex designs that were inconceivable only three years ago.

As simulation demands have evolved, HFSS high-performance computing (HPC) technology has evolved right along with them. Desktop computers with multiple processors were introduced in the late 1990s, and with this innovation HFSS delivered Matrix Multiprocessing (MP), enabling HFSS users to simulate faster and drive faster time to market.

Next came the groundbreaking Domain Decomposition Method (DDM) technology in 2010. This allowed a single HFSS design to be solved on elastic hardware across distributed memory, resulting in an order-of-magnitude increase in problem size. As is always the case with HFSS, this was achieved without compromising the fully coupled electromagnetic system matrix. Beware of other solutions claiming parallel DDM: they may be secretly coupling the so-called “domains” via internal ports, risking the rigor and accuracy needed for cutting-edge designs. If they benchmark only simple transmission-line-centric models, you should be curious and concerned.

Matrix multiprocessing is not limited to a single machine. In 2015, the HFSS distributed memory matrix (DMM) solver was introduced, providing access to more memory on elastic hardware without compromising rigor. This enables the greatest accuracy, lowest noise floor and best efficiency for extremely large, many-port models.

We continue to refine DMM in HFSS. As a result of continuous innovations such as HFSS Mesh Fusion introduced in 2020, the capacity increase in HFSS has been exponential, ranging from 10,000 unknowns in 1990 to over 800 million unknowns in 2022, and we anticipate crossing the 1B threshold soon.

Figure 1 – the Evolution of HFSS Electromagnetic Simulation Capacity

Three recent innovations that contribute to such impressive speed boosts are IC Mode and meshing, a new distributed Mesh Fusion solver option in HFSS 3D Layout, and the integration of ECADXplorer capabilities into 3D Layout, improving the capacity and ease of use of GDS-based simulation flows.

We have also sped up the frequency sweeps. Introduced in the early 2000s, the Spectral Decomposition Method (SDM) allows the points in a frequency sweep to be solved in parallel on both shared and elastic hardware. Since SDM, we have continuously improved the algorithms and introduced new innovations, such as the S-Parameters Only (SPO) matrix solve. By reducing the memory footprint of each frequency sweep point, SPO achieves a speedup at every solution point. The memory reduction pays off further by letting you solve more frequency points in parallel with the freed-up memory, resulting in faster frequency sweeps without compromising accuracy.
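The core idea behind spectral decomposition is that each frequency point in a sweep is an independent solve, so the points can run concurrently. A toy sketch of that pattern (the "solve" below is a placeholder stand-in, not HFSS's actual solver or API):

```python
# Conceptual sketch of a spectral-decomposition-style sweep: each frequency
# point is an independent solve, so points can be dispatched in parallel.
# solve_frequency_point is a toy stand-in, NOT the HFSS solver.
from concurrent.futures import ThreadPoolExecutor

def solve_frequency_point(freq_ghz):
    # Placeholder for a full-wave solve at one frequency; returns a fake
    # scalar result so the parallel dispatch pattern is visible.
    return freq_ghz, 1.0 / (1.0 + freq_ghz)

freqs = [1.0, 2.0, 4.0, 8.0]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(solve_frequency_point, freqs))
print(sorted(results))  # every point solved, order-independent
```

Because the points share no state, the available memory (not algorithmic dependencies) limits how many can run at once, which is why SPO's smaller per-point footprint translates directly into more parallelism.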

Ansys continually innovates in HFSS. The technological breakthroughs of MP, SDM, DDM, DMM, and SPO along with Mesh Fusion demonstrate an Ansys commitment to continued improvement in capacity and performance, all without compromising accuracy. HFSS workflow and solver technology now enables massive system-scale capacity; IC plus package plus PCB, fully coupled and uncompromised, is now doable and routine. HFSS elastic compute solves problems eight times larger than just two years ago, and 40 times larger than the competition. Together, these leading capabilities in Computational Electromagnetic simulation are enabling today’s most cutting-edge design work, ranging from 3D-IC to MIMO and phased antenna array designs for 5G/6G. This is in fact why top semiconductor companies universally rely on HFSS to verify their designs. If you haven’t used the latest HFSS, you don’t know what you’re missing.

Come see the latest capabilities of HFSS in the March 3rd webinar: Ansys 2023 R1: Ansys HFSS What’s New | Ansys

Also Read:

IDEAS Online Technical Conference Features Intel, Qualcomm, Nvidia, IBM, Samsung, and More Discussing Chip Design Experiences

Whatever Happened to the Big 5G Airport Controversy? Plus A Look To The Future

Ansys’ Emergence as a Tier 1 EDA Player— and What That Means for 3D-IC


Hardware Security in Medical Devices has not been a Priority — But it Should Be

Hardware Security in Medical Devices has not been a Priority — But it Should Be
by Andreas Kuehlmann on 02-21-2023 at 6:00 am

iStock 136200269
Picture of medical monitors inside the ICU

Rapid advances in medical technology prolong patients’ lives and provide them with a higher standard of living than years past. But the increased interconnectivity of those devices and their dependence on wired and wireless networks leave them susceptible to cyberattacks that could have severe consequences.

Whether it’s an implantable defibrillator that transmits data to a cardiologist, an infusion pump that allows a nurse to monitor a patient’s vital signs, or even a smartwatch that logs wellness routines, these instruments broaden the attack surface bad actors can exploit.

In 2017, hospitals around the world were victimized during the large-scale WannaCry ransomware attack. The National Health Service assessed that across England and Scotland, 19,500 appointments were canceled, 600 computers were frozen, and five hospitals had to divert ambulances. In August 2022, a French hospital was subject to a ransomware attack on its medical imaging and patient admission systems, and a similar ploy targeted another nearby hospital a few months later. HIPAA Journal, citing data from Check Point Research, reported in November that an average of 1,426 cyberattacks on the healthcare industry took place each month in 2022 — a 60% increase year over year.

The medical devices themselves are rarely of interest to cyber criminals, who use them as a way to access network infrastructure and install malware or obtain data. They have also recognized that software isn’t the only way in: the hardware that powers all devices, semiconductor chips, is drawing increased attention due to security vulnerabilities that can be remotely accessed and exploited. Vulnerabilities in software can be patched, but fixing hardware issues is more complex and costly.

Limited oversight of the cybersecurity of medical devices has created an environment that’s ripe for exploitation. We must begin asking questions that lead to proactively addressing these hardware vulnerabilities and develop ways to overcome the complications associated with securing a vast array of instruments before something dramatic happens.

Range of devices, shared networks among security issues

The Food and Drug Administration (FDA) has periodically released reports on the importance of securing medical devices — including in March 2020, when it raised awareness of a vulnerability discovered in semiconductor chips that transmit data via Bluetooth Low Energy. But the modern patient care environment is so heavily reliant upon interconnectivity that minimizing cybersecurity risks can be a monumental task.

In its warning, the FDA urged the seven companies that manufactured the chips to talk to providers and patients about how they can lessen the risks tied to that vulnerability. It also acknowledged that any repairs wouldn’t be simple because the affected chips appear in pacemakers, blood glucose monitors, insulin pumps, electrocardiograms, and ultrasound devices.

According to a report issued by Palo Alto Networks’ Unit 42 cyber threat research department, medical instruments and IT devices coexist on 72% of healthcare networks, meaning malware can spread between computers and imaging machines — or any combination of electronics — rather seamlessly.

Medical devices’ long lifecycles can also make securing them challenging. Although a device may still function as intended, it may run on an outdated operating system (OS) that is costly to upgrade. Scanners such as MRI and CT machines are targeted because of their outdated OSes; according to the Unit 42 report, only 16% of the medical devices connected to networks were imaging systems, yet they were the gateway for 51% of attacks. The Conficker virus, first detected in 2008, infected mammography machines at a hospital in 2020 because those devices were running Windows XP — an OS that had not received mainstream support from Microsoft since 2014.

And, because of their seemingly niche functions, many medical devices weren’t constructed with cybersecurity in mind. Few security scanning tools exist for instruments that run on a proprietary OS, making them ripe for attacks. In September, the FBI issued a warning to healthcare facilities about the dangers associated with using outdated medical devices. It highlighted research from cybersecurity firms that showed that 53% of connected medical devices have known critical weaknesses stemming from hardware design and software management. Each susceptible instrument has an average of 6.2 vulnerabilities.

When we consider the number of devices in use around the world, the way they are used, and the varying platforms they operate on, it’s apparent that such a broad attack surface presents a significant threat.

Documenting vulnerabilities offers a path forward

Fixing hardware flaws is complicated. Replacing affected semiconductor chips, if even possible given the age and functionality of the device, takes considerable resources and can lead to a disruption in treatment.

Hospitals and other patient care centers aren’t often prepared to defend the broad attack surface created by their use of hundreds of medical devices. Guidance from organizations such as the FDA — the latest of which was released in April, two months before a bipartisan bill that mandated the organization update its recommendations more frequently was introduced in the Senate — only goes so far. Manufacturers must prioritize the security of the semiconductor chips used in medical devices, and consumers throughout the supply chain must ask questions about vulnerabilities to ensure greater consideration is being put into the chips’ design and large-scale production.

A hardware bill of materials (HBOM), which records and tracks the security vulnerabilities of semiconductor chips from development through circulation, is an emerging solution. It can help ensure defective or compromised chips aren’t used, and if they are, as in the case of Apple’s newest M1 chips, which have noted design flaws, it allows the weaknesses and repercussions to be thoroughly documented. Even if a vulnerability is identified only later, manufacturers can undertake a forensic review of the semiconductor chip’s design to determine which devices are susceptible to certain attacks.
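To make the HBOM idea concrete, here is a minimal sketch of the kind of record it might track; the field names and structure are hypothetical illustrations, not from any standard HBOM schema:

```python
# Hypothetical sketch of an HBOM record: which chips a device contains and
# which known vulnerabilities attach to each chip. Field names are invented
# for illustration, not taken from any standard HBOM format.
from dataclasses import dataclass, field

@dataclass
class ChipRecord:
    part_number: str
    vendor: str
    vulnerabilities: list = field(default_factory=list)  # e.g. CVE identifiers

@dataclass
class HardwareBOM:
    device: str
    chips: list = field(default_factory=list)

    def is_affected_by(self, cve: str) -> bool:
        # Forensic query: does any chip in this device carry the given CVE?
        return any(cve in chip.vulnerabilities for chip in self.chips)

pump = HardwareBOM("infusion-pump-x", [
    ChipRecord("BLE-1234", "VendorA", ["CVE-2020-0001"]),
])
print(pump.is_affected_by("CVE-2020-0001"))
```

The value is in the query direction: when a new chip-level vulnerability surfaces, a manufacturer can enumerate every fielded device that contains the affected part rather than auditing each device from scratch.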

By knowing the specific weaknesses in the hardware, you can prevent cyber criminals from exploiting them and causing devastation across medical facilities.

Risks, outcomes show a high level of urgency

Emerging technology has interfered with the safe operation of medical devices before. In 1998, the installation of digital television transmitters caused interference with medical devices at a nearby hospital because the frequencies they used overlapped. What’s different today is that outside actors can deliberately target these instruments — but that is preventable.

The increasing potential of attacks on semiconductor chips in networked medical devices demonstrates how savvy cyber criminals are becoming. Although advances in technology have made these devices a routine part of care around the globe, they’re also introducing security vulnerabilities given their interconnected nature. Patients can be exposed to serious safety and cybersecurity risks, and we must act now to shore up those vulnerabilities before something catastrophic occurs.

Also Read:

ASIL B Certification on an Industry-Class Root of Trust IP

Validating NoC Security. Innovation in Verification

NIST Standardizes PQShield Algorithms for International Post-Quantum Cryptography


AMAT- Flat is better than down-Trailing tool strength offsets memory- backlog up

AMAT- Flat is better than down-Trailing tool strength offsets memory- backlog up
by Robert Maire on 02-20-2023 at 10:00 am

AMAT CVD 3

-Strength in trailing tools offsets weak memory resulting in flat
-Order book very volatile but backlog surprisingly still grew
-Trailing edge VS Leading edge = 50/50 – Foundry/logic over 2/3
-Not nearly as bad as Lam but not as good as ASML

AMAT posts good quarter & guide – Flat for three quarters

Applied Materials reported revenue of $6.74B and EPS of $2.03, more or less flat with last quarter, versus street estimates of $6.23B and $1.93. Guidance is for revenue of $6.4B +/- $400M and EPS of $1.84 +/- $0.18, versus current street estimates of $5.86B and $1.75.
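In percentage terms, the beats implied by the figures above work out as follows (the percentages are computed here, not quoted from the company):

```python
# Arithmetic on the reported figures: how far results and guidance midpoint
# sit above street estimates. Dollar figures are from the article; the
# percentages are computed here.
def pct_vs_street(actual, street):
    return (actual / street - 1) * 100

rev_beat = pct_vs_street(6.74, 6.23)            # revenue vs street, %
eps_beat = pct_vs_street(2.03, 1.93)            # EPS vs street, %
guide_vs_street = pct_vs_street(6.4, 5.86)      # guidance midpoint vs street, %
print(round(rev_beat, 1), round(eps_beat, 1), round(guide_vs_street, 1))
```

So the guidance midpoint sits roughly 9% above street, an even bigger gap than the reported-quarter beat, which is why "flat" reads as good news here.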

At this point in the industry turbulence, being flattish or slightly down is good performance. We would not complain.

Trailing edge to the rescue

The product mix between leading and trailing edge was roughly 50/50 as continued strength in trailing edge offset what is clearly a very sharp drop in memory as evidenced by Lam Research.

Applied has done a good job of predicting the need for trailing edge tools, as it had previously created the ICAPS group to focus on non-leading-edge. They noted particular strength in implant, as we have seen with Axcelis. We would be slightly concerned that Applied may make more headway in trailing edge implant, where Axcelis has done very well because it faced less competition than at the leading edge. With Applied’s renewed focus, we could see the competition heat up, and Axcelis’ share may be vulnerable.

In a perverse way, China sales at the trailing edge are somewhat safe from government embargoes. Though we are a little bit curious about how much is truly trailing edge, as Applied had previously talked about reclassifying some sales to China as trailing edge to escape the embargo.

Backlog grew overall- fueled by trailing edge

The backlog was clearly very volatile, as we had suggested, with a lot of puts and takes: takes from memory, replaced by puts in trailing edge. Whereas some others may live off of and reduce their backlog, using it as a buffer during the rainy season, Applied will also eventually reduce backlog by catching up to customer demand and re-orienting to trailing edge products. We would expect a lot of disturbance in Applied’s supply chain given the large shift from leading edge to trailing edge.

No handle on recovery timing

As we have heard with others, management was not willing to be specific about any recovery timing other than a vague thought about DRAM improving in the later part of the year.

The memory market is still clearly in a downward trend which doesn’t seem to be slowing much. We are still stuck with excess inventory and production, creating the worst pricing environment we have seen in a very long time.

Management also said that foundry/logic is weak, but clearly not nearly as much as memory. It feels to us like memory could easily be down 40% or so, while foundry/logic may be down closer to 10% overall. Foundry/logic was over 2/3 of Applied’s business, which is a good mix to have when memory is off as much as it is now.

Patterning product at SPIE

The company mentioned several times a patterning product announcement at the upcoming SPIE show, which we will be attending at the end of this month in San Jose. Our guess is that it’s a new reticle inspection tool to try to resuscitate their flagging sales in this area. Both Applied and KLA have been hurt by Lasertec in reticle inspection, and Applied has lost a number of customers in the space.

MKS a $250M hit to Applied

Management pointed out that the impact of the data breach at MKS on Applied may be as high as $250M, which will be made up over time. Probably not very meaningful to Applied, but obviously a black eye on an otherwise well-run MKS.
We would imagine that all tool makers are probably going to look for stricter controls on their major critical suppliers.

The stocks

While Applied was weak during the daily session, it was up about 1% in the aftermarket as the news of flatness was received as better than the down experiences of others.

Obviously 2023 will be a down year overall for the industry, but less so for Applied, and that’s not bad. It doesn’t make us want to go out and buy the stock, but it may limit potential future downside.

The overall semiconductor rally has run out of steam a bit as the reality of earnings has set in. As Applied reports a month behind others, there may not be a lot of appetite left to go out and buy a semiconductor stock right now given overall sentiment.

Clearly ASML remains the best performer of the group, with Applied and KLA somewhat in the middle and Lam, the memory poster child, as the laggard for obvious reasons.

We don’t expect any relief on the China embargo, and the CHIPS Act is very slow to start; even though everyone has announced projects, there are many years between project announcements and actual spend.

So we don’t see any factors to rescue 2023 from being negative. The bigger question not yet in focus is what 2024 will look like and so far we have no clue nor are any companies guessing.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

KLAC- Weak Guide-2023 will “drift down”-Not just memory weak, China & logic too

Hynix historic loss confirms memory meltdown-getting worse – AMD a bright spot

Samsung- full capex speed ahead, damn the downturn- Has Micron in its crosshairs