
GM’s Bad RTO Optics
by Roger C. Lanctot on 10-23-2022 at 6:00 pm

The automotive industry has been uniquely whipsawed by the COVID-19 pandemic. Factories, dealerships, and offices were shuttered in its earliest days, undercutting both supply and demand.

The industry impacts spread outward from these initial shocks with major auto shows folding their tents and ripples of supply chain disruptions roiling normally reliable sourcing arrangements for vital semiconductors. What slowly dawned on industry participants was the reality that some changes might be longer lasting.

Sure, workers may have returned to the factories, but what would happen with white collar workers? Would supply chains heal? Would traditional auto shows return? Customers returned right away. Demand never slackened.

New answers to these questions are emerging daily. The first post-pandemic auto show outside of China was the Munich IAA event last fall. It was different – with suppliers intermingled with car makers on the show floor – and it was reasonably successful.

The L.A. Auto Show that followed later last fall was less successful and signaled that a wider recovery in typical consumer-centric auto shows was not yet in the offing. Now the Detroit auto show has come and gone with its own reverberations of disappointment.

The bigger question emerging from the lackluster consumer reaction to the Detroit Auto Show (officially the North American International Auto Show) is why car company executives expect consumers to turn out to an auto show if rank and file white collar workers won’t show up at the office. The headline in the Detroit Free Press told the tale this week: “GM Steps Back on Return-to-Work Policy after Backlash from Salaried Workers.”

For many, the home of CEO Mary Barra’s “dress appropriately” mantra was not having any of this “work appropriately” request.

As a post-pandemic frequent traveler, I feel a need to express my shock at the situation unfolding at GM. Plenty of workers across the country and throughout the world have had to return to work – in fact, they never left! I see and interact with these workers every day in my travels.

We don’t think about these workers. We don’t notice them. We take them for granted. They are the workers in hospitality, retail, restaurants, health care, and transportation.

When the automotive industry was on its knees in the spring of 2020, the factory workers – the car makers – they came back. And they did so with barely a whimper!

The actual car makers helped ensure that the industry’s recovery was swift. Factory workers returned en masse to automobile plants that had been modified to accommodate pandemic hygiene. And dealerships, too, re-opened to customers who still had a hankering to kick tires and take test drives.

Strangely, the offices of car makers and their suppliers largely remained shuttered. It became clear this was the case when visitors discovered that in-person meetings were impossible, with most headquarters facilities unoccupied.

The revolt of white collar workers at GM is truly revolting. How do they account for their sentiments as they ride in their Ubers and Lyfts or accept their bag of pretzels from the flight attendant or receive their key to their accommodation? How do they explain THEIR outrage to the Starbucks coffee maker, the mailman, the convenience store clerk?

If you were hired – pre-pandemic – to work in an office, you are obligated now, nearly three years after the onset of the pandemic, to return to that office. But, seriously, if you work in the automotive industry, which is uniquely dependent upon hundreds of thousands of workers building your products, you have a moral obligation to show up physically (with exceptions for those with medical conditions that may render them vulnerable). If for no other reason, you should show up in the office to demonstrate your solidarity with those folks at the plants. The plant workers took their places on the production line for you and me and the industry. What’s your contribution?

Also Read:

U.S. Automakers Broadening Search for Talent and R&D As Electronics Take Over Vehicles

Siemens EDA Discuss Permanent and Transient Faults

Super Cruise Saves OnStar, Industry


Is ASML Immune from China Impact?
by Robert Maire on 10-21-2022 at 10:00 am

-ASML has great QTR & outlook & huge Euro8.9B orders
-Relatively immune from China due to mainly non-leading-edge business
-Monster Euro38B backlog – 60 EUV & 375 DUV systems in 2023
-5% China risk to 2023 – still mainly supply constrained

ASML proves litho’s place at Apex of semiconductor food chain

ASML announced a great quarter with Euro5.8B in revenue and EPS of Euro4.29/share. The outlook is for revenues of Euro6.1B to 6.6B with gross margins of 49%. Gross margin for 2022 will come in at about 50% overall.

Most importantly, orders came in at a huge Euro8.9B, 77% logic, bringing backlog to a multi-year Euro38B. ASML is looking at shipping 60 EUV and 375 DUV systems in 2023, assuming supply chain issues are resolved.

China immunity from two factors

ASML will see a 5% or less impact next year from the China issue for two simple reasons: number one, the majority of current business is non-leading-edge, above 14nm, as ASML was already not shipping any EUV tools to China. Number two, ASML is sold out anyway, and there are a large number of customers who will happily snap up any systems that China doesn’t or can’t take.

In our view, as we commented months ago, ASML is virtually immune to China embargo issues given its leading position in the industry. The semiconductor industry remains a zero-sum game, and litho systems not shipped to China will go elsewhere to satisfy demand.

Macro economic risk remains low as well

In our view, there is obviously macroeconomic risk in addition to the China risk, but ASML remains relatively immune from near-term spending trends given the huge, overwhelming demand for product and a multi-year backlog. While there is always risk if macroeconomic issues get too bad, things would have to get very bad indeed for customers in the order queue to drop out and reduce the backlog. The backlog will likely keep ASML in good shape through a macroeconomic soft patch.

“Customers never cancel, they just re-schedule, it never goes away”

US content is critical to ASML tools

Company management played down the US content of tools, which are obviously shipped from the Netherlands. US content may be small, but it’s the most critical content, as it is primarily the light source technology for the entire system and was the subject of the most development work and delays in EUV.

Light source technology is developed by the former Cymer, in San Diego, which the US government allowed ASML to buy. We are relatively certain that there were agreements regarding Cymer’s technology in order to win acquisition approval.

Many investors may not be aware that much if not most of the laser technology, especially for EUV, arose out of the “Star Wars” laser weapons systems of the Reagan era, as Cymer employed many scientists out of the ex-Star Wars program from both the US and the former Soviet Union. The technology used in the 250KW drive laser in EUV systems could be re-purposed for military applications.

We worked on the Cymer IPO in 1996, have followed the company since, and understand the technology well.

77% logic mix shows resilience

The fact that 77% of orders are from logic suggests that a more rapid slowdown in memory will not impact ASML at all. Management also announced orders for High NA systems along with regular EUV systems. Though High NA was not broken out, at over Euro300M a system, the numbers can add up more than twice as fast as DUV systems. We assume that TSMC, Intel and Samsung have likely already ordered multiple High NA EUV systems. TSMC’s recent capex cut clearly is not impacting its litho system orders, as TSMC understands the importance of leading in litho.

The stocks

Obviously ASML had a great quarter and fantastic outlook. In our view this reaffirms their positioning in the industry as the top semiconductor equipment company in the world.

They remain in a monopoly position for the most critical and sought after equipment, lithography, which sets the pace for Moore’s Law and improvements in the industry.

While not 100% immune, ASML is likely better than 95% immune; China and macroeconomic risks are minimal to near zero.

If you want to be invested in semiconductor equipment, ASML is perhaps the best stock to stay in. While the stock will gyrate a lot due to the storm surrounding the industry, the company’s performance will remain the steadiest of its peers throughout whatever storms come.

ASML’s report raises some collateral questions about other companies. Management’s comments on memory industry weakness, and the fact that memory has fallen to 23% of orders amid the order sluggishness management reported, suggest that companies with higher memory exposure and shorter-lead-time tools, such as etch and deposition, are significantly more vulnerable to numbers getting cut.

LRCX is the top memory equipment supplier. We have already seen what appears to be a 50% cut from Micron, which will certainly cut Lam’s 2023 business. Obviously Samsung and other memory players will cut memory spending as well, so we would expect a significant reduction in Lam’s numbers from memory-related customers. Applied Materials (AMAT) will also be negatively impacted, with KLAC seeing the least impact of the US companies.

ASML remains unique in the industry and thus a sought after product and stock relative to others.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Chip Train Wreck Worsens

Semiconductor China Syndrome Meltdown and Mayhem

Micron and Memory – Slamming on brakes after going off the cliff without skidmarks


Podcast EP115: Virtualize Your Development of Arm-Based Designs and More with Corellium
by Daniel Nenni on 10-21-2022 at 8:00 am

Dan is joined by Bill Neifert, Senior Vice President of Partnerships at Corellium. Prior to Corellium, Bill was Senior Director of Marketing for Arm’s Development Solutions Group. Before that, he was co-founder and CTO of Carbon Design Systems, which was acquired by Arm.

Dan explores the virtualization technology of Corellium with Bill: how their products help with Arm-based design development, and some of the plans to extend the technology beyond Arm designs.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Aleksandr Timofeev of POLYN Technology
by Daniel Nenni on 10-21-2022 at 6:00 am

Aleksandr Timofeev is CEO and Founder of POLYN Technology, an innovative provider of ultra-low-power, high-performance NASP (Neuromorphic Analog Signal Processing) technology. Aleksandr is a serial entrepreneur with more than 20 years in the high-tech industry. Prior to POLYN, he founded iGlass Technology, a company that developed novel electrochromic smart glass technology. He built the core team, general technology, and product concept and successfully sold the company at the end of 2020 to a strategic investor. Aleksandr is also founder and managing partner of the FPI VC team, an early-stage venture investment management company. The fund focuses on early-stage innovative companies, developing clear product concepts and strategies and working with venture firms and partners for subsequent funding rounds.

While looking at the landscape of new startups in the AI/ML industry I found POLYN. The company differs from others in its business model as well as its concept and technology approach.

POLYN is a fabless semiconductor company selling ready-to-use Analog Neuromorphic chips as Application Specific Standard Products, targeting specific technological challenges in huge and fast-growing markets, particularly wearables, connected health, and Industry 4.0. Founded in 2019, it is registered in the UK with HQ in Israel.

According to its website, POLYN offers two products, and one more is under development. Recently it was announced that POLYN was accepted into the Silicon Catalyst incubator family.

We talked with Aleksandr Timofeev, CEO and founder, about the technology and what he is up to now. We asked Aleksandr’s opinion on today’s neuromorphic computing, what is special about POLYN, and how far we are from a real Tiny AI solution working at the sensor level. Here’s the interview:

Q: First, congratulations on joining the Silicon Catalyst incubator. Could you say a few words about what is in it for POLYN?

AT: POLYN’s objective as a fabless semiconductor company focusing on ready-to-use analog neuromorphic chips is to collaborate with leading semiconductor vendors, industry partners, and entrepreneurs.  Our mission is to introduce novel analog neuromorphic solutions for wearables, hearables, and IIoT on-sensor pre-processing with highly efficient energy per inference ratio. By being part of the Silicon Catalyst community and its huge portfolio of partners, we expect to accelerate our plans to improve cost, time to market, and the reach of our unique technology.

Q: I see your company decided to go a different way, constructing a chip from a neural network, unlike many others who are developing general-purpose processors on which to run a neural network?

AT: Yes, we decided that for a neuromorphic-based product it is more efficient to synthesize a chip from a neural network model, unlike the digital domain, where you have fixed, general-purpose PU instruction sets and different software applications using them. When you start training a neural network (NN), you don’t know what final size you will get. If you have a fixed neuromorphic core, for some NNs it will be too small, and for others too big.

Q: Ok, interesting, but that means you need to generate a lot of chipsets, and that would be both time- and cost-consuming. How are you dealing with that challenge?

AT: We are focused on the ASSP model. Our chip is tied to a sensor or signal type, not a sensor model. For example, our Voice Extraction NASP chip works with any type of analog or digital MEMS microphone and other signal sources. And we will generate a new NASP core only for a new sensor or signal type. As you understand, this covers millions of products. In case some new product moves to a different physical device, we can upgrade the chip easily, thanks to our fully automated process. So, to summarize: first, NASP is application-specific and not product-specific, so the volumes are huge. Second, moving from application to application is easy with the POLYN automation tools. By the way, our tools were the first technological achievement at the beginning of the company, and they remain a unique EDA instrument for neural network conversion.

Q: Very impressive, but I have another question. Your critics could say that implementing even inference neural network models requires changes, and as a result your neural network will need tuning from time to time. If your technology is implemented in a fixed resistor layer, how do you support neural network changes and updates?

AT: First, we use a fundamental property of neural networks: when you train a deep neural network, after a few hundred training cycles a major part of the layers becomes frozen. So typically, about 90% of the layers are no longer changing and are not involved in subsequent updates. Only a few of the last layers require an update if you need to change the classification. In such cases we use a hybrid solution: the 90% of layers are converted into a high-performance NASP core, and the last 10% remain in the flexible digital domain. But it is important to remember that our solution is focused on sensor-level applications. We are not simulating brain functions, where constant learning (or re-training) is critical. In many sensor-level applications the pre-processing task is fixed and doesn’t require any update.
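
To make the freeze-and-split idea concrete, here is a minimal PyTorch-style sketch of the partitioning Timofeev describes; the class and the split point are illustrative assumptions, not POLYN’s actual tooling, which targets an analog resistor layer rather than a torch module.

```python
import torch.nn as nn

class HybridModel(nn.Module):
    """Illustrative split of a trained network: a frozen front-end
    (the ~90% of layers that would become a fixed analog NASP core)
    plus a small digital tail that can still be retrained/updated."""
    def __init__(self, trained: nn.Sequential, split: int):
        super().__init__()
        layers = list(trained.children())
        self.frozen_core = nn.Sequential(*layers[:split])   # -> analog NASP
        self.digital_tail = nn.Sequential(*layers[split:])  # stays digital
        for p in self.frozen_core.parameters():
            p.requires_grad = False  # weights become fixed, like resistors

    def forward(self, x):
        return self.digital_tail(self.frozen_core(x))
```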

Q: Let’s discuss the analog part. I mean, who would have imagined we would come back to analog after going digital with millions of transistors on a chip, and talking today about 2nm processes? Why do you think analog is the right option for complex math models such as neural networks?

AT: First of all, we are talking about neuromorphic analog, which is not like old-style analog computers. We represent a trained neural network using analog neurons. The fundamental property of this structure is true parallel data processing.

Any digital system has step-by-step execution. But the human brain, one of the most power-efficient computation devices, uses parallel data processing. It is important to note that POLYN is mimicking not the central brain but peripheral systems. We are at the sensor level, where the main idea is pre-processing – removing noise, extracting data – and here analog is irreplaceable. Digital can keep scaling down in process, but for the Joule per Inference ratio, analog will win.

Q: Any more arguments for analog? For example, how do you resolve the analog implementation noise issue? What is the product deviation from the math model?

AT: The answer again lies in the term “neuromorphic,” as the neural networks are implemented in a neuromorphic analog circuit. The point is that resilience to errors is a fundamental property of neural networks, and training increases that resilience.

Circuit non-idealities can be divided into two groups: random and systematic errors.

Systematic errors occur because a typical circuit implementation only approximates an ideal signal processing operation to a limited extent. Such errors are caused, for instance, by the non-linear operating characteristics of devices or by finite gain of an analog amplifier.

Stochastic errors may happen during the fabrication of integrated circuits and result in a random variation of the properties of the fabricated on-chip elements. These errors, however, can be modelled and addressed during development. For example, the mismatch between neighboring elements is usually much smaller than the variation of parameters’ absolute values. Therefore, differential architectures could significantly improve precision.

For an analog circuit design, it is important that such errors do not accumulate. For this, the neural networks are trained using a special technology for error compensation.
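
As a toy illustration of the differential-architecture point above, the following Monte Carlo sketch (made-up variation numbers, not POLYN data) shows how taking the difference of two matched elements cancels the shared variation and leaves only the much smaller local mismatch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

common = rng.normal(0.0, 0.05, n)   # shared die-to-die variation (~5%)
mism_a = rng.normal(0.0, 0.005, n)  # local mismatch of element A (~0.5%)
mism_b = rng.normal(0.0, 0.005, n)  # local mismatch of element B

elem_a = 1.0 + common + mism_a
elem_b = 1.0 + common + mism_b

print(f"single element spread: {elem_a.std():.4f}")            # ~0.050
print(f"differential spread:   {(elem_a - elem_b).std():.4f}")  # ~0.007
```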

Q: Interesting. Could you tell us about the birth of POLYN and the idea behind your technology?

AT: I met Dmitry Godovsky, our Chief Scientist, at the end of 2018. Dmitry had previously worked for eight years on a new math model for converting a digital neural net to a new implementation. After a few months of discussion, we understood that this new model could be represented as a neuromorphic analog circuit. So, in April 2019 we launched POLYN Technology. Since then, we have constantly invested in know-how and innovation. Today we have 25 patents for the technology and products.

Q: Naturally, this raises the question: what about fabs? Could they run the fabrication immediately, or do they need to adapt their processes? By the way, the same question applies to the PDK and EDA tools you are using for chip development.

AT: Our strong advantage is that we can use any standard process in the 40-65nm range and can align our product libraries to any standard PDK. Our NASP compiler and physical design module work on top of the existing standard EDA-based design flow. The output is a GDSII file ready for immediate tape-out. We have also developed our design as a BEOL option, so the resistor layer is mask-programmable and can be replaced independently to optimize cost and time to market. The EDA tool, which we call T-Compiler, is important for time to market today and for our business model in the future. Right now, we are selling chipsets and IP blocks. By the way, we also see that the chiplet market could be covered, since SiPs (systems in package) are becoming increasingly common these days.
But once the technology is proven and more customers see the advantage of NASP for medium- and higher-volume products, our T-Compiler tool will become part of our business model, enabling generation of application-specific neuromorphic analog chips for specific tasks.

Q: Great clarification, thanks. Let’s now talk in general about when you think it makes sense to convert a neural network into silicon. What applications are covered by your NASP solutions?

AT: We focus on any type of one-dimensional signal preprocessing, such as voice, health-care sensors, accelerometers, or vibration sensors. Some of our solutions you can already evaluate in simulation, which enables evaluation of the chip before its synthesis to reduce the chance of unexpected behavior. Anyone who is looking for always-on smart-sensor data pre-processing is more than welcome to contact us and get access to our D-MVP simulation model. For example, the voice extraction and voice detection demos for hearing assistance are functional and running already. So, customers can evaluate and start their designs in advance, ready for when the first chips come from the factory in Q2 2023. Customers can also influence the functionality if they are in time to catch the last changes we are making these days.

Q: And what is your product strategy?

AT: Three directions are in our scope of activities for 2023: wearables, hearables, and vibration monitoring for predictive machine maintenance. The first product is planned for mid-2023, and it is the voice extraction solution we announced a week ago. The product line is called NeuroVoice, and it is intelligent voice extraction and processing for the next generation of smartphones, earbuds, hearing aids, microphones, smart speakers, and intercoms. POLYN’s NeuroVoice NASP chip solves the problem of communication in a noisy environment. This differs from noise cancellation and can handle challenges such as irregular noises like animal sounds, babies crying, and sirens. It also solves the problem of sound arriving over the network already mixed with noise. Together with voice extraction, NeuroVoice offers a combination of voice management features such as voice activity detection, keyword spotting, and others. In addition, the product can be customized for special requirements.

Q: Was it easy to raise money? I know the situation changes all the time.

AT: Raising money is never easy (smiling). Of course, we worked hard to communicate with investors. We have a few VCs who have joined us and several more currently in the due diligence process. That is where we anticipate value in joining the Silicon Catalyst incubator, with the increased exposure we will gain through the incubator’s huge portfolio of partners.

Q: What do you think about neuromorphic chips today, such as Intel Loihi, BrainChip, and the others around?

AT: We can discuss other solutions and compare performance, but in general, in our opinion they are targeted at a more central position on the edge, with power consumption of a hundred milliwatts to a few watts, while POLYN is focused on the microwatt level of the thin edge.

Q: And the final question. As a visionary, how do you see the neural-network-on-chip market and how it will develop? Will it be digital, in-memory, or something similar to NASP?

AT: For some time, I think, things will run in parallel, and each technology will try to find niches, but finally, in my opinion, the future lies in self-organizing structures, like NASP, but with different physical principles of neurons.

Q: On that note, thank you very much, Aleksandr.

AT: Thanks a lot for the opportunity, and let’s meet in mid-2023 when the first NeuroVoice chip rolls off the line.

Also Read:

CEO Interview: Coby Hanoch of Weebit Nano

CEO Interview: Jan Peter Berns from Hyperstone

CEO Interview: Jay Dawani of Lemurian Labs


Clock Aging Issues at Sub-10nm Nodes
by Daniel Payne on 10-20-2022 at 10:00 am

Semiconductor chips are all tested prior to shipment in order to weed out early failures; however, there are some more subtle reliability effects that only appear over the longer term, like clock aging. There’s even a classic chart that shows the “bathtub curve” of failure rates over time:

IC failure rate chart

If reality and expectations don’t align in the wear-out region, the financial impact of recalling chips embedded inside systems can run to millions of dollars, or even cost human lives in safety-critical applications.

A 7nm SoC can have 10 billion transistors, and to meet the power spec there are many clock domains and multi-voltage power domains, resulting in aging issues like jitter, duty cycle distortion, insertion delay, reduced design margins, and increased process variation. Predicting the impact of transistor aging requires knowing the circuit topology, switching activity, voltages, and even temperature – a complex goal.

Transistor aging comes about from a few effects: Hot Carrier Injection (HCI), Negative Bias Temperature Instability (NBTI), and Positive Bias Temperature Instability (PBTI). Hotter temperatures accelerate these effects. The duty cycle impacts BTI effects, and frequency has a proportionate effect on HCI. With HCI, charges get trapped in the oxide layer of the transistor, changing the Vt of the devices permanently. The BTI effect is larger than the HCI effect at 7nm nodes, as shown in this chart of insertion delay, where the black line is a fresh circuit, aging effects from HCI are in orange, and BTI effects are in blue.

BTI and HCI Effects
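
For intuition, BTI-induced threshold shift is commonly modeled in the reliability literature as a power law in stress time, with temperature and gate-voltage acceleration. The form below is a generic textbook model, not Infinisim’s proprietary aging model, and the coefficients A, γ, Ea, and n are technology-dependent fitting parameters:

$$\Delta V_{th}(t) \approx A \cdot V_{gs}^{\gamma} \cdot e^{-E_a/kT} \cdot t^{n}, \qquad n \approx 0.1\text{–}0.25$$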

IC design methodologies above 10nm used Static Timing Analysis (STA) and some SPICE simulations of the clock, along with guard-banding for parameters like jitter. Aging could be applied across all devices to provide an idea of the electrical and timing impacts.

Designs under 10nm require a more comprehensive analysis of clock aging impacts, and Infinisim has created a tool called ClockEdge that analyzes large clock networks efficiently. The ClockEdge tool automatically creates a transistor-level netlist for analysis and can then simulate overnight to show you both fresh and aged results.

A new clock domain netlist is created from your existing files: Verilog, Lib, leaf cell definitions, constraints, and SPEF. Simulation results are generated with full SPICE accuracy at your functional clock frequency for the fresh state. The clocks are then stressed as the second step of analysis. The third step uses the aged clock domain netlist, runs the full-SPICE-accuracy simulation at the functional clock frequency, and evaluates duty cycle distortion, insertion delay, rail-to-rail levels, and even clock slew rates. The difference between fresh and aged results tells the design team whether they have a reliable design.

Delving into the first step, the fresh run analyzes the clock domain from the output of a PLL, all the way through to flip-flops or output pads. This clock domain can be quite large in size with millions of devices, and the transistor-level analysis results show us the delay and slew values.

Step 1: Fresh Run

The ClockEdge tool can run a fresh clock analysis overnight on a block with 4.5 million gates, 517 million MOSFETs and 3.2 billion devices by using a distributed SPICE simulation approach. Your clock topology can be implemented as trees, grids or spines.

Step 2 is a stress run where specific transistors are selected for aging, depending on the circuit topology and whether the clock is parked (stuck at VDD or VSS) or toggling. The stress run also depends upon temperature, voltage, and duration per the usage model.

The final analysis is Step 3, using the aged devices. For devices that had a parked clock value, only one clock edge will be affected during aging analysis, while devices with a toggling clock will have both edges affected. So the Duty Cycle Distortion (DCD) shape will depend on your circuit topology.

Step 3, Aged Simulation

With ClockEdge a designer can perform what-if stress analysis, comparing the impact of a clock parked at 0, parked at 1, toggling, or even a combination of parked and toggling.
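
To illustrate the kind of fresh-versus-aged comparison step 3 produces, here is a small NumPy sketch that extracts duty cycle and a rising-edge crossing time from two waveforms; the function names and the 0.75V supply are illustrative assumptions, not ClockEdge’s actual interface:

```python
import numpy as np

def duty_cycle(v, vdd):
    # Fraction of samples above the VDD/2 switching threshold.
    return float(np.mean(v > vdd / 2.0))

def first_rising_crossing(t, v, vdd):
    # Linearly interpolate the first rising crossing of VDD/2.
    th = vdd / 2.0
    i = np.flatnonzero((v[:-1] < th) & (v[1:] >= th))[0]
    frac = (th - v[i]) / (v[i + 1] - v[i])
    return t[i] + frac * (t[i + 1] - t[i])

def aging_deltas(t, fresh, aged, vdd=0.75):
    # Shifts in duty cycle distortion and insertion delay, fresh vs. aged.
    return {
        "duty_cycle_shift": duty_cycle(aged, vdd) - duty_cycle(fresh, vdd),
        "delay_shift": first_rising_crossing(t, aged, vdd)
                       - first_rising_crossing(t, fresh, vdd),
    }
```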

Summary

Clock aging is a new reliability concern, especially for IC designs at sub-10nm process nodes. With proper analysis, the effects of aging can be mitigated, and Infinisim has the experience in analyzing them. The ClockEdge tool is focused on giving designers accurate aging analysis of their clock networks, providing results overnight. You get to see both DC and AC stress conditions for your aged clock domains.



NIST Standardizes PQShield Algorithms for International Post-Quantum Cryptography
by Kalar Rajendiran on 10-20-2022 at 6:00 am

News of cyberattacks is routine these days in spite of the security mechanisms built into widely used electronic systems. It is not surprising that entities responsible for safe and secure systems are continually working on enhancing encryption/decryption mechanisms. Recently, NIST standardized cryptography algorithms for the post-quantum computing world. PQShield, a company at the forefront of post-quantum cryptography, develops solutions to protect systems against quantum cyberattacks. The company announced that NIST has adopted an algorithm PQShield developed as one of several new post-quantum cryptography standards. An additional PQShield-developed algorithm is moving to Round 4 of the NIST post-quantum cryptography standardization project.

To understand the significance of NIST’s standardization and PQShield’s announcement, it is important to get an overview of why new standards are needed, what the standardization process entails and how to deploy the new standards in one’s systems. PQShield has published three technical whitepapers that together cover the subject matter in detail. If you are involved in developing chips and software to deliver cybersecurity solutions, these whitepapers would be very informative. Links to download these whitepapers can be found in relevant sections of this blog.

The Threat from Quantum Computers

A quantum computer can perform certain computations much more efficiently than the classical computers that have been commonplace to date. Maybe twenty years ago quantum computing appeared far away, but we are now much closer to seeing quantum computers in commercial deployment. While this is great from a computing perspective, quantum computing capabilities can also easily break current security mechanisms.

RSA and ECC cryptosystems currently in use for secure data transmission are easy for quantum computers to break using Shor’s algorithm. This will open up systems to forgery of digital signatures (an integrity compromise) and decryption of previously encrypted data (a confidentiality compromise). The latter threat is known as the “Harvest Now, Decrypt Later” (HNDL) attack: without being discovered, would-be attackers are already collecting and storing encrypted data with the expectation of deciphering it using quantum computers.
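
To see the number theory Shor’s algorithm exploits, consider this toy classical sketch of factoring N = 15 once the period (order) of a number mod N is known; the quantum speedup comes entirely from finding that period exponentially faster than the brute-force search used here:

```python
from math import gcd

N, a = 15, 7
# Brute-force the order r of a mod N (a quantum computer finds r quickly).
r = next(k for k in range(1, N) if pow(a, k, N) == 1)   # r = 4
p = gcd(pow(a, r // 2) - 1, N)   # gcd(48, 15) = 3
q = gcd(pow(a, r // 2) + 1, N)   # gcd(50, 15) = 5
print(r, p, q)  # 4 3 5 -- the RSA-style modulus is factored
```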

Based on history, we know it takes a long time to upgrade to new security standards. For example, the migration from Data Encryption Standard (DES) to Advanced Encryption Standard (AES) has taken decades. To be specific, the National Institute of Standards and Technology (NIST) established the AES standard in 2001. As such, newer security standards have to be established now, to protect against cyber threats from quantum computers.

Post-Quantum Cryptography (PQC)

The best way to mitigate the threat of attacks using quantum computers is to use quantum-safe cryptography. This post-quantum cryptography is being mandated by governments across the world for adoption by businesses who want to do business with these governments. With a likely deadline for adoption within the next three years, businesses will be forced to implement quantum-safe mechanisms quickly. But without a standard for international adoption, interoperability will become a practical roadblock.

For a detailed overview of Post-Quantum Cryptography, download this whitepaper.

The NIST Standardization Process

While a number of standardization efforts are currently underway (in Europe, China, etc.), the NIST standardization project is the best documented. The project was announced in 2016 with the goal of standardizing post-quantum signature schemes and key-establishment schemes.

In July 2022, NIST announced that it had selected the following three signature schemes as standards for PQC.

Primary standard: Dilithium

Secondary standard: Falcon

Secondary standard: SPHINCS+

The Dilithium and Falcon standards are based on hardness assumptions about structured lattices, and the SPHINCS+ standard is based on hardness assumptions about hash functions. The following figure shows the process that led to the selection of the standards for signature schemes.

NIST also announced the following regarding key-establishment schemes.

Standardized scheme: Kyber

Back-up for standardization: NTRU (might be standardized if Kyber patent negotiations fail)

Schemes for further study: BIKE, Classic McEliece, HQC and SIKE

The following figure shows the process that led to the selection of the standard for the key-establishment scheme.

For full details of the various NIST standards addressing PQC, download this whitepaper.

PQShield’s Role in the NIST Standardization Project

PQShield has been heavily involved in the NIST standardization project right from the start, developing and submitting various signature and key-establishment schemes for consideration as standards.

To hear about the NIST PQC standardization journey to date and what is expected going forward, watch our interviews with leading cryptographers Dr. Thomas Prest and Prof. Peter Schwabe.

Prof. Peter Schwabe co-authored seven of the PQC schemes that were submitted to NIST, of which three (Kyber, Dilithium and SPHINCS+) were established as NIST standards in July 2022. Another scheme (Classic McEliece) is going into Round 4 for consideration as a standard.

Dr. Thomas Prest co-authored the compact and efficient Falcon signature scheme. Falcon was selected in July 2022 as a NIST standard for PQC signature schemes.

PQShield’s press announcement regarding the recently announced NIST standards for PQC can be accessed here.

Roadmap to PQC

As we start the transition phase to PQC, NIST has allowed for Federal Information Processing Standards (FIPS) certified solutions to be used in combination with one or more PQC schemes. This accommodation allows for the adoption of quantum-resistant schemes while still keeping the solutions FIPS certified. PQShield’s whitepaper titled “NIST PQC Standards are here – How can you keep ahead” offers a clear roadmap to follow when implementing PQC in one’s systems.
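
A minimal sketch of the hybrid idea, assuming one classical shared secret (e.g., from ECDH) and one post-quantum shared secret (e.g., from Kyber) have already been established; the HKDF-style combination below is illustrative only, and real deployments should follow the relevant NIST and IETF specifications:

```python
import hashlib
import hmac

def hybrid_session_key(classical_ss: bytes, pqc_ss: bytes,
                       info: bytes = b"hybrid-kex-demo") -> bytes:
    # HKDF-extract, then a single HKDF-expand block, over the concatenated
    # secrets: the derived key stays secure as long as either input does.
    prk = hmac.new(b"\x00" * 32, classical_ss + pqc_ss, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()
```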

Summary

Being involved in the development and standardization process of PQC algorithms, PQShield has the advantage of developing solutions ahead of the final result. PQShield is an algorithm-agnostic vendor, offering size-optimized and side-channel-resistant implementations capable of utilizing all relevant NIST PQC finalists in hardware and software. It can support companies in their transition to quantum readiness, from legacy encryption schemes to the latest standards-based solutions. Given the short deadline of the post-quantum cryptography adoption mandate, it may be in companies’ best interest to explore PQShield’s solutions for implementation.

For more details about PQShield and its offerings, visit PQShield’s website.

Also Read:

WEBINAR: Secure messaging in a post-quantum world

Post-quantum cryptography steps on the field

CEO Interviews: Dr Ali El Kaafarani of PQShield


Continued Electronics Decline
by Bill Jewell on 10-19-2022 at 2:00 pm

Third quarter 2022 data on PC and smartphone shipments shows a continuing year-to-year decline. IDC estimates PC units in 3Q 2022 were down 15% from a year earlier, matching the 2Q 2022 decline. IDC’s September forecast for PC units was a 12.8% decline for the year 2022, which is in line with the latest quarterly data. Canalys estimates 3Q 2022 smartphone shipments declined 9% year-over-year, matching IDC’s estimate of an 8.7% decline in 2Q 2022. IDC’s August projection was a 6.5% drop in smartphone units in year 2022. The final number will likely be closer to a 9% decline based on the latest data. IDC expects the PC decline to moderate to a 2.6% decline in 2023 and expects smartphones to recover to 5.2% growth.

China production data also reflects the downward trend. The three-month-average change versus a year ago (3/12 change) of China’s PC unit production peaked at 75% in March 2021. The high growth rate was due to weakness a year earlier from COVID-19 pandemic production shutdowns and strong pandemic-driven demand for PCs in 2021. PC 3/12 change turned negative in April 2022 and was -7.3% in the latest data from August. Mobile phone (including smartphones and feature phones) unit growth peaked at 35% in March 2021, primarily due to production shutdowns a year earlier. Mobile phone 3/12 change turned negative in June 2022 and was -4.7% in August. Total electronics production in Chinese currency (yuan) hit a peak 3/12 change of 36% in March 2021. From May 2021 through March 2022 the growth rate ranged between 12% and 13%, in line with pre-pandemic rates. China electronics production growth slowed to around 8% from May 2022 through August 2022.
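
For readers who want to reproduce the metric, the 3/12 change defined above can be computed from any monthly series in a few lines (a generic sketch, not Semiconductor Intelligence’s toolchain):

```python
import pandas as pd

def three_twelve_change(monthly: pd.Series) -> pd.Series:
    """3/12 change: three-month moving average of a monthly series
    versus the same average twelve months earlier, in percent."""
    ma3 = monthly.rolling(3).mean()
    return (ma3 / ma3.shift(12) - 1.0) * 100.0

# Usage: a series of unit volumes indexed by month, e.g.
# units = pd.Series(values, index=pd.period_range("2020-01", periods=36, freq="M"))
# print(three_twelve_change(units).tail())
```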

Electronics production in other key Asian countries has also been showing slower growth. South Korea reported 3/12 change in electronic production in the 20% to 24% range from June 2021 through May 2022. Since then, growth has been slowing, reaching 4.6% in August 2022. Vietnam’s growth trend has been volatile, but it was over 20% from April to June 2022. In September 2022 growth decelerated to 4.6%. Japan electronics production has been declining since October 2021. Taiwan is the exception, with electronics production growth reaching 24% in August 2022.

Unlike most of Asia, the U.S. and Europe have experienced accelerating growth in electronics production. U.S. 3/12 change was 7.8% in August 2022 and has been on an accelerating growth trend since December 2021. Electronics production trends in the United Kingdom and the 27 countries of the European Union have been volatile over the last few years due to Brexit and the pandemic. In 2022, UK electronics production has been on an upward trend, reaching 14% growth in August. The EU 27 showed a decline in electronics production for most of 2022 but returned to 4% growth in August.

Automotive has been a growth area for electronics in 2022 as other key drivers have slowed down or declined. However, automotive is beginning to show signs of weakening. S&P Global Mobility’s forecast from this week has global light vehicle production growing 6.0% in 2022 and 4.2% in 2023. The 2022 projection is up from its July forecast of 4.7% primarily due to improvements in supply chains, particularly in China. However, the 2023 forecast of 4.2% growth is less than half of the July forecast of 8.8% growth. Production in 2021 and 2022 has been limited on the supply side due to shortages of semiconductors and other components. Production in 2023 will be limited due to weakness on the demand side. High inflation, rising interest rates and the risk of recession are expected to negatively impact consumer demand for new vehicles in 2023.

The economic outlook remains uncertain. An August 2022 survey of chief economists by the World Economic Forum showed 73% believed a global recession was likely in 2023. Bloomberg’s October survey of 42 economists puts the probability of a U.S. recession in the next 12 months at 60%; Bloomberg’s economic model, however, shows a 100% probability. Our current forecast for the semiconductor market is a 6% decline in 2023, with most of the risk on the downside.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Also Read:

Semiconductor Decline in 2023

Automotive Semiconductor Shortage Over?

Electronics is Slowing


Podcast EP114: The Power of the Alchip Business Model, Today and Tomorrow
by Daniel Nenni on 10-19-2022 at 10:00 am

Dan is joined by Charmien Cheng, Director, Business Development, Alchip Technologies North America. She chairs the company’s investment committee and is responsible for strategic IP alliances, supply chain partnerships, and downstream customer relationship management.  She also leads the marketing team and is responsible for global marketing and marketing communications programs.

Cheng is a  two-decade veteran of the global IC industry where she is widely respected for her experience across a broad spectrum of responsibilities, including supply chain management, account management, program management, process R&D and foundry operations.

Dan explores Alchip’s business model and how it addresses the needs of a wide range of customers, including the new requirements of large system companies. The current chip development model is discussed as well as future trends, including chiplets.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The Increasing Gap between Semiconductor Companies and their Customers
by Rahul Razdan on 10-19-2022 at 6:00 am

Semiconductors sit at the heart of the electronics revolution, and the scaling enabled by Moore’s law has had a transformational impact on electronics as well as society.   Traditionally, the relationship between semiconductor companies and their customers has been a function of the volume driven by the customer.  In very high-volume markets such as the consumer marketplace, large numbers of staff from semiconductor companies work with their system counterparts to effectively co-design the system product.

However, for non-consumer markets, the semiconductor interface largely consists of datasheets, websites, and reference designs. Surprisingly, this picture has not changed much since the 1980s. Meanwhile, in the intervening decades, Moore’s law has enabled the construction of much more complex devices with incredible flexibility and support for higher levels of abstraction (AI, SW, and others). Yet the primary method of communication between semiconductor companies and their system customers remains English text, and this interface is breaking down.

In this article, we will discuss the changing nature of System PCB design for non-consumer system designers, the current organization of the electronics design chain, the tremendous gaps being created in the current situation, and the outlines of a solution.

 

System PCB Design:

Figure 1: The modern System PCB design process for non-consumer markets

As Figure 1 shows, in this non-consumer electronics flow, the electronic design steps consist of the following stages:

  1. System Design: In this phase, a senior system designer maps their idea of function to key electronic components. In picking these key components, the system designer often makes choices with the following considerations:

a. Do these components conform to any certification requirements in my application?

b. Is there a software (SW) ecosystem which provides so much value that I must pick hardware (HW) components within a specific software architecture?

c. Are there AI/ML components critical to my application which imply the choice of an optimal HW and SW stack best suited for my end application?

d. Do these components fit in my operational domain of space, power, and performance at a feasibility level of analysis?

e. Observation: This stage of design determines the vast majority of immediate and lifecycle cost. For a semiconductor company, if you do not participate at this level of design, you may be locked out for years.

f. Today, this stage of design is largely unstructured, relying on generic personal productivity tools such as Excel, Word, PDF (for reading 200+ page datasheets), and of course Google search.

  2. System Implementation: In this phase, the key components from the system design must be refined into a physical PCB design. Typically driven by electrical engineers within the organization or sourced from external design services, this stage of design has the following considerations:

a. PCB Plumbing: Combining the requirements of key components with the external-facing aspects of the PCB is the job at this stage of design. This often involves the physical layout of the PCB, defining the power/gnd/clk architecture, and any signal-level electrical work. This phase also involves part selection, but typically of low-complexity (microcontrollers) and analog parts. Today, this stage of design is reasonably well supported by the physical design, signal integrity, and electrical simulation tools from traditional EDA vendors such as Cadence, Zuken and Mentor Graphics, and part selection is reasonably well supported by web interfaces from companies such as Mouser and Digikey. However, this stage of design is easily displaced by a total solution package from the high-level system design phase: if someone offers a subsystem which solves a real system design problem and includes the PCB plumbing, there is no reason to recreate that plumbing.

b. Bootup Architecture: As the physical design is put together, a bootup architecture is defined which typically proceeds through electrical stability (DC_OK), testability, micro-code/FPGA ramp-up, and finally a live operating system. Typically connected to this work is a large range of tools to help debug the PCB. The combination of all of these capabilities is referred to as the Board Support Package (BSP). BSPs must span all the abstraction levels of the System PCB, so today they are often “cobbled” together from a base of tools with parts sitting on various websites.
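
As a compact illustration of the bring-up sequence just described (the stage names here are ours, not a standard BSP vocabulary):

```python
from enum import Enum, auto

class BootStage(Enum):
    DC_OK = auto()          # power rails electrically stable
    TESTABILITY = auto()    # self-test and debug hooks available
    FIRMWARE_RAMP = auto()  # micro-code / FPGA configuration loaded
    OS_LIVE = auto()        # operating system up and running

BRINGUP_ORDER = list(BootStage)  # DC_OK -> TESTABILITY -> FIRMWARE_RAMP -> OS_LIVE
```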

This design flow contrasts with the System PCB flow of the 1980s, where the focus of a System PCB was largely to build a function (for example, a bit-sliced CPU). The actual semiconductors used to build the function were of moderate complexity, and the communication mechanism of a datasheet was adequate. Today, the job of a System PCB designer has dramatically shifted to managing complex fabrics within complex HW/SW ecosystems (AI is coming next).

Yet the primary method for communicating technical information is still large volumes of English sitting in datasheets and websites. Further, most of the non-consumer marketplace has requirements for long lifecycles (LLC), which are also at odds with the core of the consumer-focused semiconductor chain.

Moreover, the organization of the distribution and support network from semiconductor companies is not very helpful in resolving this major issue.

 Current Organization of the Electronics Design Chain:

Figure 2: Electronics Design and Supply Chain

Over the decades, the electronics supply chain has evolved into a complex web of companies which connects semiconductor makers to end customers. The major players in the ecosystem are the semiconductor companies themselves, Electronic Manufacturing Services (EMS) companies, electronics distributors, and electronic design automation companies. Let us examine each and their roles for System PCB customers.

  1. Semiconductor Companies: Semiconductor companies with significant focus on the non-consumer market segments (Texas Instruments, Analog Devices, etc.) often have tens of thousands of product SKUs. As previously discussed, from a distribution point of view, semiconductor companies focus on high-volume customers. These are the customers which get the attention of their application engineering organizations. For the broad market, broadcast mechanisms such as websites are used, and semiconductor companies try to enable distributors to support the broader marketplace. As parts have become more complex, semiconductor companies often offer helper widgets (software, spreadsheets, websites) to aid in the usage of their parts. However, since there is no unified coherent structure, System PCB designers face a dizzying range of tools/widgets/spreadsheets.
  2. Electronics Distributors:  Companies such as Avnet and Arrow provide a marketing and distribution function for semiconductor manufacturers. In the context of LLC, they can provide inventory services and often provide a layer of other support services.  Of course, their ability to support semiconductor parts is limited by the medium of communication from semiconductor companies (datasheets, websites, reference designs) to them.
  3. EMS Companies:  EMS companies such as Flextronics and Jabil Circuit engage with LLC customers to assemble and build the Printed Circuit Boards (PCBs). Originally, many EMS companies were spinouts from traditional OEMs such as IBM or HP who wanted to shed low-margin business (EMS companies have margins in the low single digits).  Over time, EMS companies have moved from serving consumer customers to LLC customers. LLC customers are generally low volume and benefit from the aggregation manufacturing function provided by EMS companies.  However, much like electronic distributors, EMS companies are limited by the medium of communication from semiconductor companies.
  4. EDA Companies: Electronics Design Automation (EDA) companies such as Cadence Design Systems, Synopsys, and Mentor Graphics (Siemens) provide the key design tools and some level of the design and verification IP required to build electronic systems. Today, the vast majority of the business in EDA is focused on semiconductor design. Even the “systems” divisions of these companies focus on issues such as HW/SW codesign in the context of system-on-chip semiconductor implementations. In the System PCB world, the core design tool set consists of traditional PCB physical design tools (OrCAD, Zuken, PADS, Altium), independent design tools from FPGA vendors, and many chip specs off of semiconductor company websites. Today, EDA companies do not offer anything that helps with the complex system PCB design process.

Overall, the current sales, distribution, and support structure enabled by semiconductor companies creates even more distance between themselves and their system PCB customers.

Implications of the gap between Semiconductor companies and their System Customers:

The current situation is not good for anyone.  For semiconductor companies, the current situation has the following drawbacks:

  1. Lack of market insight: The distance between semiconductor companies and their customers represents a tremendous loss of information relative to understanding issues with current products, true differentiation for their products, and the potential for future product integrations.
  2. Communicating differentiation: Semiconductor companies have the ability to integrate functionality, build supporting tools to take advantage of this functionality, and build whole solution stacks. However, the current datasheet/website method of technical knowledge delivery makes it very difficult to communicate this value. This is especially the case if a combination of chip functionality and SW programmability provides unique value.

For system PCB customers, the current situation faces the following drawbacks:

  1. Complexity: The key issue is managing the massive complexity of information coming from semiconductor companies. Two-hundred-page datasheets are not unusual. This is the reason the biggest indicator of which chip a system PCB designer will use is: “Have they used the chip before?” This is unfortunate, because compelling solutions are often missed.
  2. Support: Most system PCB customers are in a situation where they cannot count on any real access to technical support resources unless they can drive high volumes. For everyone else, understanding complex semiconductor systems must be done without any real direct support. This is one of the reasons that independent discussion groups are popping up in the System Design community.
  3. Constraint Management: The information on semiconductors sits in datasheets and a whole host of websites, which also contain increasing amounts of dynamically changing soft information (OS, driver, and supporting tool updates, AI stack, etc.). Tracking this is tedious, utterly inefficient, and a place of great pain for System PCB customers.

Overall, System PCB customers face a chaotic, complex, and dynamic semiconductor marketplace. Their “tools” for managing this process are Excel, Word, and Google search. This is an inefficient and painful situation which creates a great headwind for product development velocity. This is bad for everyone involved: System PCB companies, semiconductor companies, and perhaps most importantly society.

What are the outlines of a solution?

The solution to complexity has always been the addition of organization, structure, and automation – the hallmarks of the EDA industry. There is a need for an EDA tool which manages System PCB design. The key characteristics of such a tool would be:

  1. System PCB Design Flow: The EDA tool would follow the modern system design process – specifically, the flow of certification, constraints, respect for AI/SW ecosystems, and more.
  2. Rationalization and Organization of Semiconductor Function: System PCB designers think in terms of function, and semiconductors must be organized in this manner.
  3. Integration of Reference Designs and Helper Tools: Within the context of the above step, reference designs and helper tools can be organized rationally.

All of the above is very doable with a little cooperation between EDA, System PCB, and semiconductor companies. The result would be a massive acceleration in system design productivity, which is good for everyone involved.
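
As a toy illustration of points 2 and 3 above, a function-oriented part database with machine-checkable constraints might look like the sketch below; every field name is invented for illustration and not drawn from any existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class PartRecord:
    part_number: str
    function: str                                       # e.g. "motor-control MCU"
    certifications: list = field(default_factory=list)  # e.g. ["AEC-Q100"]
    max_power_mw: float = 0.0
    sw_ecosystem: str = ""                              # e.g. "Zephyr", "FreeRTOS"

def shortlist(parts, function, cert, power_budget_mw):
    # Filter by function, required certification, and power budget --
    # the checks a system designer does today by hand across datasheets.
    return [p for p in parts
            if p.function == function
            and cert in p.certifications
            and p.max_power_mw <= power_budget_mw]
```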

Acknowledgements: Special thanks to Anurag Seth for co-authoring this article.

Also Read:

Balancing Analog Layout Parasitics in MOSFET Differential Pairs

STOP Writing RTL for Registers

The CHIPS and Science Act, Cybersecurity, and Semiconductor Manufacturing


CEVA’s LE Audio/Auracast Solution
by Kalar Rajendiran on 10-18-2022 at 10:00 am

CEVA announced that its RivieraWaves Bluetooth® 5.3 IP family now supports Auracast™ broadcast audio. The technology behind Auracast is LE Audio broadcasting, and Auracast is expected to transform the shared audio experience. With Auracast, an audio stream is broadcast over the air by means of Bluetooth Broadcast Isochronous Stream (BIS) packets carrying audio compressed with the LC3 codec. Headsets, earbuds, hearing aids, and other LE Audio enabled devices can receive these BIS packets once synchronized to the BIS stream.

A recently published Bluetooth SIG 2022 market update forecasts that the number of Bluetooth audio streaming device shipments will grow at a 7% CAGR to reach 1.8 billion annual shipments by 2026, with earbud shipments growing 3X in that time.

The intersection of the above market potential and the attractiveness of Auracast broadcast audio presents tremendous opportunities for product companies in the audio space. This post will review the Auracast opportunity and how CEVA’s offerings can help companies get their audio products to market quickly.

Auracast Broadcast Audio

Auracast technology enables an audio source to broadcast an audio stream to an unlimited number of Bluetooth audio sink devices. Bluetooth audio broadcasts can be open, allowing any in-range sink device to participate, or closed, permitting only sink devices with the correct passkey to participate. Thus, Auracast can connect an unlimited number of audio devices to a single source, such as a smartphone, laptop, or public address (PA) system.
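
A small sketch of what a broadcast source configuration conceptually contains; the field names are invented for illustration and are not the Bluetooth SIG API, though the LC3 frame durations and the Broadcast Code used to close a broadcast are real LE Audio concepts:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AuracastBroadcast:
    broadcast_name: str
    lc3_frame_duration_us: int = 10_000     # LC3 defines 7.5 ms and 10 ms frames
    sampling_rate_hz: int = 48_000
    encrypted: bool = False                 # closed broadcasts are encrypted
    broadcast_code: Optional[bytes] = None  # sinks need this key to join

open_tv = AuracastBroadcast("Gate 12 TV")
closed_share = AuracastBroadcast("My Phone", encrypted=True,
                                 broadcast_code=b"0123456789abcdef")
```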

Bluetooth’s Auracast webpage presents the following high level use cases.

Share Your Audio

Auracast broadcast audio will let you invite others to share in your audio experience, bringing us closer together.

Unmute Your World

Auracast broadcast audio will enable you to fully enjoy televisions in public spaces, unmuting what was once silent and creating a more complete watching experience.

Hear Your Best

Auracast broadcast audio will allow you to hear your best in the places you go and is expected to become the next generation assistive listening technology, improving audio accessibility and promoting better living through better hearing.

The Auracast webpage also enumerates the ease of joining and experiencing an Auracast broadcast. See below.

    • Searching: One way to find and join an Auracast™ broadcast will feel very similar to how you search for and connect to Wi-Fi networks today. 
    • Scanning: A simple scan of a QR code will allow you to join an Auracast™ broadcast effortlessly.
    • Tapping: A tap is now all it takes to pay. In much the same way, tap-to-hear could make access to Auracast™ broadcasts quick and easy.

Market Opportunity

Given the high level use cases and the ease of experiencing Auracast broadcast audio, many innovative use cases and products are expected to arrive. The obvious use cases are hearing flight information at airports, listening to TV in public places, sharing music with friends and assisted listening on hearing devices.

To enjoy the Auracast experience, both the transmitter and the receiver side need to be LE Audio compliant. On the transmit side, modern smartphones will be upgradeable with a software update. Older equipment can be enhanced with an Auracast bridge dongle. Initial consumer products supporting Auracast broadcast audio on the receive side are expected to hit the market by the end of the year.

To benefit from this rapid market growth, a turnkey development platform is needed to meet aggressive time-to-market demands.

CEVA Eases Auracast Broadcast Audio Implementation

CEVA provides embedded solutions and a wide range of audio capabilities for building advanced ultra-low-power wireless earbuds and hearing aids. The RivieraWaves™ Bluetooth IP family is a comprehensive suite of IPs for embedding Bluetooth 5.3 into a chip. For more details, refer to this page. CEVA’s sensor fusion capabilities help enhance a customer’s integrated audio product, delivering a stable audio stream for a better hearing experience.

Bluebud IP Platform

CEVA’s Bluebud™ is a self-contained, feature-rich IP platform to streamline the development of True Wireless Stereo (TWS) earbuds, wireless headsets, speakers, smartwatches and smart glasses. Bluebud combines CEVA’s Bluetooth, audio and sensing solutions in a single, integrated solution, along with a comprehensive list of audio codecs, voice processing and motion sensing algorithms.

For more details about the Bluebud Turnkey Bluetooth Audio IP platform, refer to this page.

About CEVA

CEVA is the leading provider of Bluetooth platform IP solutions for integration into SoCs, powering billions of Bluetooth-enabled devices to date. CEVA offers both Bluetooth LE and Dual Mode IP platforms, including baseband controller, radio, and full software protocol stack, compliant with Bluetooth 5.3, LE Audio, and Auracast.

Also Read:

5G for IoT Gets Closer

LIDAR-based SLAM, What’s New in Autonomous Navigation

Spatial Audio: Overcoming Its Unique Challenges to Provide A Complete Solution