
Samsung Ugly as Expected Profits off 69% Winning a Game of CAPEX Chicken
by Robert Maire on 01-09-2023 at 6:00 am


-Samsung off the same chip cliff as Micron- “No skid marks”
-Samsung may be winning at a game of “Capex Chicken”
-No expectation of recovery any time soon – Consumers weak
-2023 a write-off- Recovery will be delayed if spending isn’t cut

Samsung’s worst quarter in 8 years is no surprise

Samsung pre-released its Q4 earnings, calling out a 69% drop in profits. In a cosmic coincidence, Micron said a few weeks ago that its revenues were down 69% (profits down even more, into a loss).

Obviously memory pricing is a primary culprit along with general overall weakness in consumer and corporate spending.

It’s not like we didn’t see this coming; we have known and talked about the memory industry falling off for over 6 months now, so no one should be surprised. Samsung said memory pricing was off in the mid-20% range in the quarter. As we have stated in prior notes, it will likely get worse before it gets better….

Q1 is always the seasonally weakest quarter of the year for chips

Chip demand, and therefore pricing, is always best in the fall with back-to-school buying, holiday electronics and the release of the new iPhone. Q1 is always the worst due to the post-holiday depression, Chinese New Year, etc. We have seen this pattern for decades, but when the tide is low it becomes much more apparent, as is the case now. Simply put, Q1 will get uglier.

If capex is not cut, the depression will last much longer and get much worse, potentially fatally wounding some players

It doesn’t take a rocket scientist to understand that cutting supply is the way to support higher pricing. OPEC understands this very well. Aside from just cutting wafer starts, cutting capex is the way to get there, as it also saves more cash when you are likely already losing money (as is the case with Micron).

That is the rational thing to do unless you have other ideas in mind such as gaining market share or hurting competitors by sucking the oxygen (profits) out of the room. If you are big enough, like Samsung, you perhaps can try to price the market for memory where it is barely profitable for you but the smaller players are under water.

But Samsung wouldn’t do something that nasty, would they?

Samsung may win at a game of “Capex Chicken”

“Chicken” is a game in which protagonists hurtle directly at one another in souped-up cars; the first one to swerve away to avoid a collision is the loser.

In the current three-way game of “Capex Chicken” in the memory industry, Samsung is driving a massive 18-wheeler, SK Hynix is driving a dump truck and Micron is the pickup truck with Idaho plates and sacks of Simplot’s potatoes in the back.

Micron, with diminishing funds in the bank, realized it couldn’t win this game of chicken and swerved off the collision course a while ago by cutting capex to sustenance levels.

A picture is worth a thousand words

Capex comparison of Samsung/SK Hynix/Micron/Intel/TSMC

So far Samsung shows no signs of slowing capex (but has done so in the past). The most rational thing to do on Samsung’s part would be to announce a capex reduction so investors could breathe a sigh of relief and assume that supply will be coming down and pricing will get better eventually. But Nooooooo……

Samsung has put the proverbial pedal to the metal. Whether it’s true or just a head fake to scare off the competition, it may have the same effect of further trashing pricing as buyers expect a flood of memory coming.

It may be the case of stepping on the accelerator to make everyone else think you are crazy and swerve out of the way of the accelerating 18-wheeler. We do think even Samsung will slow, but not before they do some damage, as even they are not suicidal.

For the moment we will leave Chinese Memory manufacturer Yangtze out of the Capex Chicken game as they are the equivalent of Chinese electric vehicle maker BYD, but without brakes, so they won’t stop and can’t stop, until they hit the brick wall of US sanctions.

Potential Double Whammy on equipment companies

Just when semiconductor equipment companies are putting out their “hair on fire” fire drill from the October China embargo, they could be getting hit with capex reductions in the memory market.

The last time Samsung cut capex, they did so very abruptly and without any warning but only slowed for a short while, a couple of quarters.

Much as with the China situation before it, it may be hard to get a handle on what impact the memory market collapse will have on the major equipment makers: AMAT, LRCX, TEL and KLAC.

Expect more delays and pushouts

More and more fab projects and equipment purchases will be pushed out or delayed. Equipment makers will have to do some fast shuffling of the order books to try to pull in orders to fill voids and changes.

So far, the unusually long backlog going into the downturn has been a buffer that has allowed the equipment companies some breathing room and some wiggle room. But as backlog starts to fade away it will become much more difficult.

Lam had billions of dollars worth of unfinished goods waiting in crates in the field for parts and completion. That buffer of money in the field will cushion the downturn but for only so long. Once those systems are complete and signed off there is not a lot of buffer left to make up for declining orders.

Intel has pushed back most of the fab projects it announced. TSMC seems to be charging ahead. Samsung just completed a major fab not too long ago and hasn’t said anything about its US projects. Micron simply doesn’t have the financial strength right now as net cash fades, and has already slashed capex, so any new fabs will be delayed significantly. GlobalFoundries has no “real” plans for another fab in the US, only Asia.

To EUV or Not To EUV….that is the question for memory makers

One of our major concerns in the memory market is a repeat of the EUV “haves” and the “have nots” we saw in the foundry/logic space. When TSMC went whole hog into EUV before any of its competitors, it gained a huge lead in technology over everyone else; no one else even comes close to this day. Their lead was in fact so overwhelming that GloFo simply gave up, canceled its EUV program and relegated itself to the dustbin of technology. The great Intel now has to source chips from TSMC.

EUV versus no EUV is a very wide technology chasm.

In the memory space, Samsung and SK Hynix are the EUV “haves” and Micron and Yangtze the EUV “have nots”. This will not change for Yangtze due to the embargo and likely not in the near term for Micron due to financials.

Once the smoke clears and the current oversupply of memory chips finally goes away and we are off to the races again, Micron and Yangtze could be left in the dust much as GloFo was, due to their lack of EUV. Maybe ASML, with all its excess cash, should start an EUV leasing or “rent to own” business for those less fortunate.

The stocks

Obviously, even though chip stocks rallied today, the Samsung news should not be taken in any way, shape or form as positive. It is just a confirmation of exactly how bad the chip situation is, and it is getting worse.

There is no calculable end in sight; it could be 3 quarters, 4 quarters, 5 quarters, 2 years or more. It’s just unknowable right now.

We think quarterly reports from the semiconductor industry as well as the equipment industry will likely be similarly ugly. We would not assume, as many investors may, that the worst is over, it’s the bottom and it’s time to buy…. It’s not.

We are not in a one or two quarter downdraft; we have a multi-faceted, deep downturn of a kind we haven’t seen in a decade.

Secular demand and macro trends remain positive at a high level view but the near term (year or more) remains very rough.

We are not attracted to any of the stocks based on a false “the worst is over” rally. We are not at or near a bottom level for the industry, especially going into the seasonally weakest quarter.

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Micron Ugly Free Fall Continues as Downcycle Shapes Come into Focus

AMAT and Semitool Deja Vu all over again

KLAC- Strong QTR and Guide but Backlog mutes China and Economic Impact

LRCX down from here – 2023 down more than 20% due to China and Downcycle


Podcast EP135: Democratizing HPC & AI
by Daniel Nenni on 01-06-2023 at 10:00 am

Dan is joined by Doug Norton, VP of Business Development for Inspire Semiconductor, an Austin-based high performance computing chip design company.  He is also the President of the Society of HPC Professionals, a vendor neutral, non-profit organization educating and connecting the High Performance Computing user community.

Doug explains the core technology strengths, product plans and mission of Inspire Semiconductor. He outlines the products that will soon be on the market that provide massive compute support for AI-augmented high-performance computing (HPC). Inspire delivers very high performance per watt capability with a flexible, easy to program interface. These qualities will allow Inspire to help many types of companies and applications in their journey to AI-augmented HPC.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Chris Eliasmith and Peter Suma, of Applied Brain Research Inc.
by Daniel Nenni on 01-06-2023 at 6:00 am

Peter Suma and Dr. Chris Eliasmith

Professor Chris Eliasmith (right side) is co-CEO and President of Applied Brain Research Inc. Chris is also the co-inventor of the Neural Engineering Framework (NEF), the Nengo neural development environment, and the Semantic Pointer Architecture, all of which are dedicated to leveraging our understanding of the brain to advance AI efficiency and scale. His team has developed Spaun, the world’s largest functional brain simulation. He won the prestigious 2015 NSERC Polanyi Award for this research. Chris has published two books, over 120 journal articles and patents, and holds the Canada Research Chair in Theoretical Neuroscience. He is jointly appointed in the Philosophy and Systems Design Engineering faculties, as well as being cross-appointed to Computer Science. Chris has a Bacon-Erdos number of 8.

Peter Suma (left) is a co-CEO of Applied Brain Research Inc. Prior to ABR, Peter led start-ups in robotics and financial services as well as managed two seed venture capital funds. Peter holds degrees in systems engineering, science, law and business.

What is ABR’s vision?
ABR’s vision is to empower the world’s devices with intelligent, concept-level conversations and decision-making abilities using our innovative Time Series Processor (TSP – https://appliedbrainresearch.com/products/tsp/) chips.

Whether it’s enabling full voice and language processing on a small, low-power chip for consumer electronics and automotive applications, processing radar signals faster and for less power, bringing cloud-sized AI signal processing on to devices, or integrating situational awareness AI to give robots the ability to understand and respond to complex commands to interact with people in a natural and intuitive way, our TSP chip family is poised to revolutionize the way devices sense and communicate.

ABR has been delivering advanced AI R&D projects since 2012 to clients including the US DoD, Intel, BMW, Google, Sony and BP. Some examples of our work include developing the world’s largest functional brain simulation, building autonomous drone controllers for the US Air Force, and building small, powerful voice control systems for cars, appliances and IoT devices. Our TSP chips are our latest innovation as we work to fit more and better AI models into devices to give devices better artificial ‘brains’.

How did ABR begin?
ABR was founded out of Dr. Chris Eliasmith’s lab at the Centre for Theoretical Neuroscience at the University of Waterloo. Applied Brain Research Inc. (ABR) is now a leading brain-inspired AI engineering firm. Our AI engineers and neuroscientists develop technologies to improve AI inspired by work in AI and brain research at the lab.

You mentioned you have some recent developments to share. What are they?
We are very excited to announce that ABR has been admitted to the ventureLab and Silicon Catalyst Incubator programs to support the development of our new Time Series Processor (TSP) family of edge AI chips, which allow cloud-sized speech and signal AI models to run at the edge at low cost, power, and latency. We will be exhibiting at CES in the Canada-Ontario Booth in the Venetian Expo Hall D at booth number 55429 from Jan 5th to Jan 8th, 2023, in Las Vegas. ABR is also a CES Innovation Awards Honoree (https://appliedbrainresearch.com/press/2022-11-21-ces-innovation-awards/) this year.

Tell us about these new chips you are building?
Most electronic devices already do, or will soon have to, utilize AI to keep pace with the smart features in their markets. More powerful AI networks are larger AI networks. Today’s edge processors are too small to run large enough AI models to deliver the latest possible features, and CPUs and GPUs are too expensive for many electronic devices. Cloud AI is also expensive, and for many products a connection cannot be guaranteed to be available and is often not configured correctly by the customer.

What device makers need is a small, inexpensive, low-power chip that can run large AI models to enable the products to lead their respective markets. A very efficient, economical, and low-power way to achieve this is to compress large AI models and design a computer chip that runs these compressed models.

ABR has done exactly this with a new patented AI time-series compression algorithm called the Legendre Memory Unit or LMU. With this compression algorithm we have developed a family of small but very powerful time series processing AI processors that run speech, language and signal inference AI models in devices that previously would have required a cloud server.

This enables more powerful and smarter devices with low power consumption. Batteries last longer, devices converse in full natural language sentences, and sensors process more events with greater accuracy. ABR is enabling a new generation of intelligent devices with our revolutionary low-power, low-cost and low-latency powerful AI Time Series Processor (TSP) for AI speech, language and signal processing.

What are the chips in the ABR TSP family?
There are currently two chips in the ABR TSP family. The Chat-Chip TSP and the Signal-TSP.

The ABR Chat-Chip TSP is the world’s first all-in-one full voice dialog interface, low-power chip. Low-cost and low-power speech chips until now have been limited to keyword-spotting AI models which are limited to understanding 50 or so words. These chips deliver those oh-so-frustrating speech interfaces in cars, toys and other speech-enabled, sometimes-disconnected, low-BOM-cost devices. ABR’s Chat-Chip TSP replaces those chips, for the same cost, with a full natural language experience, dramatically upgrading the customer’s experience.

The ABR Chat-Chip enables a full natural language voice assistant in one chip, including noise filtering, speech recognition (ASR), natural language processing (NLP), dialog management, and text-to-speech (TTS) AI. The ABR Chat-Chip TSP can run cloud-sized speech and language AI models in one chip, consuming less than 50 milli-watts of power. This combination of low-cost, low-power and large speech and language AI model processing means the ABR Chat-Chip TSP brings full Alexa-like natural language dialog to all devices including devices that, until now, could never have implemented full language dialog systems due to cost, latency and model size limitations when using existing chips.

Cameras, appliances, wearables, hearables, robots, and cars can all carry on complex, real-time, full language dialog with their users. People can hear better with larger de-noising and attention-focusing AI models in earpieces. People can interact with devices, more privately, instantly, and more hygienically without touching buttons. The many robots in our lives now and the near future can interact verbally without a cloud connection. Devices can also explain to users how to use them, offer verbal troubleshooting, deliver their user manuals verbally, offer hygienic, touchless interfaces, handsfree operation, and market their features to consumers. All of this without needing an internet connection, but able to take advantage of one if present. Voice interfaces delivered locally are more private, as they do not send sound recordings to the cloud, eliminating the risk of leaking background noise and emotional context. As well, local dialog processing is faster, without the latency of a cloud network. Local dialog processing reduces device makers’ costs per device and in the cloud, by removing large portions of the cloud processing needed for voice interfaces and performing the local processing at up to 10x less in-device processor cost.

The ABR Signal-TSP performs AI signal pattern and anomaly detection by running larger AI models, faster and for less power than existing CPUs and GPUs. In a market where larger AI models are typically much more accurate AI models, device makers need inexpensive, low-power, large AI model processors to make their devices smarter than the competition’s. ABR’s Time Series Processors (TSPs) cost just a few dollars but run large AI models that otherwise would require a full CPU or GPU costing between $30 and $200 USD to execute the same workload in real-time. ABR’s Signal TSP typically reduces power consumption by 100x, latency by 10x and cost by 10x over functionally equivalent CPUs or GPUs.

How are the TSP chips programmed?
ABR supports the TSP chips with an API and an AI hardware deployment SaaS platform called NengoEdge (edge.nengo.ai). AI models can be imported from TensorFlow and then optimized for deployment to the TSP and other chips using NengoEdge. With NengoEdge you can pick a network, set various hardware-aware optimizations, and then have NengoEdge train and optimize the network using hardware specific optimizations, including quantization and utilization of any available AI acceleration features, such as the LMU fabric if a TSP is targeted. The result is an optimal packing of the AI network onto the targeted chips to deliver the fastest, lowest-power and most economical solution for delivering the chosen network onto the target hardware. All without buying each chip to test or learning the details of each chip. Users can see the TSP shine on all time series workloads, for example for voice assistants or radar processing AI systems.
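
To make this concrete, below is a minimal, generic TensorFlow sketch of the kind of hardware-aware optimization described above (quantization-aware training followed by conversion for an edge target). It uses the public TensorFlow Model Optimization toolkit rather than the NengoEdge API, and the model shape, class count and dataset are placeholders chosen for illustration.

```python
# Generic quantization-aware training sketch (TensorFlow Model Optimization toolkit).
# Illustrates hardware-aware optimization in general, NOT the NengoEdge API;
# the toy model and placeholder dataset below are assumptions for illustration.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# A toy keyword-spotting style network standing in for an imported model.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(49, 40)),   # e.g. 49 frames x 40 MFCC features
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(12, activation="softmax"), # 12 keyword classes (placeholder)
])

# Re-wrap the model so weights and activations train with simulated quantization.
q_aware_model = tfmot.quantization.keras.quantize_model(model)
q_aware_model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
# q_aware_model.fit(train_ds, epochs=3)   # train_ds: placeholder tf.data pipeline

# Convert to a compact flatbuffer suitable for an edge accelerator or MCU.
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```

A platform like NengoEdge additionally knows the target chip’s acceleration features (such as the LMU fabric on a TSP) and folds those into the optimization, which is the part a generic flow cannot do.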

Can you tell us more about your LMU compression algorithm?
The Legendre Memory Unit (LMU) was engineered by emulating the algorithm used by time cells in the human brain and specifically how time cells are so efficient at learning and identifying event sequences. The LMU makes the ABR TSP’s large gains in efficiency, performance and cost possible for inferencing all time series and sequence-based AI models. We patented the LMU worldwide in 2019 and announced it at NeurIPS in December 2019. We then published the software versions of the LMU on our website and GitHub in 2020. Other groups have since published many papers using the LMU and achieving state-of-the-art results on time series workloads. We have many clients who have licensed the LMU software running on CPUs, GPUs or MCUs for signal and speech processing in devices such as wearables, medical devices and drone controllers. Many of those are now waiting to move to a TSP chip to extend their battery life and support even larger models at lower power, cost and latency levels.
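
For readers who want to see the underlying math, here is a minimal NumPy/SciPy sketch of the continuous-time Legendre delay system from the open NeurIPS 2019 LMU paper, discretized for a digital implementation. This reflects only the published formulation, not ABR’s proprietary TSP implementation, and the order, window length and input signal are arbitrary values chosen for illustration.

```python
# Sketch of the Legendre delay system behind the LMU (per the NeurIPS 2019 paper).
# Open, published math only -- not ABR's proprietary TSP implementation.
import numpy as np
from scipy.signal import cont2discrete

def lmu_matrices(order: int, theta: float, dt: float):
    """State-space (A, B) compressing the last `theta` seconds, ZOH-discretized at dt."""
    i = np.arange(order)
    # Continuous time: dm/dt = A m + B u, with
    # A[i,j] = (2i+1)/theta * (-1 if i < j else (-1)**(i-j+1)), B[i] = (2i+1)(-1)**i / theta
    A = np.where(i[:, None] < i[None, :], -1.0, (-1.0) ** (i[:, None] - i[None, :] + 1))
    A = A * (2 * i[:, None] + 1) / theta
    B = ((2 * i + 1) * (-1.0) ** i)[:, None] / theta
    Ad, Bd, *_ = cont2discrete((A, B, np.eye(order), np.zeros((order, 1))), dt, method="zoh")
    return Ad, Bd

Ad, Bd = lmu_matrices(order=12, theta=0.5, dt=0.001)   # 0.5 s memory window (example)
m = np.zeros((12, 1))
for u in np.sin(np.linspace(0, 10, 1000)):             # toy input signal
    m = Ad @ m + Bd * u   # m holds a Legendre-basis compression of the recent input
```

Roughly speaking, part of the appeal in hardware is that this small, fixed linear recurrence stands in for the much larger learned recurrent weight matrices of conventional RNNs.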

When will the TSP chips be available?
We are working to have first silicon TSP chips for both the Chat-Chip and Signal designs available by Q1 2024. We are signing pre-orders and design LOIs now. Contact Peter Suma, co-CEO of ABR, at peter.suma@appliedbrainresearch.com or at 1-416-505-8973 to learn how we can supercharge your devices to be the smartest in their class.

Also Read:

CEO Interview: Ron Black of Codasip

CEO Interview: Aleksandr Timofeev of POLYN Technology

CEO Interview: Coby Hanoch of Weebit Nano

CEO Interview: Jan Peter Berns from Hyperstone


The Smartphone Snitch in Your Pocket
by Roger C. Lanctot on 01-05-2023 at 6:00 am


The story in the New York Times came with a sensational headline: “Couple in Car Survive 300-foot Fall into a Canyon.” The canyon in question was Monkey Canyon in the Angeles National Forest outside Los Angeles and the couple survived, so the story goes, thanks to their satellite-connectivity-enhanced iPhone.

This is the kind of story that can transform what consumers think they know about SOS calling. It might lead one to believe that automatic crash notification, of the sort provided by OnStar-like services, is unnecessary in a world populated with iPhones and Google Pixel phones equipped with satellite-based SOS calling capability.

You might buy that idea until you read further in the Times story and learn that the iPhone that “saved” the couple in California was found by them 10 yards away from the car with a smashed screen – though somehow still working. The device prompted the couple that it could call for help with the new satellite functionality.

This couple is clearly lucky to be alive and lucky they had an iPhone. Had they been unconscious or unable to find the phone, the outcome might have been different.

The Times story contrasts with reports from across the Internet of iPhones at amusement parks mistaking rollercoaster rides for car crashes. Of course, users could leave their iPhones behind or turn off the emergency function before getting on a roller coaster – but the iPhone misinformation is likely creating at least minor headaches for emergency call centers.

The increasing promotion of smartphone based automatic crash notification is unfortunate but expected given the steadily expanding role of smartphones in cars. Every new car today comes with a companion application that allows for locating the car, operating the car remotely, determining the car’s functional status, and monitoring driver behavior.

If you have bought a new car in the past year or two or intend to in the next year or two your car will provide you with a driving score that you may use to obtain insurance quotes. Simultaneous with this shift has been an industry-wide embrace of mobile apps by insurers.

Insurers also want to evaluate your driving – for obvious reasons – but they also want you to use your phone to report claims. In fact, leading claims management company CCC Intelligent Solutions tells us that 20% of repairable claims are reported today using photo-based estimates derived from smartphones. More than 80% of consumers prefer using mobile claims management, the company says.

Smartphone-based insurance claims management does sound attractive, particularly from the standpoint of accelerating the claims process. But surely consumers will want to retain control of this process.

New technology from companies such as Sfara and Cambridge Mobile Telematics allows for the smartphone-based detection of low-speed crashes. These are precisely the kinds of vehicular interactions that many consumers prefer not to report to their insurance companies.

As we connect our cars and our insurance companies via mobile apps, we might all take care to ensure that we understand precisely which data is being collected and shared and under what circumstances. It’s not clear to me that the default mode for these applications is “opt out,” but it should be.

Smartphones are amazing devices and it is possible for a smartphone – these days – to be a life-saving tool. But the potential for misuse or abuse of personal data is enough to give any smartphone user pause before jumping into this particular pool.

It is also a heads up that the best form of OnStar-like automatic crash notification is built into the vehicle and able to detect the airbag deployment and gather important data from vehicle sensors to be shared with SOS call centers and first responders. Smartphones simply cannot replace this function.

Also Read:

Regulators Wrestle with ‘Explainability’

Functional Safety for Automotive IP

Don’t Lie to Me


Formal Datapath Verification for ML Accelerators
by Bernard Murphy on 01-04-2023 at 10:00 am

Figure: Datapath complexity

Formal methods for digital verification have advanced enormously over the last couple of decades, mostly in support of verification in control and data transport logic. The popular view had been that datapath logic was not amenable to such techniques. Control/transport proofs depend on property verification; if a proof is found in a limited state space it is established absolutely. For large designs the check is often “bounded” – proven correct out to some number of cycles but not beyond. Experienced product teams generally had no problem with this limitation. Most electronic products allow for some threshold – perhaps 2 weeks or a month – before a reset is needed. But for one class of functions, datapaths, we will not tolerate any errors. These require a completely different approach to formal verification, which proves to be very important for math-intensive ML accelerators.

What are datapaths and why are they hard to verify?

A datapath is the part of a compute engine that does data processing, particularly math processing. It typically supports integers, fixed point numbers and floating-point numbers, with a range of precision options. Operations span from basic arithmetic to exponents and logs, trig, hyperbolic trig and more. Our intolerance of even occasional errors in such functionality was most infamously demonstrated by the Pentium FPDIV bug, estimated to occur in only one in 9 billion operations yet considered responsible for a $475M charge for replacement and write-off and a significant black eye for Intel. (In fairness, Intel are now leaders in applying and advancing state-of-the-art formal methods for more complete proving.)

Datapath verification (DPV) far exceeds the reach of simulation, as illustrated in the figure above. Faster machines and massive parallelism barely dent these numbers. Formal methods should shine in such cases. But property checking quickly runs out of gas because bounded model checkers (like SAT) can only search out through so many cycles before an exponentially expanding design state space becomes unmanageable. Instead, formal methods for datapaths are based on equivalence checking. Here equivalence is tested not between RTL and gate-level designs, but between RTL and reference C (or C++ or SystemC) models. If the reference model is widely trusted (such as Soft-float) this comparison should provide high confidence in the quality of the implementation.
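
A quick back-of-envelope calculation (my own illustrative numbers, not figures from the webinar) shows why simulation cannot close the gap:

```python
# Why exhaustive simulation of a datapath is hopeless: a 64x64-bit multiplier has
# 2**128 input combinations. (Illustrative arithmetic only, not data from the webinar.)
SIMS_PER_SECOND = 1e9                      # assume a generously fast simulation farm
combinations = 2 ** 128
years = combinations / SIMS_PER_SECOND / (3600 * 24 * 365)
print(f"{combinations:.2e} cases -> about {years:.1e} years")   # ~1.1e22 years
```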

VC Formal DPV and the Synopsys ARC NPX6 NPU Processor for AI

Synopsys recently hosted a webinar on application of their formal datapath verifier built on these principles. After an intro to the tool from Neelabja Dutta, Shuaiyu Jiang of the ARC group described how he used VC Formal DPV to verify datapath logic for their ARC NPX6 Neural Processing Unit (NPU) IP.

The convolution accelerator example is useful to understand how the ARC team decomposed the verification task, for what I think of as an assume/verify strategy, though here applied to equivalence checking. The multiply phase is one such sub-component. Here the assumptions would be that inputs to the C reference and the RTL implementation must be the same. In place of an output property check, the proof defines a “lemma” requiring that the outputs are the same. A similar process is run over each component in the convolution accelerator, followed by a top-level check for the assembled sub-proofs.
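
As a toy illustration of the lemma idea (assume equal inputs, require equal outputs), the sketch below checks a shift-and-add multiplier against a reference multiply using the off-the-shelf Z3 SMT solver. This is only a sketch of the concept; it is not how VC Formal DPV is actually driven, which works on RTL versus C/C++/SystemC models.

```python
# Toy "lemma" check: a shift-and-add multiplier vs. the reference '*' operator.
# Conceptual sketch only -- not the VC Formal DPV flow or syntax.
from z3 import BitVec, Solver, sat

W = 8
a, b = BitVec("a", W), BitVec("b", W)      # assumption: the same inputs feed both models

ref = a * b                                # "C reference model" output
impl = 0
for i in range(W):                         # "RTL-style" shift-and-add implementation
    bit = (a >> i) & 1                     # bit i of a
    impl = impl + ((b << i) & -bit)        # add (b << i) only when that bit is set

s = Solver()
s.add(ref != impl)                         # search for a counterexample to the lemma
print("lemma holds" if s.check() != sat else s.model())
```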

Shuaiyu also talks about application to the ARC Generic Tensor Ops Accelerator (GTOA). Briefly, ML frameworks (TensorFlow, TF-Lite, PyTorch, JAX, etc) work with tensor objects – here 2D image x color depth x sample size for a 4D tensor. These build on large sets of operators somewhat unique to each framework (>1000 for TF), impeding portability, uniformity, etc. Following the ISA philosophy, Arm developed and open-released TOSA – Tensor Operator Set Architecture – with ~70 basic instructions. TOSA-compliant frameworks and inference platforms should eliminate such inconsistencies. Though Shuaiyu did not comment on this point, I assume ARC GTOA is built in line with the TOSA approach. The ARC ALU for these operations is necessarily even more math intensive than the convolution example, making it an even better example for DPV proofs, suitably decomposed.

To learn more

You can register to watch the webinar HERE. I also suggest you read “Finding Your Way Through Formal Verification (Second Edition)”. This has been updated in several areas since Synopsys released the first edition five years ago. There is now a whole chapter dedicated to DPV. Well worth a read – I’m a co-author 😀


How to Efficiently and Effectively Secure SoC Interfaces for Data Protection
by Kalar Rajendiran on 01-04-2023 at 6:00 am


Before the advent of the digitized society and computer chips, things that needed protection were mostly hard assets such as jewelry, coins, real estate, etc. Administering security was simple and depended on strong guards who provided security through physical means. Then came the safety box services offered by financial institutions such as banks. The bank vaults themselves were not easily penetrable and the assets remained safe. But the service itself was not of much value if the assets couldn’t be taken out and put back in whenever the bank customer wanted. And therein was the vulnerable aspect of the service, which was at the time and point of access. What if an unauthorized party gets hold of the safety box key and accesses the contents? The institutions offering the service instituted a two-step process. The first step was to authenticate the party who wants to access the box contents. This was accomplished by checking the person’s relevant identity credentials. The second step was to use the appropriate key to open the box itself. To prevent any bad actors within the institution itself from opening the box without the customer being present, a dual-key mechanism was deployed.

Fast forward to the digitized society: other than the house we live in and the vehicles we drive, most other assets are not physical in nature. Stocks, bonds, intellectual property ownership, fiat currencies, crypto currencies, etc. The list goes on. These assets are secured not by physical means but rather through encryption, stored as zeroes and ones in electronic form around the world. In other words, security is being provided through a combination of electronic hardware/software solutions. For every security solution that is deployed, cyber criminals are always working to identify a weakness to break in and steal assets. The goal of digital security mechanisms deployed in electronic systems is efficiency and effectiveness. At a conceptual level, the mechanism is similar to the bank safety box access method: authenticate the user and decrypt the data using valid keys. Given this, how do you protect and secure the interfaces without compromising fast access time for legitimate users of the assets? Low latency authentication and encryption are key.
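
As a concrete, if highly simplified, illustration of “authenticate the user and decrypt the data using valid keys”, the sketch below uses AES-GCM authenticated encryption from the public Python cryptography package; AES-GCM is the primitive behind link-level security schemes such as PCIe IDE. This is purely conceptual and is not Synopsys IP; the key exchange and metadata are placeholders.

```python
# Conceptual sketch of authenticated encryption on an interface (AES-GCM).
# Illustrative only -- not Synopsys IP; key distribution is out of scope here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # shared secret, established beforehand
aead = AESGCM(key)

nonce = os.urandom(12)                      # must be unique per message
header = b"lane-0/seq-42"                   # placeholder metadata: authenticated, not encrypted
payload = b"sensitive interface traffic"

ciphertext = aead.encrypt(nonce, payload, header)    # encrypt + integrity tag in one pass
plaintext = aead.decrypt(nonce, ciphertext, header)  # raises InvalidTag if anything was altered
assert plaintext == payload
```

Tampering with either the ciphertext or the header makes decryption fail, which is the authentication half of the requirement; doing this at line rate in hardware is what drives the low-latency demands on the interface IP.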

Starts with the Design

The added complication with digital security mechanisms is that they have to deal with many different types of interfaces to the data. This is pushing the industry to look at security as an integral part of electronic design architecture, not as an afterthought. The block diagram below showcases the various types of data interfaces in an electronic system.

Securing all of these interfaces at a hardware level and implementing a zero-knowledge architecture, so that the data is encrypted and can’t be used maliciously, is critical. To add complexity to the mix, the interface standards bodies are regularly upgrading existing protocol specifications and bringing out new interface standards as well. These changes need to be implemented in designs at the controller level, the PHY level or both, without compromising throughput and latencies.

The Demand for Secure Interface Solutions Keeps Growing

As an example, while the autonomous vehicle market is still in its early stage, it has already exposed security risks that are being addressed by today’s specifications used in cars for networking, ADAS camera/sensor connectivity, and displays. As advances in various fields of technology and markets happen, better security implementations will be needed. For example, quantum computing will have the capability to break today’s public key algorithms. Interface standards will need to adapt with quantum-safe algorithms over the coming years.

Implementing data security will continue to be on the top of the list of SoC designers’ tasks.

Synopsys Secure Interfaces

Synopsys offers the entire spectrum of interfaces that designers need for a variety of different applications. Their Interface IP products are pre-verified solutions that include silicon-proven Synopsys Controllers integrated with security features, offering reduced risk with optimal security, low latency and area without compromising on performance. This makes it easier for SoC designers to address and implement data protection and security for quick time-to-market.

For more details on these interfaces, visit the interfaces portfolio page.

Summary

Incorporating security into SoCs is a fundamental requirement for complying with international laws and regulations and satisfying privacy and data protection requirements of electronic systems users. Synopsys offers the industry’s broadest secure interfaces built for various applications such as HPC, Mobile, Automotive and IoT. For more details on Synopsys’ secure interface IP products, visit the product page.

Also Read:

Synopsys Crosses $5 Billion Milestone!

Configurable Processors. The Why and How

New ECO Product – Synopsys PrimeClosure


9 Trends of IoT in 2023
by Ahmed Banafa on 01-03-2023 at 6:00 am


The year 2023 will hit all 4 components of the IoT model:

  • Sensors,
  • Networks (Communications),
  • Analytics (Cloud)
  • Applications

With different degrees of impact.

 IoT Trend 1: Growth in Data and Devices with More Human-Device Interaction

By the end of 2019 there were around 3.6 billion devices actively connected to the Internet and used for daily tasks. The introduction of 5G will open the door for more devices and more data traffic.

You can add to this trend the increased adoption of edge computing, which will make it easier for businesses to process data faster and closer to the points of action.

IoT Trend 2: AI a Big Player in IoT (again)

Making the most of data, and even understanding on a basic level how modern infrastructure functions, requires computer assistance through artificial intelligence.

The major cloud vendors, including Amazon, Microsoft, and Google, are increasingly looking to compete based on their AI capabilities.

Various startups hope to increase their market share through AI algorithms able to leverage machine learning and deep learning, allowing businesses to extract more value out of their ever-growing volumes of data.

Artificial intelligence is the fundamental ingredient needed to make sense of the vast amount of data collected these days, and increase its value for business. AI will help IoT data analysis in the following areas:

  • data preparation,
  • data discovery,
  • visualization of streaming data,
  • time series accuracy of data,
  • predictive and advance analytics,
  • real-time geospatial and location (logistical data)

IoT Trend 3: Voice User Interface (VUI) will be a Reality

It’s a battle among industry leaders who would like to dominate the market of IoT at an early stage.

Digital assistant devices, including Alexa, Siri and Google Assistant, are the future hubs for the next phase of smart devices, and companies are trying to establish “their hubs” with consumers, to make it easier for them to keep adding devices with less struggle and no frustration.

Voice represents 80% of our daily communications. Taking a chapter from sci-fi movies, talking to robots is the common way of communicating – R2D2, C-3PO, and Jarvis, to name a few.

The use of voice to set up devices, change those setups, give commands and receive results will be the norm not only in smart houses and factories but everywhere in between – cars and wearables, for example.

IoT Trend 4: More Investments in IoT

IoT’s indisputable impact has and will continue to lure more startup venture capitalists towards highly innovative projects in hardware, software and services.

Spending on IoT will hit 1.4 trillion dollars by 2023.

IoT is one of the few markets that have the interest of the emerging as well as the traditional venture capitalists.

The spread of smart devices, and customers’ increasing dependency on them for many of their daily tasks, will add to the excitement of investing in IoT startups.

Customers will be waiting for the next big innovation in IoT—such as

  • Smart mirrors that will analyze your face and call your doctor if you look sick,
  • Smart ATMs that will incorporate smart security cameras,
  • Smart forks that will tell you how to eat and what to eat,
  • Smart beds that will turn off the lights when everyone is sleeping

IoT Trend 5: Finally, a Real Expansion of Smart IoT

IoT is all about connectivity and processing, and nothing is a better example than smart cities, but smart cities have been in a bit of a holding pattern recently.

Smart sensors around the neighborhood will record everything from walking routes, shared car use, building occupancy, sewage flow, and temperature, 24/7, with the goal of creating a place that’s comfortable, convenient, safe, and clean for those who live there.

Once the model is perfected, it could be the model for other smart neighborhoods and eventually smart cities. The potential benefits for cities, however, make IoT technology especially compelling.

Cities of all sizes are exploring how IoT can lead to better efficiency and safety, and this infrastructure is increasingly being rolled out around the world.

Another area where smart IoT is spreading is the auto industry, with self-driving cars becoming a normal occurrence in the next few years. Already today, tons of vehicles have a connected app that shows up-to-date diagnostic information about the car.

This is done with IoT technology, which is the heart of the connected vehicle.  Diagnostic information is not the only IoT advancement that we will see in the next year or so. Connected apps, voice search, and current traffic information are a few other things that will change the way we drive.

IoT Trend 6: The Rise of Industrial IoT & Digital Twin Technology

An amalgamation of technologies is pushing this new techno-industrial revolution, and IoT plays a big part in making manufacturing more efficient, less risky, and more profitable.

Industrial IoT brings enhanced efficiency and productivity through data integration and analysis in a way that isn’t possible without an interconnected manufacturing process.

Another notion that is gaining popularity is “digital twin” technology. Through its use, organizations can create a clear picture of how their IoT devices are interacting with the manufacturing process.

This gives keen businesses insight into how the life cycle of their machines operates, and allows them to predict changes that may be needed ahead of time.

According to a Gartner survey, 48% of smart manufacturing adopters have made plans to make use of the digital twin concept.

IoT Trend 7: More Movement to the Edge

Edge computing is a technology that distributes the processing load and moves it closer to the edge of the network (the sensors, in the case of IoT).

The benefits of using edge (fog) computing are very attractive to IoT solution providers.

Some of these benefits allow users to:

  • Minimize latency,
  • Conserve network bandwidth,
  • Operate reliably with quick decisions,
  • Collect and secure a wide range of data,
  • Move data to the best place for processing, with better analysis and insights from local data.

Edge computing has been on the rise in recent years, but the growing scope of IoT technology will make this move even more pronounced. Two factors are leading this change:

  • Powerful edge devices in various form factors are becoming more affordable,
  • Centralized infrastructure is becoming more stressed.

Edge computing also makes on-device AI a realistic proposition, as it allows companies to leverage real time data sets instead of having to sift through terabytes of data in a centralized cloud in real time. Over the coming years and even decades, it’s likely that tech will shift to a balance between the cloud and more distributed, edge-powered devices.

Hardware manufacturers are building specific infrastructure for the edge, designed to be more physically rugged and secure, and security vendors will start to add endpoint security solutions to their existing services to prevent data loss, give insights into network health and threat protection, and include privileged user control and application whitelisting and control. This will help the fast adoption and spread of edge computing implementations by businesses.

IoT Trend 8: More Social, Legal, and Ethical Issues

IoT devices are a largely unregulated new technology. IoT will inevitably find itself facing social and legal questions in the near future. This is particularly relevant for data collected by these devices, which may soon find itself falling under the umbrella of the General Data Protection Regulation (GDPR). While this regulation concerns the handling of personal data and privacy in the European Union, the GDPR extends its reach beyond the European region. Any business that wants to successfully operate within the EU will need to comply with the guidelines laid out in its 88-page document.

Security issues are essential when it comes to legal regulation of personal data. Development teams can ensure the required level of security and compliance on various levels, including data encryption, active consent, various means of verification and other mechanisms. Their goal is to collect data legitimately and keep its accessibility, processing, and storage to the minimum dictated by the software product.

IoT Trend 9: Standardization Still a Problem

Standardization is one of the biggest challenges facing the growth of IoT—it’s a battle among industry leaders who would like to dominate the market of IoT at an early stage. But what we have now is a case of fragmentation. One possible solution is to have a limited number of vendors dominating the market, allowing customers to select one and stick with it for any additional connected devices, similar to the case of operating systems we now have with Windows, Mac and Linux, for example, where there are no cross-platform standards.

To understand the difficulty of standardization, we need to deal with all three categories in the standardization process:

  • Platform,
  • Connectivity,            
  • Applications.

In the case of platform, we deal with UX/UI and analytic tools, while connectivity deals with the customer’s contact points with devices, and lastly, applications are the home of the software that controls, collects and analyzes data.

All three categories are inter-related and we need them all; missing one will break the model and stall the standardization process. There is no way to solve the problem of fragmentation without a strong push by organizations like the IEEE, or by government regulations, to establish common standards for IoT devices.

Ahmed Banafa, author of the books: Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing


Also Read:

Microchips in Humans: Consumer-Friendly App, or New Frontier in Surveillance?

5G for IoT Gets Closer

Webinar: From Glass Break Models to Person Detection Systems, Deploying Low-Power Edge AI for Smart Home Security


IEDM 2022 – TSMC 3nm
by Scotten Jones on 01-02-2023 at 6:00 am


TSMC presented two papers on 3nm at the 2022 IEDM: “Critical Process features Enabling Aggressive Contacted Gate Pitch Scaling for 3nm CMOS Technology and Beyond” and “A 3nm CMOS FinFlexTM Platform Technology with Enhanced Power Efficiency and Performance for Mobile SOC and High Performance Computing Applications”.

When I read these two papers prior to the presentations, my initial reaction was that the first paper was describing TSMC’s N3 process and the second paper the N3E process; this was confirmed by the presenter during the second presentation.

My second reaction was that these papers continue TSMC’s habit of minimizing the amount of technical detail they present. As I discussed about TSMC’s 5nm paper in 2019 here, there is minimal critical pitch information, and in 2019 all the electrical results were normalized. In these two papers the electrical results are at least in real units, but the first paper only has contacted gate pitch and the second paper only has a minimum metal pitch. I find this very frustrating; the critical pitches will be measured and disclosed as soon as parts hit the open market, and insiders and TSMC’s competitors likely already know what they are, so I don’t see how presenting a quality technical paper would be a problem. When Intel presented Intel 4 at the VLSI Technology Symposium last year, they presented an excellent paper with all of the key data (I wrote about that paper).

N3 Paper

In the first paper, a Contacted Gate Pitch (Contacted Poly Pitch, or CPP, as I describe it) of 45nm is disclosed. CPP is made up of Gate Length (Lg), Contact to Spacer Thickness (Tsp), and Contact Width (Wc), as illustrated in figure 1.

Figure 1. CPP.
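
To first order, the pitch decomposes as (the standard decomposition; the only number disclosed in the paper is the 45nm total):

CPP = Lg + 2 x Tsp + Wc

so the gate, the spacer on each side of it, and the contact must together fit within 45nm, and each of the three terms has to shrink for the pitch to scale.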

From figure 1 we can see that TSMC has been reducing CPP at each new node by reducing all three elements that make up CPP. Logic designs are done using standard cells, and CPP is a major driver of standard cell width; therefore shrinking CPP is a key part of improving density for a new node.

Minimum Lg is a function of gate control of the channel; for example, moving from single-gate planar devices with unconstrained channel thickness to FinFETs with 3 gates surrounding a thin channel enabled shorter Lg. Gate control of a FinFET is weakest at the base of the fin, and optimization is critical. Figure 2 illustrates DIBL versus Lg for multiple TSMC nodes and also how optimizing the fin reduced DIBL for the current work.

Figure 2. DIBL versus Lg.

The second element in shrinking CPP is the Tsp thickness. Reducing Tsp drives up parasitic capacitance unless the spacer is optimized to lower the k value. Figure 3 illustrates TSMC’s investigation of low-k spacers versus an air gap spacer. TSMC found that a low-k spacer was the best solution for scaled CPP.

Figure 3. Contact to Gate Spacer.
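
As a rough parallel-plate picture (my simplification, not an equation from the paper), the gate-to-contact capacitance through the spacer scales as

Cspacer ∝ k x A / Tsp

where A is the facing sidewall area, so shrinking Tsp pushes the capacitance up unless k comes down roughly in proportion – which is why a low-k spacer (or an air gap, with k near 1) is the enabler here.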

The final element of CPP is contact width. In this work an optimized self-aligned contact (SAC) scheme was developed that provided lower contact resistance. The left side of figure 4 illustrates the SAC and the right side illustrates the resistance improvement.

Figure 4. Self-Aligned Contact.

This work enabled the N3 process with a high-density SRAM size of 0.0199μm2. This work will also be important as TSMC moves forward to their 2nm process. At 2nm TSMC is going to move to a type of gate-all-around (GAA) architecture known as a horizontal nanosheet (HNS), and HNS enables shorter Lg (the gate surrounds the thin channel on four sides instead of three), but Wc and Tsp will still have to be optimized.

N3E

The N3E process is described by TSMC as an enhanced version of N3. Interestingly, N3E is believed to implement relaxed pitches versus N3; for example, CPP, M0 and M1 are all believed to be relaxed for performance and yield reasons. There are varying stories about TSMC N3 and whether it is on time or not. The way I look at it: N5 entered risk starts in 2019, and by Christmas 2020 there were Apple iPhones in stores with N5 chips. N3 entered risk starts in 2021, and iPhones won’t hit the market with N3 chips until next year. In my view the process is at least 6 months late. In this paper a high-density SRAM cell size of 0.021 μm2 is disclosed, larger than the N3 SRAM cell of 0.0199 μm2. The yields for N3 are generally described as being good, with 60% to 80% mentioned.

There are two major features of this process discussed in this paper:

  1. FinFlexTM
  2. Minimum metal pitch of 23nm with copper interconnect with an “innovative” liner for low resistance.

FinFlexTM is a kind of mix-and-match strategy with double-height cells: 2-fin cells on top of 1-fin cells for maximum density, 2-fin cells over 2-fin cells for mid performance and density, and 3-fin cells over 2-fin cells for maximum performance. This gives designers a lot of flexibility to optimize their circuits.

Figure 5 illustrates the various FinFlexTM configurations and figure 6 compares the specifications for each configuration to a standard 2 over 2 fin cell at 5nm.

Figure 5. FinFlexTM cells.

 

Figure 6. 3nm FinFlexTM cell performance versus 5nm cells.

A plot in this paper shows the via resistance distribution for the 15-level metal stack at approximately 550 ohms. In current processes power comes in through the top of the metal stack and has to travel through the via chain down to the devices; 550 ohms is a lot of resistance in a power line. This is why Intel, Samsung and TSMC have all announced backside power delivery for their 2nm class processes. With extreme wafer thinning, the vias bringing power in from the backside should offer a >10x improvement in via resistance.

Comparisons

One question you may have as a reader is how this process compares to Samsung’s 3nm process. TSMC is still using FinFETs while Samsung has transitioned to GAA – HNS, which they call multi-bridge.

At 5nm, by our calculations, TSMC’s densest logic cells are 1.30x the density of Samsung’s densest logic cells. If you look at the TSMC density values in figure 6, the 2-2 fin cells are 1.39x denser than 2-2 cells in 5nm, and the 2-1 cells offer a 1.56x density improvement. Samsung has two versions of 3nm, with the SF3E (3GAE) version 1.19x denser than 5nm and the SF3 (3GAP) version 1.35x denser than 5nm, falling further behind TSMC’s industry-leading density. I also believe TSMC has better performance at 3nm and slightly better power, although Samsung has closed the power gap, likely due to the HNS process.
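
Combining the quoted ratios gives a rough back-of-envelope comparison (my arithmetic on the numbers above, not a figure from either company):

```python
# Rough back-of-envelope from the ratios quoted above (my arithmetic, not vendor data).
tsmc_n5_vs_samsung_5nm = 1.30   # TSMC densest 5nm cells vs Samsung densest 5nm cells
tsmc_3nm_2_2_vs_n5     = 1.39   # TSMC 3nm 2-2 fin cells vs TSMC 5nm 2-2 cells
samsung_sf3_vs_5nm     = 1.35   # Samsung SF3 (3GAP) vs Samsung 5nm

ratio = tsmc_n5_vs_samsung_5nm * tsmc_3nm_2_2_vs_n5 / samsung_sf3_vs_5nm
print(f"TSMC 3nm 2-2 cells vs Samsung SF3: ~{ratio:.2f}x denser")   # ~1.34x
```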

Also Read:

IEDM 2022 – Ann Kelleher of Intel – Plenary Talk

Does SMIC have 7nm and if so, what does it mean

SEMICON West 2022 and the Imec Roadmap


Podcast EP134: A New Year’s Perspective with Daniel Nenni and Mike Gianfagna
by Daniel Nenni on 12-30-2022 at 6:00 am

Dan is joined by his podcast partner and producer Mike Gianfagna. Dan and Mike review the hot topics that trended on SemiWiki over the past year.

Included are discussions about how the semiconductor industry is changing, touching on Moore’s law, chiplets, and government intervention. The forces that are changing semis are also discussed. There are also some observations about two cornerstone industries: foundry and EDA/IP. And some observations about the new post-COVID normal. Are we there yet?

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Analog to Digital Converter Circuits for Communications, AI and Automotive
by Daniel Payne on 12-29-2022 at 6:00 am


Sensors are inherently analog in nature, and they get digitized for processing by using an Analog to Digital Converter (ADC) block. At the recent IP SoC event I had the chance to see the presentation by Ken Potts, COO of Alphacore, on their semiconductor IP for ADCs. I learned that Alphacore started out in 2012, now offering both standard and custom IP for AMS, RF, imaging and radiation-hardened electronics through a global organization based in Arizona.

Data converters can be designed in any IC process node; however, FD-SOI technology provides the lowest power while being tolerant to radiation effects. A 28nm FD-SOI chip will consume 70% less power when compared to a bulk CMOS process.

RF data converters need to have both high bandwidth and low power to fit applications like phase array architectures, direct to RF sampling, beamforming and 5G radios.

Alphacore designed a hybrid ADC named the A11B5G with a sampling rate of 5GS/s, a resolution of 11 bits, an 800mV supply, and a power of just 50mW, using a 22nm FD-SOI process from GlobalFoundries. One useful feature of this ADC is integrated auto-calibration, which eliminates interleaving spurs.

Output spectrum before calibration
Spurs removed after calibration

Another Analog to Digital Converter with even lower power is the A10B3G with a sampling rate of 3GS/s, 8.6 Effective Number Of Bits (ENOB) at 100MS/s, consuming just 13mW, fabricated on the 22nm FD-SOI process from GlobalFoundries.

A10B3G ADC
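
As a reminder of how ENOB relates to measured dynamic performance, the standard textbook conversion is sketched below; only the 8.6-bit figure above comes from Alphacore, the rest is generic.

```python
# Standard ENOB <-> SINAD relations (textbook formulas, not Alphacore measurements).
def sinad_from_enob(enob: float) -> float:
    return 6.02 * enob + 1.76                 # dB

def enob_from_sinad(sinad_db: float) -> float:
    return (sinad_db - 1.76) / 6.02

print(f"8.6 ENOB ~ {sinad_from_enob(8.6):.1f} dB SINAD")   # ~53.5 dB

# Walden figure of merit, FoM = P / (2**ENOB * fs), in joules per conversion-step;
# a fair number needs power and ENOB quoted at the same operating point.
def walden_fom(power_w: float, enob: float, fs_hz: float) -> float:
    return power_w / (2 ** enob * fs_hz)
```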

The first low-power Digital to Analog Converter (DAC) that Ken showed was the D6B5G, and it consumed only 16mW, with 5.4 ENOB, 6-bit input and running at 5GS/s.

Phase Locked Loop (PLL) circuits can be used when demodulating a signal, to distribute clock signals inside an SoC, create a new clock frequency multiple, or recover a signal from a communication channel. The PLL5G is a very low jitter <150fs design, taping out in January 2023 in the 22FDx node.

For serial communications a SerDes circuit is used, and Alphacore has a 22FDx-based design taping out in January 2023, dubbed the SD16G, supporting a data rate from 1Gb/s to 16Gb/s, using either 8 or 16 bit for serialization/de-serialization width. All the popular protocols are supported: PCIe, JESD204, SATA, SRIO, SG-MII, USR/XSR.

All IP from Alphacore comes with a design kit that includes everything that you’ll need for customization:

  • GDSII
  • RTL
  • Schematics
  • DRC/LVS logs
  • Abstract
  • Extracted View
  • Extracted simulation model
  • Verilog-AMS models
  • Integration guide: DFT, I/O

Roadmaps for ADC, DAC, PLL and SerDes were shared for four foundry nodes: TSMC 28HPC+, TSMC 12FFCP, Intel16, GF 22FDx. So 2023 is a very busy year for silicon proven IP.

At Alphacore they are experts at designing radiation hardened circuits, taking special care for effects like Total Ionizing Dose (TID) and Single Event Effects (SEE). They have rad-hard ADC and DAC in GF 22FDx now, then plans for Intel16 in Q2’23, GF 22FDx in Q3’23, and SkyWater RH90 in Q4’23.

Three more rad-hard design examples were for Power Management ICs (PMIC), a 2-color in-pixel ADC, and an imager/camera with high frame rate of 120 FPS.

Summary

Low-power and radiation-hardened applications are a niche market, requiring specialized expertise. At Alphacore there’s a strong track record of delivering a growing family of ADC, DAC, PLL, SerDes, PMIC and imagers. The tapeout schedule for 2023 looks quite full, meaning that you get even more IP that is silicon proven for your designs in 5G, space communications, automotive, even in quantum computing.

Related Blogs