Will EUV take a Breather in 2021?
by Robert Maire on 02-07-2021 at 6:00 am

-KLAC- Solid QTR & Guide but flat 2021 outlook
-Display down & more memory mix
-KLAC has very solid Dec Qtr & guide but 2021 looks flattish
-Mix shift to memory doesn’t help- Display weakness
-Despite flat still looking at double digit growth
-EUV driven business may see some slowing from digestion

As always, KLAC came in at the high end of the guided range with revenues of $1.65B and non-GAAP EPS of $3.24 versus the guided range of $2.82 to $3.46. Guidance is for $1.7B +/- $75M and a non-GAAP EPS range of $3.23 to $3.91. By all financial and performance metrics, it was a very solid quarter.

A “flattish” 2021 while WFE grows “mid teens”

Management suggested that WFE, which exited 2020 at $59B-$60B, would grow double digits in 2021, but the year would look flatter for KLAC as its acquired display group is expected to shrink and there is an expected mix shift towards memory, which is less process control intensive.

Foundry has been strong, which has been very good for KLA, and the current quarter is expected to see roughly 68% of business from foundry.

Will EUV take a breather?

KLA obviously sells process management tools to companies working on new processes such as EUV. TSMC has bought so many EUV tools it probably has problems finding the space for more. TSMC has also clearly gotten well over the hump of getting EUV to work, likely may not need as much process control, and could slow its EUV scanner purchases a bit given that it's so far ahead.

Intel is obviously still coming up the learning curve and purchasing curve, and Samsung is in between the two. We would not expect either Samsung or Intel to be as EUV intensive as TSMC has been, at least not in the near term. All this being said, it is not unreasonable to expect EUV-related process management to slow slightly.

Memory not as intensive as Foundry/logic

The industry is expecting memory makers to increase capex spend in 2021 as supply and demand have been in reasonable balance and supply is expected to get tighter.

Most of the expectation is on the DRAM side, which is slightly less process control intensive than NAND and likely lower in overall spend. This mix shift towards memory is obviously better for memory poster child Lam than for foundry poster child KLA. However, it's not like foundry is falling off a cliff, with TSMC spending a record $26B to $28B in capex.

Service adding nice recurring revenue

As we have seen with KLA's competitors, the service business continues its rise in importance to the company. The recurring revenue stream counterbalances the new equipment cyclicality and lumpiness. Having 25% or more of your revenue coming from service is very attractive.

Wafer inspection positive while reticle inspection negative

EUV “print check” has obviously been very good for KLA and a way to play the EUV transition given the issues in reticle inspection. Patterning (AKA reticle inspection) was down significantly after a nice bump in prior quarters where KLA managed to take back some business from Lasertec (which now sports a $10B Mkt Cap).

Obviously "missing the boat" on EUV reticle inspection is toothpaste that can't be put back in the tube. We expect Lasertec to get the lion's share of Intel's business as it ramps up EUV.

The stock

If we assume roughly $7B in revenues for 2021 ($1.75B/Q) with roughly $15 in EPS ($3.75/Q) we arrive at roughly 19X forward EPS, at the current stock price. This is likely a pretty good valuation for a company with stellar/flawless execution in a slowing, but still strong, market.
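For those who want to sanity-check the arithmetic, here is a minimal sketch of the forward P/E calculation. The share price used is a hypothetical input chosen only to illustrate the roughly 19X figure, not a quoted price.

```python
# Back-of-the-envelope forward P/E check for the figures quoted above.
quarterly_revenue = 1.75e9          # ~$1.75B per quarter
quarterly_eps = 3.75                # ~$3.75 per quarter

annual_revenue = 4 * quarterly_revenue   # ~$7B for 2021
annual_eps = 4 * quarterly_eps           # ~$15 for 2021

hypothetical_price = 285.0          # placeholder share price; substitute the current quote
forward_pe = hypothetical_price / annual_eps

print(f"2021 revenue ~${annual_revenue / 1e9:.1f}B, EPS ~${annual_eps:.2f}")
print(f"Forward P/E at ${hypothetical_price:.0f}/share: {forward_pe:.1f}x")   # ~19x
```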

Investors will likely get turned off by the "flattish" commentary despite the good numbers. It also doesn't help that the chip stocks have been feeling a bit like they are turning over here. Despite any weakness, KLA remains the top financial performer in the industry.

Also Read:

New Intel CEO Commits to Remaining an IDM

ASML – Strong DUV Throwback While EUV Slows- Logic Dominates Memory

2020 was a Mess for Intel


Podcast EP6: The Traitorous Eight and Fairchild Semiconductor
by Daniel Nenni on 02-05-2021 at 10:00 am

Dan and Mike are joined by John East, a Silicon Valley industry veteran who takes you on a tour of the very foundation of Silicon Valley and venture capital. John explores the beginnings of these key parts of the world as we know it today and explains who the Traitorous Eight were and what role they played.

Biography
John East retired from Actel Corporation in November 2010 in conjunction with the transaction in which Actel was purchased by Microsemi Corporation.  He had served as the CEO of Actel for 22 years at the time of his retirement.  Previously, he was a senior vice president of AMD, where he was responsible for the Logic Products Group.  Prior to that, Mr. East held various engineering, marketing, and management positions at Raytheon Semiconductor and Fairchild Semiconductor.  In the past he has served on the boards of directors of Adaptec,  Pericom and Zehntel (public companies), and MCC,  Atrenta and Single Chip Systems (private companies).  He currently serves on the boards of directors of SPARK Microsystems – a Canadian start-up involved in high speed, low power radios — and Tortuga Logic  —  a Silicon Valley start-up involved in hardware security.   Additionally,  he is presently an advisor to Silicon Catalyst  — a Silicon Valley based incubator actively engaged in fostering semiconductor based start-ups. Mr. East holds a BS degree in Electrical Engineering and an MBA both from the University of California, Berkeley.  He has lived in Saratoga, California with his wife Pam for 46 years.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Pim Tuyls of Intrinsic ID
by Daniel Nenni on 02-05-2021 at 6:00 am

Pim Tuyls, CEO of Intrinsic ID, founded the company in 2008 as a spinout from Philips Research. It was at Philips, where he was Principal Scientist and managed the cryptography cluster, that he initiated the original work on Physical Unclonable Functions (PUFs) that forms the basis of the Intrinsic ID core technology. With more than 20 years of experience in semiconductors and security, Pim is widely recognized for his work in the field of SRAM PUFs and security for embedded applications. He speaks regularly at technical conferences and has written extensively on the field of security. He co-wrote the book Security with Noisy Data, which examines new technologies in the field of security based on noisy data and describes applications in the fields of biometrics, secure key storage, and anti-counterfeiting. Pim holds a Ph.D. in mathematical physics from Leuven University and has more than 50 patents.

What brought you to semiconductors?
For that we must go back to 2002. At that time I was part of the security group of Philips Research and we were working on "Ambient Intelligence," which is currently known as the Internet of Things (IoT). Foreseeing that everything around us would be connected and operations would be automated, it was clear to us that there were major security issues on the horizon. These issues would come up at the silicon level, as all measurements, processing, and connectivity in the IoT is provided by chips. That is when we started thinking about how we could help to increase the security of chips at a low cost to facilitate the needs of an upcoming market with potentially billions of devices. It was clear from the beginning that this problem required a novel and innovative approach with as little overhead as possible. That is when we decided to base security on the physical characteristics of chips, and the idea was born to work with silicon-based Physical Unclonable Functions, or PUFs.

PUFs convert tiny variations in silicon into a digital pattern of 0s and 1s that is unique to that specific chip and is repeatable over time. This pattern is a “silicon fingerprint,” comparable to its human biometric counterpart. The fingerprint is turned into a cryptographic key that is unique for that individual chip and is used as its root key. This root key is reliably reconstructed from the PUF whenever it is needed by the system, without a need for storing the key in any form of memory.
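To make the reconstruction idea concrete, here is a minimal, purely illustrative Python sketch of a code-offset fuzzy extractor over a simulated SRAM power-up pattern. This is not Intrinsic ID's implementation, and a production design would use much stronger error correction and entropy conditioning, but it shows how public helper data lets a noisy silicon fingerprint reproduce the same root key on every boot without the key ever being stored.

```python
# Illustrative sketch: stable root key from a noisy SRAM PUF response using a
# code-offset fuzzy extractor with a simple repetition code (NOT a real product).
import hashlib
import secrets

REP = 7  # repetition-code length per secret bit (assumption for this sketch)

def enroll(puf_bits, n_secret_bits):
    """One-time enrollment: pick a random secret, store only helper data."""
    secret = [secrets.randbits(1) for _ in range(n_secret_bits)]
    codeword = [b for b in secret for _ in range(REP)]       # repetition encode
    helper = [c ^ p for c, p in zip(codeword, puf_bits)]      # code offset
    return secret, helper

def reconstruct(noisy_puf_bits, helper):
    """Every boot: recover the secret from a noisy re-read plus helper data."""
    codeword = [h ^ p for h, p in zip(helper, noisy_puf_bits)]
    secret = []
    for i in range(0, len(codeword), REP):
        block = codeword[i:i + REP]
        secret.append(1 if sum(block) > REP // 2 else 0)      # majority vote
    return secret

def to_key(secret_bits):
    return hashlib.sha256(bytes(secret_bits)).hexdigest()

# Simulated SRAM power-up pattern and a noisy re-read (a few flipped bits).
n_secret = 16
puf = [secrets.randbits(1) for _ in range(n_secret * REP)]
secret, helper = enroll(puf, n_secret)

noisy = puf.copy()
noisy[3] ^= 1; noisy[40] ^= 1; noisy[77] ^= 1                 # bit-flip noise
assert reconstruct(noisy, helper) == secret
print("Root key:", to_key(secret))
```

Note that only the helper data is stored, and in this construction it can live in ordinary non-secure memory; the key itself exists only transiently after reconstruction.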

And how does this relate to the backstory of Intrinsic ID?
When we started working on these PUFs, our biggest internal customer at Philips was its semiconductor division. However, as we all know, in 2006 Philips decided to spin off this division into the independent company NXP. For our team this meant losing our internal customer, as research teams were supposed to work for internal customers only. At that point we were given the opportunity by Philips to make the change from a research team into a venture activity. This allowed us to create a business for ourselves that we successfully spun out of Philips in 2008, with the help of the VC Prime Ventures, as Intrinsic ID. From that point on we were able to start commercializing our own products and start building our customer portfolio, which includes several of the biggest semiconductor companies in the world, such as NXP, Intel, Silicon Labs, Microchip, and many others.

What customer challenges are you addressing?
The main problem for high-volume semiconductors, such as those in IoT devices, is to have a strong and low-cost implementation of a root of trust that also scales well over the ever-decreasing technology nodes. It is clear that a security implementation needs to be sufficiently strong, otherwise there is no point to it. The enormous volumes in IoT also demand the solution to be low-cost. But the impact and importance of scalability are often overlooked. Especially for hardware-based security, it is not trivial for an implementation to scale along with decreasing technology nodes. When it does, it enables chip manufacturers to use the same technology over different nodes, which guarantees continuity and eases the burden of developing and maintaining software. High security and low cost with flexible scalability are what we provide with our security solutions based on PUF technology.

What are the products Intrinsic ID has to offer?
We have three flagship products at this moment: a semiconductor product, a software product, and an FPGA product. The semiconductor product is QuiddiKey, which consists of RTL that generates a root of trust for chips from an SRAM PUF. Additionally, QuiddiKey provides key management for the keys that are derived from the PUF. For existing silicon, or chips where additional RTL cannot be added, we have a software implementation of the same solution called BK, which runs on virtually any processor. And since last year, we have a specific solution for FPGA, called Apollo. Apollo facilitates the creation of a PUF-based root of trust in the programmable fabric of Xilinx FPGAs.

I’m happy to mention that later this year we will be launching a brand-new product called Zign, which provides a non-intrusive way to track high-volume devices. We also have a few other new developments in the works regarding random-number generators for off-the-shelf devices, as well as an activation product.

What is your competitive positioning?
PUF technology in general provides several benefits over traditional methods of key provisioning and storage. Most importantly, with an SRAM PUF, no sensitive data is ever stored on a chip. The root key of the device is created from the physical characteristics of the silicon and it is only generated when needed. All sensitive data and keys are encrypted with this root key before storage and therefore uniquely bound to the hardware of the chip, making it impossible to extract or copy any data. Furthermore, because the root key is created from silicon, there is no need for external provisioning of this key. This simplifies the supply chain by eliminating the need for key provisioning at a trusted facility. Also, no member of the supply chain will have any knowledge about the root key because it has not been provisioned and it never leaves the chip – it is intrinsic to the chip itself.

The benefits of our specific SRAM PUF technology include very strong security. This means the SRAM PUF provides high entropy to create the cryptographic root key on any chip. It also has high reliability over time – in fact, in some cases it is even higher than the reliability of non-volatile storage for keys. On top of that, SRAM is a standard semiconductor component that is available in any technology node and in every process. This ensures the scalability of SRAM PUF over different nodes and processes and allows for easy testing and evaluation, as this is a well-known semiconductor component. And finally, it is fully digital. This means that adding an SRAM PUF does not require any additional mask sets, analog components (like charge pumps), or special programming.

What kind of year has 2020 been for Intrinsic ID?
Clearly 2020 was a tough and challenging year for everyone due to the global pandemic. Working from home and worrying about the health of ourselves and our loved ones was hard on all of us. But despite these challenges, 2020 was a very good year for Intrinsic ID. We saw strong growth in revenue and royalty income, while also being able to launch new products. 2020 has been an important year for our presence on FPGAs, with the launch of our Apollo product for Xilinx FPGAs as well as a dedicated SRAM PUF implementation for Intel FPGAs, such as the Stratix X. So business wise, 2020 has been a great year for us.

What does 2021 have in store for Intrinsic ID?
We expect 2021 to be another great year for us, both financially and in growth of the company itself. We are starting the year strong with a great pipeline with top-tier prospective customers. And given the current growth in the semiconductor market, we also expect a steady growth of our royalty income. With Zign we will be launching another new product this year, which is currently already being evaluated by beta customers. We are also growing our team (see: www.intrinsic-id.com/careers) to keep up with ever-increasing customer demand. And finally, we are launching a new community website for people interested in PUF technology, www.pufcafe.com. This website provides a forum for people from the security community to get together, find resources, attend webinars, and submit their own documents to really drive the discussions on where the development of PUF technology in general (not just our products) should be headed. We are really looking forward to building an active community that will shape the future of PUF technology.

https://www.intrinsic-id.com/

Also Read:

CEO Interview: Tuomas Hollman of Minima Processor

CEO Interview: Lee-Lean Shu of GSI Technology

CEO Interview: Arun Iyengar of Untether AI


Expanding Role of Sensors Drives Sensor Fusion
by Tom Simon on 02-04-2021 at 10:00 am

It is long past the time when general purpose processors could meet the needs of sensor fusion. Sensor fusion performs operations to process and integrate raw sensor data so that downstream processing is simplified and is performed at a higher level. When done properly it offers several other significant benefits such as lower latency & power, bandwidth savings and improved efficiency. CEVA, a provider of processor and platform IP, addressed the growing sophistication of sensor fusion last year with their SensPro Sensor Hub DSP. Since then the market has steadily grown with expanded requirements for new types of sensors and more powerful processing capabilities. In many cases new applications have driven these requirements. This includes everything from earbuds to automotive ADAS systems. CEVA has just announced a major update to this offering which is called SensPro2.

SensPro2 Major Improvements

CEVA has packed a lot into this update. They have expanded the number of cores from 3 to 7. There is ASIL-B compliance and ASIL-D support. Parallel processing benefits from their wide-memory bandwidth. The neural network support includes RNN and FC layers. There are ISA extensions specific for AI, vision, SLAM, Radar and sound. Combined with other changes SensPro2 delivers a 2X boost for AI inferencing, up to 6X peak performance gain, 2X better memory bandwidth and 20% energy savings. These improvements create the opportunity for SensPro2 to support an expanded range of high and low end applications.

Across all members of the SensPro2 family there is a common ISA, which means that moving to a different core is seamless when performance needs to scale. In addition to the three previous cores, the SP250, SP500 and SP1000, there are two new low-end cores, the SP50 and SP100, along with two new floating-point cores, the SPF2 and SPF4. The SP50 through SP1000 have MACs that support INT8 and INT16, and allow the addition of an FP32 MAC. The SPF2 and SPF4 offer FP32 floating-point MACs only.

Focus on Performance

Under the hood SensPro2 offers impressive specifications. There is 8-way VLIW with a highly configurable architecture. It clocks up to 1.6GHz on 7nm silicon. It can deliver 3.2 TOPS (INT8) and 400 GFLOPS, using 64 single precision or 128 half precision FP MACs. The memory architecture offers 400 GByte/second of bandwidth. It includes a 4-way instruction cache and a 2-way vector data cache.
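As a rough consistency check (my own arithmetic, not a CEVA figure), peak MAC throughput is roughly the number of MACs times two operations per cycle (a multiply and an add) times the clock rate:

```python
# Back-of-the-envelope peak-throughput estimate: each MAC contributes a
# multiply and an add per cycle, so peak ops/s ~= num_MACs * 2 * clock.
clock_hz = 1.6e9   # up to 1.6 GHz on 7 nm, per the figures above

def peak_gops(num_macs, clock=clock_hz):
    return num_macs * 2 * clock / 1e9

print(f"128 half-precision MACs: {peak_gops(128):.0f} GFLOPS")  # ~410, in line with the ~400 GFLOPS quoted
# The 3.2 TOPS INT8 figure similarly implies on the order of
# 3.2e12 / (2 * 1.6e9) = 1000 INT8 MAC operations per cycle.
```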

Using their own benchmark numbers, CEVA shows that SensPro2 beats their previous generation by anywhere from 1.8X to 5X on CV benchmarks. SLAM benchmark results for SensPro2 show 1.8X to 6.4X over the previous generation. Similarly, for audio processing the SP250 core showed DeepSpeech2 results that were 18.9X faster than their general purpose CEVA-BX2 DSP. SensPro2 has improved Radar performance capabilities as well.

Development Environment

CEVA backs up these IP improvements with solid and mature software development libraries. Included are ClearVox noise reduction, WhisPro speech recognition, wide-angle imaging, a SLAM SDK, TensorFlow Lite Micro support, CDNN, and OpenVX & OpenCL. These all contribute to an extremely wide range of end applications. In the area of AI they support TensorFlow. CEVA has its own neural network compiler, CDNN, that supports over 200 NNs and is fully optimized for the SensPro2 processors. It includes graph optimizers for accuracy optimization, retraining and scaling per layer.

CEVA is well positioned with this new generation of sensor fusion IP. The IP covers the full range of potential applications and is highly configurable. It is well supported with development libraries. They have shown great strides in improving performance to keep up with market needs. The full announcement can be found on the CEVA website.

Also Read:

Sensor Fusion Brings Earbuds into the Modern Age

Sensor Fusion in Hearables. A powerful complement

Low Energy Intelligence at the Extreme Edge


Best Practices are Much Better with Ansys Cloud and HFSS
by Daniel Nenni on 02-04-2021 at 6:00 am

Compute environments have advanced significantly over the past several years. Microprocessors have gotten faster by including more cores, available RAM has increased significantly, and the cloud has made massive distributed computing more easily and cheaply available.

HFSS has evolved to take advantage of these new capabilities, and as a result is orders of magnitude more capable of solving large designs, designs that you could barely imagine solving before. For some customers, it means being able to solve more design variations in parallel to find the optimal design before manufacturing the first prototype. For other customers, it means spending less time and thought on simplifying designs and instead creating models that include more of the electronic component's details and its surrounding system, including the device enclosure, or even placing it in its operating environment.

The time spent preparing a model to be solved with HFSS has decreased significantly over the years as various steps in the process have become automated. But whether the steps are automated or done manually, a decision is made every time a simplification occurs – is removing that detail going to impact accuracy? Is removing that detail going to save significant computational time? Making those decisions, or compromises, requires experience and expertise.

Customers are compromising less and achieving more when using current best practices in the latest version of HFSS. Take, for example, this PAM4 Package model from Socionext where the goal is to extract 12 critical high-speed IO nets.

A legacy best practice is to cut the package model down as much as possible to reduce overall RAM footprint. This often results in a complex shaped cutout with boundaries very close to the critical nets of interest. How close is too close? That decision requires experience and expertise so that accuracy is not compromised. Let’s compare this legacy complex cutout method to a simpler, rectangular cut that preserves the true boundaries of the original package model.

The additional RAM and time to solve are not significant, especially considering the time and thought that goes into creating a conformal cutout and the unfortunate compromise in accuracy when, as in the case above, a differential pair proved to be "too close" to the conformal cut boundary! With the cost of RAM greatly reduced and compute resources more readily accessible, there is no longer a need to compromise on accuracy with legacy best practices like creating conformal cutouts.

But wait – did you miss the fine print in the table above? What do we mean when we indicate that the Ansys BKM model used “5.5GB (44GB distributed across 8 tasks)”? This means the model was solved on 8 compute nodes where it required a maximum of 5.5GB on any given node and 44GB total RAM when you add up the RAM used on all 8 nodes.

Getting your model to distribute the solve process across multiple nodes is enabled by default with HFSS’s automatic HPC setting – in other words, you don’t need to do anything more than submit the job to run on multiple nodes and HFSS will automatically distribute the solve process. Using automatic HPC settings along with submitting jobs to Ansys Cloud are two HFSS HPC best practices, and they were both used to solve the above Socionext PAM4 package model. Simply following those two HPC best practices make it easy to solve bigger problems faster than ever before.

Not sure how much of your layout you can preserve when performing a full 3D extraction with HFSS? Get ready to be amazed by HFSS’s speed and capacity when you read this blog post to discover that HFSS on Ansys Cloud can be used to model an entire RFIC!

HFSS best practices have evolved to reduce time spent in both pre-processing, by eliminating compromises made when simplifying models, and solving, by taking advantage of distributed computing and the cloud, especially Ansys Cloud. To summarize a few of the best practices described in this blog: 1) use the latest version of HFSS, 2) forgo complex-shaped cutouts because the insignificant RAM savings no longer justify the risk to accuracy, 3) use automatic HPC settings on Ansys Cloud. The best practices discussed here are just a few examples that span across many layout-based applications; please work with the Ansys Customer Excellence team to ensure that you're using all the latest best practices for your specific application.

If it has been some time since you've reviewed and updated your HFSS practices and scripts, you are leaving performance and accuracy on the table. You are entitled to the advantages of HFSS's advancements; you are paying for them, so make sure you use them. They can boost your productivity by 10x or more.

Related link: The Easiest New Year’s Resolution: Better, Faster Simulations

Also Read

System-level Electromagnetic Coupling Analysis is now possible, and necessary

HFSS – A History of Electromagnetic Simulation Innovation

HFSS Performance for “Almost Free”


ESD Alliance and IEEE CEDA Announce a New Recognition Program – the Phil Kaufman Hall of Fame
by Mike Gianfagna on 02-03-2021 at 10:00 am

Phil Kaufman Award Winners

Anyone even remotely associated with the EDA industry will know about the Phil Kaufman Award. Every industry has its ultimate recognition – the Academy Awards and the Grammys are familiar ones in pop culture. The Nobel Prize gets a bit geekier and the Morris Chang award from the GSA is geekier still. If you're an EDA geek, the Phil Kaufman Award is the ultimate recognition. There have been 25 recipients of this prestigious honor since its inception in 1994, pictured above. There's one problem though. Deceased members of the community are not eligible to receive the Phil Kaufman Award, a policy set by the IEEE. A recent decision has changed that. When I saw the ESD Alliance and IEEE CEDA announce a new recognition program – the Phil Kaufman Hall of Fame – I got very interested.

The new program posthumously recognizes individuals who made significant and noteworthy contributions through creativity, entrepreneurism and innovation to the electronic system design industry and were not recipients of the Phil Kaufman Award.  Some of the Kaufman Award recipients above are no longer with us. Each was a significant force in our industry; they are missed. I think it’s a great idea to allow other deserving and high-impact contributors who are no longer with us to be recognized. The Phil Kaufman Hall of Fame allows this to occur.

“Many individuals made significant contributions to the semiconductor design industry and helped it grow to where it is today, underpinning the entire global semiconductor and electronic products markets,” said Bob Smith, executive director of the ESD Alliance. “Unfortunately, many of these contributors died but should be recognized for their efforts that were instrumental in shaping our community. The Phil Kaufman Hall of Fame is intended to change that.”

Nominations for the Phil Kaufman Hall of Fame are now open. Submissions will be reviewed by the ESD Alliance and IEEE CEDA Kaufman Award review committees and approved nominees will be honored for their contributions and achievements in 2021. Nominations will remain open through Friday, March 26, 2021. You can learn more about the program and download the nomination form here. Anyone can submit a nomination and the form is relatively short, so think about deserving professionals who are no longer with us. This is a great way to keep their memory alive.

Inductees will be announced in early April. A special Phil Kaufman Hall of Fame page on the ESD Alliance and IEEE CEDA websites will host their photos, citations and tributes.

A little background on the Phil Kaufman Award and the organizations that support it would be useful. The Phil Kaufman Award honors individuals who have had a demonstrable impact on the field of electronic system design through technology innovations, education/mentoring, or business or industry leadership. It was established as a tribute to Phil Kaufman, the late industry pioneer who turned innovative technologies into commercial businesses that have benefited electronic designers.

After many years as a design engineer and manager at companies including Intel, Phil became chairman and president of Silicon Compiler Systems, an early provider of high-level EDA tools.

Subsequently, Phil became CEO of Quickturn Design Systems, a pioneer in emulation.  Phil passed away from a heart attack during a business trip in Japan in 1992. The ESD Alliance (previously the EDA Consortium) founded the Phil Kaufman Award to honor his memory and contributions to the electronic design industry.

The Electronic System Design (ESD) Alliance, a SEMI Technology Community representing members in the electronic system and semiconductor design ecosystem, is a community that addresses technical, marketing, economic and legislative issues affecting the entire industry. It acts as the central voice to communicate and promote the value of the semiconductor design ecosystem as a vital component of the global electronics industry.

The IEEE Council on Electronic Design Automation (CEDA) provides a focal point for EDA activities spread across seven IEEE societies (Antennas and Propagation, Circuits and Systems, Computer, Electron Devices, Electronics Packaging, Microwave Theory and Techniques, and Solid-State Circuits). The Council sponsors or co-sponsors over a dozen key EDA conferences including: the Design Automation Conference (DAC), Asia and South Pacific Design Automation Conference (ASP-DAC), International Conference on Computer-Aided Design (ICCAD), Design Automation and Test in Europe (DATE), and events at Embedded Systems Week (ESWEEK).  

The Council also publishes IEEE Transactions on Computer-Aided Design of Integrated Circuits & Systems (TCAD), IEEE Design & Test (D&T), and IEEE Embedded Systems Letters (ESL). The Council boasts a prestigious awards program in order to promote the recognition of leading EDA professionals, which includes the A. Richard Newton, Phil Kaufman, and Ernest S. Kuh Early Career Awards. The Council welcomes new volunteers and local chapters.

Download the nomination form and get involved as the ESD Alliance and IEEE CEDA announce a new recognition program – the Phil Kaufman Hall of Fame. You’ll be glad you did.


Qualcomm Takes the Wheel
by Roger C. Lanctot on 02-03-2021 at 6:00 am

Qualcomm took center stage in the automotive industry this week to state its intention to dominate future dashboard infotainment systems. Long known for its wireless connectivity presence, Qualcomm took the wraps off ramping infotainment design wins for its 3rd generation Snapdragon platform while revealing its next, 4th generation Snapdragon solution.

The range of announcements, which included multiple strategic collaborations with car makers and suppliers, highlighted Qualcomm’s successful convergence of connectivity, safety, and infotainment technology into a single device thereby tipping a hat to the architectural transformation sweeping the automotive industry. Qualcomm is leaning in to the cockpit domain controller movement that is integrating functionality to enhance driving safety and pleasure.

Suitably, Qualcomm thrust General Motors to the forefront of its announcement. GM is the go-to partner to highlight infotainment innovation as the company provides the optimal combination of volume vehicle deliveries with innovation and risk taking in dashboard designs.

GM provided the added impetus this year of highlighting its own thrust into electrification – with the launch of the Ultium EV platform – and its aggressive moves into driving automation, also a highlight of Qualcomm’s announcements. Qualcomm’s partnership with GM had the added benefit of emphasizing the importance of China’s automobile market, the largest in the world. For years, GM has sold more cars in China than it has in the U.S. (GM is second only to Volkswagen among foreign auto makers.)

An important element of Qualcomm’s launch was the inclusion of its Snapdragon Ride Platform portfolio of safety-grade system-on-chips (SoCs) designed for automotive safety integrity level D (ASIL-D) systems. These chips embody the essential integration of safety, connectivity, and artificial intelligence capabilities suited to fulfill requirements for New Car Assessment Program (NCAP) Level 1 advanced driving assistance systems (ADAS) systems and Level 2 automation systems. Seeing Machines, Arriver, and Valeo (Park4U) were all mentioned as Qualcomm strategic partners.

A further roster of essential partners mentioned as part of Qualcomm's announcement included key tier collaborators such as Garmin, Google, Harman International, Joynext Technology, LG Electronics, Panasonic, AlpsAlpine, Maxim Integrated, Micron, and many others.

Qualcomm’s multifaceted announcement marks a critical changing of the guard in the automotive industry and the conclusion of a decades-long battle to tackle the automotive infotainment market. Prior to Qualcomm’s late arrival with high profile design wins across the globe, the company watched from the sidelines as first Intel and then Nvidia sought to conquer the automotive infotainment opportunity.

Intel notched strategic wins with BMW, GM, FCA (Stellantis), Volvo and Tesla while Nvidia touted wins at Audi. Both companies ultimately shifted their focus almost exclusively toward advanced driver assistance and autonomous drive systems.

Qualcomm’s climb to the top was also complicated by its failed $44B acquisition of fellow automotive semiconductor supplier NXP – a deal initiated in October 2016 on the cusp of President Donald Trump’s election and finally abandoned in the face of Chinese objections more than a year later. The rejected acquisition may or may not have delayed Qualcomm’s rise, but its official arrival was completed this week.

(Similar roadblocks have emerged to SoftBank Group's attempt to sell U.K. chip designer Arm to U.S. chipmaker Nvidia, according to Nikkei.com. The proposed $40B acquisition "is hitting regulatory roadblocks in major markets, as the blockbuster deal has raised antitrust and national security concerns among policymakers.")

The onset of in-dash Snapdragon solutions marks the simultaneous rise of Qualcomm and China in the automotive market. It will be interesting to see Qualcomm’s performance in the automotive market both inside and outside China – given the company’s massive intellectual property portfolio.

As a major advocate of protecting intellectual property, Qualcomm stands at the fulcrum of the world's largest automotive market as a rising force in automotive and wireless semiconductors, setting the stage for unique opportunities and outcomes. Given the substantial contributions to Qualcomm's announcement from Chinese partners, it is clear the company is committed to a massive in-market presence.

Making Qualcomm’s announcements this week even more important is the potential impact on vehicle electrification, connectivity, and driving automation. Converging all of these experiences in the fourth generation Snapdragon platform sets the stage for entirely new driving experiences – in mass market vehicles – intended to save lives, fuel, and time while enhancing the overall in-vehicle experience.

Announcements and endorsements highlighted in Qualcomm’s Automotive Redefined Technology Showcase 2021 event are available here: https://www.qualcomm.com/news/media-center/press-kits/automotive-redefined-technology-showcase-2021


Do You Care About What You’re Measuring? Part 2: Cloud Data Centers
by Steve Logan on 02-02-2021 at 10:00 am

When I think about servers and data centers, I think about multiple-core/high-power CPUs, Intel’s domination over the years and GPUs coming on strong in recent years. I think about very fast digital interfaces, such as PCI Express connections and the latest DDR memory interface. Precision analog isn’t something that first comes to mind. But it’s there, with one of the larger cloud computing companies, if you look closely enough.

One of my favorite aspects of 20 years in the semiconductor industry is the job of new product definition. Finding a market need for a new analog mixed signal device, especially in this era with tens of thousands of integrated circuits, is definitely not easy. In this case study, our product definition started with a great relationship between our sales team and the customer. Based on the level of feedback we were able to get, the account manager and field apps engineer clearly earned the trust of the lead designer at this cloud computing customer.

In data centers, “metered power” has become one of the common ways of charging their end customers for usage. Fixed costs for power allocation are common by the rack. In more recent years, a new “pay for use” pricing model has emerged. By pricing for allocated power and reserved cooling capacity, cloud computing companies and their end customers can reduce overbuilding and overpaying.

In our customer example, we learned this designer needed to accurately measure power in order for the cloud computing company to accurately bill their end customers down to the milliseconds (or maybe even microseconds) of server power usage. As the business manager of the amplifier product line, I got pulled into the conversation. We initially discussed using a 1-milliohm shunt resistor, an existing current sense amplifier, and measuring the amplifier output with a microcontroller's ADC. The MCU also had an integrated multiplexer in front of the ADC. The designer's original thought was to use our current sense amplifier through one mux channel and measure the voltage with another channel.

Measure current. Measure voltage. Calculate power in the digital domain. Simple, right?

But not so fast. After some reflection from my product definer, along with the customer, we thought about the use case of transient responses. With those complex CPU functions, GPU functions, memory reads and writes and high-speed digital interfaces, the transients and load steps could be pretty nasty. With each ADC measurement and power calculation, they’d have to account for the delay through the multiplexer between the voltage and current readings. In a slow moving or steady state current load, they’d be fine. But with fast moving transients, both sides came to the conclusion that a power multiplier/amplifier was needed. The power amplifier calculated instantaneous power by multiplying the load current across a sense resistor by a fraction of the voltage set by an external resistive divider in order to produce a true power output from the amplifier.
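To see why that mux delay matters, here is a simplified, hypothetical sketch (not the actual customer design) that simulates a fast load step and compares true instantaneous power against the product of a voltage sample and a current sample taken a few microseconds apart:

```python
# Illustrative only: error from multiplying non-simultaneous V and I samples
# during a fast load step, versus the true instantaneous power an analog
# power multiplier would see.
import numpy as np

t = np.arange(0, 200e-6, 1e-6)                   # 200 us window, 1 us steps
i_load = np.where(t < 100e-6, 2.0, 20.0)         # 2 A -> 20 A load step at t = 100 us
v_rail = 12.0 - 0.02 * i_load                    # small droop on a 12 V rail

p_true = v_rail * i_load                         # instantaneous power

mux_delay = 5e-6                                 # current sampled 5 us after voltage
i_delayed = np.interp(t - mux_delay, t, i_load)  # the stale current the ADC pairs with V
p_sequential = v_rail * i_delayed

print(f"Worst-case error from sequential sampling: "
      f"{np.max(np.abs(p_sequential - p_true)):.0f} W")
```

In steady state the two agree, but around the step the calculation pairs a new voltage with a stale current, which is exactly the gap the hardware power multiplier closes.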

Another interesting product differentiation was providing a current out of the power amplifier. Nearly every other amplifier and current sense amplifier provides a voltage output. In this customer’s case, the power amplifier was located on the server board a long distance away from the MCU with the integrated ADC. Similar to the principle behind 4-20mA industrial current loops, the current output from the power amplifier eliminates any errors caused by voltage drops across the parasitic resistance of the PCB, which is often significant for high-current systems. With a simple resistor to ground at the mux/ADC input, the customer could convert the current to voltage and get accurate, fast readings representing instantaneous power.

In the end, the data center company could accurately measure the power drawn and bill their customers accordingly. They designed in our power multiplier. And they clearly cared about what they were measuring.

Part I The Question That Has Guided My Analog Mixed Signal Career


Falsely Vilifying Cryptocurrency in the Name of Cybersecurity
by Matthew Rosenquist on 02-02-2021 at 6:00 am

I get frustrated by shortsighted perceptions, which are misleading and dangerous. It is far easier to vilify something people don't fully understand.

Here is another article, titled Bitcoin is Aiding the Ransomware Industry, published by Coindesk, implying cryptocurrency is the cause of digital crime.

This is one of many such pieces littering the Internet. I am a cybersecurity expert and have spent over 30 years fighting theft, fraud, and misuse of computers. I find such articles to be shortsighted and lacking the strategic picture which must consider both the benefits and drawbacks of any technology.

Correlation is not causation.

One small negative aspect is not the whole picture and shouldn’t be used to undermine great innovation (such tactics also tried to stop the automobile, electricity, and the Internet).

Let’s not only focus on skewed statements but instead, think critically about how technology can be leveraged for the greater benefit.

If narrow viewpoints, like the claim that Bitcoin is aiding ransomware, are considered, then it would be equally logical to contemplate the following absurdities:

  • Email is aiding the Phishing Industry
  • Web browsers are aiding the Online Fraud Industry
  • Operating Systems are aiding the Malware industry
  • Networks are aiding the Denial-of-Service industry
  • Cash is aiding every Crime industry
  • Communications are aiding the Disinformation Industry
  • Credit cards are aiding fraudulent purchasing

Is it justifiable to blame digital technology for every societal issue? Technology innovation can be greatly beneficial and should not be flatly vilified for a small negative contribution.

For the record, less than 1% of crypto transactions are related to crime. Percentage-wise, cash is used more often in criminal activities, as it is the preferred monetary instrument and store of value. If impacting crime is the motivator, then perhaps the conversations should be about eliminating cash. From a cybersecurity perspective, moving to a digital system has far more merits than moving away from cryptocurrency.

Fear of the Unknown

Fear or a lack of understanding is the key to this line of false-logic being perpetuated. Nobody is advocating the elimination of the Internet, computers, electricity, automobiles, or credit cards because society already understands their value. They are willing to accept the accompanying risks because they want the benefits.

Most of the people I encounter, who are quick to shun digital currency, often are unable to describe the long-term benefits to themselves or the global community. Only knowing the downsides makes for a highly biased position. History repeats itself. Years ago, people rallied to outlaw the emergence of automobiles, favoring a world reliant on animal power. They only saw the obnoxious sound, smell, and mechanical dangers but failed to see the benefits of a global transportation and logistics network that fueled massive economic growth, social liberties, preservation of freedoms, and expansion of personal independence.

We all must look beyond the hype.

Crime, fraud, and theft existed long before cryptocurrency, digital networks, and computers. Crypto and digital currencies also bring tremendous benefits, including the amplification of innovation, presently and especially in the future.

Tech is a tool and it can be used for good or bad. See both sides before making a judgment.


Trust, but verify. How to catch peanut butter engineering before it spreads into your system — Part 1: Validation.
by Raul Perez on 02-01-2021 at 10:00 am

I will address this topic with two blog posts: validation (i.e. post silicon) — Part 1, and verification (pre-silicon) — Part 2 (coming soon!). In this blog post, I will focus on validation.

One of the upsides of using catalog chips that have been in the market for a long time and have ramped in substantial volumes is that other system companies have already found a lot of the bugs. The chip supplier has had an opportunity to fix them, screen or calibrate defective parts on automated test equipment (ATE), withdraw the chip from the market, or at least warn new users about bugs with an ERRATA. Your system may be different, and you may still get bitten by some bug that is exposed due to your unique operational conditions. But generally, catalog parts, after they have ramped for some time in volume, provide a certain herd immunity to the system companies that use them.

Unfortunately, catalog chips are also going to cost you significantly more than a custom silicon chip at larger volumes, the footprint will be significantly bigger considering all the components needed, and they will also result in a more inflexible set of options for the system designers. But when you go the custom silicon route, YOU are the first and possibly the ONLY user of this chip. So how do you prevent silicon bugs from getting into your system?

First, let’s talk about the risks:

  • Peanut butter engineering at your chip supplier.

This refers to the reality that your chip supplier is in the business of making as many chips as they can, in as short a period of time as they can, with a fixed amount of resources. A strong engineering culture at your supplier is a mitigation in general. But given the commercial pressures chip companies are under, they need to deliver revenue. And the pressure from management is generally to produce more with the same engineering resources; to spread the peanut butter, so to speak, over all the chips they are working on. So how does peanut butter engineering manifest itself in real life during validation?

Here are some ways:

  1. Very liberal (i.e. watered down) interpretations of JEDEC standards.
  2. Over-leveraging previous chip data. Examples are: qualifying by similarity (QBS) more than they should; using old chip data to reduce how much is validated in a new chip under the notion that it's "the same circuit," even though the layout is different, it's not exactly the same circuit, or it's not surrounded by the same circuits/noise; bench validation with too small a sample quantity; no corner-material ATE testing; etc.
  3. Un-root-caused ECOs: shotgun engineering and spec-limit opening to get the silicon out the door without knowing why something is not behaving as expected according to worst-case simulations.
  • Automated test equipment (ATE) program changes over the lifetime of the product.

The J-STD-46 standard defines some of the reasons why a chip supplier must inform customers that a major change has been made, provided that they have purchased components up to 2 years prior, and with whom there are various possible contractual obligations. In Annex A of J-STD-46, under datasheet changes, the "Elimination of final electrical measurement or burn-in (if specifically stated in the datasheet as being performed)" is listed as an example of a major change that requires a PCN (product or process change notice) to be issued. However, most datasheets that I've seen don't explicitly state what is ATE tested and what isn't. So the supplier, absent some other agreement with the system company, will not issue a PCN for a test program change. In my experience, suppliers remove tests over the lifetime of the chip without issuing a PCN, responding to market pressure to reduce the price of the product while trying to maintain profit margins. This means they lower costs by removing tests based on "historical data" for that chip. You may get a lot that is very different from historical data, and that can cause a problem that is undetected at ATE because the tests have been removed.

  • Supplier development teams in different business units (BUs) don’t necessarily have the same engineering methodologies and same rigor.

You may have a good relationship with, and good working knowledge of, how one team at a chip supplier works, and be very satisfied with it. But when it comes to validation and verification, each team tends to do its own thing. Sometimes justifiably so, depending on what products they make. But many times simply because the teams have different engineering cultures, and there may not be a truly unified methodology in practice across all of the supplier's development teams. It often happens that smaller chip companies are acquired over time by larger ones, and they continue doing things the way they have always done them. That may have been great 10 or 20 years ago but may no longer be so great. So every time you start working with a new chip development team at one of your known suppliers, you need to raise your guard and check as if this were a new supplier you've never worked with before.

All of these risks have mitigations, which are fairly straightforward to implement as long as the chip supplier is cooperative and the system company has the right specialists on its side.

The following are the main mitigations for the risks above:

  • Contracts. This is not legal advice, and I am not a lawyer, so please make sure to seek legal counsel to draft good contracts.

Require in your contracts with custom silicon suppliers the following:

  1. Establish your PCN requirements in your supplier agreement. Make sure to cover changes to the ATE program. This can get messy, though, since these are changed often by suppliers. They'd have to issue a new datasheet showing which parameters are no longer covered by the datasheet, assuming you implement point (2) below.
  2. Require the supplier to provide you a datasheet that describes how every parameter will be verified/guaranteed. Usually, the main ways to guarantee a parameter in a datasheet are by design/simulation, by bench evaluation (30 or more units), by corner lot evaluation, by ATE, by qualification testing (JESD47, JESD22, and JESD17), or by a combination of the above. Notice that asking for the datasheet to show how each parameter is validated/guaranteed (including by ATE) will require the supplier to issue a PCN whenever the test program is changed to reduce coverage, according to J-STD-46.
  3. Require the supplier to provide you with the validation reports used to guarantee the parameters in the datasheet. This is important since you want to see that the data was actually collected, for how many units, and what the CPK and confidence interval for the data are. The supplier should provide you with detailed bench testing reports for all the blocks and interfaces, detailed 30-unit-or-more bench evaluation reports for all the parameters guaranteed in the datasheet by bench evaluation (and for whichever parameters guaranteed by design it is possible to characterize), and a detailed report showing corner samples, at a minimum, passing all parameters guaranteed by ATE in the datasheet.
  • Review all validation reports in detail.

Check the following in the reports:

  1. Look at the plots to check that the data points truly add up to the sample size the supplier says it used to calculate CPK, and also to check that they actually took the data on your chip and didn't just re-use old data from a different chip.
  2. Check what tests are failing for corner parts. Do you care about the parameter that is failing for corner parts? Is it one of the corners that broke your system tests during the corner build? Chip suppliers may simply say not to worry, that corners don't really happen in real life, or give any other type of justification to do nothing about the issue. They could do this because they have many other chips on their work list to worry about, so your chip's corner fails are not their worry. But when the system engineer has the corner sample test fixture results, he may realize that the corner parameter failing ATE at the supplier is also correlated to the tests he is seeing issues with at the system factory tester for the corner build data. So while those corner units are being filtered out by ATE at the supplier, if later in the chip's lifetime that ATE test is removed to reduce chip test cost, you will start seeing those corner units making it into the system builds and showing up as DPPM issues. So, bottom line, the chip supplier needs to provide these reports with data plots using the same parameter naming as in the datasheet, so that the system engineers can check against their factory test fixture data and flag which ATE tests should never be removed by the chip supplier.
  3. Check the CPK for the bench validation data. Parameters in the datasheet that are guaranteed by bench validation or guaranteed by design are NOT ATE tested, which means whatever data the supplier took to generate those reports is the data that will forever guarantee those parameters. Is the sample size big enough? How many lots of units did the supplier use for the bench evaluation? Do the data plots look bi-modal? (A minimal CPK calculation sketch follows at the end of this list.)
  4. Check the qual report to make sure the chip supplier is following the JEDEC standards previously mentioned.
  5. Complete a correlation between your system factory test fixture and the chip supplier's ATE tests for system-critical parameters. I've worked on a lot of PMICs, and one test that is a classic for this is end of charge: once you enter the final phase of charging in voltage mode, the time to 100% is heavily influenced by the resistance in the system charge path, which is system specific. There are many different types of test correlations that could be applicable to your system.
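To make the CPK check in item 3 concrete, here is a minimal sketch of the calculation against datasheet limits. The parameter, limits, and measurements are hypothetical; a real review would also look at sample size, lot-to-lot variation, and confidence intervals rather than just the point estimate.

```python
# Minimal CPK check on supplier bench data against datasheet limits.
# Cpk = min(USL - mean, mean - LSL) / (3 * sigma); 1.33 or better is a
# common acceptance target.
import statistics

def cpk(measurements, lsl, usl):
    mu = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)   # sample standard deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical 30-unit bench data for a reference voltage spec'd 1.190 V min / 1.210 V max.
data = [1.1995, 1.2002, 1.1998, 1.2010, 1.1989, 1.2004, 1.1997, 1.2001,
        1.1993, 1.2006, 1.1999, 1.2003, 1.1996, 1.2000, 1.1991, 1.2005,
        1.1998, 1.2002, 1.1994, 1.2007, 1.2001, 1.1997, 1.2003, 1.1999,
        1.1992, 1.2004, 1.2000, 1.1996, 1.2001, 1.1998]

print(f"n = {len(data)}, CPK = {cpk(data, lsl=1.190, usl=1.210):.2f}")
```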
  • ECO reviews.

When the chip is evaluated, some bugs may be found. It is critical that the system company has chip specialists helping to review the ECOs proposed by the chip supplier to make sure that proper root causing is completed by the chip supplier. Chip specialists that may be needed, depending on what types of bugs are found, include: DV and AMS verification engineers, analog chip designers, digital chip designers, RF chip designers, package engineers, foundry engineers, and others. Chip suppliers are sometimes under pressure to tape out quick fixes, or to simply hand-wave away issues and ask for spec limit changes. This is a very serious danger to your custom chip program, as you can end up in a run-break-fix cycle with multiple tapeouts due to bad or incomplete root causing, which will put your system schedule at risk. There are many tools available to debug chip issues, such as FIBs and other FA techniques. You must check that proper methods are being used to root cause your chip's bugs and not accept incomplete root causes for your ECOs. This is why it is vital that your system company has chip technical experts on your side to ensure your project is not the one where the chip supplier spreads resources thin and you end up getting peanut butter engineering, adding risk to your system launch.

  • Request corner samples for the custom chip, and build some of your systems with them.

Not all suppliers, especially the ones that have their own internal fabs, will want to do this without some pushing from the system company side. But it's really the only way you have to check if your system will have any issues when you ramp to higher volumes than what you are building at your EVT and DVT builds. Usually you can build 100 samples of each corner and see what the CPK looks like for your factory tests with those units. On a mostly digital chip, using the slow NMOS/slow PMOS (SS), fast NMOS/slow PMOS (FS), slow NMOS/fast PMOS (SF), and fast NMOS/fast PMOS (FF) corners will give you good insights into whether you will have a problem. As you know, if the CPK is 1 you have a problem, so you want to see a CPK of 1.33 or better at your factory tests with corner samples. This check is part of the validation phase of the custom silicon dev process we run at customsilicon.com.

Some suppliers will tell you that corner units will be filtered out at ATE to try to avoid having to provide the corner samples. But that is not a valid argument because many parameters in a datasheet are not ATE tested, and even the ones that are may not be tested in the future as the supplier starts to remove testing over the lifetime of the product. So you need to know if your system will be sensitive to process corners using system factory testers before you ramp to mass production. If you do all your builds with mostly nominal material you may not see an issue. But once you hit volume you will start seeing DPPM issues if your system is sensitive to some of the chip supplier process corners. It’s better to catch this early and either spin the chip to fix them, change OTP trim, change the ATE to fix the issue by calibration or filter them out at the chip supplier’s ATE so you never receive the parts. Parameters that you find are critical like this should be highlighted to the supplier as a “never remove ATE testing” parameter.

  • Trust, but verify.

Custom system silicon, when done with the assistance of silicon experts, puts the system company in control of its own destiny. It's important to note that when purchasing catalog parts for your system, unless you perform due diligence similar to what is described above, you're trusting but not verifying that your components will be of good quality and not likely to cause yield or other issues when you go to production in high volumes.

For more information contact us.