

NoC-Based SoC Design. A Sondrel Perspective
by Bernard Murphy on 08-03-2022 at 6:00 am


Why are NoCs important in modern SoCs and what are the best design practices for using them? As always, a great place to start is the perspective of an SoC design organization which depends on pumping out high performance designs. Sondrel is a turnkey ASIC service provider, covering the spectrum from system design to silicon supply. They are clearly doing well, with designs now in production in hundreds of products across mobile, security, AR/VR and other applications. Drawing on this experience, they recently released a white paper detailing their rationale for depending on Arteris FlexNoC interconnect and their approach to NoC floorplan and performance optimization.

Why choose a NoC and why FlexNoC?

Interestingly, for Sondrel “Why a NoC?” isn’t even a question that needs to be asked. I would guess that they have already had more than enough experience with the congestion, timing closure and other problems that come with crossbar-based networks in large SoCs.

Why FlexNoC versus an existing in-house network generator? Sondrel cite packetization and serialization in the transport layer, providing them the ability to precisely control where they can reduce wiring and area without compromising performance. They also cite the ability to create a physically aware design even at a very early stage and control over managing power within the network. Perhaps an in-house network could be adapted to provide similar capabilities? In my view, not really. The basic architecture of a NoC is fundamentally different from a crossbar or anything derived from a crossbar. Adapting would be more like redesigning from scratch. I would guess that Sondrel would not consider this a realistic option.

Where does NoC design start?

I’ve always wondered where NoC design starts. Do you design for traffic optimization first, then floorplan, or the other way round? According to Sondrel, either approach can take a while to converge. They start instead with architectural performance exploration. This is used to decide on an appropriate size for the interconnects and memory subsystem by modelling the memory traffic patterns generated by all the subsystems as if they were running on the real system (to a reasonable approximation). Here I believe they start with spreadsheet estimation, then move to SystemC modeling with channels for connectivity.
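
To make that first estimation step concrete, here is a minimal sketch of the kind of bandwidth budgeting a spreadsheet (or a few lines of Python) can provide before any SystemC or RTL modeling exists. The subsystem names, traffic numbers and DRAM configuration are hypothetical, not Sondrel's:

```python
# First-pass memory bandwidth budget; all numbers are illustrative.
subsystems = {
    # name: (read GB/s, write GB/s) average demand on shared memory
    "cpu_cluster": (4.0, 2.0),
    "gpu":         (12.0, 6.0),
    "isp":         (3.0, 3.0),
    "npu":         (8.0, 4.0),
}

# Example DRAM: two 32-bit LPDDR5 channels at 6400 MT/s, ~70% efficiency.
peak_dram_gbps = 2 * (32 / 8) * 6.4       # GB/s raw
usable_dram_gbps = 0.70 * peak_dram_gbps

total_read = sum(r for r, _ in subsystems.values())
total_write = sum(w for _, w in subsystems.values())
total = total_read + total_write

print(f"Aggregate demand: {total:.1f} GB/s, usable DRAM: {usable_dram_gbps:.1f} GB/s")
print("Budget OK" if total <= usable_dram_gbps
      else "Over budget: rethink memory width, QoS or arbitration")
```

Only once a budget like this closes does it make sense to spend effort on cycle-approximate SystemC modeling.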

Once this step has converged, they start running real trials on NoC RTL generated to match the goals of the performance exploration. They do this using a proprietary testbench called the Performance Verification Environment. The RTL connects to transactors, defined in Python, modeling processors and subsystems. In this flow, Python generates memory-mapped bus traffic and drives it through the NoC, allowing the NoC architect to quickly see what is going on in the design and how changes will improve the data traffic flow.
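
As an illustration of what Python-defined traffic generation can look like, here is a minimal sketch. The class and function names are hypothetical; this is not Sondrel's Performance Verification Environment API. In a real flow, generators like these would feed bus-protocol drivers attached to the generated NoC RTL:

```python
import random
from dataclasses import dataclass

@dataclass
class MemTxn:
    """One memory-mapped bus transaction to drive into the NoC."""
    initiator: str          # which subsystem model issues it
    address: int
    is_write: bool
    burst_bytes: int
    timestamp_ns: float

def camera_traffic(duration_ns: float, line_bytes: int = 4096,
                   line_period_ns: float = 10_000.0):
    """Hypothetical transactor: a camera writing one line of pixels
    to DDR every line period."""
    t, base = 0.0, 0x8000_0000
    while t < duration_ns:
        yield MemTxn("camera", base, True, line_bytes, t)
        base += line_bytes
        t += line_period_ns

def cpu_traffic(duration_ns: float, mean_gap_ns: float = 200.0):
    """Hypothetical transactor: random CPU reads to a shared region."""
    t = 0.0
    while t < duration_ns:
        yield MemTxn("cpu", 0x4000_0000 + random.randrange(0, 1 << 20, 64),
                     False, 64, t)
        t += random.expovariate(1.0 / mean_gap_ns)

# Merge the streams in time order and inspect the offered load.
txns = sorted(list(camera_traffic(100_000)) + list(cpu_traffic(100_000)),
              key=lambda txn: txn.timestamp_ns)
print(f"{len(txns)} transactions generated over 100 us")
```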

In this flow, the NoC definition starts from an already converged architectural performance goal. From Sondrel’s perspective it is then much easier to fine-tune the NoC for performance and floorplan deltas, avoiding major oscillations in the plan. It is also easier to adjust the NoC architecture as needed in response to spec updates and, just as important, to give customers feedback on the likely impact of those changes on key performance metrics.

You can learn more about Sondrel HERE.

Also read:

Closing the Communication Chasms in the SoC Design and Manufacturing Supply Chain

SoC Application Usecase Capture For System Architecture Exploration

Sondrel explains the 10 steps to model and design a complex SoC



Chiplets at the Design Automation Conference with OpenFive
by Daniel Nenni on 08-02-2022 at 10:00 am


SemiWiki has been tracking the popularity of chiplets for two years now so it was not surprising to see that they played a key role at DAC. The other trend we foresaw was that the ASIC companies would be early chiplet adopters and that has proven true. One of the more vocal proponents of chiplets at DAC#59 was OpenFive, a 17+ year spec-to-silicon design company with more than 350 tapeouts and more than 150M parts shipped, simply amazing!

Chiplets are all about reusability and quick time-to-market for chips big and small. Using silicon-proven hard IP enables design starts by reducing costs and cutting tapeout time, which of course plays perfectly into the spec-to-silicon ASIC business.

OpenFive has gained significant momentum in this trending market with several early customer engagements and multiple wins on D2D IP for chiplets. OpenFive also recently announced that they have joined the Universal Chiplet Interconnect Express (UCIe) Consortium which we covered (OpenFive Joins Universal Chiplet Interconnect Express (UCIe) Consortium). OpenFive is very well positioned based on their D2D IP, custom silicon implementation, and advanced packaging and manufacturing experience.

We have also written about IP Subsystems and Chiplets for Edge and AI Accelerators and Die-to-Die Interface PHY and Controller Subsystem for Next Generation Chiplets.

In April OpenFive launched a die-to-die interface solution for the chiplet ecosystem:

  • Features IOs running at up to 16Gbps (effective throughput of ~1.75Tbps/mm)
  • Features extremely low latency and <0.5pJ/bit offering best power performance benchmarks

It’s all about the ecosystem, right?

The new D2D PHY helps disaggregate a large SoC die into smaller dies, resulting in better yield, cost and power savings. It features up to 16Gbps NRZ signaling with a clock-forwarding architecture. Each channel, comprising 40 IOs, can provide an effective throughput of up to ~1.75Tbps/mm. Users can stack multiple channels to further increase overall throughput. The PHY also features a built-in PLL, programmable output drivers, and link training state machines.
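
As a quick sanity check on those headline numbers, here is the back-of-the-envelope arithmetic. The channel beachfront width is inferred from the quoted figures, not an OpenFive specification:

```python
# Rough throughput arithmetic for the D2D PHY figures quoted above.
ios_per_channel = 40
gbps_per_io = 16                                   # NRZ signaling rate

raw_channel_gbps = ios_per_channel * gbps_per_io   # 640 Gbps per channel

# ~1.75 Tbps/mm is a beachfront (die-edge) density. Working backwards,
# that implies each 40-IO channel occupies roughly this much die edge,
# assuming all 40 IOs carry payload:
target_tbps_per_mm = 1.75
implied_channel_width_mm = (raw_channel_gbps / 1000) / target_tbps_per_mm

print(f"Raw channel throughput: {raw_channel_gbps} Gbps")
print(f"Implied channel beachfront: {implied_channel_width_mm:.2f} mm")   # ~0.37 mm
```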

“The D2D subsystem, including both the controller and PHY, provides best-in-class latency, performance and power profile for various IO, CPU and analog chiplets,” said Ketan Mehta, Sr. Director, Product/Application Marketing, SoC IP, at OpenFive.

“OpenFive’s die-to-die connectivity IP solution will enable widespread integration of proven solutions from chiplet ecosystem partners,” said Mohit Gupta, SVP and GM, SoC IP at OpenFive. “As a custom ASIC and IP provider, OpenFive is well-positioned to provide an entire chiplet solution to our customers at any stage of development, whether it be during design, integration, manufacturing, or testing of Known-Good-Die (KGD).”

In their DAC booth OpenFive also highlighted an eye-catching demo of an AI vision application and working silicon SoC boards, which included:

  • HBM2E, D2D I/O, and RISC-V on 5nm SoC
  • LPDDR5/4X brought up on 7/6nm SoC
  • HBM2E I/O and RISC-V on 5nm SoC
  • HBM2E on 7nm SoC

For more information, please visit www.openfive.com/ip

About OpenFive
OpenFive, a SiFive business unit, is focused on custom silicon solutions and differentiated IP. With spec-to-silicon design capabilities, customizable SoC platforms, and differentiated IP for Artificial Intelligence, Cloud/Datacenter, High Performance Computing, Networking, and Storage applications, OpenFive is uniquely positioned to deliver highly competitive processor agnostic domain-specific SoCs.

The OpenFive IP portfolio includes High-Bandwidth Memory (HBM3/2E) and low power LPDDR5/4x memory subsystems; Die-to-Die (D2D) interface IP subsystems for heterogeneous multi-die connectivity including chiplets; low-latency, high-throughput Interlaken interface IP for chip-to-chip connectivity; 400/800G Ethernet MAC/PCS subsystems, and USB controller IP. OpenFive offers end-to-end expertise in custom SoC architecture, design implementation, software, silicon validation, and manufacturing to deliver high-quality silicon in advanced nodes down to 4nm.

Also read:

Enhancing RISC-V Vector Extensions to Accelerate Performance on ML Workloads

Die-to-Die Interface PHY and Controller Subsystem for Next Generation Chiplets

Enabling Edge AI Vision with RISC-V and a Silicon Platform



A Solid Methodology is the Margin of Victory
by Mike Gianfagna on 08-02-2022 at 6:00 am


Designing complex systems is difficult. It seems to me that the difficulty has increased at a rate that is more exponential than linear with design complexity. Some version of Moore’s law for design is at work. The challenges for advanced designs just seem to keep mounting. A new item that is top of mind for many executives is the talent shortage. Even if you had the budget, you just can’t find enough qualified engineers. It’s a vexing problem with many proposed solutions. At the root of many solutions is a simple, elegant focus – a solid methodology is the margin of victory.

Semifore recently posted a press release – CommScope Selects Semifore to Accelerate Product Development and Improve Quality of Results. This sounded like one of those margin of victory strategies. You can read the press release at the link provided. It’s also on Semifore’s SemiWiki page here, with lots of other good information about the company.  In this post, I wanted to explore the story beyond what was in the press release. Does Semifore deliver a margin of victory strategy? Here is what I found.

Who is CommScope?

First, a bit about the company mentioned in the press release. According to the press release:

CommScope is a global leader in communications technology to create the world’s most advanced networks. Across the globe, the company is redefining connectivity, solving today’s communications challenges, and driving the innovation that will meet the needs of what’s next.

Networks and communications are indeed a critical enabler for a lot of what’s next. The company provides both hardware and software to enable broadband, enterprise and wireless networks. The product line, support strategy and overall focus on the big picture are quite impressive. After some digging, I wanted to learn more about how ambitious companies found the margin of victory in their markets, and how Semifore fit.

For that, I spoke with Semifore’s co-founder and CEO, Rich Weber. Rich has a storied career designing complex systems at major corporations such as Data General, Silicon Graphics and Cisco.

Exploring the Margin of Victory with Rich Weber

My conversation with Rich explored the bigger picture of what it takes to successfully build a complex system. The specifics for CommScope are outlined in the press release. I was after the bigger story.

Our discussion began with several stories of confusion and chaos. They all tied back to a lack of a solid methodology and timely communication. The stories Rich related were primarily focused on the hardware/software interface, or HSI, for complex systems. The HSI is composed of many registers that implement the protocols allowing software to control the hardware that executes the system’s algorithms.

Small errors in this part of the system can cause big problems. Beyond the visible failure mode of “it didn’t do what we expected” there is the invisible failure mode of “the new feature can’t be implemented because the HSI doesn’t seem to support it”. It turns out that errors in the HSI implementation aren’t the whole story. Rich described several scenarios where a well-meaning engineer made a seemingly harmless change to a bit in the HSI on a Friday evening. That caused regressions to fail over the weekend and invalidated the data driving the various teams working on the project. Teams like documentation, testing and design were all victims. Monday morning was met with chaos. The HSI is quite fragile – small tweaks can result in big headaches.

We also talked about specifying the HSI and the role standards play. It turns out Rich has been directly involved in developing many of these standards, such as SystemRDL and IP-XACT, through organizations like Accellera. What I discovered in my discussions was that these standards are certainly useful, but they all fall short of delivering a complete, robust solution that can express all aspects of the HSI. Solving this problem allows an executable specification of the HSI to be developed, something of a holy grail for many design teams.

With this backdrop, I began to see a pattern. Semifore offers a way to fully specify the HSI with a robust language (the elusive executable spec). They also offer a way to create a correct-by-construction HSI with accurate, up-to-date representations for all team members. No more chaos.
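
To make the idea of a single executable specification more tangible, here is a toy sketch of one register description from which both a firmware header and a documentation table are generated. This is not Semifore's product and not SystemRDL, just an illustration of the single-source-of-truth concept:

```python
# Toy single-source HSI description; every downstream view is generated
# from the same data, so a change to one field updates them all together.
REGISTERS = [
    # (register name, offset, [(field name, lsb, width, description)])
    ("CTRL",   0x00, [("ENABLE", 0, 1, "Enable the block"),
                      ("MODE",   1, 2, "Operating mode")]),
    ("STATUS", 0x04, [("BUSY",   0, 1, "Transfer in progress"),
                      ("ERROR",  1, 1, "Sticky error flag")]),
]

def gen_c_header() -> str:
    """Emit #defines for firmware from the single description."""
    lines = []
    for reg, offset, fields in REGISTERS:
        lines.append(f"#define {reg}_OFFSET 0x{offset:02X}")
        for name, lsb, width, _ in fields:
            mask = ((1 << width) - 1) << lsb
            lines.append(f"#define {reg}_{name}_MASK 0x{mask:08X}")
    return "\n".join(lines)

def gen_doc_table() -> str:
    """Emit a register table for documentation from the same source."""
    rows = ["Register | Field | Bits | Description"]
    for reg, _, fields in REGISTERS:
        for name, lsb, width, desc in fields:
            rows.append(f"{reg} | {name} | [{lsb + width - 1}:{lsb}] | {desc}")
    return "\n".join(rows)

print(gen_c_header())
print(gen_doc_table())
```

Because the header, the testbench hooks and the documentation all regenerate from one description, a Friday-evening bit change shows up everywhere at once instead of silently breaking the downstream teams.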

It seems like CommScope figured out that a solid methodology is the margin of victory. If you dig into what Semifore is up to, you may find your margin of victory as well.



EDA in the Cloud with Siemens EDA at #59DAC
by Daniel Payne on 08-01-2022 at 10:00 am


Tuesday at DAC I had the pleasure of attending the Design on Cloud Theatre where experts from Siemens EDA gave an update on what they’ve been offering to IC and systems designers. I remember attending a cloud presentation from Craig Johnson in 2021, so I was keen to note what had changed in the past 12 months.

Craig Johnson, Siemens EDA – VP Cloud Solutions

Industry Trends, Design on Cloud

Craig shared that there’s been increased adoption over the past 5 years, and more deployments of EDA design on cloud. Why? The improved throughput, cost, and security are all factors, and it depends on the goals of each team.

The EDA requirements have been unique per design team, due to the different size of projects, so no one cloud solution fits all EDA users. Siemens EDA has responded by offering a range of five cloud choices:

  • Cloud ready
  • Managed cloud
  • Cloud connected
  • Cloud native
  • Veloce cloud


Craig had Brian Ramirez, from the Design Verification Technology group, join him. Brian shared some conclusions from the latest Wilson Research Group survey about the biggest verification challenges:

1) Coverage closure
2) Isolate and fix bugs
3) Find the next bug

Verification, Design on Cloud

Brian Ramirez, Design Verification Technology

Doing more verification requires more compute resources, and often that’s only available in the cloud, where teams can get more verification cycles sooner. Engineers can find and fix bugs faster in the cloud, or run more cycles to uncover more of them.

Brian said that cloud users are seeing extreme peaks and valleys in their workloads, and that peak capacity needs exceed the licenses they own. There are so many options across so many cloud configurations that knowing the right balance of cost and turnaround time is a challenge.

There’s also some data overload, where too much time is spent to isolate and then resolve a bug. Verification engineers are spending most of their time just in debug, so how should we make the debug journey more efficient?

There is also a lack of standards: so many cloud suppliers create incompatible flows that it’s not easy to move from one cloud provider to another. What will solve these four issues?

Questa in the cloud can help at high scale, reaching up to a million simulation hours in a day. Verification teams can maintain their workflows, stay flexible, and avoid being shoehorned into just one flow. Hybrid flows are offered in the cloud, with optimized cost and faster time to closure. If you have functional verification questions, talk to Brian.

AMS Verification, Design on Cloud

At DAC 2021 I heard a presentation from Nebabie Kebebew on process variability, and this year at DAC she addressed the topic of AMS verification in the cloud. AMS flows have moved to the cloud, as they are often quite compute intensive, while on-premises verification can simply run out of steam for lack of time. AMS workflows in the cloud are now supported, like circuit simulation with Analog FastSPICE.

Nebabie Kebebew, Senior Principal Product Engineer

With cloud simulation you can explore more and actually meet PPA requirements. Siemens EDA provides cloud flight plans, which are reference architectures for designers, along with methodology white papers and application notes to help AMS designers get started quickly. The flight plans are tuned per cloud provider, and there are Analog FastSPICE papers on how to best use the cloud. Many AMS tools from Siemens EDA are ready for cloud use: Eldo, Symphony, Solido Variation, Solido Characterization Suite.

Physical Verification

Michael White, from the Physical Verification Calibre team, noted that they have been using large-scale compute and cloud for about 15 years now. Internally they were using the cloud for EDA tool development, and now they are allowing customers to also use the cloud, so why consider using the cloud now for physical verification?

Michael White, Sr. Director

The compute requirement for DRC/LVS has exploded with smaller nodes – double patterning, multi patterning, EUV. EDA vendors have turned to the major cloud providers to enable EDA in the cloud: AWS, Google, MS Azure.

Examples of physical verification in the cloud include Calibre PERC for reliability checking and Calibre DRC for design rule checking. Both of these EDA tools scale very well, up to thousands of physical cores. The EDA technology must be cloud ready and optimized with foundry decks, and engineers must have the freedom to choose both their foundry and cloud vendors. There are reference architectures for each cloud vendor, so Siemens EDA is taking a cloud vendor agnostic approach. Using cloud flight plans, with best known methods and even infrastructure as code for Calibre, you get a tailored experience for each cloud provider.

Systems, Design on Cloud

Craig introduced the next presenter, Paul Welch, from the Electronic Board Systems group. Paul sees that typical board and systems engineers work in industries like mil-aero, medical, automotive and IoT. These system flows are different from IC flows: for a systems design there is a BOM, the board is checked for manufacturability, prototypes are built, an engineer has to verify the prototypes, and finally the system and manufacturing process have to be iterated again.

Paul Welch, Electronic Board Systems

For systems design, the lead time and the number of iterations are too long, causing budget overruns. So this digital transformation is a challenge, namely how to realize the digital twin.

Boards are being designed in the cloud with several tools: Xpedition, HyperLynx, PADS Pro Premium. A new board design may start on the designer’s desktop, and then the supply chain becomes a challenge: how does an engineer source parts from the supply chain?

The cloud approach delivers real-time component insight on the desktop, along with cost, lifecycle management, predictive risk factors, and the early feedback needed to reduce supply chain risks. Parts that are on back order can be flagged and dealt with before they are designed in. For collaboration, a team member can choose to work from home or work from anywhere, and there’s a cloud product called Connect for PADS Professional.

Cloud visualization in a web browser allows you to see the schematic, layout and BOM online, all without sending an email update to everyone else. Coming attractions: elastic compute around advanced packaging to solve challenges more quickly, and blockchain for trust and traceability in the supply chain, including the provenance of each part.

Consulting and Learning Services

The final presenter was Sam George, Consulting and Learning Services, who started in the industry back in 2001, when they pioneered the hosted model. Yes, there were early doubts about the feasibility of cloud design, but now it’s much easier with third-party cloud vendors; the applications are ready and cloud friendly, and engineers are accustomed to using cloud apps.

Sam George

The Managed Cloud is an approach where Siemens manages all of the IT aspects for a design team and provides access to the cloud-based tools required. This is a full flow, where a thin client accesses the cloud for schematic, PCB layout and analysis, plus interactive and batch tool usage.

With the hybrid cloud, where there are both peak and batch usages, the engineering team keeps its day-to-day workload on premises but uses the cloud just for the peak licenses. This makes it easier to use tools and services in just the way you want to work. Security is at the top of people’s minds, along with access to all the apps and, finally, the cost of tools. Engineers want to get their job done, so cloud usage is a big deal, with both interactive and batch submissions supported.

Smaller design teams may want all of their CAD tasks managed in the cloud to start with, and then learn how to manage cloud usage themselves.

Emulation in the cloud enables faster and earlier SW development. HW-assisted verification allows a team to do concurrent SW development using real power profiles. An emulation service can cost a team less than ownership. Siemens configures the emulator to start with, and then helps teams be successful with emulation.

Other teams purchase or lease an emulator, but still want Siemens to host the cloud and perform any maintenance tasks. Prototyping is offered in the cloud too, just like emulation.

Summary

Siemens EDA has really started to build out a rather impressive suite of cloud flows for IC and systems design. There are five levels of using the cloud now, so there’s a size that fits the culture and budget of your engineering team. The big-picture goal of design on cloud is to speed up projects.




Post-quantum cryptography steps on the field
by Don Dingee on 08-01-2022 at 6:00 am


In cybersecurity circles, the elephant in the room is a quantum computer in the hands of nefarious actors. A day is coming, soon, when well-funded organizations will be able to rent time on, or maybe even build or buy, a quantum machine. Then, if data is valuable enough, people will hunt for it. Two or three months of compute time on a quantum computer will break any asymmetric encryption algorithm (such as elliptic-curve and RSA) in any device that exists today. The longer devices with these dated asymmetric algorithms hang around, in some cases 10 or 15 years, the more vulnerable they get. But the game is changing as post-quantum cryptography steps on the field, with new algorithms and hardware and software IP.

Six-year NIST competition pares down candidates

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) began its search for quantum-resistant successors to methods defined in three cryptographic standards with a call for proposals in December 2016.

Currently, FIPS 186-4 defines digital signatures, while NIST SP 800-56A and SP 800-56B cover key establishment based on discrete logarithm and integer factorization cryptography, respectively. All these standards rely on public-key algorithms whose security rests on mathematical problems, such as factoring the product of two very large primes, that are safely out of reach for even today’s supercomputers. A quantum computer, however, can solve them in a reasonable amount of time. Once a device’s private keys are recovered, it is compromised, since the algorithms themselves are public.
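
To see why this matters, here is a toy sketch of the hard problem behind RSA-style keys. The numbers are deliberately tiny; real moduli are thousands of bits long, and that size gap is exactly the security margin a quantum computer would erase:

```python
# Toy illustration of the hard problem behind RSA-style public keys.
# Real moduli are 2048+ bits; classical computers cannot factor them,
# but a large quantum computer running Shor's algorithm could.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def next_prime(n: int) -> int:
    while not is_prime(n):
        n += 1
    return n

def trial_factor(n: int) -> tuple[int, int]:
    """Brute-force factoring; only feasible because n is tiny."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

p, q = next_prime(1_000_003), next_prime(1_000_333)   # toy 'large' primes
public_modulus = p * q                                 # this part is public

print(trial_factor(public_modulus) == (p, q))          # True: private factors recovered
# Scale p and q up to ~300 decimal digits each and this loop would run far
# longer than the age of the universe on classical hardware.
```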

After several rounds of proposals, reviews, and revisions, on July 5th, 2022, NIST pared down the candidates to four algorithms:

  • For general encryption, such as accessing secure websites, CRYSTALS-Kyber is a clear winner in manageable key sizes and speed of operation. It uses structured lattice technology, with hyper-dimensional grids (tens of thousands of axes, or more) and long basis vectors, making it very difficult to find a short vector or a lattice point close to a given target (see the toy sketch after this list).
  • For digital signatures, essential for digital transactions or signing documents, three algorithms won. CRYSTALS-Dilithium is the primary choice, with Falcon a second option for more compact signatures. Both also use structured lattices. A more compute-intensive algorithm, SPHINCS+, uses hash functions in an alternative approach.
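
The lattice idea can be shown with a deliberately small example. Brute force works here only because the lattice is two-dimensional; the exponential growth of the search space with dimension is what the security relies on. The basis vectors below are arbitrary illustrations, not taken from any real scheme:

```python
# Toy 2-D illustration of the 'short vector' problem behind structured
# lattices. Real schemes like CRYSTALS-Kyber work in far higher
# dimensions, where a search like this is hopeless.
from itertools import product

# A deliberately skewed basis with long vectors.
b1 = (137, 312)
b2 = (215, 490)

def lattice_point(i: int, j: int) -> tuple[int, int]:
    return (i * b1[0] + j * b2[0], i * b1[1] + j * b2[1])

best = None
for i, j in product(range(-50, 51), repeat=2):
    if (i, j) == (0, 0):
        continue
    x, y = lattice_point(i, j)
    norm2 = x * x + y * y
    if best is None or norm2 < best[0]:
        best = (norm2, (i, j), (x, y))

print("shortest nonzero vector found:", best)
# The winner here has squared length 8, far shorter than either basis
# vector, yet nothing about the basis hints at it. In very high
# dimensions, finding such a vector is computationally infeasible.
```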

PQShield teams helped define device-ready algorithms

These algorithms came from teaming arrangements of experts around the globe. We’re introducing PQShield to readers as a hardware and software IP company. Before their products arrived, they invested years of research into these four algorithms. A PQShield researcher is the development leader and a co-author of Falcon. The other three algorithms, CRYSTALS-Kyber, CRYSTALS-Dilithium, and SPHINCS+, are co-authored by a PQShield advisory board member.

This first-hand insight is invaluable when putting complex algorithms into smaller, resource-bound devices. Anybody can run algorithms like these on a server. Realizing them in an SoC or a high-end microcontroller is another story. For post-quantum cryptography IP, PQShield developed its PQSubSys (short for Post Quantum Sub System), a co-designed, upgradable hardware/firmware coprocessor core.

Two options exist. One focuses on the post-quantum crypto IP elements alone; PQShield can also provide a full solution including an entropy source, a RISC-V core and memory for an integrated cryptography subsystem. This integration leverages the Zkr Entropy Source extension PQShield defined as part of the RISC-V Scalar Cryptography Specification released in October 2021.

Giving SoC architects a new path forward

Before NIST announced its finalists, it would have been tough to give guidance to SoC architects. Building more devices with pre-quantum cryptography IP is far better than designing in no security, even knowing the vulnerabilities about to come. But now, the arrival of finalist post-quantum algorithms and optimized IP creates a new path forward.

And soon, full-custom SoCs won’t be the only option for working with this IP. Microchip Technology has cut an IP licensing deal with PQShield, though no product has been announced yet. Another clue is a new partnership between PQShield and Collins Aerospace, a long-time PolarFire SoC customer. The PolarFire SoC combines RISC-V cores with FPGA gates on one chip, and the PQShield IP seems like a fit there.

There’s still some work ahead on the details of post-quantum cryptography. As with any standards work, early adopters get a leg up but may have to withstand some minor changes between first and final versions. With algorithms and configurable IP in place, PQShield has the knowledge and tools chip designers need to create more secure devices.

For more info, PQShield is publishing their thoughts in an open newsletter:

PQShield Newsletter, July 2022

Also Read:

CEO Interviews: Dr Ali El Kaafarani of PQShield

NIST Standardizes PQShield Algorithms for International Post-Quantum Cryptography

WEBINAR: Secure messaging in a post-quantum world

 



Intel & Chips Act Passage Juxtaposition
by Robert Maire on 07-31-2022 at 6:00 am


-Need more/less spend & more/fewer chips
-The irony of chips act passage & Intel stumble on same day
-Due to excess supply of chips, Intel cuts spending
-Due to shortage of chips, the government increases spending
-How did this happen on the same day? Cosmic Coincidence?

Timing is everything

The irony of Intel cutting spending as demand falls amid too many chips, while on the same day Congress finally passed the CHIPS Act to spend more to make more chips to satisfy demand, is nothing short of priceless.

To be truly fair, the CHIPS Act is much more about onshoring and countering China, and less about fixing a shortage that is already dead and buried. Also to be fair, Intel still sounds committed to catching up from its technology stumble and spending aggressively over the long term to regain manufacturing prowess. The only thing that may change is the timetable, as the spending curtailment may slow some of the progress and stretch the timeline out a bit.

The CHIPS act took so long the immediate crisis was already over

The shortage of chips that prevented Americans from getting their beloved new cars was the genesis of the CHIPS Act. Anti-China sentiment was there all along, and we have been talking about the Chinese threat to the US chip industry for about 7 years now, but it took a Covid-caused shortage that impacted the auto industry in the heartland to wake politicians out of their slumber.

Then it took the politicians so long to argue over the obvious solution that the problem had already gone away through the natural action of industry participants (mainly from outside the US). So we are solving a problem that we no longer have. Talk about closing the barn door after the cows have long since skipped town.

We are past the shortage being over and already into a glut of chips

We heard from Micron that they were not only cutting production but warehousing chips so they didn’t hit the market and further reduce prices. We have heard the bad news in memory repeated from all players and now the glut of memory appears even worse.

Article on chip stockpile in Korea

It’s unclear if the CHIPS Act will have the desired impact

We now want to reshore and regain US preeminence in the semiconductor industry. We doubt that the $52B will do it, but at least it’s a start and a good try. The potential embargo of below-14nm technology in China may actually be more effective.

Will Intel move ahead with Ohio as quickly as previously anticipated? It doesn’t sound like it today. Intel will likely use their new, purpose-built 3nm fab at TSMC in Taiwan for a bit longer. We don’t see GloFo building a new fab in New York, as their current fab utilization doesn’t support it, but they are clearly moving forward with a new fab outside of the US and just doing maintenance spend in Malta.

Micron has already started cutting capex in Boise, so exactly where will we see the desired impact?

Four nodes in five years… probably not

We had suggested that Intel promising four nodes in five years was very unrealistic. With a curtailment in spending and reduced demand, it’s likely that not just capacity spending will be slowed; R&D and technology spending will also be reduced and slowed. We think the previous promise is likely off the table now.

Intel will get better financials

Even though Intel may not spend as much on capacity and technology due to economic headwinds, and results in the near term are obviously below expectation, Intel will get a huge financial boost, not from the $52B but from the associated tax credit in the CHIPS Act, which will certainly boost their after-tax earnings and offset some of the economic weakness. So Intel can win and get higher (after-tax) earnings without spending more or increasing capacity, which is not currently needed. That sounds like a potentially good outcome from a weak economic situation.

Waiting to hear about sub 14NM restrictions on China & CHIPS Act strings

Perhaps even more crucial than the $52B or even the tax credits, it may be more interesting and impactful to see what kind of restrictions are slapped on China’s chip tool purchases and what sort of impact the strings attached to the CHIPS Act have on US companies’ chip business in China.
$52B spread over five years is a little more than $10B a year, which is chump change in the chip industry, so negative regulation may have more impact than positive spending on the competitive positioning between the US and China.

The stocks

It goes without saying that Intel’s stock is going to get trashed after tonight’s performance. We don’t even want to review the results as there is not a lot to add. We certainly hope that the company threw everything including the kitchen sink in with all the bad news to try and get it behind them. We also hope they reset numbers low enough that they can make them.

The collateral impact should be significant. This is obviously a negative for chip equipment makers. The only saving grace in tech land is Apple but that doesn’t help the chips stocks enough to offset bad news out of Intel.

We also think the Intel news more than offsets any positive spin from the CHIPS Act passage as the passage seemed already baked into the chip stock prices whereas Intel’s large miss was not.

Buckle up …it will get ugly!

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also read:

ASML Business is so Great it Looks Bad

SEMICON West the Calm Before Storm? CHIPS Act Hail Mary? Old China Embargo New Again?

ASML- US Seeks to Halt DUV China Sales



Podcast EP96: The History, Reach and Impact of Accellera with Lynn Garibaldi
by Daniel Nenni on 07-29-2022 at 10:00 am

Dan is joined by Lynn Garibaldi, Executive Director, Accellera Systems Initiative. Lynn is the recipient of the Accellera 2022 Leadership Award. Dan and Lynn explore the history of Accellera, its beginnings and growth to a multi-standard organization and the impact of DVCon events around the world.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



The Turn of Moore’s Law from Space to Time
by Daniel Nenni on 07-29-2022 at 6:00 am


“It’s time for the big ship of Moore’s Law to make a turn from space to time” is a nonconformist message that surprised me when I read the book “The Turn of Moore’s Law from Space to Time” by Mr. Liming Xiu.

As a forty-year veteran and as an author of several books on the semiconductor industry myself, I am quite familiar with all contemporary debates on Moore’s Law. In this book, Moore’s Law is investigated from the two fundamental constituents of our universe: space and time. For the first time, microelectronics is inspected at its root for a potential detour to circumvent the space crisis that this field is currently facing.

Moore’s Law 1.0 is about space, Moore’s Law 2.0 focuses on time. The takeaways from the book are: 1) Microelectronics is currently facing a crisis caused by over-exploiting space; 2) A new avenue is along the dimension of time; 3) In the new battle, circuit professionals will take the lead.

Starting around the 2000s, people became aware of an alarming problem with Moore’s Law. Various proposals have been suggested to save its life. Two well-known genres are More Than Moore and Beyond Moore; each of them has numerous variants.

However, no one has pointed out that the root of this crisis is the over-exploitation of space. No matter how sophisticated our engineering skills become, we cannot keep making the basic switch-for-information (i.e., the transistor) ever smaller. We must stop somewhere along this path. Semiconductor insiders know this; however, few are willing to call for jamming on the brakes, for various reasons. The most compelling one is financial detriment, since heavy investment has already been placed in this direction. Like the child in “The Emperor’s New Clothes”, this book blurts out that this route will meet a dead end. The author asserts that now is the time for a major overhaul. The new route is along the direction of time, the only rival of equal weight to space.

To exploit time, this book further advocates two interesting ideas: “adopting nonuniform-flow-of-time” and “using time as the medium for information-coding (switching the roles of voltage and time)”. I have seen people use a fixed-frequency clock to drive electronics since the outset of microelectronics. This naturally leads to a uniform flow of time inside the electronic world. But, as Xiu asks, is this efficient? Can the electronic world employ nonuniform-flow-of-time as the biological world does?

He argues that uniform-flow-of-time is just a dogma adopted for the convenience of our engineering. It does not have to be that way if there is a need to change it. The second idea, explicitly using time as the message, is even more eccentric. Voltage has always been used as the medium for representing information, either in analog or digital style. As transistors become smaller, they switch faster. However, they tend to be much noisier. Further, the supply voltage is made ever lower as the process progresses. All those factors are unfavorable to the continued use of voltage as the message carrier.

On the other hand, the ever-faster-switching transistor offers us the opportunity of using rate-of-switching as an alternative means of expressing our thoughts. This idea, although not yet practical, as the author admits, also sounds legitimate to me.

An exciting aspect of this book is the author’s investigation of microelectronics through the lens of scientific revolution. Employing Thomas Kuhn’s technique of examining anomaly and crisis in the development of science, the author scrutinizes the semiconductor industry using similar tactics, albeit microelectronics is not a branch of science but of engineering. He recognizes the current space-related problem in microelectronics as a crisis, like the ones we have seen in the evolution of a pure science (e.g., physics). Crisis leads to revolution. Following this logic, Xiu describes the past activities of microelectronics as occurring inside a Space-Dominant Paradigm. A paradigm shift is called for to overcome the crisis. The new one is defined as the Time-Oriented Paradigm. I find this proposition thought-provoking.

As history has taught us, new thinking often meets with resistance at its debut. Recognizing the difficulty of a change of mindset, a brief review of the semiconductor industry is presented in chapter two, and an in-depth discussion of several key notions (space, time, change and motion) is carried out in chapter three. In section 4.1, a discussion of a philosophical nature is included to persuade the reader that the turn from space to time is requisite and is the only worthwhile alternative. All those efforts help make the process of changing mindset easier and smoother.

The author of this book has an unusually sophisticated approach (as you will see in the preface). He is an industry scholar. It is unique for a circuit design professional to have such a deep academic consciousness of foundational issues. This scholastic perception, on top of battle-hardened industry experience, enables him to view the industry from an idiosyncratic perspective and create a distinctive book like this one. Overall, this is an inspiring book. The thesis is well supported by the tactics used throughout the book: learning from history, reasoning through philosophical contemplation, describing tools for materialization, and demonstrating plausibility through real cases. This is a book of vision. It has something for everyone involved in the semiconductor industry, absolutely.

The Turn of Moore’s Law from Space to Time — The Crisis, The Perspective and The Strategy. Springer, 2022. ISBN-10: 9811690642, ISBN-13: 978-9811690648. https://link.springer.com/book/10.1007/978-981-16-9065-5

Also Read:

Calibre, Google and AMD Talk about Surge Compute at #59DAC

Future Semiconductor Technology Innovations

Solve Embedded Development Challenges With IP-Centric Planning



Wireless Carrier Moment of Truth
by Roger C. Lanctot on 07-28-2022 at 10:00 am


When Cruise Automation’s cars recently began coming to a stop, jamming up San Francisco streets, senior wireless and automotive executives worldwide began shifting uneasily in their suites. In spite of demanding that General Motors build them an expensive telematics control unit (TCU) with four-carrier (at the time) connectivity, the Cruise vehicles had managed to find a coverage gap – despite operating in the dead of night.

The widespread conclusion is that on June 21 Cruise’s vehicles discovered a wireless dead zone. This is different from the May Cruise system failure which reportedly took all Cruise vehicles offline for 20 minutes.

The shocking development, which tied up traffic and amused native San Franciscans, marked a turning point for the autonomous vehicle industry. Developers of autonomous vehicles – particularly those connected to auto makers, like Cruise (GM) – have long eschewed connectivity, preferring to craft their systems to operate independently of wireless connections.

These executives and engineers might welcome connectivity for software updates or traffic alerts, but they were loath to create wireless dependencies. In order to deliver an actual robotaxi service, though, operators recognized that a connection was no longer optional.

When General Motors first introduced automotive connectivity in the form of OnStar telematics, the main concern was ensuring that wireless connectivity was available in and around GM franchised dealerships. The service needed to be available at dealerships so that it could be activated in new cars. Neither GM nor Cruise ever seriously took up the question of wireless coverage availability and quality.

In retrospect, an observer might imagine that Waymo chose Arizona for its first robotaxi service delivery location because the flat terrain and reliably clear weather would guarantee durable wireless connections. Now, all bets are off.

Connectivity is clearly necessary in order to deliver robotaxi services. Of course, multiple country and state jurisdictions around the world have called for remote teleoperation as a requirement for autonomous vehicle testing and deployment. Cruise’s high-profile failure now raises questions as to whether a requirement for “connectivity” is sufficient. Regulators may soon require satellite connectivity to ensure a more robust vehicle connection – especially in emergency circumstances.

Cruise’s failure was not the only failure to call attention to the potential vulnerabilities of cellular links. Canadian carrier Rogers Communications suffered a 24-hour service outage on July 8th which knocked out all wireless services including 911 and payment networks.

Legislators and regulators in Canada have predictably commenced a round of investigations and Rogers has announced plans to spend $10B on artificial intelligence and testing in response to the failure. Some regulators called for greater cooperation between carriers in the event of future emergencies. Rogers took the added step of replacing its technology chief.

The Rogers and Cruise debacles are worst case scenarios for wireless connectivity. They both highlight the need for greater cooperation between carriers and greater due diligence in deploying and maintaining wireless service.

The onset of 5G practically requires a greater “densification” of wireless networks – i.e. more small cells – in order to reap the full benefits of what 5G has to offer in terms of faster speeds, greater capacity, and lower latency. The dirty little secret of cellular wireless coverage has long been the dead or “white” zones where coverage fails. T-Mobile likes to display a completely magenta-colored map in its advertising in the U.S. and in its stores to portray ubiquitous coverage, but the reality is something different.

Companies from Ericsson and HERE to Ookla, Root Metrics, Umlaut, and Continual have emerged to monitor and manage evolving coverage issues. For auto makers, for which safety systems such as GM’s Super Cruise and the soon-to-be-mandated Intelligent Speed Assistant require connectivity, predictive wireless coverage maps and models have suddenly become a necessity. Vehicles need to be “aware” of when and where they can count on available wireless connections to deliver safe vehicle operation.

The fact that Cruise launched its robotaxi service – after extensive testing – without previously identifying and measuring coverage issues within the challenging urban environment of San Francisco is a shock in and of itself. The result poses catastrophic implications for the concept of deploying robotaxis in cities around the world.

Wireless robotaxi connections will be vulnerable to the effects of urban canyons on wireless connections as well as the demands placed on even the most robust urban wireless networks by massive user populations competing for service. The only solution may be to look skyward to the introduction of satellite connectivity to fill in the gaps within existing wireless network coverage. Politicians, consumers, and investors will not stomach amusing but embarrassing failures such as that suffered by Cruise last month.

Also read:

Ecomotion: Engendering Change in Transportation

Connecting Everything, Everywhere, All at Once

Radiodays Europe: Emotional Keynote



Scalability – A Looming Problem in Safety Analysis
by Stefano Lorenzini on 07-28-2022 at 6:00 am



The boundless possibilities of automation in cars and other vehicles have captivated designers to the point that electronic content is now a stronger driver of differentiation than any other factor. It accounts for a substantial fraction of material cost in any of these vehicles. But this revolution in automotive technology comes with a caveat. In other applications, an electronics problem may be corrected with a shutdown or a reboot. The same resolution, however, does not work well for cars. Misbehavior in the electronics can lead to accidents, even fatalities.

To address this real concern, the ISO 26262 standard was crafted to set guidelines for electronics safety in cars. The standard details how safety must be characterized and measured during automotive electronics design. One of the most important analyses in the standard is Failure Modes, Effects and Diagnostic Analysis (FMEDA) for each component. It lists potential failure modes with the corresponding impact on the system’s safety and methods to mitigate such failures. These reports communicate safety characterization through the value chain, from IPs to automotive OEMs, as shown in Figure 1.

Figure 1 is an example of the FMEDA supply chain flow.

Generating an FMEDA takes significant effort per automotive system-on-chip (SoC), and that task is compounded when those parts are configurable. This responsibility adds to the burden on the integrator rather than the supplier, since only the designer can know which configurations are needed. As a further complication, the standard defines only the intent for these analysis reports, not a detailed format. Inconsistencies in these formats impede productivity in safety analysis up the value chain. This situation is not scalable and requires more standardization and intelligence.

Issues in the Current Process

Figure 2 demonstrates the multiple challenges in creating FMEDAs.

Safety evaluation starts with a Failure Mode and Effect Analysis (FMEA), based on system design experience, of the potential ways a system might fail, their causes and their effects. This becomes the starting point for a systematic FMEDA captured in reports for each component in a design. Listed for each failure mode is the potential impact on the system’s safety along with methods to prevent, detect and correct such breakdowns. Random failures, perhaps triggered through ionization by cosmic radiation, are of particular concern. The analysis is based on lengthy simulations of faults, determining how or if those malfunctioning behaviors propagate through the circuit.
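
For readers new to FMEDA, the sketch below shows, in simplified form, the kind of bookkeeping such an analysis performs: each failure mode carries a failure rate and a diagnostic coverage, which roll up into residual failure rates and metrics of the kind ISO 26262 requires. The failure modes and numbers are illustrative only, and the precise metric definitions come from the standard itself:

```python
# Simplified component-level FMEDA bookkeeping. Failure rates are in FIT
# (failures per 1e9 device-hours); all numbers are illustrative.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    fit: float                 # base failure rate assigned to this mode
    safety_related: bool       # could it violate a safety goal at all?
    diag_coverage: float       # fraction detected/mitigated by a safety mechanism

failure_modes = [
    FailureMode("packet routed to wrong target",  1.2, True,  0.99),
    FailureMode("payload data corruption",        2.0, True,  0.97),
    FailureMode("QoS starvation / deadlock",      0.8, True,  0.90),
    FailureMode("debug-only register corruption", 0.5, False, 0.00),
]

total_safety_fit = sum(fm.fit for fm in failure_modes if fm.safety_related)
residual_fit = sum(fm.fit * (1.0 - fm.diag_coverage)
                   for fm in failure_modes if fm.safety_related)

# Simplified single-point fault metric: the share of dangerous failures
# that the safety mechanisms cover.
spfm = 1.0 - residual_fit / total_safety_fit
print(f"Residual FIT: {residual_fit:.3f}, SPFM: {spfm:.1%}")
```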

FMEDA at a given level of design demonstrates rigor in planning and testing for failure modes at a detailed level. Moving up to the next level in the system design, FMEDAs are typically abstracted for aggregation into higher levels. Abstraction trims down the failure modes to those relevant to system analysis while preserving safety analysis coverage. Each use case drives the performance and may require building different abstractions during system-level analysis.

Within SoC design, the process suffers from scalability problems in three important ways, as highlighted in Figure 2. It is not designed to deal efficiently with highly configurable IP. The network-on-chip (NoC) provides a clear example. Each NoC configuration is unique to the designated SoC in the endpoint IPs it connects and in its quality-of-service and power goals. As the design changes prior to tapeout, so must the NoC. Each instantiation requires an independent analysis performed by the SoC integrator, who knows the needed NoC configuration.

A natural question is whether at least some of this analysis could be reused between different configurations. Reuse is already successful in accelerating SoC design and plays a significant role in functional verification. In contrast, FMEDA is a relatively recent addition to design requirements and has yet to evolve a reuse strategy. Every analysis at a given level must be from scratch, consuming significant time and resources. A reuse strategy could make an enormous difference to design schedules and avoid errors if a solution was available.

The lack of a standard format for FMEDA is also an efficiency drain. SoC integrators using IPs from multiple suppliers must contend with different formats, requirements and assumptions on use-case compatibility and, therefore, other ways to derive abstractions. Today, these disconnects are resolved manually between integrators and suppliers, but the process is not scalable. There are too many points at which mistakes could occur.

Aligning FMEDA With Reuse

A reuse-centric methodology cannot be based on flat analysis at each stage. The essential failure modes of a configurable IP do not vary between configurations. These should be interpretable in parametric instantiations of the RTL, allowing the generation of an FMEDA for a particular layout. In this flow, failure modes and safety mitigation would be model-oriented rather than report-oriented. A model-based approach allows for generating and delivering an FMEDA model for an IP. The significant gain is that the SoC integrator no longer needs to run a full flat analysis for each configuration change during design development.

The next logical advance would be to extend this capability to SoC FMEDA build. A generator for an SoC-level analysis could read traditional FMEDA reports for IPs and apply in-context requirements and assumptions of use. This would optimize that detail down to a few essential failure modes relevant to that purpose per IP. The generator could then build the appropriate SoC FMEDA for that use model from this input. Generating a new analysis for a different set of assumptions would require no more effort than dialing in those new parameters and re-running the generator. Since the tool used is ISO 26262 certified, additional analysis is unnecessary before tapeout because the compliance is already built-in. Figure 3 illustrates the full proposed flow, from FMEDA generation at the IP level to FMEDA generation at the SoC level.
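
To make the model-based idea more tangible, here is a conceptual sketch of a configuration-aware FMEDA generator. This is not Arteris' actual tooling or any standard format; the classes, names and numbers are hypothetical. It only illustrates why re-generating for a new configuration becomes a parameter change rather than a from-scratch analysis:

```python
# Conceptual sketch of a model-based FMEDA flow: failure modes are
# attached to a parameterizable IP model once, and a report is generated
# per configuration. Not any vendor's actual tool; names are hypothetical.
from dataclasses import dataclass

@dataclass
class FailureModeModel:
    name: str
    fit_per_instance: float    # contribution scales with configuration
    diag_coverage: float

class NocFmedaModel:
    """One reusable safety model for a configurable NoC IP."""
    base_modes = [
        FailureModeModel("route decode error per switch", 0.05, 0.99),
        FailureModeModel("link data corruption per link",  0.02, 0.97),
    ]

    def generate_fmeda(self, num_switches: int, num_links: int) -> list[dict]:
        scale = {"switch": num_switches, "link": num_links}
        report = []
        for fm in self.base_modes:
            n = scale["switch"] if "switch" in fm.name else scale["link"]
            fit = fm.fit_per_instance * n
            report.append({"failure_mode": fm.name,
                           "fit": fit,
                           "residual_fit": fit * (1 - fm.diag_coverage)})
        return report

# Re-generating for a new configuration is a parameter change, not a
# from-scratch analysis:
for row in NocFmedaModel().generate_fmeda(num_switches=24, num_links=80):
    print(row)
```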

A methodology like this could greatly simplify safety analysis for an SoC development team, even if only one IP supplier endorsed the model-based capability. If each IP supplier supported a standard for safety data interchange, such as the IEEE P2851 standard currently in development, the value to the SoC safety analysis team would be amplified even further. Encouraging tooling to aggregate and abstract IP models for the SoC might depend more on the completion and adoption of IEEE P2851. However, given there are already solutions of this nature in some automotive SoC suppliers, this goal seems very achievable.

Traceability and FMEDA

Whenever requirements must be exchanged between integrators and suppliers, traceability becomes essential. The most important requirement in design for automotive applications is safety, as documented in the FMEDA. Requirements, implementation, testing and FMEDAs are closely interlinked. Changes in any of these must be correctly tracked in the others if the integrity of the whole process is to be maintained, as illustrated in Figure 4 below.

Figure 4 highlights that traceability between requirements, implementation, test and FMEDA is closely coupled.

There is another compelling reason to consider traceability here. At each level of integration, FMEDAs are abstracted from detailed structural-level failure modes to a much smaller number of system failure modes. This abstraction is performed based on use cases and system design experience. Mistakes are possible but can be mitigated through careful traceability from system failure modes down through component failure abstractions to more detailed component analyses.

Traceability is valuable for problem diagnosis and abstraction support against different use cases. An integrator may decide for one use case that certain failure modes are more important than others, whereas in another situation that decision might change. Given the ability to examine the full set of failure modes, an integrator can choose what to prioritize and what to ignore. With the support of a generator, as described in the previous section, an integrator would enjoy more flexibility to explore options.

A Call to Action

A move to reuse practices for FMEDA seems both logical and unavoidable. Reuse practices are already amply proven in design and verification. Now it is time for safety analyses to move up to that level. It would be natural also to align these interfaces with the planned IEEE P2851 standard as that starts to emerge. In the meantime, suppliers of highly configurable IP should craft solutions to better serve integrator customers. Automotive semiconductor solutions for aggregation and abstraction can help define a more complete solution at the SoC level. That approach must recognize the need for traceability through FMEDA.

Only through advances of this nature is it possible to jump past the looming problem in safety analysis scalability.

For more information about FMEDA, click HERE.

Mr. Stefano Lorenzini has more than 25 years of safe and secure SoC design and architecture experience spanning Arteris IP, Alcatel Microelectronics, Cadence Design Systems, Ericsson, Intel, ST Microelectronics, and Yogitech. He has spent the last 18 years managing SoC functional safety applications regulated by IEC 61508 and ISO 26262 standards. He holds a master’s degree in electronic engineering from the University of Pisa, Italy.

Also read:

Scaling Safety Analysis. Reusability for FMEDA

Why Traceability Now? Blame Custom SoC Demand

Assembly Automation. Repair or Replace?