Post-quantum cryptography steps on the field
by Don Dingee on 08-01-2022 at 6:00 am

[Figure: PQSubSys post-quantum cryptography IP subsystem]

In cybersecurity circles, the elephant in the room is a quantum computer in the hands of nefarious actors. A day is coming, soon, when well-funded organizations will be able to rent time on, or maybe even build or buy, a quantum machine. Then, if data is valuable enough, people will hunt for it. Two or three months of compute time on a quantum computer will break any asymmetric encryption algorithm (such as elliptic-curve and RSA) in any device that exists today. The longer devices with these dated asymmetric algorithms hang around, in some cases 10 or 15 years, the more vulnerable they get. But the game is changing as post-quantum cryptography steps on the field, bringing new algorithms and hardware and software IP.

Six-year NIST competition pares down candidates

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) began its search for quantum-resistant successors to methods defined in three cryptographic standards with a call for proposals in December 2016.

Currently, FIPS 186-4 defines digital signature methods, while NIST SP 800-56A and SP 800-56B cover key establishment based on discrete logarithms and integer factorization, respectively. All these standards rely on public-key algorithms built on pairs of very large numbers, such as the two primes multiplied together to form an RSA modulus. Recovering those pairs is safely out of reach for even today's supercomputers, but a quantum computer running Shor's algorithm can find them in a reasonable amount of time. With a device's key pair exposed, the device is compromised, since the algorithms themselves are public.
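
To make the "pairs of very large numbers" idea concrete, here is a deliberately tiny, insecure RSA-style sketch in Python (toy primes, no padding, purely illustrative). It shows that the public modulus is just the product of two secret primes, so anything that can factor the modulus, which Shor's algorithm on a large quantum computer could do for real key sizes, recovers the private key:

```python
# Toy RSA keygen/decrypt to show why factoring the public modulus breaks it.
# Tiny primes for illustration only; real keys use 2048+ bit moduli.

def modinv(a, m):
    return pow(a, -1, m)  # modular inverse, Python 3.8+

p, q = 61, 53            # the secret "pair of very large numbers" (not large here)
n = p * q                # public modulus
e = 17                   # public exponent
d = modinv(e, (p - 1) * (q - 1))  # private exponent, requires knowing p and q

msg = 42
cipher = pow(msg, e, n)          # anyone can encrypt with the public (n, e)
assert pow(cipher, d, n) == msg  # only the private-key holder can decrypt

# An attacker who factors n (trivial here, classically infeasible for real sizes,
# feasible with Shor's algorithm on a large quantum computer) recovers d too:
p_found = next(f for f in range(2, n) if n % f == 0)
q_found = n // p_found
d_attack = modinv(e, (p_found - 1) * (q_found - 1))
assert pow(cipher, d_attack, n) == msg
```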

After several rounds of proposals, reviews, and revisions, on July 5th, 2022, NIST pared down the candidates to four algorithms:

  • For general encryption, such as accessing secure websites, CRYSTALS-Kyber is a clear winner in manageable key sizes and speed of operation. It uses structured lattices: high-dimensional grids (hundreds to over a thousand axes) with long basis vectors, making it very difficult to find a short vector in the grid. (A brief usage sketch follows this list.)
  • For digital signatures, essential for digital transactions or signing documents, three algorithms won. CRYSTALS-Dilithium is the primary choice, with Falcon a second option for more compact signatures. Both also use structured lattices. A more compute-intensive algorithm, SPHINCS+, uses hash functions in an alternative approach.
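
For readers who want to experiment, the selected algorithms are already available in open-source libraries. The sketch below uses the liboqs-python bindings from the Open Quantum Safe project, assuming the `oqs` package is installed over a liboqs build with the "Kyber768" mechanism enabled, to run a complete key encapsulation round trip. Treat it as an illustrative sketch rather than production code:

```python
# Kyber KEM round trip with the Open Quantum Safe liboqs-python bindings.
# Assumes: a pip-installed `oqs` wrapper over a liboqs build with Kyber768 enabled.
import oqs

alg = "Kyber768"  # NIST security level 3 parameter set

receiver = oqs.KeyEncapsulation(alg)
public_key = receiver.generate_keypair()          # receiver publishes this

sender = oqs.KeyEncapsulation(alg)
ciphertext, secret_sender = sender.encap_secret(public_key)  # sender side

secret_receiver = receiver.decap_secret(ciphertext)          # receiver side

assert secret_sender == secret_receiver  # both sides now share a symmetric key
```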

PQShield teams helped define device-ready algorithms

These algorithms came from teaming arrangements of experts around the globe. We're introducing PQShield to readers as a hardware and software IP company. Before its products arrived, the company invested years of research into these four algorithms. A PQShield researcher is the development lead and a co-author of Falcon. The other three algorithms, CRYSTALS-Kyber, CRYSTALS-Dilithium, and SPHINCS+, were co-authored by a PQShield advisory board member.

This first-hand insight is invaluable when putting complex algorithms into smaller, resource-bound devices. Anybody can run algorithms like these on a server. Realizing them in an SoC or a high-end microcontroller is another story. For post-quantum cryptography IP, PQShield developed its PQSubSys (short for Post Quantum Sub System), a co-designed, upgradable hardware/firmware coprocessor core.

Two options exist. One focuses on the post-quantum crypto IP elements shown within the dotted orange line in the figure above. PQShield can also provide a full solution, shown within the blue line, that adds an entropy source, a RISC-V core, and memory for an integrated cryptography subsystem. This integration leverages the Zkr Entropy Source extension PQShield defined as part of the RISC-V Scalar Cryptography Specification released in October 2021.

Giving SoC architects a new path forward

Before NIST announced its finalists, it would have been tough to give guidance to SoC architects. Building more devices with pre-quantum cryptography IP is far better than designing in no security, even knowing the vulnerabilities about to come. But now, the arrival of finalist post-quantum algorithms and optimized IP creates a new path forward.

And soon, full-custom SoCs won't be the only option for working with this IP. Microchip Technology has cut an IP licensing deal with PQShield, though no product has been announced yet. Another clue is a new partnership between PQShield and Collins Aerospace, a long-time PolarFire SoC customer. The PolarFire SoC combines RISC-V cores with FPGA gates on one chip, and the PQShield IP seems like a fit there.

There’s still some work ahead on the details of post-quantum cryptography. Like any standards work, early adopters get a leg up but may have to withstand some minor changes between first and final versions. With algorithms and configurable IP in place, PQShield has the knowledge and tools chip designers need to create more secure devices.

For more info, PQShield is publishing their thoughts in an open newsletter:

PQShield Newsletter, July 2022

Also Read:

CEO Interviews: Dr Ali El Kaafarani of PQShield

NIST Standardizes PQShield Algorithms for International Post-Quantum Cryptography

WEBINAR: Secure messaging in a post-quantum world

 


Intel & Chips Act Passage Juxtaposition
by Robert Maire on 07-31-2022 at 6:00 am


-Need more/less spend & more/fewer chips
-The irony of chips act passage & Intel stumble on same day
-Due to excess supply of chips, Intel cuts spending
-Due to shortage of chips, the government increases spending
-How did this happen on the same day? Cosmic Coincidence?

Timing is everything

The irony of Intel cutting spending as demand falls amid too many chips, and on the same day having the CHIPS Act finally passed by Congress to fund more spending to make more chips to satisfy demand, is nothing short of priceless.

To be truly fair, the CHIPS Act is much more about onshoring and countering China, and less about fixing a shortage that is already dead and buried. Also to be fair, Intel still sounds committed to catching up from its technology stumble and spending aggressively over the long term to regain manufacturing prowess. The only thing that may change is the timetable, as the spending curtailment may slow some of the progress and stretch the timeline out a bit.

The CHIPS Act took so long that the immediate crisis was already over

The shortage of chips that prevented Americans from getting their beloved new cars was the genesis of the CHIPS Act. Anti-China sentiment was there all along, and we have been talking about the Chinese threat to the US chip industry for about seven years now, but it took a COVID-caused shortage that impacted the auto industry in the heartland to wake politicians out of their slumber.

Then it took so long for the politicians to argue over the obvious solution that the problem has already gone away through the natural action of industry participants (mainly from outside the US). So we are solving a problem that we no longer have. Talk about closing the barn door after the cows have long since skipped town.

We are past the shortage being over and already into a glut of chips

We heard from Micron that they were not only cutting production but warehousing chips so they didn’t hit the market and further reduce prices. We have heard the bad news in memory repeated from all players and now the glut of memory appears even worse.

Article on chip stockpile in Korea

It's unclear if the CHIPS Act will have the desired impact

We now want to reshore and regain US preeminence in the semiconductor industry. We doubt that the $52B will do it, but at least it's a start and a good try.
The potential embargo of below-14nm technology in China may actually be more effective.

Will Intel move ahead with Ohio as quickly as previously anticipated? It doesn't sound like it today. Intel will likely use its new, purpose-built 3nm fab at TSMC in Taiwan for a bit longer. We don't see GloFo building a new fab in New York, as their current fab utilization doesn't support it, but they are clearly moving forward with a new fab outside of the US and just doing maintenance spend in Malta.

Micron has already started cutting capex in Boise, so exactly where will we see the desired impact?

Four nodes in five years… probably not

We had suggested that Intel promising four nodes in five years was very unrealistic. With a curtailment in spend and reduced demand, it's likely that not just capacity spend but also R&D and technology spend will be reduced and slowed. We think the previous promise is likely off the table now.

Intel will get better financials

Even though Intel may not spend as much on capacity and technology due to economic headwinds, and results in the near term are obviously below expectation, Intel will get a huge financial boost, not from the $52B itself but from the associated tax credit in the CHIPS Act, which will certainly boost its after-tax earnings and offset some of the economic weakness. So Intel can win and get higher (after-tax) earnings without spending more or increasing capacity, which is not currently needed. That sounds like a potentially good outcome from a weak economic condition.

Waiting to hear about sub-14nm restrictions on China & CHIPS Act strings

Perhaps more crucial than the $52B or even the tax credits will be what kind of restrictions get slapped on China's chip tool purchases, and what impact the strings attached to the CHIPS Act have on US companies' chip business in China.
$52B spread over five years is a little more than $10B a year, which is chump change in the chip industry, so negative regulation may have more impact than positive spending on the competitive positioning between the US and China.

The stocks

It goes without saying that Intel’s stock is going to get trashed after tonight’s performance. We don’t even want to review the results as there is not a lot to add. We certainly hope that the company threw everything including the kitchen sink in with all the bad news to try and get it behind them. We also hope they reset numbers low enough that they can make them.

The collateral impact should be significant. This is obviously a negative for chip equipment makers. The only saving grace in tech land is Apple but that doesn’t help the chips stocks enough to offset bad news out of Intel.

We also think the Intel news more than offsets any positive spin from the CHIPS Act passage as the passage seemed already baked into the chip stock prices whereas Intel’s large miss was not.

Buckle up …it will get ugly!

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also read:

ASML Business is so Great it Looks Bad

SEMICON West the Calm Before Storm? CHIPS Act Hail Mary? Old China Embargo New Again?

ASML- US Seeks to Halt DUV China Sales


Podcast EP96: The History, Reach and Impact of Accellera with Lynn Garibaldi
by Daniel Nenni on 07-29-2022 at 10:00 am

Dan is joined by Lynn Garibaldi, Executive Director, Accellera Systems Initiative. Lynn is the recipient of the Accellera 2022 Leadership Award. Dan and Lynn explore the history of Accellera, its beginnings and growth to a multi-standard organization and the impact of DVCon events around the world.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The Turn of Moore’s Law from Space to Time
by Daniel Nenni on 07-29-2022 at 6:00 am


“It’s time for the big ship of Moore’s Law to make a turn from space to time” is a nonconformist message that surprised me when I read the book “The Turn of Moore’s Law from Space to Time” by Mr. Liming Xiu.

As a forty-year veteran and as an author of several books on the semiconductor industry myself, I am quite familiar with all contemporary debates on Moore’s Law. In this book, Moore’s Law is investigated from the two fundamental constituents of our universe: space and time. For the first time, microelectronics is inspected at its root for a potential detour to circumvent the space crisis that this field is currently facing.

Moore's Law 1.0 is about space, while Moore's Law 2.0 focuses on time. The takeaways from the book are: 1) microelectronics is currently facing a crisis caused by over-exploiting space; 2) a new avenue lies along the dimension of time; 3) in the new battle, circuit professionals will take the lead.

Starting around the 2000s, people became aware of an alarming problem with Moore's Law. Various proposals have been suggested to save its life. Two well-known genres are More Than Moore and Beyond Moore; each of them has numerous variants.

However, no one has pointed out that the root of this crisis is the over-exploitation of space. No matter how sophisticated our engineering skills become, we cannot keep making the basic switch-for-information (i.e., the transistor) ever smaller. We must stop somewhere along this path. Semiconductor insiders know this; however, few are willing to make the call to jam on the brakes, for various reasons. The most compelling one is financial detriment, since heavy investment has already been placed in this direction. Like the child in "The Emperor's New Clothes", this book blurts out that this route will meet a dead end. The author asserts that now is the time for a major overhaul. The new route is along the direction of time, the only rival and equal in weight to space.

To exploit time, this book further advocates two interesting ideas: "adopting nonuniform-flow-of-time" and "using time as the medium for information-coding" (switching the roles of voltage and time). I have seen people use fixed-frequency clocks to drive electronics since the outset of microelectronics. This naturally leads to uniform-flow-of-time inside the electronic world. But, as Xiu asks, is this efficient? Can the electronic world employ nonuniform-flow-of-time as the biological world does?

He argues that uniform-flow-of-time is just a dogma adopted for the convenience of our engineering. It does not have to be that way if there is a need to change it. The second idea, explicitly using time as the message, is even more eccentric. Voltage has always been used as the medium for representing information, whether in analog or digital style. As the transistor becomes smaller, it switches faster; however, it also tends to be much noisier. Further, the supply voltage is made ever lower as the process progresses. All these factors are unfavorable to the continued use of voltage as the message carrier.

On the other hand, the ever-faster-switching transistor offers us an opportunity to use rate-of-switching as an alternative way of expressing information. This idea, although not yet practical as the author admits, also sounds legitimate to me.
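
To make the voltage-versus-time contrast concrete, here is a small generic illustration in Python (my own toy, not taken from Xiu's book) of carrying the same value two ways: as a conventional binary word, and as the number of clock periods between two events:

```python
# Two ways to carry the value 5: as a voltage-coded binary word,
# and as a time interval measured in clock periods between two edges.

value = 5

# Conventional "space"/voltage encoding: parallel bits sampled at one instant.
voltage_word = [int(b) for b in format(value, "03b")]   # [1, 0, 1]

# Time encoding: a start edge, then a stop edge 'value' clock periods later.
def time_encode(v, clock_period_ns=1.0):
    start_edge = 0.0
    stop_edge = start_edge + v * clock_period_ns
    return (start_edge, stop_edge)

def time_decode(edges, clock_period_ns=1.0):
    start_edge, stop_edge = edges
    return round((stop_edge - start_edge) / clock_period_ns)

assert time_decode(time_encode(value)) == value
```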

An exciting aspect of this book is the author's investigation of microelectronics through the lens of scientific revolution. Employing Thomas Kuhn's technique of examining anomaly and crisis in the development of science, the author scrutinizes the semiconductor industry using similar tactics, albeit microelectronics is a branch of engineering rather than science. He recognizes the current space-related problem in microelectronics as a crisis, like the ones we have seen in the evolution of a pure science (e.g., physics). Crisis leads to revolution. Following this logic, Xiu describes the past activities of microelectronics as occurring inside a Space-Dominant Paradigm. A paradigm shift is called for to overcome the crisis. The new one is defined as the Time-Oriented Paradigm. I find this proposition thought-provoking.

As history has taught us, new thinking often meets resistance at its debut. Recognizing the difficulty of a change of mindset, the book presents a brief review of the semiconductor industry in chapter two and an in-depth discussion of several key notions (space, time, change and motion) in chapter three. In section 4.1, a philosophical discussion is included to persuade the reader that the turn from space to time is requisite and the only worthwhile alternative. All these efforts help make the change of mindset easier and smoother.

The author of this book has an unusually sophisticated approach (as you will see in the preface). He is an industry scholar. It is unique for a circuit design professional to have such a deep academic consciousness of foundational issues. This scholastic perception, on top of battle-hardened industry experience, enables him to view the industry from an idiosyncratic perspective and create a distinctive book like this one. Overall, this is an inspiring book. The thesis is well supported by the tactics used throughout the book: learning from history, reasoning through philosophical contemplation, describing tools for materialization, and demonstrating plausibility through real cases. This is a book of vision. It has something for everyone involved in the semiconductor industry, absolutely.

The Turn of Moore's Law from Space to Time: The Crisis, The Perspective and The Strategy, Springer, 2022, ISBN-10: 9811690642, ISBN-13: 978-9811690648, https://link.springer.com/book/10.1007/978-981-16-9065-5

Also Read:

Calibre, Google and AMD Talk about Surge Compute at #59DAC

Future Semiconductor Technology Innovations

Solve Embedded Development Challenges With IP-Centric Planning


Wireless Carrier Moment of Truth
by Roger C. Lanctot on 07-28-2022 at 10:00 am


When Cruise Automation's cars recently began coming to a stop, jamming up San Francisco streets, senior wireless and automotive executives worldwide began shifting uneasily in their suites. In spite of demanding that General Motors build them an expensive telematics control unit (TCU) with four-carrier (at the time) connectivity, the Cruise vehicles had managed to find a coverage gap – despite operating in the dead of night.

The widespread conclusion is that on June 21 Cruise’s vehicles discovered a wireless dead zone. This is different from the May Cruise system failure which reportedly took all Cruise vehicles offline for 20 minutes.

The shocking development, which tied up traffic and amused native San Franciscans, marked a turning point for the autonomous vehicle industry. Developers of autonomous vehicles – particularly those connected to auto makers, like Cruise (GM) – have long eschewed connectivity, preferring to craft their systems to operate independently of wireless connections.

These executives and engineers might welcome connectivity for software updates or traffic alerts, but they were loath to create wireless dependencies. In order to deliver an actual robotaxi service, though, operators recognized that a connection was no longer optional.

When General Motors first introduced automotive connectivity in the form of OnStar telematics, the main concern was that wireless connectivity was available in and around GM franchised dealerships. The service needed to be available at dealerships so that service could be activated in new cars. Neither GM nor Cruise ever seriously took up the question of wireless coverage availability and quality.

In retrospect, an observer might imagine that Waymo chose Arizona for its first robotaxi service delivery location because the flat terrain and reliably clear weather would guarantee durable wireless connections. Now, all bets are off.

Connectivity is clearly necessary in order to deliver robotaxi services. Of course, multiple country and state jurisdictions around the world have called for remote teleoperation as a requirement for autonomous vehicle testing and deployment. Cruise’s high-profile failure now raises questions as to whether a requirement for “connectivity” is sufficient. Regulators may soon require satellite connectivity to ensure a more robust vehicle connection – especially in emergency circumstances.

Cruise’s failure was not the only failure to call attention to the potential vulnerabilities of cellular links. Canadian carrier Rogers Communications suffered a 24-hour service outage on July 8th which knocked out all wireless services including 911 and payment networks.

Legislators and regulators in Canada have predictably commenced a round of investigations and Rogers has announced plans to spend $10B on artificial intelligence and testing in response to the failure. Some regulators called for greater cooperation between carriers in the event of future emergencies. Rogers took the added step of replacing its technology chief.

The Rogers and Cruise debacles are worst case scenarios for wireless connectivity. They both highlight the need for greater cooperation between carriers and greater due diligence in deploying and maintaining wireless service.

The onset of 5G practically requires a greater “densification” of wireless networks – i.e. more small cells – in order to reap the full benefits of what 5G has to offer in terms of faster speeds, greater capacity, and lower latency. The dirty little secret of cellular wireless coverage has long been the dead or “white” zones where coverage fails. T-Mobile likes to display a completely magenta-colored map in its advertising in the U.S. and in its stores to portray ubiquitous coverage, but the reality is something different.

Companies from Ericsson and HERE to Ookla, Root Metrics, Umlaut, and Continual have emerged to monitor and manage evolving coverage issues. For auto makers, for which safety systems such as GM's Super Cruise and the soon-to-be-mandated Intelligent Speed Assistance require connectivity, predictive wireless coverage maps and models have suddenly become a necessity. Vehicles need to be "aware" of when and where they can count on available wireless connections to deliver safe vehicle operation.

The fact that Cruise launched its robotaxi service – after extensive testing – without previously identifying and measuring coverage issues within the challenging urban environment of San Francisco is a shock in and of itself. The result poses catastrophic implications for the concept of deploying robotaxis in cities around the world.

Wireless robotaxi connections will be vulnerable to the effects of urban canyons on wireless connections as well as the demands placed on even the most robust urban wireless networks by massive user populations competing for service. The only solution may be to look skyward to the introduction of satellite connectivity to fill in the gaps within existing wireless network coverage. Politicians, consumers, and investors will not stomach amusing but embarrassing failures such as that suffered by Cruise last month.

Also read:

Ecomotion: Engendering Change in Transportation

Connecting Everything, Everywhere, All at Once

Radiodays Europe: Emotional Keynote


Scalability – A Looming Problem in Safety Analysis
by Stefano Lorenzini on 07-28-2022 at 6:00 am

The boundless possibilities of automation in cars and other vehicles have captivated designers to the point that electronic content is now a stronger driver of differentiation than any other factor. It accounts for a substantial fraction of material cost in any of these vehicles. But this revolution in automotive technology comes with a caveat. In other applications, an electronics problem may be corrected with a shutdown or a reboot. The same resolution, however, does not work well for cars. Misbehavior in the electronics can lead to accidents, even fatalities.

To address this real concern, the ISO 26262 standard was crafted to set guidelines for electronics safety in cars. The standard details how safety must be characterized and measured during automotive electronics design. One of the most important analyses in the standard is the Failure Modes, Effects and Diagnostic Analysis (FMEDA) for each component. It lists potential failure modes with the corresponding impact on the system's safety and methods to mitigate such failures. These reports communicate safety characterization through the value chain, from IPs to automotive OEMs, as shown in Figure 1.

Figure 1 is an example of the FMEDA supply chain flow.

Generating FMEDA takes significant effort per automotive system-on-chip (SoC), and that task is compounded when those parts are configurable. This responsibility adds to the burden on the integrator rather than the supplier since only the designer can know which configurations are needed. As a further complication, the standard defines only intent for these analysis reports, not detailed format. Inconsistencies in these formats impede productivity in safety analysis up the value chain. This situation is not scalable and requires more standardization and intelligence.

Issues in the Current Process

Figure 2 demonstrates the multiple challenges in creating FMEDAs.

Safety evaluation starts with a Failure Mode and Effect Analysis (FMEA), based on system design experience of the potential ways a system might fail, their causes, and their effects. This becomes the starting point for a systematic FMEDA captured in reports for each component in a design. Listed for each failure mode is the potential impact on the system's safety, along with methods to prevent, detect and correct such breakdowns. Random failures, perhaps triggered through ionization by cosmic radiation, are of particular concern. The analysis is based on lengthy simulations of faults, determining how or whether those malfunctioning behaviors propagate through the circuit.
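
To give a flavor of what an FMEDA quantifies, here is a minimal Python sketch of failure mode records rolled up into the ISO 26262-style hardware architectural metrics (single-point fault metric, SPFM, and latent fault metric, LFM). The record fields and numbers are invented for illustration; real FMEDAs carry far more detail and follow the exact classifications and formulas in ISO 26262-5:

```python
# Illustrative FMEDA roll-up: failure modes with FIT rates and diagnostic coverage,
# reduced to SPFM/LFM-style metrics. All values are made up.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    fit: float          # failure rate in FIT (failures per 1e9 hours)
    safety_related: bool
    dc_residual: float  # diagnostic coverage against residual/single-point faults (0..1)
    dc_latent: float    # diagnostic coverage against latent multi-point faults (0..1)

modes = [
    FailureMode("packet_router_stuck", 12.0, True, 0.99, 0.90),
    FailureMode("arbiter_bitflip",      4.0, True, 0.90, 0.60),
    FailureMode("debug_port_fault",     2.0, False, 0.0, 0.0),   # not safety related
]

safety = [m for m in modes if m.safety_related]
total = sum(m.fit for m in safety)

# Residual contribution: the part of each failure rate NOT covered by a safety
# mechanism; latent contribution: covered faults that could remain hidden.
residual = sum(m.fit * (1 - m.dc_residual) for m in safety)
latent = sum(m.fit * m.dc_residual * (1 - m.dc_latent) for m in safety)

spfm = 1 - residual / total
lfm = 1 - latent / (total - residual)

print(f"SPFM = {spfm:.2%}, LFM = {lfm:.2%}")
```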

FMEDA at a given level of design demonstrates rigor in planning and testing for failure modes at a detailed level. Moving up to the next level in the system design, FMEDAs are typically abstracted for aggregation into higher levels. Abstraction trims down the failure modes to those relevant to system analysis while preserving safety analysis coverage. Each use case may require building different abstractions during system-level analysis.

Within SoC design, the process suffers from scalability problems in three important ways, as highlighted in Figure 2. It is not designed to deal efficiently with highly configurable IP. The network-on-chip (NoC) provides a clear example. Each NoC configuration is unique to the designated SoC in the endpoint IPs it connects and quality of service and power goals. As the design changes prior to tapeout, so must the NoC. Each instantiation requires an independent analysis performed by the SoC integrator who knows the needed NoC configuration.

A natural question is whether at least some of this analysis could be reused between different configurations. Reuse is already successful in accelerating SoC design and plays a significant role in functional verification. In contrast, FMEDA is a relatively recent addition to design requirements and has yet to evolve a reuse strategy. Every analysis at a given level must be done from scratch, consuming significant time and resources. A reuse strategy could make an enormous difference to design schedules and avoid errors if a solution were available.

The lack of a standard format for FMEDA is also an efficiency drain. SoC integrators using IPs from multiple suppliers must contend with different formats, requirements and assumptions on use-case compatibility and, therefore, other ways to derive abstractions. Today, these disconnects are resolved manually between integrators and suppliers, but the process is not scalable. There are too many points at which mistakes could occur.

Aligning FMEDA With Reuse

A reuse-centric methodology cannot be based on flat analysis at each stage. The essential failure modes of a configurable IP do not vary between configurations. These should be interpretable in parametric instantiations of the RTL, allowing the generation of an FMEDA for a particular layout. In this flow, failure modes and safety mitigation would be model-oriented rather than report-oriented. A model-based approach allows for generating and delivering an FMEDA model for an IP. The significant gain is that the SoC integrator no longer needs to run a full flat analysis for each configuration change during design development.

The next logical advance would be to extend this capability to SoC FMEDA build. A generator for an SoC-level analysis could read traditional FMEDA reports for IPs and apply in-context requirements and assumptions of use. This would optimize that detail down to a few essential failure modes relevant to that purpose per IP. The generator could then build the appropriate SoC FMEDA for that use model from this input. Generating a new analysis for a different set of assumptions would require no more effort than dialing in those new parameters and re-running the generator. Since the tool used is ISO 26262 certified, additional analysis is unnecessary before tapeout because the compliance is already built-in. Figure 3 illustrates the full proposed flow, from FMEDA generation at the IP level to FMEDA generation at the SoC level.
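
As a rough sketch of what such a model-based flow could look like (names, fields and structure here are hypothetical, not Arteris' or any vendor's actual tooling or format), the idea is that each configurable IP ships a parameterized failure-mode model, and an SoC-level generator instantiates it per configuration, filters by the assumptions of use, and aggregates:

```python
# Hypothetical sketch of a model-based FMEDA flow: a parameterized IP failure-mode
# model is instantiated per configuration, filtered by use-case assumptions,
# and aggregated into an SoC-level FMEDA. Not any vendor's real tool or format.

def noc_fmeda_model(config):
    """Parameterized failure-mode model for a configurable NoC IP."""
    modes = []
    for port in range(config["initiator_ports"]):
        modes.append({"mode": f"initiator{port}_request_corruption",
                      "fit": 1.5, "relevant_to": {"data_integrity"}})
    for port in range(config["target_ports"]):
        modes.append({"mode": f"target{port}_response_loss",
                      "fit": 1.0, "relevant_to": {"availability", "data_integrity"}})
    return modes

def generate_soc_fmeda(ip_models, assumptions_of_use):
    """Instantiate each IP model, keep only failure modes relevant to the use case."""
    soc_fmeda = []
    for ip_name, (model, config) in ip_models.items():
        for fm in model(config):
            if fm["relevant_to"] & assumptions_of_use:
                soc_fmeda.append({"ip": ip_name, **fm})
    return soc_fmeda

ip_models = {"main_noc": (noc_fmeda_model, {"initiator_ports": 2, "target_ports": 3})}
report = generate_soc_fmeda(ip_models, assumptions_of_use={"data_integrity"})
print(f"{len(report)} failure modes in the SoC FMEDA for this configuration")
```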

A methodology like this could greatly simplify safety analysis for an SoC development team, even if only one IP supplier endorsed the model-based capability. If each IP supplier supported a standard for safety data interchange, such as the IEEE P2851 standard currently in development, the value to the SoC safety analysis team would be amplified even further. Encouraging tooling to aggregate and abstract IP models for the SoC might depend more on the completion and adoption of IEEE P2851. However, given there are already solutions of this nature in some automotive SoC suppliers, this goal seems very achievable.

Traceability and FMEDA

Whenever requirements must be exchanged between integrators and suppliers, traceability becomes essential. The most important requirement in design for automotive applications is safety, as documented in the FMEDA. Requirements, implementation, testing and FMEDAs are closely interlinked. Changes in any of these must be correctly tracked in the others if the integrity of the whole process is to be maintained, as illustrated in Figure 4 below.

Figure 4 highlights that traceability between requirements, implementation, test and FMEDA is closely coupled.

There is another compelling reason to consider traceability here. At each level of integration, FMEDAs are abstracted from detailed structural-level failure modes to a much smaller number of system failure modes. This abstraction is performed based on use cases and system design experience. Mistakes are possible but can be mitigated through careful traceability from system failure modes down through component failure abstractions to more detailed component analyses.

Traceability is valuable for problem diagnosis and for supporting abstraction against different use cases. An integrator may decide for one use case that certain failure modes are more important than others, whereas in another situation that decision might change. Given the ability to examine the full set of failure modes, an integrator can choose what to prioritize and what to ignore. With the support of a generator, as described in the previous section, an integrator would enjoy more flexibility to explore options.

A Call to Action

A move to reuse practices for FMEDA seems both logical and unavoidable. Reuse practices are already amply proven in design and verification. Now it is time for safety analyses to move up to that level. It would be natural also to align these interfaces with the planned IEEE P2851 standard as that starts to emerge. In the meantime, suppliers of highly configurable IP should craft solutions to better serve integrator customers. Automotive semiconductor solutions for aggregation and abstraction can help define a more complete solution at the SoC level. That approach must recognize the need for traceability through FMEDA.

Only through advances of this nature is it possible to jump past the looming problem in safety analysis scalability.

For more information about FMEDA, click HERE.

Mr. Stefano Lorenzini has more than 25 years of safe and secure SoC design and architecture experience spanning Arteris IP, Alcatel Microelectronics, Cadence Design Systems, Ericsson, Intel, ST Microelectronics, and Yogitech. He has spent the last 18 years managing SoC functional safety applications regulated by IEC 61508 and ISO 26262 standards. He holds a master’s degree in electronic engineering from the University of Pisa, Italy.

Also read:

Scaling Safety Analysis. Reusability for FMEDA

Why Traceability Now? Blame Custom SoC Demand

Assembly Automation. Repair or Replace?


Electronics is Slowing
by Bill Jewell on 07-27-2022 at 2:00 pm


PCs and smartphones, the key semiconductor market drivers, are both showing declines in shipments in the first half of 2022. According to IDC, PC shipments in 2Q 2022 were down 15% from a year earlier. 2Q 2022 PC shipments of 71.3 million units were at the lowest level in almost three years, since the 70.9 million units shipped in 3Q 2019. In June, prior to the 2Q 2022 PC shipment data, IDC forecast a decline of 8.2% in PC shipments for the year 2022. Based on the 2Q 2022 data, the forecast will probably be lowered to a double-digit decline.

Smartphone shipments in 1Q 2022 were down 9% from a year ago, according to IDC. IDC's 2Q 2022 smartphone data has not yet been released, but Canalys estimated smartphone shipments were down another 9% in 2Q 2022 versus a year ago. IDC's June forecast called for a 3.5% decline in smartphone shipments in 2022, but based on 2Q 2022 data the decline should be at least double that rate, in the -7% to -10% range.

Electronics production in the key Asian countries is mixed. China, the largest producer, showed a three-month-average change versus a year ago (3/12) of 7.7% in June, a slowdown from the double-digit growth seen from January 2021 through April 2022. Much of the slowdown in China electronics production was due to COVID-19 related shutdowns in April and May. Japan electronics production has been declining since October 2021, with the May 2022 3/12 change down 13%. South Korea, Vietnam and Taiwan have shown strong growth in the last few months, with 3/12 change around 20%.
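
For reference, the 3/12 figure used here compares a three-month average with the same three months a year earlier. A minimal sketch of one common way to compute such a metric (the exact convention behind the charts may differ slightly):

```python
# 3/12 change: average of the latest three months versus the average of the
# same three months one year earlier. Monthly values below are illustrative.
def change_3_12(monthly, idx):
    """Three-month average at month `idx` vs. the same window 12 months earlier."""
    recent = sum(monthly[idx - 2: idx + 1]) / 3
    year_ago = sum(monthly[idx - 14: idx - 11]) / 3
    return recent / year_ago - 1.0

# 18 months of made-up production index values, oldest first.
production = [100, 101, 103, 104, 106, 108, 110, 111, 113,
              114, 116, 117, 118, 119, 121, 122, 124, 125]
print(f"3/12 change in latest month: {change_3_12(production, len(production) - 1):.1%}")
```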

In the U.S and Europe, electronics production trends are also mixed. U.S. 3/12 change was 4.7% in May, in line with the trend over the last year. UK 3/12 change was 4.0% in May, the sixth straight positive month. UK electronics declined significantly in 2020 mostly due to production shifts from the UK to European Union (EU) countries after the UK withdrew from the EU (Brexit). The 27 countries of the EU showed healthy electronic production growth in most of 2021 due to Brexit and recovery from the COVID-19 pandemic. In the last six months, EU 27 3/12 change has been negative, with a 9% decline in May.

The bright spot for the semiconductor market is the automotive sector. LMC Automotive's forecast for 2022 light vehicle production is 81.7 million units, up 6% from 2021. LMC projects growth of 5% in 2023 and 7% in 2024. However, the July numbers have been revised downward from the April forecast by 0.8 million units in 2022 and 4 million units in each of 2023 and 2024. The downward revisions were due to continued shortages of semiconductors and other components, the China lockdown in April and May, the war in Ukraine, and worries over inflation and interest rates.


The overall outlook for electronics production is uncertain. Most countries are showing growth in production, with the exceptions of Japan and the EU. However, declines in shipments of PCs and smartphones are a cause for concern. Although automotive production is growing, growth may be limited by the factors listed above. A global recession in 2023 is increasingly likely. The International Monetary Fund (IMF) puts the chance of a recession at 15%. Citigroup and Deutsche Bank each see about a 50% chance. A Wall Street Journal survey of economists has the risk of a U.S. recession at 44%. The semiconductor industry needs to exercise caution in light of these factors.

Also Read:

Semiconductors Weakening in 2022

Semiconductor CapEx Warning

Electronics, COVID-19, and Ukraine


Axiomise at #59DAC, Formal Update
by Daniel Payne on 07-27-2022 at 10:00 am


Monday at DAC I was able to meet with Dr. Ashish Darbari, the CEO and founder of Axiomise. Ashish had a busy DAC, appearing as a panelist at "Those Darn Bugs! When Will They be Exterminated for Good?" and then presenting "Taming the Beast: RISC-V Formal Verification Made Easy."

Dr. Ashish Darbari, CEO
Axiomise

I had read a bit about Axiomise as a formal verification training and consulting services company on SemiWiki, and this was my first meeting with Dr. Darbari. With 46 patents in the field of formal verification, I knew that he was an expert in this area. Formal verification techniques have been used across many safety-critical market design segments: Automotive, security, healthcare, aerospace, ML, IoT and mobile computing.

Safety-critical design markets

I recalled that in the early days of formal verification new users were almost required to have a PhD in order to use the tools and interpret the results, so I wanted to learn why formal techniques have not been widely adopted yet. Some of the larger design groups want to become better trained in using formal tools but may not have developed the training resources quite yet, so taking a training course from Axiomise is a quick way to get trained in the best practices.

Functional verification has been in use ever since digital simulation was invented, yet that was not sufficient to detect the famous Intel floating-point division bug back in 1994. Formal techniques would catch that bug today. Processor design companies are big adopters of formal to ensure that what is specified is what gets designed. When Ashish worked at Imagination Technologies, a team of four formal experts supported 51 projects over a three-year time span, training almost 100 engineers. Imagination Technologies is well-known for developing sophisticated IP such as GPU, CPU, AI, and Ethernet.

What sets Axiomise apart is that their training and consulting approach is tool-vendor agnostic: they don't prefer one vendor over another, and the more formal tools there are to choose from, the better. They basically have a very symbiotic relationship with EDA vendors. Training in formal can be done either in person or online, and engineers can purchase a class using a credit card. There are seven levels of online courses offered so far, covering theory, labs, demos, case studies, theorem proving, property checking, and equivalence checking.

Teams doing RISC-V designs should know that proving exhaustive ISA compliance is a big task, and that Axiomise has an app called formalISA to prove and cover compliance quite quickly, without having to do any of the following (a small illustration of the contrast with simulation follows the list):

  • Write a test case
  • Write test sequences
  • Write a scoreboard or checkers
  • Write constraints
  • Randomize stimulus
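
To illustrate the general contrast (this is a generic toy, not Axiomise's formalISA flow or a real RISC-V proof), the snippet below uses the z3 SMT solver's Python bindings, assuming the `z3-solver` package is installed, to prove that a small "implementation" of 32-bit ADD matches its "spec" for all input combinations, with no test cases, scoreboard, constraints, or random stimulus:

```python
# Toy formal check with z3: prove an "RTL-like" add implementation matches the spec
# for every possible pair of 32-bit operands. No testbench or random stimulus needed.
from z3 import BitVec, prove

rs1 = BitVec("rs1", 32)
rs2 = BitVec("rs2", 32)

def add_spec(a, b):
    return a + b                      # ISA-level intent: rd = rs1 + rs2 (mod 2^32)

def add_impl(a, b):
    # A slightly convoluted "implementation": a + b == (a ^ b) + 2*(a & b)
    return (a ^ b) + ((a & b) << 1)

# z3 reasons symbolically over all inputs; "proved" means no counterexample exists.
prove(add_impl(rs1, rs2) == add_spec(rs1, rs2))
```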

On-premises training is an option for larger clients, which makes it easier for engineers to get up to speed without taking time out to travel for training.

For modern processor designs there can be 5X more verification engineers than design engineers, as the verification challenges have become so much larger. Using a formal approach for verification to complement functional verification and hardware emulation can save time.

Summary

Dr. Ashish Darbari is outgoing, affable, and confident, a formal expert with decades of experience. What sets him apart is the unique combination of industry experience and a passion for all things formal. If you attend DAC or DVCon you will likely see him on an organizing committee. I look forward to following his career, and that of Axiomise, for years to come, as they make verification more manageable by working smarter.

Related Blogs


Formal at System Level. Innovation in Verification
by Bernard Murphy on 07-27-2022 at 6:00 am


Formal verification at the SoC level has long seemed an unapproachable requirement. Maybe we should change our approach. Could formal be practical on a suitable abstraction of the SoC? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month's pick is Path Predicate Abstraction for Sound System-Level Models of RT-Level Circuit Designs. The paper was published in the 2014 IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. The authors are from the University of Kaiserslautern in Germany.

Though the paper is old, it pursues a worthy goal and has a respectable number of citations. While there has been limited commercial development, the topic is still unfamiliar to most of us. The authors argue for a different kind of abstraction from the familiar methods in conventional formal. They propose Path Predicate Abstraction (PPA). This centers around important states in the state transition graph and operations, which are multicycle transitions between important states.

The paper illustrates top-down construction of an example PPA as an FSM, with macros defining states and properties defining operations as sequences of transitions between states. Macros connect to the RTL implementation through signal name references. They check these formally against the RTL. Further, they show how such a PPA state machine can be proven sound and complete. They claim this demonstrates formally complete coverage on the abstract graph for the SoC.
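
As a toy flavor of the idea (a drastic simplification of the paper's formalization, with a made-up design), the sketch below colors the "important" states of a small concrete state graph with abstract states and checks two things: every concrete operation path between important states corresponds to an abstract transition (soundness), and every abstract transition is realized by some concrete path (completeness):

```python
# Toy path predicate abstraction check: concrete FSM states are colored with
# abstract states; operations are paths between "important" states.

# Concrete transition relation of a tiny handshake controller (made up).
concrete = {
    "IDLE":     ["REQ_SENT"],
    "REQ_SENT": ["WAIT1"],
    "WAIT1":    ["WAIT2", "GRANTED"],
    "WAIT2":    ["GRANTED"],
    "GRANTED":  ["IDLE"],
}
important = {"IDLE", "REQ_SENT", "GRANTED"}          # states that delimit operations
color = {"IDLE": "A_IDLE", "REQ_SENT": "A_BUSY", "GRANTED": "A_DONE"}

# Abstract operation graph we claim describes the design at system level.
abstract_ops = {("A_IDLE", "A_BUSY"), ("A_BUSY", "A_DONE"), ("A_DONE", "A_IDLE")}

def next_important(src):
    """All important states reachable from `src` through unimportant states only."""
    found, stack, seen = set(), list(concrete[src]), set()
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if s in important:
            found.add(s)
        else:
            stack.extend(concrete[s])
    return found

realized = {(color[a], color[b]) for a in important for b in next_important(a)}

assert realized <= abstract_ops, "unsound: concrete operation missing in abstraction"
assert abstract_ops <= realized, "incomplete: abstract operation never realized"
print("abstraction is sound and complete for this toy example")
```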

Paul’s view

This paper is a heavy read but is an important contribution that has been well cited. The concept is easy to appreciate. First create an abstract state machine to describe design functionality at a system level and then prove that an RTL implementation of the design conforms to its abstraction. The abstract state machine is represented as a bunch of temporal logic properties, e.g. System Verilog Assertions (SVAs).

Proving that some RTL conforms to an SVA is of course not new and is not the contribution of the paper. The contribution is a method to prove the reverse: that a set of SVAs completely covers all possible behaviors of some RTL, relative to a particular level of abstraction.

This is a pretty cool concept, and the authors open with an elegant way to visualize their approach as a form of coloring of the states in the RTL implementation of a design. They tie off their work with two worked examples – a simple bus protocol and a serial IO data packet framer.

I would have liked to see one of the examples be a compute-centric design rather than a protocol-centric design. Protocol-centric designs always have a clear and obvious system-level state machine to verify against, and indeed at Cadence we offer a library of formal protocol proof kits very much along the lines of this paper, which we refer to as Assertion-Based Verification IPs (AB-VIPs for short).

But for a compute-centric design we've found it much harder to represent the intended behavior using assertions and state machines. A sequential equivalence checking approach, either between two versions of RTL or between a C/C++ model and an RTL implementation, has generally proved to be more scalable.

Raúl’s view

The core concept is to define the semantics of a system model by formulating properties in a standard property language, such as SystemVerilog Assertions (SVA). If these properties can be proven using standard property checking techniques, then the system model is a sound abstraction of an RTL design. It is important to note that the objective of the paper is to show a method to establish equivalence between an abstraction and the underlying RTL. It does not offer innovation in arriving at a sound abstraction, for example finding optimal ways to color the graph.

The proposed methodology is applied to two designs. The first is a flexible peripheral interconnect (FPI) bus with a total of ~16,000 lines of code (LoC). The number of state bits was reduced from over 1,500 to just 38, with the abstraction captured in 1,850 LoC. A property checker proved all properties in 90 seconds. The authors estimated the effort to create a complete set of properties for SoC module verification to be around 2,000 LoC per person-month, meaning approximately 8 person-months in total.

A second example is a SONET/SDH framer (Table III) of about 27,000 LoC. This shows equally impressive results, reducing the number of state bits from over 47,000 to just 11. The total manual effort, including formal verification of the 27k LoC of VHDL, was about six person-months in this case. Properties were checked in less than two minutes.

Establishing "sound abstractions" at levels above RTL is key to raising the level of abstraction. The paper is an important contribution in this area. Surprisingly, the cost of proving the soundness of these abstractions is negligible, just minutes using a model checker. What is not negligible, however, is the non-automated effort of coming up with these abstractions: many months by experts familiar with the non-trivial methodology. It is also not clear how expressive these abstractions are. The authors touch on this: "We find this abstract graph by constructing a mapping function… In this context, we may observe that trivial coloring functions always exist that assign every node in the graph a different color or that assign the same color to all nodes (the resulting path predicate abstractions are, of course, meaningless)." If and how this methodology makes its way into practical design tools is an interesting question.

My view

A note on how I approach math-heavy papers like this. Our primary interest is engineering value so, to first order, I focus on opening proposition, experiments and results. Dense algebraic justification I treat as an appendix, interesting maybe to read later if warranted. This makes for a much easier read!


Stand-Out Veteran Provider of FPGA Prototyping Solutions at #59DAC
by Steve Walters on 07-26-2022 at 10:00 am


S2C Shines at DAC 2022 with its New Prodigy Player Pro-7 Prototyping Software, Multi-FPGA Prototype Hardware Platforms, and Complete Prototyping Solutions

The 59th Design Automation Conference returned to San Francisco's Moscone Center this year, notching almost six decades of week-long immersion in EDA technology and market trends, combining keynote presentations by industry luminaries with the "DAC Engineering Track" technical presentations and the EDA tool-provider exhibits for in-person exchanges of EDA user needs and the latest EDA solutions. Attendance by exhibitors and EDA tool end-users alike was noticeably improved from last year's conference but still below pre-COVID levels. The Moscone Center neighborhood provided a less than inviting convention venue as San Francisco recovers from COVID's decimation of the convention-generated commerce around the Center, marred by heavily littered streets, a very noticeable presence of "street people", and the closure of many name-brand businesses that are normally sustained by the "collateral business" generated by convention attendees.

Despite the lower DAC attendance, S2C saw a marked improvement in the quantity and quality of visitors to the S2C booth. S2C highlighted its latest hardware and software and provided interactive demonstrations of its Prodigy MDM Pro multi-FPGA debug tools and its Prodigy ProtoBridge high-throughput channel for the transfer of large amounts of transaction-level data between the FPGA prototype and a host computer – both demonstrations running on S2C's Quad Logic System prototyping hardware featuring Intel's massive Stratix 10 GX 10M FPGAs.

S2C took the opportunity at DAC to roll out the newest version of its prototyping software, Prodigy Player Pro-7. The new software suite includes Player Pro-RunTime, for prototype platform control and hardware test; Player Pro-CompileTime, with enhanced automation of multi-FPGA partitioning and pre/post-partition timing analysis; and Player Pro-DebugTime, for multi-FPGA debug probing and trace viewing with S2C's class-leading MDM Pro debug tools.

With an emphasis on large-scale SoC design prototyping, Player Pro-7 offers enhanced support for multi-FPGA implementations, including:

  • RTL Partitioning and Module Replication to support Parallel Design Compilation and reduce Time-to-Implementation
  • Pre/Post-Partition System-Level Timing Analysis for Increased Prototyping Productivity
  • SerDes TDM Mode for Optimal Multi-FPGA Partition Interconnect and Higher Prototype Performance

Prodigy Player Pro-7 Prototyping Software Suite

S2C displayed a number of its latest prototyping products in its DAC booth this year, including the Prodigy Logic System 10M based on the industry’s largest FPGA, Intel’s Stratix 10 GX 10M. Also on display were S2C’s Xilinx-based prototyping hardware, the Prodigy S7-19P Logic System, and the S7-9P Logic System, both getting their fair share of DAC attendee attention.

The highlight of the S2C booth was the new Prodigy Logic Matrix LX2. Based on Xilinx's largest Virtex UltraScale+ FPGA, the LX2 boasts eight VU19Ps; for expansion beyond eight FPGAs, up to eight LX2s can be housed in a single standard server rack, extending prototyping gate capacity up to sixty-four VU19P FPGAs. At this level of FPGA prototyping density, hardware quality and reliability become first-order considerations, and S2C's 18+ year proven track record of delivering high-quality prototyping hardware sets a high bar for other prototyping solutions.

S2C DAC 2022 Booth at Moscone Center in San Francisco

To enable users to configure prototyping platforms quickly and reliably, S2C displayed a sampling of its Prototype Ready IP in the booth.  Prototype Ready IP are off-the-shelf daughter cards designed by S2C to plug-and-play with S2C prototyping hardware platforms.  The daughter cards are designed to attach reliably to the FPGA prototype hardware and compose a rich collection of prototyping functions, including High-Speed GT Peripherals (Ethernet, PCIe, MIPI, SATA, high-performance cables, etc.), General Peripherals (GPIO, USB, mini-SAS, JTAG, RS232, etc.), Memory Modules (EMMC, DDR, SRAM, etc.), ARM Processor Interface Modules, Embedded and Multimedia modules (DVI, HDMI, MIPI, etc.), and Expansion and Accessories modules (FMC-HPC Converters, Level Shifters, I/O Test Modules, DDR Memory Modules for user-supplied external memory, Interconnect Cables, Clock Modules, etc.).

The S2C Prodigy Multi-Debug Module Pro demonstrations at the booth showcased the implementation of S2C's multi-FPGA debug tools for prototyping with a combination of external hardware, soft IP implemented in the FPGA, high-speed FPGA I/O, and debug configuration software (Player Pro-DebugTime). MDM Pro was designed specifically to support multi-FPGA prototype implementations – with support for high probe counts, deep-trace debug data storage, optimization of debugging reconfiguration compiles, and the ability to choose debug configuration tradeoffs to optimize prototype performance. The Player Pro-DebugTime software supports user-friendly debug configuration, complex trace-data capture triggering, and single-window viewing on the user console of simultaneous streams of trace data from multiple FPGAs. MDM Pro hardware supports high-performance deep-trace debug data storage without consuming internal FPGA storage resources.

S2C Prodigy Multi-Debug Module (MDM)

S2C also demonstrated its Prodigy ProtoBridge in the DAC booth to showcase its off-the-shelf solution for a high-throughput channel (4GB/second) between the FPGA prototype and a host computer for the application of large amounts of transaction-level test data to the FPGA prototype – such as processor bus transactions, video data streams, communications channel transactions, etc.  ProtoBridge uses a PCI-to-AXI interface implemented in the FPGA and connected to the user’s RTL as an AXI-4 bus.  ProtoBridge includes a set of C-API function calls to perform AXI bus transactions in the FPGA prototype, a PCIe3 driver for Linux or Windows operating systems to control Logic System operations, C-API reference operations with sample access to FPGA internal memory, and an integration guide on how to connect the user’s RTL code to the ProtoBridge AXI-4 bus module.

S2C Prodigy ProtoBridge

Overall, DAC 2022 was a successful conference for S2C, firmly establishing S2C as the leading independent FPGA prototyping supplier, with the strongest track record of delivering complete prototyping solutions worldwide.

The FPGA prototyping hardware and software displayed at DAC are available now. For more information, please contact your local S2C sales representative, or visit www.s2cinc.com

Also read:

Multi-FPGA Prototyping Software – Never Enough of a Good Thing

Flexible prototyping for validation and firmware workflows

White Paper: Advanced SoC Debug with Multi-FPGA Prototyping