
Verification Completion: When is Enough Enough?  Part II

by Dusica Glisic on 10-25-2021 at 10:00 am


Verification is a complex task that takes the majority of time and effort in chip design. At Veriest, as an ASIC services company, we have the opportunity to work on multiple projects and methodologies, interfacing with different experts.

In this “Verification Talks” series of articles, we aim to leverage this unique position to analyze various approaches to common verification challenges and to help make the verification process more efficient.

The first part of my discussion with experts in the field was presented in the previous article. It deals with how the criteria for completion are formed and what exactly those criteria are. The experts I talked with were Mirella Negro from STMicroelectronics in Italy, Mike Thompson from the OpenHW Group in Canada, and Elihai Maicas from NVIDIA in Israel.

Tracking the development process

Of course, everyone uses tools from well-known vendors (Synopsys Verdi or Cadence vManager) since unfortunately, as Mike notes, “there is a general lack of open-source tools for generating and collecting usable verification quality metrics.”

On top of this, Elihai uses locally developed tools and scripts to send automated reports by e-mail. “It’s convenient if all the criteria are formalized to a number that can be collected, like a coverage percentage. For flows/features, I usually also add a coverage item (an SVA or just a test-pass hit). This way it is visible in all the automatic reports,” he explains.
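Elihai’s scripts are in-house, but the idea of reducing each criterion to a collectible number can be sketched in a few lines of Python (the report format and feature names below are invented for illustration):

```python
# Illustrative sketch (not Elihai's actual scripts): roll per-feature
# coverage numbers into a single e-mail-ready status summary.

def summarize(results, goal=100.0):
    """results: {feature: coverage percentage}; returns report text."""
    lines = ["Verification status report", "-" * 26]
    for feature, pct in sorted(results.items()):
        mark = "OK " if pct >= goal else "TODO"
        lines.append(f"{mark} {feature:<20} {pct:5.1f}%")
    overall = sum(results.values()) / len(results)
    lines.append(f"Overall: {overall:.1f}%")
    return "\n".join(lines)

print(summarize({"reset_flow": 100.0, "dma_burst": 87.5, "irq_mask": 100.0}))
```

Because every criterion is a number, the same summary can be regenerated after each nightly regression and mailed out automatically.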

Mike is a big fan of verification plans and uses their items to track progress. This is how the process looks for him: first, write and review a detailed verification plan; then assign a quantifiable metric to each item in the plan (a cover point, a testcase, an assertion, or code coverage); and finally, develop tests and run regressions until your metrics show that all items in the verification plan are covered.
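That loop can be modeled in a few lines; this is an illustrative sketch, not OpenHW Group tooling, and the plan items and targets below are made up:

```python
# Toy model of plan-driven tracking: every verification-plan item carries
# a metric, and sign-off requires each metric to reach its target.

vplan = [
    {"item": "arithmetic ops",  "metric": "functional coverage", "value": 100},
    {"item": "illegal opcodes", "metric": "assertion hit",       "value": 100},
    {"item": "pipeline stalls", "metric": "code coverage",       "value": 92},
]

def open_items(plan, target=100):
    """Return the plan items whose metric has not yet hit the target."""
    return [i["item"] for i in plan if i["value"] < target]

todo = open_items(vplan)
print("run more regressions for:", todo if todo else "nothing - plan covered")
```

The point is that “done” becomes a mechanical check against the plan rather than a gut feeling.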

Mirella, on the other hand, relies a lot on the members of her teams and trusts their assessments and reports, both for planning and for monitoring progress. They have crafted a spreadsheet that is simple for engineers to fill out, although it has some complex formulas behind it. Each team member reports their own progress using this form. As a manager, Mirella uses a business intelligence tool to analyze all these spreadsheets, which gives her a clear overview of the status. It can give a nice graphical representation of the status of tasks, resources, or whatever is needed.

Major causes of oversights and problems

So, what can go wrong? Mike believes that “the bulk of the complexity in ASIC verification is simply dealing with the extremely large number of features to cover.  It is all too easy to miss an important detail.  A big source of this can be insufficient information in either a specification or verification plan.  Another significant source of test escapes comes from a lack of time and engineering resources.  There will never be enough time and never be enough people, so it’s important to prioritize the items in your verification plan. Some features simply must work, or the project fails. These should be done as early as possible.”

Elihai says that major pitfalls occur when things are not documented well, typically in three situations: specs that change during development, new information generated during design review meetings, and decisions made by the architect and designer but not communicated to the verification team.

In Mirella’s view, the major problems are caused by lack of time, which comes from bad planning and “invisible tasks” (sick leave, trainings, holidays, assisting a colleague, meetings…). She overcomes this by assuming in her plans that an engineer has fewer than five working days a week. How effectively engineers work also depends on their seniority level. Mirella also adds a risk margin, usually 10% of the project duration, though it can vary based on the risk analysis.
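Her capacity math can be sketched as a back-of-the-envelope calculation; the specific factors below (4 effective days per week, a 0.9 seniority factor) are illustrative assumptions, with only the 10% risk margin taken from her description:

```python
# Back-of-the-envelope schedule estimate in the spirit of Mirella's
# planning: effective capacity shrinks with "invisible tasks" and
# seniority, then a risk margin is added on top. Factors are invented.

def plan_weeks(effort_days, engineers, eff_days_per_week=4.0,
               seniority_factor=0.9, risk_margin=0.10):
    weekly_capacity = engineers * eff_days_per_week * seniority_factor
    weeks = effort_days / weekly_capacity
    return weeks * (1 + risk_margin)

# 200 engineer-days of estimated work, 2 engineers:
print(f"{plan_weeks(200, 2):.1f} weeks")
```

The naive estimate (200 days / 10 days per week = 20 weeks) understates the schedule by roughly a third once the invisible-task and risk factors are applied.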

Finally, how do you cope with the anxiety that comes with “pressing the signoff button”?

According to Elihai, he never finishes all his verification plans. But he tries to identify the things that must be verified, such as important flows or end-to-end data transactions. He regularly maintains a task list and constantly prioritizes it with relevant stakeholders (architects, designers, managers). Besides some strict rules, this process also has an “intangible” factor that comes from experience and intuition. And then you just have to trust that what you did was enough.

Luckily, from Mike’s experience it usually is: “Create a plan, follow the plan, track progress according to the plan.  When the device is taped out, you’ll know what you’ve verified and how well it’s verified. Any time I have done this, the results have been positive.”

But he still admits: “Having said that, it’s always stressful.”

Takeaways

So, to make sign-off less stressful, we need detailed planning, to follow that plan, well-organized communication between all relevant stakeholders, and trust within the team, while not being afraid to follow intuition in some situations.

To learn more about Veriest, please visit our website.

Also Read:

Verification Completion: When is enough enough?  Part I

On Standards and Open-Sourcing. Verification Talks

Agile and DevOps for Hardware. Keynotes at DVCon Europe


Design Planning and Optimization for 3D and 2.5D Packaging

by Tom Dillinger on 10-25-2021 at 6:00 am


Introduction

Frequent SemiWiki readers are aware of the growing significance of heterogeneous multi-die packaging technologies, offering a unique opportunity to optimize system-level architectures and implementations. The system performance, power dissipation, and area/volume (PPA/V) characteristics of a multi-die package integration are vastly improved over board-level designs with discrete parts.

The ability to select different technologies for various system functions (as “chiplets”) in the composite 2.5D/3D package adds overall product cost as a dimension of the PPA/V optimization. System development costs are addressed by the potential to leverage chiplet reuse. Production cost assessments address the tradeoff between the additional complexity of 2.5D/3D package design/assembly and the yield impact of integrating functionality into a single larger die. This tradeoff is strongly influenced by whether the PPA goals of architectural blocks can be achieved with existing chiplets in older process technologies.

Given these opportunities for system optimization, the diversity of 2.5D implementations (with area >>1X the maximum reticle size) will continue to grow. Similarly, the complexity of 3D stacked die topologies will also increase, with connectivity between the die transitioning from using microbumps to a bumpless, thermo-compression bonded connection between die pads and through-silicon vias (TSVs).

With the emergence of these system-level implementations, there has been a corresponding focus on the requisite EDA flows to support the design planning and configuration management steps. Initially, the 2.5D/3D product teams incorporated a mix of traditional package and SoC implementation tools, passing connectivity and physical models back and forth. The partitioning of the system architecture into chiplets was somewhat ad hoc, often requiring multiple iterations between disparate tools to achieve a routable solution.

The increasing demand for chiplet interface bandwidth and the complexity of (“short reach”) chiplet interface timing meant that the corresponding timing and signal integrity analysis steps needed to be an integral part of the initial design process. The higher power dissipation density associated with 3D die configurations also requires thermal analysis to be an early design convergence evaluation.

In short, the growing importance of architectures pursuing 2.5D/3D package implementations necessitates a unified EDA platform, spanning the tasks of system planning to preliminary electrothermal analysis closure.

Cadence Integrity 3D-IC Platform

To address the needs of advanced package design, Cadence recently announced their Integrity 3D-IC platform. I had the opportunity to chat briefly with Vinay Patwardhan, product management group director in the Digital & Signoff Group, about the development and key features of the platform.

The figure below provides an overview of the platform functionality.

Vinay indicated, “The heart of the Integrity 3D-IC platform is the unified database. The sheer data volume associated with a 3D system design, combined with the tools needed for physical implementation and design rule verification, meant building the Integrity database from IC-based roots. The Cadence Innovus data model served as the foundation, with specific enhancements for Integrity 3D-IC.”

Vinay highlighted the following database features:

  • Representation of the partitioned 2.5D/3D model hierarchy
  • Support for multiple technology files for the heterogeneous process models for various chiplets
  • Integrated version and configuration management to support the system architecture decomposition and optimization
  • Maintaining cross-correlation links between physical, timing, and electrical data for pins/nets, pads/bumps, TSVs, and chiplet models

With regards to chiplet models, I asked Vinay about the integration of existing chiplet IP into the system design. He replied, “The Integrity 3D-IC database supports multiple views for a node in the design hierarchy. There are interfaces to import/export standard netlists and model formats, such as Verilog, DEF, LEF, SDCs, boundary model formats like ILM timing abstracts and OpenAccess physical boundary data. Special formats in which package layouts are represented can be read in seamlessly as well.” (More details about the Integrity 3D-IC database are provided in a Cadence white paper. Please refer to the link at the bottom of this article.)

As mentioned above, the confidence in the 3D implementation requires a platform that enables timing, electrical, and thermal analyses to be pursued during the initial system partitioning phase. The figure below depicts the Integrity 3D-IC system-level flow manager, with the corresponding Cadence tool interfaces.

Yet, how do the system architect and packaging engineering teams get started? How can they quickly iterate on early partitioning activities, confident that the bump/pad planning and inter-chiplet connectivity will be realizable before pursuing the analysis flows?

Vinay highlighted a specific feature of the Integrity 3D-IC platform.  The system planner GUI is illustrated below, depicting the system architecture in multiple physical and connectivity views.

Vinay delved into a key innovation in Cadence’s Tempus Static Timing Analysis (STA) that can be called through the Integrity 3D-IC platform. Specifically, with many discrete heterogeneous die incorporated into the package, the potential number of timing analysis “corners” multiplies quickly, potentially in the thousands. The figure below highlights the issue along with the technology developed to address this specific challenge.

Vinay said, “The Tempus STA tool supports boundary models in conjunction with concurrent, multi-mode, multi-corner (C-MMMC) analysis for data reduction and to simplify job management. On top of that, we have added a special feature called rapid, automated inter-die (RAID) analysis, which is a ‘smart pruning’ feature to reduce the number of analysis corners designers need to consider when performing 3D timing analysis in Integrity 3D-IC.”
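The corner explosion Vinay describes is easy to see with a toy calculation; the dies and corner names below are invented, and the pruning shown is a naive placeholder for the idea, not Cadence’s RAID algorithm:

```python
# Why 3D corner counts explode: each die contributes its own process/
# voltage/temperature corners, and cross-die paths must be checked in
# every combination. Numbers below are illustrative, not from Tempus.
from itertools import product

die_corners = {
    "logic_die": ["ss_0.72V_125C", "ff_0.88V_-40C", "tt_0.80V_25C"],
    "sram_die":  ["ss_0.72V_125C", "ff_0.88V_-40C"],
    "io_die":    ["ss_1.62V_125C", "ff_1.98V_-40C", "tt_1.80V_25C"],
}

combos = list(product(*die_corners.values()))
print(len(combos), "cross-die corner combinations")   # 3 * 2 * 3 = 18

# A crude pruning idea (NOT the RAID algorithm): keep only combinations
# where every die sits at the same process letter (ss/ss/ss, ff/ff/ff).
pruned = [c for c in combos if len({corner[:2] for corner in c}) == 1]
print(len(pruned), "after pruning")
```

With three dies the toy example stays small, but with a dozen heterogeneous dies at five-plus corners each, the unpruned product is indeed in the thousands, which is the problem the RAID feature targets.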

I asked about the availability of Integrity 3D-IC. Vinay replied, “We have multiple customers who have helped with the evolution of the platform and are using it now.” (The Cadence press release includes several reference testimonials – see the link below.) Vinay added, “And, we are closely engaged with all the leading silicon foundries and advanced packaging technology providers.”

Summary

The growing adoption of 2.5D/3D package technologies offers unique “More than Moore” opportunities for PPA/V and cost-optimized system implementations. Early 2.5D/3D designs used disjoint tool flows, with limited system planning exploration options. A unified platform, flow manager, and model database are needed to give users the ability to manage heterogeneous chiplets and to make analysis flows a fundamental part of the initial system partitioning. The recently announced Cadence Integrity 3D-IC platform addresses those requirements.

For additional information, please follow the links below.

Cadence Integrity 3D-IC home page

Cadence Integrity 3D-IC press release

Cadence Integrity 3D-IC white paper

-chipguy

 


LRCX- Good Results Despite Supply Chain “Headwinds”- Is Memory Market OK?

by Robert Maire on 10-24-2021 at 10:00 am


Lam- good quarter but supply chain headwinds limit upside
Memory seems OK for now but watch pricing
China will also weaken which may add caution
Performance remains solid as does technology prowess

The yellow caution flag in the Semi race impacts Lam as well

As we suggested two weeks ago and saw with ASML this morning, supply chain issues are coming home to the semiconductor industry itself. While not yet putting a major dent in business we are seeing some weakness or caution in outlook.

Despite all this, Lam put up great numbers. Revenues at $4.3B were a tad on the light side but still resulted in good earnings of $8.36/share. Street numbers were $4.32B and EPS of $8.21. Guidance is for $4.4B ±$250M and EPS of $8.45 ±$0.50 versus street of $4.4B and $8.47…so more or less in line. A half beat and an “in-line”.

Lam remains the “poster child” for memory sector

Memory was 64% of Lam’s business, with NAND being about 70% of that. Although Lam management went to lengths to talk about foundry/logic, they are still inextricably tied to memory as their mainstay.

Memory pricing weakness could be an early negative sign

We have started to see some weakness in memory pricing, with the weakness already reaching some retail products in the form of discounted pricing on SSDs.

We are likely at or past the peak demand season for memory on a seasonal basis. Much memory related product bound for holiday sales is already built and on a boat to the US.

The memory industry usually sees its weakest period in the post holiday period of Q1.

Memory has not been the cause of most of the reported semiconductor shortages and has remained in good supply.

We need to continue to focus on the memory space, as memory makers are usually the fastest at shutting off the spigots of new capacity and tool buying in the chip industry; they tend to have the fastest reaction and the shortest outlook radar. While ASML has an 18-month backlog, Lam is more of a turns business.

Supply Chain headwinds are less complex

While Lam talks about supply chain headwinds like shipping costs and issues we do not think they are nearly as complex as ASML which makes a much more complex tool with a much more complex and international supply chain.
Lam also spoke about some margin pressure due to the start-up of its new Asian manufacturing, whose costs are not yet amortized.

Lam still says it is supply constrained, and it does not appear that the constraint is going away; it obviously weighs on the future outlook.

Despite the caution business is at record levels- Imagine a Duck

Despite all the concerns, business is at record levels. We may be seeing a bit of slowing, as you can’t grow that fast forever, but it’s still great.

Shareholder returns and financials are all in fantastic shape. We imagine the image of a duck: everything appears calm and beautiful on the surface, but underneath the water it is likely paddling furiously to make shipments and get components.

The stock

Given the conservative outlook coupled with the softness of revenues, investors are not going to be happy. Add to that some concerns about supply chain headwinds and a sprinkling of memory nervousness, and you will see stock weakness.

As with others in the sector it appears that the stocks may have rolled over with no real catalyst to get them moving again.

The dreaded “supply chain” words have gone from a positive influence of chip shortages to a negative influence of sand in the gears of production. While the numbers are still great the “spin” is not at all positive.

It would be nice to be able to bottle up all the great results to save them for a time when they would be more of a positive influence on investors but alas that is not the case.

Also Read:

ASML- Speed Limits in an Overheated Market- Supply Chain Kinks- Long Term Intact

Its all about the Transistors- 57 Billion reasons why Apple/TSMC are crushing it

Semiconductors – Limiting Factors; Supply Chain & Talent- Will Limit Stock Upside


ASML- Speed Limits in an Overheated Market- Supply Chain Kinks- Long Term Intact

by Robert Maire on 10-24-2021 at 6:00 am


ASML great QTR but supply chain will limit acceleration
Products are most complex with most extensive supply chain
Long term position fantastic but investors will be nervous
$300M pushouts in DUV with EUV still on track

Good quarter but yellow caution flag is out for supply chain concerns

ASML reported great revenues of €5.2B and EPS of €4.27. Most importantly, the order book was €6.2B, with €2.9B of EUV bookings.

It was reported on the call that roughly $300M of DUV sales slipped due to supply chain issues combined with issues with ASML’s new logistics center. Revenue was a slight miss versus street expectations while earnings exceeded expectations.

The bigger question is forward expectations, which may have to come down based on supply chain constraints. EUV expectations for 2022 appear steady at 55 units, with orders for EUV tools booked up into 2023. DUV expectations may be more of an issue going forward.

We warned investors of supply chain issues in Q3 & beyond 2 weeks ago

We had been hearing more and more about strains in the supply chain for semiconductor equipment tools and materials. On Oct 4th we put out a newsletter in which we warned of risk to the semi stocks based on these supply issues:

Semiconductors- Limiting factors; supply chain & talent -Will limit stock upside

We specifically mentioned ASML as one of the risks in the supply chain itself. The semiconductor supply chain is by far the most global and most complex and thus the most exposed to disruptions.

ASML tools make the moonshot look easy

We have mentioned many times the fantastic complexity of ASML tools, especially EUV, which has taken literally over 30 years of global effort to bring about. The supply chain is very long and far-reaching across the globe.

Some components, such as the lenses, have a very finite supply that is very difficult and time-consuming to expand.

At one point, a while back, one of the limiting factors was that not enough young Germans wanted to apprentice for many years to learn how to polish the glass used in the lenses. Much of that has since been automated, but limits still exist due to the highly specialized nature of the work.

ASML stated on the call that they had eaten into their “safety stock” (kind of like strategic reserves) of DUV components and the cupboard was now bare. Basically, we have stretched the supply chain to the point where we just can’t easily get any more out of it. Any flexibility of supply is fading.

This underscores our warning of two weeks ago that the upside from here will be more limited as we asymptotically reach an upper bound of growth in the near term.

What if it’s not just a near-term limit or plateau but a cyclical peak?

While it certainly feels like a near-term constraint, it could also be a cyclical peak. The stocks certainly seem to be behaving as if it’s a cyclical peak that we are bouncing off of.

The question circles back around to how bad the hangover will be after the current party is over… Do we way overshoot and build so much excess capacity into the industry that prices collapse for years?

Or do we gradually slow into a soft landing where capacity finally catches up with demand that continues to grow? It’s still too early to tell, but investors will be very, very nervous; they have seen the cyclical movie before, and it never ends well.

EUV remains very solid

On the plus side, demand and orders for EUV tools, as well as production, seem largely unaffected, which is all-important as the ASML story is all about EUV. We see no reason to expect any change in EUV demand for the foreseeable future, as Moore’s Law demands the shift to EUV.

DUV demand likely exceeded previous plans: in a “normal” semiconductor cycle, with litho steps moving to a new technology, you are usually less inclined to go out and buy more tools of the older technology, as its share of the mix is declining.

Obviously, the unexpected demand for older semiconductor capacity turned the normal Moore’s Law flow and strategy on its head, hence the bigger-than-planned demand for DUV tools, which ASML wasn’t counting on.

Still a monopoly with great financials

We have to remember not to lose sight of the reality that ASML remains a virtual monopoly in a fast growing industry with technology on its side.
The fundamentals have not changed at all, just some relatively insignificant timing issues due primarily to higher-than-expected growth. It’s a very high-class problem to have: more growth than you can satisfy.

I am getting tired of two words “supply chain”

The buzz words of the month are clearly “supply chain”. These two words have supplanted many other words in the nightly newscast. Next I am sure that QAnon will take up some wild theory about a supply chain conspiracy to take down the global economy…film at 11.

Yes, indeed, supply chain issues have impacted many industries; Covid was the catalyst, but the problems were clearly brewing for a very long time.

There are many supply chains: food, energy, pharmaceuticals, all of which have become overly complex, global, and therefore vulnerable.

While it’s clearly not good to take an isolationist view, like the hermit kingdom, there certainly needs to be awareness of, and planning for, the exposure.

The stocks

Obviously ASML has gotten whacked as it is no longer a “perfect” story. While other semiconductor equipment makers do not have as complex a supply chain they nonetheless remain exposed to potential disruptions.

ASML’s stock was priced for perfection and we had warned that anything less than perfect performance would cause a sell off and that’s what we have seen.
We would expect a similar reaction for any other stock that reports weakness or exposure. We also think the probability of such an event is increasing as supply chains get stretched further.

Other companies have more diverse product lines that may be able to absorb shortages in one area with strong demand in other areas.

As collateral impact from ASML we would point to semiconductor companies that won’t be able to ramp expansion as quickly as expected. We have gone into detail previously on the risks to companies like Intel and their need for ASML tools.

ASML is the key to Intel’s resurrection

ASML also pointed out that they and other equipment makers need to get semiconductors in order to build more tools to make more semiconductors. Right now these seem more like inconveniences than major problems.

While it’s less likely that the long-term positive trend will be impacted, we think that the stocks, which are always more skittish, will continue to be soft as investors get more nervous and risk-averse until we stop hearing “supply chain” on the nightly news.

Clearly the news flow about semiconductor shortages has slowed down (which was a positive for chip stocks), while the news about the supply chain has turned into a negative for chip stocks.

Also Read:

Its all about the Transistors- 57 Billion reasons why Apple/TSMC are crushing it

Semiconductors – Limiting Factors; Supply Chain & Talent- Will Limit Stock Upside

ASML is the key to Intel’s Resurrection Just like ASML helped TSMC beat Intel


Podcast EP44: Open Hardware Diversity Alliance

by Daniel Nenni on 10-22-2021 at 10:00 am

Dan and Mike are joined by Kim McMahon, Director of Visibility & Community Engagement at RISC-V International, and Rob Mains, Executive Director of the CHIPS Alliance. Kim and Rob are working with individuals and companies to promote diversity and inclusion in the open hardware industry. We explore their strategies, goals, and plans to increase participation by women and under-represented individuals in the open-source community.

https://riscv.org/

https://chipsalliance.org/

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Jothy Rosenberg of Dover Microsystems

by Daniel Nenni on 10-22-2021 at 6:00 am


Jothy Rosenberg is a serial entrepreneur, founding nine different startups since 1988, two of which sold for over $100M. Currently, he’s the Founder & CEO of Dover Microsystems, the first oversight system company. Earlier in his career, Jothy ran Borland’s Languages division where he managed languages like Delphi, C++, and JBuilder. He earned his BA in Mathematics from Kalamazoo College and his Ph.D. in Computer Science from Duke University (where he started his career as a professor of Computer Science).

Jothy has written three technical books, but his pride and joy is his memoir, Who Says I Can’t; it tells his inspiring story of using extreme sports to regain self-esteem after losing one of his legs and a lung to cancer as a teenager. Bearing the same name as his memoir, Jothy founded and runs The Who Says I Can’t Foundation, a non-profit organization with a mission to help disabled individuals get back into sports. He also created and hosted the Who Says I Can’t TV series on YouTube and is a TEDx speaker.

What’s the backstory behind Dover Microsystems?

While Dover Microsystems may have been founded in 2017, the core components of our CoreGuard® technology have been in development since 2010. CoreGuard started as a DARPA CRASH program proposal, submitted by Dover’s Chief Scientist, Greg Sullivan, who became the Principal Investigator, assisted by me.

This CRASH program was created in direct response to the infamous Stuxnet attack—the first cyberattack to prove you could create something in the digital world and use it to cause physical destruction, half a world away. Dover was the largest component of DARPA’s CRASH program, being awarded $25M of the total $100M amount.

With the funds we won from that program, we were able to turn our proposal into a reality over the next five years. After that, we searched for a place to incubate this basic research and turn it into a viable company. This landed us at Draper, a nonprofit engineering services company, before ultimately spinning out in 2017.

What problem is Dover Microsystems addressing?

There are two critical problems that Dover is addressing. The first issue was created back in 1945: the Von Neumann architecture. Processors are still based on this architecture today, and it is designed simply to execute instructions as quickly as possible. But it has no way to determine whether an instruction is good or bad.

The second problem was identified fifty years later by Steve McConnell in his book Code Complete, and it only exacerbates the issue of the Von Neumann architecture. McConnell found that all complex software inevitably contains bugs: on average there are 15-50 bugs per 1,000 lines of source code, and according to the FBI approximately 2% of those bugs are exploitable. That means that in a Ford F-150 truck, which contains 150 million lines of code, there are potentially 45,000 different ways to take over the vehicle or steal private data.
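The 45,000 figure checks out at the low end of McConnell’s range, as a quick calculation shows:

```python
# Checking the arithmetic quoted above: 150M lines of code, 15-50 bugs
# per 1,000 lines (McConnell), ~2% of bugs exploitable (the FBI figure).

lines_of_code = 150_000_000
bugs_low  = lines_of_code / 1000 * 15   # 2.25M bugs at the low end
bugs_high = lines_of_code / 1000 * 50   # 7.5M bugs at the high end
exploitable_low = bugs_low * 0.02
print(f"{exploitable_low:,.0f} potentially exploitable bugs")
```

At the high end of the range, the same math yields 150,000 potentially exploitable bugs in a single vehicle.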

Historically, companies have relied on building defensive software around networks and applications to protect embedded systems. However, this “solution” isn’t a solution at all. Rather than securing the system, this approach can potentially increase a system’s vulnerability by adding yet another layer of inherently flawed software.

The cybersecurity problem needs to be addressed at the root cause: the attacker’s ability to take over the processor in the first place. Dover’s CoreGuard IP is hardwired directly into the silicon, next to the host processor. It acts as an oversight system, monitoring every instruction, at every layer of the software stack, to ensure it complies with a set of security, safety, and privacy rules. These rules are designed to prevent the exploitation of entire classes of software vulnerabilities. Thus, with CoreGuard, processors can determine whether an instruction is good or bad, and they are no longer vulnerable to 94% of network-based attacks due to the inherently flawed software they run.
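As a purely conceptual illustration (not Dover’s implementation or its actual rule language; the region tags and policy below are invented), the oversight idea amounts to checking every instruction against metadata-driven rules before it is allowed to commit:

```python
# Conceptual model of an "oversight" check: instructions are vetted
# against metadata-driven rules, and non-compliant ones are blocked.
# Tags, rule, and instruction format are all invented for illustration.

WRITABLE = {"heap", "stack"}   # metadata tags on memory regions

def check_instruction(op, target_region):
    """Return True if the instruction complies with the policy."""
    if op == "write" and target_region not in WRITABLE:
        return False           # e.g. a write into code memory is blocked
    return True

assert check_instruction("write", "heap")          # normal write: allowed
assert not check_instruction("write", "code")      # overflow into code: blocked
print("policy checks pass")
```

The key property is that the check runs per instruction, in hardware alongside the host processor, so flawed application software cannot bypass it.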

Cybersecurity is arguably an oversaturated market. What sets Dover apart?

Two things. First, our solution is enforced in hardware (not software) enabling it to keep up with the host processor and preventing it from being compromised over the network. Second, we focus on protecting against the exploitation of entire classes of software vulnerabilities, not just specific vulnerabilities. Let’s take buffer overflows as an example—there are over 24,000 individual buffer overflow vulnerabilities recorded in MITRE’s CVE database and new ones are being discovered every day. In fact, the recent zero-day bug for which Apple had to issue a security patch was a buffer overflow vulnerability in their OS. CoreGuard protects against all buffer overflows, including zero-days. That means if the buffer overflow was discovered yesterday, today, or ten years from now, CoreGuard would stop it, no patches or updates necessary.

What kind of cyberattack trends are you seeing? What should we be worried about?

We’re seeing an uptick in attacks on critical infrastructure, like the attack on the water treatment facility in Florida and the Colonial pipeline attack from earlier this year. Obviously, this is really concerning from a health and safety standpoint. Similarly, we need to be concerned about AI and machine learning. Increasingly, AI and ML capabilities are being adopted into our embedded systems, which offers a lot of incredible benefits. However, it also comes with potentially dangerous consequences. Late last year, we hosted a webinar and published a white paper on this topic, highlighting the biggest threats to AI & ML systems and how CoreGuard can help protect against them.

What applications are the best fit for your technology?

Of course, we believe every embedded system can benefit from CoreGuard’s level of protection, from the IoT to medical devices to fintech to aerospace and defense. And from a technical standpoint, CoreGuard is compatible with any RISC-based processor, including Arm, MIPS, ARC, Tensilica, and RISC-V.

In terms of specific applications, we’ve seen particular interest from the Industrial IoT market, where embedded systems operate side-by-side with people on a factory floor, and a successful cyberattack could be life-threatening. We’ve also seen a lot of interest in automotive functional safety, as well as military and defense applications. In fact, we just recently won a contract to work with the Air Force Nuclear Weapons Center to provide hardware-based enforcement of the correct operation of safety-critical systems. And of course, semiconductor manufacturers are very interested in our technology, with NXP being our first publicly announced customer.

Where can someone go to learn more about CoreGuard?

You can always visit our website to learn more about CoreGuard. We also update our blog frequently and post about things like recent cyberattacks and trends we see in cybersecurity. If you’d like to see CoreGuard in action, you can also request a live demo.

Also Read:

CEO Interview: Mike Wishart of Efabless

CEO Interview: Maxim Ershov of Diakopto

CEO Interview: Gireesh Rajendran CEO of Steradian Semiconductors


Webinar on Protecting Against Side Channel Attacks

Webinar on Protecting Against Side Channel Attacks
by Tom Simon on 10-21-2021 at 10:00 am

Side channel attack protection

SoC design for security has grown and evolved over time to address numerous potential threat sources. Many countermeasures have arisen to deal with the ways hackers can gain control of systems through software or hardware design flaws. The results are things like improved random number generators, secure key storage, crypto, and memory protection. SoCs have also added hardware security modules, secure boot chains, dedicated privileged processors, and more. However, one method of attack is often overlooked – side channel attacks. Perhaps this is because the relative risk and difficulty of such attacks have been underestimated.


In an upcoming webinar, Tim Ramsdale, CEO of Agile Analog, offers a sober look at the threat from side-channel attacks on SoCs. The webinar, titled “Why should I care about Side-Channel Attacks on my SoC?”, not only explains that they are a greater threat than often believed, but also offers an effective solution to the problem.


You only need to look at YouTube to find presentations from Defcon events illustrating how RISC-V based SoCs, Apple AirTags or ARM TrustZone-M devices are vulnerable to glitching attacks. Your first thought might be that such attacks require lucky timing, touching bare wires, randomly pushing buttons, or a fully instrumented forensics lab. If that were the case, the threat would be minimal enough to ignore. Tim points out that an open-source kit available from Mouser makes these attacks automated and systematic. The kit comes with a microprocessor and an easy-to-use UI, and is capable of both clock and power attacks.

The webinar explains how these attacks are carried out and why they represent a bigger threat than you might think. Imagine an attacker who can flip the state of any single register in your device – such as a security bit. Suppose the result of a BootROM checksum can be corrupted. Varying voltage and clock signals for extremely short periods can cause otherwise undetectable changes in state, leading to access that could allow running malicious code in privileged mode.

Additionally, access gained through these techniques can allow hackers to explore other weaknesses in your system. With the knowledge gained through a one-off side channel attack, a more easily repeated exploit could be discovered – one that does not require direct contact with the targeted device. IoT devices are also particularly vulnerable, as they are connected and often exposed to physical contact.

Agile Analog has developed a solution to detect side-channel attacks. They have sets of sensors that are capable of detecting the kinds of effects that occur when clocks or power pins are tampered with. Their side channel attack protection blocks have their own internal LDO and clock generators to ensure they can operate during attacks. The control, analysis and monitoring logic is easy to integrate with SoC security modules.

During the webinar Tim explains the details of how their solution can monitor and report attacks or even attempted attacks. This can include attacks that occur in the supply chain before the device is added to the finished system. This webinar is informative and provides useful information on enhancing SoC security. Here is the REPLAY.

Also read:

CEO Interview: Barry Paterson of Agile Analog

Counter-Measures for Voltage Side-Channel Attacks

Agile Analog Visit at #60DAC


Successful SoC Debug with FPGA Prototyping – It’s Really All About Planning and Good Judgement

Successful SoC Debug with FPGA Prototyping – It’s Really All About Planning and Good Judgement
by Daniel Nenni on 10-21-2021 at 6:00 am

ProtoBridge Debug Blog 181021

Using FPGAs to prototype and debug SoCs as part of the SoC design verification hierarchy was pioneered by Quickturn Design Systems in the late 1980s, and I have observed a wide variety of FPGA prototyping projects over the years. In retrospect, three factors determine the success of an FPGA prototyping project:

  1. A good plan
  2. A proven platform (hardware and software)
  3. Experienced project leadership

This may sound painfully obvious to most, but it deserves respectful consideration – it’s so fundamental to a successful FPGA prototyping experience that it’s worth emphasizing. One of my favorite action movie heroes was once asked in a film how he learned “good judgment” as a top international assassin. The answer was unhesitating and profound:

“Good judgment comes from experience, and most of that comes from bad judgement”

So it is with FPGA prototyping: there is just no substitute for FPGA prototyping experience – together with a good plan and a proven FPGA prototyping platform. Some adventurous souls still build their own FPGA prototyping platforms from scratch with today’s colossal FPGAs. In reality, the “real costs” of a build-your-own platform are frequently underestimated and, in the worst case, can result in a delayed tapeout. It’s instructive to keep in mind that a working FPGA prototype is not the end-goal – the end-goal is working silicon in the shortest time.

A Good Plan starts with involving all the FPGA prototype “stakeholders”: a written test plan, setting expectations, getting buy-in, rationalizing schedules, and practicing disciplined follow-up/follow-through. SoC design debug with FPGA prototypes should be part of a holistic, unified SoC verification plan, specifically purposed to cover those SoC operating cases that are not practical – or even possible – with software simulation or emulation before silicon. The role of FPGA prototyping for design debug should be well defined, with specific verification tasks that can range from early architecture exploration to RTL development, pre-silicon software development, and silicon bring-up. Integration of the FPGA prototyping platform into the SoC design/verification flow is essential for smooth interdisciplinary exchange of SoC design data and verification results. Timely release of the latest SoC design version for the prototyping platform, integration into the bug tracking system, and design-fix feedback protocols all contribute to a smooth SoC verification experience.

The FPGA prototyping platform setup should be tailored to the platform user. A Good Plan should account for frequent FPGA reconfigurations if prototyping is used early in the SoC development process, when design changes occur often: if the FPGAs are programmed at very high resource utilization, timing closure after a design change will take longer than if utilization is limited to easily accommodate the change (and debug probes). Similarly, software developers should not have to contend with a prototype platform that cannot run the most rudimentary firmware and software – they will not want to deal with hardware that doesn’t “work”.

A Proven FPGA Prototyping Platform will definitely increase the chances for a successful FPGA prototyping experience. FPGA prototyping platforms that come ready to deploy with minimal, reliable “assembly” will minimize the prototyping effort and maintenance. This includes proven FPGA hardware and software, integrated ready-to-use debug features, and plug-and-play prototyping platform infrastructure hardware (daughter cards, cables/connectors, etc.). For the past 15 years S2C has focused on building cost-effective, reliable FPGA prototyping platform hardware, with support for Xilinx or Intel FPGAs, to meet the needs of its discerning global prototyping community.

S2C offers its MDM Pro integrated debug capability that provides for tens of thousands of debug probes into the FPGA, probe insertion at FPGA compile-time, debug trigger/trace features, a large off-FPGA debug data storage memory, and the ability to view trace data from multiple FPGAs within a single debug viewing window.

S2C also facilitates the transfer of large amounts of verification data to the FPGA prototype from a host CPU with its integrated ProtoBridge. The user-developed verification data can take the form of a stream of processor transactions, video data, Wi-Fi/Bluetooth radio data, or directed test patterns. ProtoBridge interfaces with the FPGA prototype over Ethernet from the host CPU through an AXI-4 master/slave interface embedded in the FPGA, and transfers data to the FPGA prototype at 4 GB/s using API function calls running on the host CPU.

S2C simplifies quick implementation of the prototyping platform infrastructure with a library of what it refers to as Prototype-Ready IP daughter cards, cables, and connector adapters that support standards-based I/O (PCI, USB, SATA, HDMI, MIPI, GPIO, etc.), adapters for ARM processors (Juno and Zynq), and additional system memory – see the S2C website at http://s2ceda.com/en/product-prototyping-prip.

If you are contemplating an FPGA prototyping project for SoC design development, take a look at S2C’s complete FPGA prototyping solutions – and take the time up-front for some careful thought to a good Plan, choosing a proven platform (like S2C), and experienced prototyping project leadership.

Also Read:

S2C FPGA Prototyping solutions help accelerate 3D visual AI chip

Prototypical II PDF is now available!

StarFive Surpasses Development Goal with the Prodigy Rapid Prototyping System from S2C


Samtec, Otava and Avnet Team Up to Tame 5G Deployment Hurdles

Samtec, Otava and Avnet Team Up to Tame 5G Deployment Hurdles
by Mike Gianfagna on 10-20-2021 at 10:00 am

Samtec Otava and Avnet Team Up to Tame 5G Deployment Hurdles

Everyone is talking about 5G deployment. The promises and the hype are finally turning into reality and products. While excitement is appropriate, victory is not yet in hand. There are still technical hurdles to conquer before the full potential of 5G is realized. In this post, I’ll explore one such challenge – the reliable use of millimeter wave (mmWave) technology for beamforming. You will see how Samtec, Otava and Avnet team up to tame 5G deployment hurdles.

Profile of the Technology

A recent article from Electronic Products does a good job profiling the challenges being addressed by Samtec and Otava. The lead statement in the article summarizes it well:

Unlocking the potential of 5G everywhere with ultra-fast mmWave speeds and low latency requires solving fundamental challenges around range, signal blockers, and proximity to a 5G tower or small cell.

The article goes on to discuss how mmWave frequencies from 24 GHz to 40 GHz hold promise for fast, low-latency 5G networks. The technology does, however, present substantial RF propagation challenges. To realize super-fast 5G network deployment at scale, designers must solve these problems. The deployment challenge can be summarized in one statement: mmWave 5G signals are fragile.

By fragile, I mean they are very short-range. The referenced article explains that, to receive mmWave signals, you need to be within a block or two of a 5G tower with no line-of-sight obstructions. Signals are easily blocked by buildings, walls, windows, and trees. The whole thing can give designers a headache very quickly.  It turns out a promising method to deal with these issues is to use something called beamforming.

If you’re thinking the problem is solved by beamforming, think again. A bit of background is useful for those not following the technology. Beamforming is a technique that focuses a wireless signal at a target as opposed to having the signal radiate in all directions from a broadcast antenna. This results in a more direct connection that is faster and more reliable than it would be without beamforming. The science of beamforming is quite complex. There are many design challenges to be tamed here, and a lot of them are caused by real-world topology. If you want to dig into these challenges, this article from Avnet is a good place to start.

Profile of the Players

With that background out of the way, let’s examine the companies who are collaborating to deliver a solution.

SemiWiki readers should already know about Samtec. They are the company that develops the physical elements of high-performance channels – cables built out of all kinds of materials and connectors with the same broad pedigree. You can catch up on SemiWiki coverage of Samtec here. If you’re trying to build high-performance communication systems (like 5G networks), you will definitely need Samtec’s products, models and technical support.

Otava is a company focused on end-to-end development of technologies used by advanced 5G commercial and DoD applications. The organization’s core competence is phased array system and electronic design. Their portfolio includes transceivers, tunable filters, switches and, you guessed it by now, beamformer technology.

To complete the picture is Avnet, the 900-pound gorilla of distribution and support for a broad range of technology. Both Samtec and Otava are partners of Avnet.

Profile of the Solution

By now, you might be thinking that the only way to tame the challenges of 5G transmission is to develop a beamforming approach that is adapted to the conditions being observed. The ability to prototype various approaches to solve the problem would be very useful. The Otava Beamformer IC Evaluation Kit, available from Avnet, is just what you should be looking for. The kit contains everything you need to prototype various beamforming solutions and test them in a real-world setting. Components in the kit include:

  • Otava Beamformer IC Eval Board
  • MicroZed 7010 Xilinx Zynq SOM
  • Modified MicroZed I/O Carrier Card
  • Two custom cable assemblies from Samtec
  • C# GUI

To Learn More

There is a great video on the Avnet solution page where Samtec and Otava explain the capabilities of the kit, with suggested use models.  You can also learn more about Samtec’s precision RF capabilities here. Matt Burns of Samtec recently wrote a great blog that provides another level of detail on the solutions available from Samtec and Otava, complete with photos of the boards. You can check out this informative blog here. Now you know how Samtec, Otava and Avnet team up to tame 5G deployment hurdles.


It’s all about the Transistors – 57 Billion reasons why Apple/TSMC are crushing it

It’s all about the Transistors – 57 Billion reasons why Apple/TSMC are crushing it
by Robert Maire on 10-20-2021 at 8:00 am

Apple M1

The Apple event today was essentially a reveal of the latest and greatest silicon coming out of the Apple/TSMC partnership and how far ahead of everything else it is. The Mac was simply an aluminum container for the new silicon.

More importantly, the event and the specs of the TWO new chips, the M1 Pro and the M1 Max, demonstrate that the M1 was just the beginning of a large suite of silicon, much as we saw in the iPhone lineup. The newly announced silicon is not the kind of small incremental increase in capability we have seen from Intel’s recent product line announcements, but a Moore’s Law leap of the sort we are used to from past generational silicon changes. It harkens back to the type of performance increases we used to see out of Intel before they stumbled.

Catching an accelerating Apple/TSMC just got harder

We have pointed out in prior articles that Intel catching TSMC is going to be tough, as TSMC really hasn’t stumbled at all. Given the increase over the Apple M1 announced a year ago, it feels as if the Apple/TSMC partnership may be extending its lead, not just keeping pace with innovation.
It may look very ugly if AMD and Intel are fighting over second place with tit-for-tat incremental changes while Apple accelerates away.

Apple has more than proven its chops in silicon design

Much was said about Intel’s legendary “Tick Tock” cadence (no, not the phone app) of alternating design and manufacturing enhancements, in two-year lock step with Moore’s Law.

We have also heard more recently about AMD’s design prowess and thinking outside the box.

Jim Keller, the guru/pied-piper genius of CPU design, has bounced around the industry from DEC to AMD, Tesla, Intel and Apple, sprinkling the fairy dust of magical CPU design far and wide.

Apple has proven that its chip design is neither beginner’s luck nor a one-hit wonder, but rather a deep bench of capability and expertise – and, perhaps most importantly, the ability to work with a manufacturing partner as if they were a seamless “IDM”.

This also underscores that while design is important, it’s manufacturing and Moore’s Law that make the difference… but then again, we and Intel have known this for a very long time.

Apple Silicon helped it win phone wars… Will it win laptop wars?

The Apple “A” series of processors used in its iPhones are the most advanced in the industry by far. They are perhaps the key factor enabling the leading performance, features and battery life that keep Apple products at a price premium and in a leadership position.

As the “M” series of processors rolls out across the spectrum of Mac computers we will likely see a similar performance advantage that will keep Apple computers in the forefront.

We would also remind everyone that all of Apple’s products – its watch, AirPods, speakers, etc. – have smart silicon that differentiates them from otherwise pedestrian versions of similar products.

The Intel doth protest too much….

Intel has been on an anti-Apple PR campaign recently, trying unsuccessfully to dull the impact of Apple’s switch away from Intel. There has been significant blowback and negative reaction to the campaign, which was perhaps intended to head off today’s launch of new Macs and Apple silicon. Unfortunately it just underscores the switch and Apple’s success in the silicon business.

Apple Semiconductor

We suggested a long time ago that perhaps Apple should be considered a semiconductor company, or that it should consider finding a way to monetize its silicon expertise beyond its current product line.

Why wouldn’t it make sense to see Apple chips in servers (running at the very low power that is critical in data centers), or to see Apple sell its chips and expertise to Facebook or “frenemy” Google?

Don’t be surprised if we hear about Apple chips going into Apple’s own vast cloud computing complex. It would be a great proving ground.

The Stocks

The event today was a great demonstration of why Apple is where it is, where it’s going, and why it’s staying ahead.

They obviously very much understand the importance of silicon, both design and manufacturing and are working hard to use it to their advantage.

Investors need to look beyond this being a Mac roll out and more of a demonstration of their underlying technology advantage and commitment which will keep them ahead.

Intel should also see that it has its work cut out for it. Apple and TSMC are fast-moving targets with very deep resources that don’t make a lot of mistakes. They are well managed, with the right underlying strategy, and they understand what makes the markets tick. This is going to be a long, hard struggle over many years.

Everyone just has to remember….

“it’s all about the transistors”

Also Read:

Semiconductors – Limiting Factors; Supply Chain & Talent- Will Limit Stock Upside

ASML is the key to Intel’s Resurrection Just like ASML helped TSMC beat Intel

KLA – Chip process control outgrowing fabrication tools as capacity needs grow