
TSMC 2022 Open Innovation Platform Ecosystem Forum Preview
by Daniel Nenni on 10-14-2022 at 6:00 am


One of my favorite events is just around the corner: the TSMC OIP Ecosystem Forum, held at my favorite Silicon Valley venue, the Santa Clara Convention Center. Nobody knows more about the inner workings of the ecosystem than TSMC, so this is the premier semiconductor collaboration event, absolutely.

In my 40 years as a semiconductor professional I cannot think of a more exciting time for our industry and TSMC is one of the reasons why. The ecosystem they have built is a force of nature that may never be replicated in the semiconductor industry or any other industry for that matter. Hundreds of thousands of people all working together for a common goal of silicon that could change the world!

The guest speaker for the Silicon Valley event will be none other than Jim Keller of Apple, AMD, Tesla, and Intel fame. Jim is an amazing speaker so you definitely do NOT want to miss this one.

REGISTER NOW

Learn About:

  • Emerging advanced node design challenges and corresponding design flows and methodologies for N3/N3E, N4/N4P, N5/N5A, N6/N7, N12e, N22, and 28eF technologies
  • Latest 3DIC chip stacking and advanced packaging processes, and innovative 3DIC design enablement technologies and solutions targeting HPC and mobile applications
  • Updated design solutions for specialty technologies enabling ultra-low-voltage, analog migration, mmWave RF, and automotive designs targeting automotive and IoT applications
  • Ecosystem-specific TSMC reference flow implementations, P&R optimization, machine learning to improve design quality and productivity, and cloud-based design solutions
  • Successful, real-life applications of design technologies and IP solutions from ecosystem members and TSMC customers

For more information on the TSMC OIP Ecosystem Forum, e-mail us at: tsmcevents@tsmc.com.

Here is the agenda as of today:

Plenary Session
08:00 – 09:00  Registration & Ecosystem Pavilion
09:00 – 09:15  Welcome Remarks
09:15 – 10:10  Enabling System Innovation & Guest Speaker
10:10 – 10:30  Coffee Break & Ecosystem Pavilion

TSMC Technical Talks
10:30 – 11:00
  TSMC N3E FinFlex™ Technology: Motivation, Design Challenges, and Solutions (TSMC)
  TSMC 3Dblox™: Unleashing The Ultimate 3DIC Design Productivity (TSMC)
  TSMC Analog Migration Talk (TSMC)

Parallel Tracks: HPC & 3DIC | Mobile & Automotive | IoT, RF & Other
11:00 – 11:30
  HPC & 3DIC: GUC’s 2.5D/3D Chiplets, Interconnect Solutions and Trends (GUC)
  Mobile & Automotive: Analog Design Optimization by Integrating MediaTek’s ML-based Engine within the Virtuoso’s Analog Design Environment (MediaTek / Cadence)
  IoT, RF & Other: Synopsys / Ansys / Keysight mmWave Reference Design Flow on TSMC N16FFC (Synopsys / Ansys / Keysight)
11:30 – 12:00
  HPC & 3DIC: A Unified Approach to 3DIC Power and Thermal Integrity Analysis Through TSMC 3Dblox Architecture and Ansys RedHawk-SC Platform (Ansys)
  Mobile & Automotive: Achieving Best Performance-per-Watt at TSMC’s N2 and N3E Hybrid-Row Process Technology Nodes using Fusion Compiler and the Fusion Design Platform (Synopsys)
  IoT, RF & Other: Breakthrough platform for AIoT markets (Dolphin Design)
12:00 – 13:00  Lunch & Ecosystem Pavilion
13:00 – 13:30
  HPC & 3DIC: SerDes clocking catered to robust noise handling in advanced process technologies for HPC, Datacenter, 5G and AI applications (eTopus Technologies / Siemens EDA)
  Mobile & Automotive: An Accurate and Low-Cost Flow for Aging-Aware Static Timing Analysis (Synopsys / TSMC)
  IoT, RF & Other: Cadence mmWave Solutions Support TSMC N16 Design Reference Flow (Cadence)
13:30 – 14:00
  HPC & 3DIC: Advanced Assembly Verification for TSMC 3DFabric™ Packages (Broadcom / Siemens EDA)
  Mobile & Automotive: Analog Design Migration Flow from TSMC N5/N4 to N3E with Synopsys Case Study (Synopsys)
  IoT, RF & Other: Analysis of Design Timing Effects of Threshold Voltage Mistracking between Cells (Synopsys)
14:00 – 14:30
  HPC & 3DIC: Simplifying Multi-chiplet design with a unified 3D-IC platform solution for 3Dblox technology (Cadence)
  Mobile & Automotive: Low power high density design implementation for AI chip (Hailo Technologies / Siemens EDA)
  IoT, RF & Other: RISC-V is delivering performance and power efficiency from Embedded to Automotive to HPC (SiFive)
14:30 – 15:00
  HPC & 3DIC: Advanced Auto-Routing for TSMC® InFO™ Technologies (Cadence)
  Mobile & Automotive: Reliable compute – taming the soft errors (Arm)
  IoT, RF & Other: TSMC, Microsoft Azure and Siemens EDA Collaboration – Enabling Your Jump to N3E using the Cloud and Calibre nmDRC (Siemens EDA / Microsoft)
15:00 – 15:30  Coffee Break & Ecosystem Pavilion
15:30 – 16:00
  HPC & 3DIC: 3D System Integration and Advanced Packaging for next-generation multi-die system design using Synopsys 3DIC Compiler with TSMC 3DBlox and 3DFabric (Synopsys)
  Mobile & Automotive: Self-testing PLLs for advanced SoCs (Silicon Creations)
  IoT, RF & Other: HPC & Networking Trends Influencing High-Speed SerDes Requirements (Synopsys)
16:00 – 16:30
  HPC & 3DIC: TSMC 3DBlox Simplifies Calibre Verification and Analysis (Siemens EDA)
  Mobile & Automotive: Cadence Cerebrus AI driven design optimization pushes PPA on TSMC 3nm node (Cadence)
  IoT, RF & Other: Integration Methodology of High-End SerDes IP into FPGAs based on Early Technology Model Availability (Achronix / Alphawave IP)
16:30 – 17:00
  HPC & 3DIC: GUC’s GLink case study: Performance and reliability monitoring for heterogeneous packaging, combining deep data with machine learning algorithms (proteanTecs)
  Mobile & Automotive: Kick-off your design success with Automated Migration of Virtuoso Schematics (Cadence)
  IoT, RF & Other: Pinless Clocking and Sensing (Analog Bits)
17:00 – 17:30
  HPC & 3DIC: Achieve 400W Thermal Envelope for AI-Enabled Data Center SoCs – Challenge Accepted (Alchip / Synopsys)
  Mobile & Automotive: Delivering best TSMC 3nm power and performance with Cadence digital full flow (Cadence)
  IoT, RF & Other: Understanding UCIe for Multi-Die Systems Leveraging CoWoS and Substrate Packaging Technologies (Synopsys)
17:30 – 18:30  Networking and Reception

REGISTER NOW

About TSMC

TSMC (TWSE: 2330, NYSE: TSM) created the semiconductor Dedicated IC Foundry business model when it was founded in 1987. In 2021, TSMC served about 535 customers and manufactured more than 12,302 products for various applications covering a variety of end markets including smartphones, high performance computing, the Internet of Things (IoT), automotive, and digital consumer electronics.

Annual capacity of the manufacturing facilities managed by TSMC and its subsidiaries exceeded 13 million 12-inch equivalent wafers in 2021. These facilities include four 12-inch wafer GIGAFAB® fabs, four 8-inch wafer fabs, and one 6-inch wafer fab – all in Taiwan – as well as one 12-inch wafer fab at a wholly owned subsidiary, TSMC Nanjing Company Limited, and two 8-inch wafer fabs at wholly owned subsidiaries, WaferTech in the United States and TSMC China Company Limited.

In December 2021, TSMC established a subsidiary, Japan Advanced Semiconductor Manufacturing, Inc. (JASM), in Kumamoto, Japan. JASM will construct and operate a 12-inch wafer fab, with production targeted to begin by the end of 2024. Meanwhile, the Company continued to execute its plan for an advanced semiconductor fab in Arizona in the United States, with production targeted for 2024. www.tsmc.com

Also Read:

Future Semiconductor Technology Innovations

TSMC 2022 Technology Symposium Review – Advanced Packaging Development

TSMC 2022 Technology Symposium Review – Process Technology Development


The CHIPS and Science Act, Cybersecurity, and Semiconductor Manufacturing
by Simon Butler on 10-13-2022 at 10:00 am


This year is proving to be a momentous one for U.S. semiconductor manufacturing. During a global chip shortage and record inflation, President Biden signed the CHIPS and Science Act into law – so far the greatest boon to U.S. semiconductor manufacturing in history, with $52 billion in subsidies for chip manufacturers to build fabrication plants in the U.S.

The CHIPS Act seems like a green light for domestic manufacturing. However, another piece of legislation passed earlier in the year may be a stumbling block for semiconductor design shops eager to serve national security projects. Enter Executive Order 14028, “Improving the Nation’s Cybersecurity.”

Rolled out several months before the CHIPS Act was signed, this Executive Order defines parameters that will force U.S.-based software companies to change long-established development and design processes if they want to comply with federal regulations regarding information-sharing between the government and the private sector.

Here we examine how these two pieces of legislation relate, what they mean for semiconductor companies, and why the highs and lows of American semiconductor manufacturing boil down to one thing: security.

Protect Your IP
Methodics IPLM is the single source of truth for all your chip IP. See how you can protect your team’s IP from design through verification.

LEARN MORE

The CHIPS and Science Act of 2022

The CHIPS and Science Act of 2022 provides $52 billion in subsidies for chip manufacturers to build fabrication plants in the U.S. For reference, currently only 12% of all semiconductor chips are made in the U.S.

This Act comes amidst a global economic downturn, with lawmakers hoping that American-made chips will solve security and supply chain issues. In short, this is something the U.S. needs in order to reassert its historical influence on semiconductor manufacturing.

Security Considerations

One of the biggest considerations, and benefits, to domestic-made semiconductors is national security. Recent geopolitical instability has caused concern over potential IP leakage and theft. For the U.S. Department of Defense (DoD), it is imperative to have a secure and trusted ecosystem for the design and manufacture of semiconductors. But with most of today’s manufacturing done overseas, the DoD has had major challenges executing its national security-related projects.

The automotive industry is another area that will benefit from a trusted domestic ecosystem and a more resilient supply chain. As we progress towards autonomous vehicles, compromised components could be used by malicious parties to take control of the system and cause damage and injury.

In these cases (and others), it’s clear that there is a need for component and IP provenance, along with geofencing, to reduce the likelihood of security breaches. More competitive and accessible domestic manufacturing can help solve this by keeping sensitive IP within the borders of the U.S.

Executive Order 14028: “Improving the Nation’s Cybersecurity”

The Executive Order on cybersecurity stemmed from recent data breaches, with the aim of patching vulnerabilities in information-sharing between the private sector and the U.S. government. For companies, this means a brighter light will now be shone on security throughout the embedded software development process. For developers, this signifies a greater need to maintain visibility into their code and keep track of any vulnerabilities throughout the lifecycle.

To tackle this, a number of recommendations and requirements have been put forward by this Executive Order, including better-defined processes around cybersecurity incidents, a higher level of awareness around permissions (“Zero Trust”), and the concept of a Software Bill of Materials (SBOM), which should be delivered as part of the software implementation to enable higher levels of traceability and provenance.

This SBOM should enable system integrators to understand their exposure to security concerns in delivered code via documentation of the software versions delivered, their provenance, and the originating supply chain source, all of which allow for better traceability in the design.

The Unified BOM

An SBOM takes the form of a hierarchical tree of components, where each component includes the versioned implementation and important metadata that indicate its state, license, compliance with standards, and other pieces of data. The SBOM should be in a machine-readable format for integration into development and test traceability methodologies.

In short, the SBOM should be a complete manifest of the software delivered with the project and its current state. With the advent of IP-centric design practices in the semiconductor space, we have already seen widespread adoption of the hardware BOM (HBOM) that records the IP component versions that implement an SoC and material metadata.

Since a large portion of today’s SoCs include an embedded software component, this new governmental SBOM requirement suggests SoC developers should be managing a unified platform SBOM/HBOM as part of the development life cycle, and in some cases delivering it with the final product shipment to facilitate traceability and threat detection in the target system integration.
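To make the unified, machine-readable BOM idea concrete, here is a minimal sketch in Python. The field names and example components are hypothetical illustrations only, not a mandated schema; a real deployment would more likely use a standard format such as SPDX or CycloneDX.

```python
# Minimal, hypothetical sketch of a unified SBOM/HBOM node for an SoC.
# Field names and component entries are illustrative only, not a standard schema.
import json

def bom_component(name, version, kind, provenance, children=None, **metadata):
    """Build one node of a hierarchical BOM tree (hardware IP or software)."""
    return {
        "name": name,
        "version": version,
        "kind": kind,                # "hardware-ip" or "software"
        "provenance": provenance,    # originating supplier / source repository
        "metadata": metadata,        # license, compliance status, known CVEs, ...
        "components": children or [],
    }

# A toy SoC manifest combining hardware IP (HBOM) and embedded software (SBOM).
soc_manifest = bom_component(
    "example-soc", "1.2.0", "hardware-ip", "internal",
    children=[
        bom_component("cpu-core-ip", "r3p1", "hardware-ip", "ip-vendor-a",
                      license="commercial", safety_cert="none"),
        bom_component("boot-firmware", "0.9.4", "software", "git://internal/boot",
                      license="proprietary", known_vulnerabilities=[]),
        bom_component("rtos-kernel", "5.10", "software", "upstream-project",
                      license="GPL-2.0", known_vulnerabilities=["CVE-XXXX-YYYY"]),
    ],
)

# Machine-readable output that downstream traceability tooling could consume.
print(json.dumps(soc_manifest, indent=2))
```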

The “Unified” BOM: A Complete Software/Hardware Manifest

The U.S. government has started two important initiatives with the CHIPS and Science Act and Executive Order 14028. The CHIPS Act will revitalize U.S.-based semiconductor manufacturing to secure the domestic semiconductor supply chain and mitigate concerns with national security related designs, while Executive Order 14028 enforces software development practices that reduce the likelihood of cyberattacks.

Software needs hardware to run and understanding the interdependence of software and hardware is important. By applying the SBOM mandate to the entire system on a chip (SoC) manifest with a unified software/hardware BOM, we can help to ensure that the best practices outlined in the Executive Order will be applied to the entire component tree for a given SoC.

This is something that many companies have started to adopt anyway, independent of any government initiative. However, Executive Order 14028 now mandates it as a requirement for engaging in DoD software development projects. One could argue that without a complete BOM to reflect the full set of software and hardware components in an SoC, we’re not fully addressing provenance and security issues in the design.

Wrap-up: Improving Cybersecurity Through Secured Supply Chain

In summary, the hope is that the $52 billion CHIPS Act will help mitigate the supply chain bottleneck plaguing the semiconductor industry. By combining secure manufacturing with secure development best practices, we have a much higher likelihood of improving our semiconductor supply chain and providing a trusted source of components for our national security projects.

Leverage CHIPS Funding With Methodics IPLM

Methodics IPLM provides a scalable IP lifecycle management platform that tracks IP and its metadata across projects, providing end-to-end traceability and facilitating IP reuse. With a tool such as Methodics IPLM in hand, companies can set up the infrastructure called for by the CHIPS Act and smooth the transition to state-of-the-art U.S.-based semiconductor manufacturing.

Connect with Perforce IP experts to learn more about Methodics.

CONNECT WITH US

Originally published on Perforce.com blog.

Also Read:

Solve Embedded Development Challenges With IP-Centric Planning

WEBINAR: How to Improve IP Quality for Compliance

Future of Semiconductor Design: 2022 Predictions and Trends


VeriSilicon’s AI-ISP Breaks the Limits of Traditional Computer Vision Technologies
by Kalar Rajendiran on 10-13-2022 at 10:00 am


The tremendous growth in edge devices has focused the spotlight on Edge-AI processing for low-latency, low-power and low-DDR-bandwidth compute needs. Many of these Edge-AI applications depend on effective and efficient processing of image and video streams, which in turn relies on computer vision technology. In early September, VeriSilicon announced the launch of AI-ISP, an innovative AI image enhancement technology that the company claims can surpass what traditional computer vision technologies offer. The company credited its Glass-to-Glass (from camera-in to display-out) intelligent pixel processing IP portfolio and its innovative FLEXA™ IP interconnection technology for this achievement. This blog will look into the nuts and bolts behind that claim.

About VeriSilicon

Many may already be familiar with VeriSilicon, but a refresher will serve well as a backdrop for this blog. Since its start more than two decades ago as a design services and turnkey services provider, VeriSilicon has expanded and evolved considerably. The company is committed to providing customers one-stop custom silicon solutions through its silicon services and the licensing of its in-house semiconductor IP. Customers benefit from its “Silicon Platform as a Service” (SiPaaS®) model, which enables design efficiencies and higher quality while lowering product risk and development costs. VeriSilicon can create custom silicon products from definition through test and packaging within short cycle times.

The company has delivered a variety of custom silicon solutions supporting applications such as high-definition audio, video, high-end processing, video surveillance, IoT connectivity, smart wearable, and many others. It leverages an in-house IP portfolio of more than 1,400 analog and mixed-signal IPs and RF IPs along with processor IPs. Its processor IPs fall into the following main types: GPU, NPU, VPU, DSP, ISP and Display Processor, plus VeriSilicon FLEXA™ IP fusion technology.

AI-ISP Technology

Under its platform model, VeriSilicon continues to fuse multiple technologies to address industry challenges by breaking the limits of traditional approaches. The VeriSilicon AI-ISP technology is a result of such a push to support the Edge-AI processing domain. The technology combines VeriSilicon’s Neural Network Processing Unit (NPU) technology with its Image Signal Processing (ISP) technology to deliver innovative image quality enhancement for computer vision. The AI-ISP is built on an intelligent workload-balancing architecture that optimizes power consumption and memory access. It is built for applications that demand ultra-low power consumption under near-zero illuminance conditions. VeriSilicon’s AI-ISP can be leveraged to benefit smartphones, automotive electronics, surveillance camera systems, and the Industrial Internet of Things (IIoT), among many other applications.

AI-ISP Leverages Already Proven Technologies

VeriSilicon develops its various IPs with the SiPaaS model in mind. Its various IP technologies support each other to deliver enhanced results. For example, its Image Signal Processing (ISP) IP focuses the target area to obtain a clearer image and sets things up for its Neural Network Processing Unit (NPU) to perform detection and recognition functions. Conversely, its NPU is capable of performing dark-light enhancement and noise reduction during ISP processing, for further enhancement of image quality. Following are the underlying technologies that the AI-ISP offering leverages.

VeriSilicon FLEXA™

VeriSilicon’s FLEXA™ is an innovative, low-power and low latency interface communication technology that allows ISPs to read, write and access data directly from the NPU. The FLEXA API is built around a hardware and software protocol that enables efficient data communication between multiple pixel processing IP blocks. Systems built with FLEXA compliant IPs can leverage the API to run AI applications to reduce DDR traffic and achieve low pixel processing latencies.

Image Signal Processing (ISP) Technology

VeriSilicon’s ISP technology is already market proven through customer adoption of various cores from its ISP product portfolio.

It’s worth mentioning that VeriSilicon’s ISP8000L-FS V5.0.0 has been certified to both the ISO 26262 and IEC 61508 functional safety standards, the company’s first IP to align with dual international functional safety standards and a significant milestone in the expansion of its functional safety IP portfolio. The ISP8000L-FS V5.0.0 is designed for advanced, high-performance camera-based applications and supports dual cameras with single 4K@60fps or dual 4K@30fps video capture. It also integrates HDR (High Dynamic Range) processing, 2D/3D noise reduction technologies, and built-in functional safety mechanisms. Adopting the certified ISP IP will help customers accelerate their product development process with reduced risk of systematic and random hardware failures in safety-critical automotive and industrial applications.

To read the press announcement, go here.

Neural Network Processor (NPU) Technology

VeriSilicon’s NPU technology is already market proven through customer adoption of various cores from its NPU product portfolio. It incorporates self-adaptive resolution calculation and multi-frame fusion functions, as well as excellent noise reduction performance even in low-light conditions. The technology comes with a complete software stack and software development kit (SDK) that supports deep learning frameworks including TensorFlow, PyTorch, ONNX, TVM, and IREE. For the specific cores that support various applications from IoT and wearables to automotive and data centers, refer to the figure below.

Summary

As a SiPaaS company, VeriSilicon continues to bring valuable IP cores and integration services to benefit its customer base. Customers are enabled to implement efficient, low-power integrated solutions that can perform beyond the limitations of traditional approaches. Its customer base covers consumer electronics, automotive, computer and peripheral, data processing, IoT and other applications.

To learn more about VeriSilicon, visit their website.

To read the press announcement about AI-ISP, go here.


Semiconductor China Syndrome Meltdown and Mayhem
by Robert Maire on 10-13-2022 at 6:00 am


-Commerce Dept drops a 100 page nuke on the Semi industry
-Many words but not a lot of clarity on exact impact
-Implementation & interpretation will be key to quantify impact
-It’s all bad, just a question of how bad

China is the industry’s biggest customer

We all know that China uses most of the world’s semiconductors but certainly does not produce enough internally to satiate that need. China buys $300B+ of semiconductors and is the world’s largest buyer of semiconductor equipment.

On the equipment side, China produces very little equipment and imports the vast majority from the US, Japan, the Netherlands, Korea and other countries.
Restricting China’s access has a huge impact on the global market in both directions, akin to an oil embargo but even worse.

It is also akin to going from an economic cold war to a hot, live war.
It is also unclear how much is real and how much is posturing, much like Russia’s posturing on nuclear weapons.

It could be the US sending a message to China regarding Taiwan, that it’s not kidding and will take off the gloves to prove it.

Cutting China out of the semiconductor industry is a lot like cutting Russia out of the global economy.

What reaction does this provoke? An escalation or negotiation?

Much of China’s prosperity is linked to both doing business with the US as well as producing increasingly technical goods with advanced semiconductors and this could attack both.

Does China get pissed off and decide to encircle Taiwan? Does China back off and perhaps not support Russia as much? Does that even matter if the US is serious about crippling China’s efforts in semiconductors?

In our view we don’t see the US backing away but the US could modulate how strictly it enforces potential restrictions depending upon China’s reaction.

It’s open to implementation

The commerce department document issued on Friday is huge, 100 pages long. It says a lot but there is a lot more that it doesn’t say and it is unclear in many instances how to interpret what is said.

Link to Commerce China Semiconductor Document

Back to the future

As we have pointed out many, many times in past notes, licensing of semiconductor technology is not new at all. We recall over 20 years ago, when we worked on China’s SMIC IPO, that China was restricted to N-2 in technology (staying two technology nodes behind). Over time that restriction faded away, but China remains roughly two or more nodes behind just due to market forces and speed.

The US government is just putting into rules what already exists in the market today and what existed in the past…this is far from the earthquake that investors and analysts imply it is. However, the definitive announcement is what has made the difference in perception.

14nm is a very fuzzy line – it’s not a binary decision

Semiconductor equipment that can be used to make 14nm chips is a very broad definition….

It can include equipment and technology decades old, or the definition could be limited to litho-related equipment that defines line width.
The implementation could be limited to process equipment which actually makes the 14nm lines, or grow to include the metrology and inspection equipment which measures and controls the process tools.

It is unclear from the document how far and wide a net the Commerce Department will cast in covering both equipment and the chips themselves.
In building a supercomputer it’s not just advanced CPUs and GPUs that are needed but lots of generic glue that binds the whole system together.

Thus the Department of Commerce has an infinite amount of latitude and discretion to implement the new rules, and we have no idea on which end of the spectrum they will come down.

The stock market hates uncertainty and we have a huge amount

Perhaps the main reason that the semiconductor stocks have cratered so badly is the amount of uncertainty caused by Friday’s release.

We simply don’t know and won’t know for quite a while how this will be implemented. Anyone who says they know the exact impact on the industry is lying. We can speculate as to the range of potential impact but that could range from little to no impact all the way to a virtual ban on doing any business at all with China in the semiconductor industry, and everything in between. Somewhere in between is obviously the right guess.

We just won’t know, and neither will the companies know, until licenses are either approved or denied, so it’s going to take months to get a handle on the actual impact. The only thing we know is that it will be negative but not how negative.

Meanwhile, while we are in the throes of uncertainty the stocks will behave badly as we have already seen.

This should not be a surprise as it was long in coming

We have been both writing about and publicly speaking about the US versus China in the semiconductor industry for 7-8 years now, longer than anyone we know of, and writing more about it. We were perhaps a little ahead of reality, but it’s not like this issue has sprung up out of nowhere. Sooner or later the US had to do something or watch China eat its lunch in yet another critical industry, with semiconductors perhaps being the most critical industry of all given the defense and intelligence aspects.

This does not force China’s hand on Taiwan

As we have said before, we are sure that China well understands that taking Taiwan by force would result in the decimation of semiconductor capacity there and would be a very hollow victory. China has no choice but to peacefully embrace Taiwan. If not, the fabs would stop working in a few weeks due to lack of critical support from foreign equipment vendors, and that is assuming the fabs were captured undamaged and satchel charges of C4 were not left behind under $150M litho tools.

So we don’t see this new rulemaking pushing any agenda on Taiwan. The status quo still exists.

The stocks

We have been clearly negative on the group for over the past 9 months and have cited China as an additional risk on top of the general economic risk. All the risk factors are far from over. We have not bottomed in the economy or the semiconductor cycle. The semiconductor down cycle just started a few months ago. We won’t know the impact of this new Department of Commerce China policy for several months and quarters of reporting.

The only assumption we can make is negative.

As we have also said before, we would avoid value traps…. “the stock is down XX% and too cheap to ignore” or similar refrains.

We have no visibility on a turn and certainly no turn in fortunes in what’s left of 2022. 2023 certainly looks like a significantly down year versus 2022, but we don’t yet have a handle on how far down.

Part of the issue remains that the semiconductor industry has been so strong for so long that it may take time for investors to adjust to something other than just a blip of bad news….this is clearly more than a blip…..

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Micron and Memory – Slamming on brakes after going off the cliff without skidmarks

The Semiconductor Cycle Snowballs Down the Food Chain – Gravitational Cognizance

KLAC same triple threat headwinds Supply, Economy & China


Podcast EP112: How Cadence is Revolutionizing Full-Chip Signoff with Certus
by Daniel Nenni on 10-12-2022 at 10:00 am

Dan is joined by Brandon Bautz, Sr. Group Director of Product Management, responsible for the Cadence silicon signoff and verification product lines in the Digital & Signoff Group.

Dan and Brandon explore the substantial challenges faced by design teams needing to perform full-chip signoff at an accelerated pace for advanced nodes. Brandon details the unique capabilities of Certus and how it addresses the challenges customers face, making full-chip signoff far more efficient and predictable.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Measuring Success in Semiconductor Design Optimization: What Metrics Matter?
by Kalar Rajendiran on 10-12-2022 at 6:00 am


When it comes to electronic design automation (EDA), there are two aspects to this technologically challenging and highly competitive field. First, there is the task of designing very complex chips, for which a full suite of software tools is needed. Then there is the task of managing extremely complex EDA workflows and valuable licenses to maximize productivity and quality. With chip development teams running tens of millions of compute jobs every day in a race to beat aggressive time-to-market schedules, workflow management takes on greater importance. Achieving profitability boils down to optimal use of compute-hardware resources and valuable software licenses. Gaining market share boils down to bringing a high-quality product to market ahead of the competition.

Altair possesses deep knowledge and expertise in EDA workflow management and is a fast growing leader in this space. It recently hosted a round table discussion among a few of its own experts to spotlight the various value-added functionality delivered by various Altair products. The panel was represented by Dr. Rosemary Francis, chief scientist for high performance computing (HPC), Ketan Kulkarni, team leader for supportive services, and Stuart Taylor, senior director for product management for Altair Accelerator products. The session was moderated by Peter Dobzsai, head of sales operations for their EMEA region. The panelists discussed the metrics that leading semiconductor companies consider in their optimization strategies on chip development projects. The following is a synthesis of that round table discussion.

Why should customers choose Altair for EDA solutions?

The tools are very specifically designed for EDA’s complex workflow requirements, based on Altair’s deep EDA insights and expertise. They are very easy to set up and run, and customers love the simplicity, flexibility and power behind Altair’s products. The tools include Accelerator, Monitor, and all the other workflow management tools that are part of the full Altair EDA suite. Altair solutions are so widely used that, to quote an example, at least some part of almost every cell phone is designed using Altair EDA solutions.

What are some key Altair tools for EDA workflow management?

The Altair Accelerator provides high-throughput scheduling for short and very fast EDA workloads. The Rapid Scaling feature is a core component of Accelerator, leveraged heavily for cloud bursting. The Altair License Monitor provides a historical as well as real-time view of licenses in use and comes in handy for optimizing the use of valuable licenses. The Altair FlowTracer helps automate EDA workflows, making it easy for new engineers to start running flows and iterating within a few days of onboarding. There are other productivity tools as well, such as Altair Breeze, an application discovery tool. Breeze collects application dependencies, which is very important when troubleshooting issues. Other tools are also available to help monitor CPU, memory and I/O for optimizing EDA workflow management.

Why is the Altair Accelerator a cut above the competition?

Accelerator Scheduler is simpler than many of the solutions out there.  While it may take 15 seconds to a minute for a typical scheduler to schedule a job after resources become available, it takes milliseconds with the Altair Accelerator.

When using other schedulers, companies find 50% license utilization levels at the same time they notice jobs waiting for licenses. Based on these statistics, companies end up procuring twice the number of licenses actually needed. Accelerator helps manage software licenses more efficiently by handing them back when not needed and tapping licenses only when needed.

Another important difference is that the Accelerator checks upfront for license availability before it schedules a job that will lock up hardware resources. This aspect has beneficial cost implications when running jobs on the cloud using on-demand expensive compute resources.
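To illustrate why checking license availability before committing hardware matters, here is a minimal conceptual sketch in Python. It is a toy model, not the Altair Accelerator API; the class and function names are invented for this example.

```python
# Toy illustration of license-aware job dispatch (not the Altair Accelerator API).

class LicensePool:
    def __init__(self, total):
        self.total = total
        self.in_use = 0

    def available(self, count=1):
        return self.total - self.in_use >= count

    def checkout(self, count=1):
        assert self.available(count)
        self.in_use += count

    def release(self, count=1):
        self.in_use -= count

def dispatch(jobs, pools, free_slots):
    """Start a job only if both a compute slot and its license are free,
    so expensive hardware is never locked up waiting on a license."""
    started, deferred = [], []
    for job in jobs:
        pool = pools[job["license"]]
        if free_slots > 0 and pool.available():
            pool.checkout()
            free_slots -= 1
            started.append(job["name"])
        else:
            deferred.append(job["name"])   # job waits; no compute slot is consumed
    return started, deferred

# Example: 3 compute slots, 2 simulator licenses, 4 queued jobs.
pools = {"sim": LicensePool(total=2)}
jobs = [{"name": f"job{i}", "license": "sim"} for i in range(4)]
print(dispatch(jobs, pools, free_slots=3))
# -> (['job0', 'job1'], ['job2', 'job3'])
```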

Altair also provides a lot of interfaces to Accelerator and Monitor. Without such interfaces, it may not be obvious when secondary licenses, which are needed to run some jobs, are in use. This functionality makes it easier for system administrators and end users to know the license usage and which jobs are using which licenses.

How does Altair offering compare against other tools out in the market?

What customers have learned is that average license utilization levels of 35% are not uncommon even for a frequently used tool license. When the Altair Accelerator/Scheduler is used, customers see an immediate 20%-30% boost in license utilization levels. Taking it from 60% to 90% utilization requires concerted effort from system administrators and CAD teams but is certainly achievable; Altair customers have been able to achieve 97%-98% utilization levels. Higher utilization of valuable licenses enables more iterations within a shorter time, faster time to market and a better quality, higher-yielding product. The Altair FlowTracer enables reproducibility of a workflow, which is important for propagating optimized flows to other teams within an organization.

Overall, Altair’s products also help enhance engineering productivity by maximizing the use of engineers’ time on the right flow and right tools.

How does a new customer get started with a few EDA workflows?

Altair is skilled in workflow management, not chip design.  If a customer can describe their workflow, Altair’s consulting team can help encode it into the FlowTracer. Once that is done, the FlowTracer encapsulates the customer’s EDA flow. It may sound like a big investment but once done the payoff is huge.

Are there Altair tools to help make a move to Cloud Bursting?

Yes, there are profiling and analysis tools that can be run on the flow meant for the cloud. This process identifies the resources needed to execute a design. The next step is to benchmark these cloud resources to make sure the workflow will run cost-efficiently. Once these steps are completed, the flow is established and ready for engineers to run as and when cloud bursting is called for.

You can listen to the full round table discussion from here. Anyone designing semiconductor chips can benefit from Altair’s suite of EDA workflow management tools. For more details, visit Altair’s website.

Also Read:

Load-Managing Verification Hardware Acceleration in the Cloud

Altair at #59DAC with the Concept Engineering Acquisition

Future.HPC is Coming!


Microchips in Humans: Consumer-Friendly App, or New Frontier in Surveillance?
by Ahmed Banafa on 10-11-2022 at 10:00 am


In 2021, a British/Polish firm known as Walletmor announced that it had become the first company to sell implantable payment microchips to everyday consumers. While the first microchip was implanted into a human way back in 1998, says the BBC News—so long ago it might as well be the Dark Ages in the world of computing—it is only recently that the technology has become commercially available (Latham 2022). People are voluntarily having these chips—technically known as “radio frequency identification chips” (#RFIDs)—injected under their skin, because these microscopic chips of silicon allow them to pay for purchases at a brick-and-mortar store just by hovering their hand over a scanner at a checkout counter, entirely skipping the use of any kind of a credit card, debit card, or cell phone app.

While many people may initially recoil from the idea of having a #microchip inserted into their body, a 2021 survey of more than 4,000 people in Europe found that more than 51 percent of respondents said that they would consider this latest form of contactless payment for everything from buying a subway Metro card to using it in place of the key fob to unlock a car door. (Marqeta/Consult Hyperion 2021).

In some ways, the use of RFID chips in this manner is merely an extension of what has been going on before; the chips are already widely used among pet-owners to identify their pet when it is lost. The chips come in many sizes and versions and are far more common than most consumers realize—they are sometimes sewn into articles of clothing so that retailers can monitor the buying habits of their customers long after a purchase has been made. And Apple has now come out with its button-sized tracking tags, which it dubs “AirTags”: Clip one onto your keys, and the AirTag will help you find where you accidentally dropped them—as well as making it simple to track anyone, said the Washington Post in “Apple’s AirTag trackers made it frighteningly easy to ‘stalk’ me in a test” (Fowler 2021). All for less than $30 per AirTag.

So, to some extent, human-machine products and the use of RFID chips is old hat; the underlying driver has always been the goal of expanding the abilities and powers of humans by making certain tasks easier and less time-consuming.

Consequently, such consumer technology can look like the next logical step—especially among those who already favor piercings and tattoos. But on second glance, the insertion of identifying microchips in humans would also seem to bear the seeds of a particularly intrusive form of surveillance, especially at a time when authorities in some parts of the world have been forcibly collecting DNA and other biological data—including blood samples, fingerprints, voice recordings, iris scans, and other unique identifiers—from all their citizens, in an extreme form of the #surveillance state. Before deciding what to think of the tech, we ought to look under the hood, and find out more about some of the nuts and bolts of this hybrid human-machine technology.

Read the full article at this link: https://thebulletin.org/premium/2022-09/microchips-in-humans-consumer-friendly-app-or-new-frontier-in-surveillance/

Also Read:

Intellectual Abilities of Artificial Intelligence (AI)

The Metaverse: Myths and Facts

Quantum Computing Trends


Where Are EUV Doses Headed?
by Fred Chen on 10-11-2022 at 6:00 am


In spite of increasing usage of EUV lithography, stochastic defects have not gone away. What’s becoming clearer is that EUV doses must be managed to minimize the impact from such defects. The 2022 edition of the International Roadmap for Devices and Systems has updated its Lithography portion [1]. It reveals an upward dose trend with decreasing feature size (Figure 1).

Figure 1. Increasing EUV doses are projected by the IRDS 2022 Lithography Chapter for decreasing diameter. These plotted doses give photon numbers of 4000-7000 within +/-5% CD of the edge. The photon number is decreasing with decreasing diameter.
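For a rough sense of how dose translates into photon counts, a back-of-the-envelope calculation can be sketched as follows. The dose, feature diameter, CD band and absorption fraction below are assumed illustration values, not the IRDS inputs behind Figure 1.

```python
import math

# Back-of-the-envelope EUV photon counting (illustrative values, not IRDS data).
H = 6.626e-34          # Planck constant, J*s
C = 2.998e8            # speed of light, m/s
WAVELENGTH = 13.5e-9   # EUV wavelength, m
E_PHOTON = H * C / WAVELENGTH          # ~1.47e-17 J (~92 eV)

def incident_photons_per_nm2(dose_mj_cm2):
    """Incident EUV photons per nm^2 for a given dose in mJ/cm^2."""
    joules_per_nm2 = dose_mj_cm2 * 1e-3 / 1e14   # 1 cm^2 = 1e14 nm^2
    return joules_per_nm2 / E_PHOTON

def photons_near_edge(dose_mj_cm2, diameter_nm, cd_band=0.05, absorbed_fraction=0.25):
    """Photons absorbed within +/- cd_band of the edge of a circular contact.
    The 25% absorbed fraction is an assumed resist absorption, not a measured value."""
    r = diameter_nm / 2.0
    annulus_area = math.pi * ((r * (1 + cd_band)) ** 2 - (r * (1 - cd_band)) ** 2)
    return incident_photons_per_nm2(dose_mj_cm2) * annulus_area * absorbed_fraction

print(round(incident_photons_per_nm2(60)))            # ~41 incident photons/nm^2 at 60 mJ/cm^2
print(round(photons_near_edge(60, diameter_nm=20)))   # edge-band photons for a 20 nm contact
```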

The occurrence of stochastic defects actually defines an EUV dose window [2]. The consequences of going outside this window are shown in Figure 2.

Figure 2. 40 nm pitch contact holes have dose windows defined by the occurrence of stochastics. Too low a dose (left) results in insufficient photon absorption within the target circular area (example: encircled blue spots). Too high a dose (right) results in narrow gaps between features in which bridges (encircled adjacent pixels partly filled with orange) may form due to excessive photon absorption. The pixel size is 1 nm x 1 nm.

Too low a dose results in too few photons absorbed which leads to underexposure-type defects, such as missing, misshapen or undersized contacts. On the other hand, too high a dose results in overexposure-type defects, where gaps between exposed areas are accidentally bridged. From a multitude of studies on this topic, it is understood that the occurrence of defects is minimized (if not completely eliminated) within some range in between the two limits. We may expect that this dose window will shift toward higher values as feature sizes shrink.

The trend toward higher doses will obviously drive source power toward higher targets [3]. However, even at 500 W, doses going over 100 mJ/cm2 will drive throughput below 100 wafers per hour (Figure 3).
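The shape of these curves can be captured with a simple scaling model: exposure time per wafer grows with dose and shrinks with source power, on top of a fixed per-wafer overhead. The overhead and scan constant below are assumed placeholders chosen only to illustrate the trend, not calibrated scanner data.

```python
# Toy throughput model: exposure time per wafer scales as dose/power, plus fixed overhead.
# The constants are illustrative assumptions, not calibrated scanner parameters.

def wafers_per_hour(dose_mj_cm2, source_power_w,
                    overhead_s_per_wafer=10.0, scan_constant=150.0):
    """scan_constant lumps optics transmission, field count, and stage dynamics
    into a single assumed factor with units of W*s per (mJ/cm^2)."""
    exposure_s = scan_constant * dose_mj_cm2 / source_power_w
    return 3600.0 / (overhead_s_per_wafer + exposure_s)

# Throughput falls steeply as dose rises, even at an assumed 500 W source.
for dose in (30, 60, 100):
    print(dose, round(wafers_per_hour(dose, source_power_w=500), 1))
```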

Figure 3. Throughput vs dose, as a function of source power. The calibration is based on Fig. 15 from Ref. 3.

Increasing source power is also an issue for environmental impact. EUV machines already consume over a MW each [4]. In order to be able to pass more wafers per day through each machine, multipatterning may have to be considered [5]. Lower doses would be OK for larger exposed features, but these then need a post-litho shrink and have to be packed successively into the tighter pitches, as already practiced with DUV lithography.

References

[1] https://irds.ieee.org/editions/2022/irds%E2%84%A2-2022-lithography

[2] J. van Schoot et al., “High-NA EUVL exposure tool: key advantages and program status,” Proc. SPIE 11854, 1185403 (2021).

[3] H. Levinson, “High-NA EUV lithography: current status and outlook for the future,” Jpn. J. Appl. Phys. 61 SD0803 (2022).

[4] P. van Gerven, https://bits-chips.nl/artikel/hyper-na-after-high-na-asml-cto-van-den-brink-isnt-convinced/

[5] A. Raley et al., “Outlook for high-NA EUV patterning: a holistic patterning approach to address upcoming challenges,” Proc. SPIE 12056, 120560A (2022).

This article first appeared in LinkedIn Pulse: Where are EUV Doses Headed?

Also Read:

Application-Specific Lithography: 5nm Node Gate Patterning

Spot Pairs for Measurement of Secondary Electron Blur in EUV and E-beam Resists

EUV’s Pupil Fill and Resist Limitations at 3nm


The Increasing Gaps in PLM Systems with Handling Electronics
by Rahul Razdan on 10-10-2022 at 6:00 am


Product Lifecycle Management (PLM) systems have shown incredible value for integrating the enterprise with a single view of the product design, deployment, maintenance, and end-of-life processes. PLM systems have traditionally grown from the mechanical design space, and this still forms their strength.

Meanwhile, due to the revolution in semiconductors, electronics systems have become increasingly integrated within system designs in nearly all industrial segments. To date, PLM systems have handled electronics systems largely as pseudo-mechanical components. However, with the rapid increase in electronic value (example: over 40% of automotive cost), this treatment of electronics within PLM systems is breaking down the fundamental value of PLM for their customers. This article outlines the increasing gaps created by electronics in PLM systems, and the nature of the required solutions.

What is the value of PLM?

Figure 1:  PLM

Most product development teams use PLM systems from companies such as PTC, Siemens, Dassault, Zuken, Aras and others to integrate major functions of the enterprise (Figure 1). Underlying technologies of data vaulting, structured workflows, collaboration, and analytics provide a coherent view of the state of a project, and the significant value delivered is a streamlined product development and lifecycle management capability. PLM infrastructure intersects with design through domain-specific design tools (mechanical, electronic, software, and more). The semantic understanding of any underlying data held by PLM is actually contained in these domain-specific design tools. All the significant parts of the enterprise (design, manufacturing, field, product definition) use these domain-specific tools to interact with the underlying PLM data.

System PCB Electronic Design:

Figure 2:  Electronics Design Process for System PCB customers in non-consumer markets

 

The economics of semiconductor design imply that custom semiconductors only make sense for markets with high volume. Today, this consists largely of the consumer (cell phone, laptop, tablet, cloud, etc.) marketplace. In the consumer marketplace, a co-design model spanning semiconductor and system has evolved, and this model is well supported by the Electronic Design Automation (EDA) industry. Given the size of the markets involved, these projects are also typically very well resourced. However, for every other market, the electronics design flow follows the pattern shown in Figure 2.

In this non-consumer electronics flow, the electronic design steps consist of the following stages:

  1. System Design:  In this phase, a senior system designer is mapping their idea of function to key electronics components.  In picking these key components, the system designer is often making these choices with the following considerations:
    1. Do these components conform to any certification requirements in my application?
    2. Is there a software (SW) ecosystem which provides so much value that I must pick hardware (HW) components in a specific software architecture?
    3. Are there AI/ML components which are critical to my application which imply choice of an optimal HW and SW stack most suited for my end application?
    4. Do these components fit in my operational domain of space, power, and performance at a feasibility level of analysis?
    5. Observation: This stage of design determines the vast majority of immediate and lifecycle cost. This stage is the critical selection point for semiconductor systems.
    6. Today, this stage of design is largely unstructured, relying on generic personal productivity tools such as Excel, Word, PDF (for reading 200+ page data sheets), and of course Google search. Within PLM, at best the raw data in the form of text is stored.
  2. System Implementation:  In this phase,  the key components from the system design phase must be refined into a physical PCB design.  Typically driven by electrical engineers (vs system engineers) within the organization or sourced by external design services companies,  this stage of design has the following considerations:
    1. PCB Plumbing: Combining the requirements from the key components with the external-facing aspects of the PCB is the key job at this stage of design. This often involves a physical layout of the PCB, defining the power and clock architecture, and any signal-level electrical work (high speed, EMR, and more). This phase also involves part selection, but typically of a low-complexity (microcontrollers) and analog nature.
    2. PCB Plumbing Support: Today, this stage of design is reasonably well supported by the physical design, signal integrity, and electrical simulation tools from traditional EDA vendors such as Cadence, Zuken and Mentor Graphics. Part selection is also reasonably well supported by web interfaces from companies such as Mouser and Digikey. Also, PLM systems do a decent job of capturing and tracking these components as part of the Bill of Materials (BOM). While the design intent is not necessarily captured, the range of analysis is limited (to plumbing) and can be recreated by another competent electrical engineer.
    3. Bootup Architecture: As the physical design is being put together, a bootup architecture for the system is defined. This typically proceeds through a series of stages starting with electrical stability (DC_OK) on power-up, self-test processes for the component chips, microcontroller/FPGA programming from non-volatile memory sources, and finally the booting of a live operating system (a simplified sketch of this staged sequence follows the list). Typically connected to this work is a large range of tools to help debug the PCB (memory lookup, injection of bus instructions, etc.). The combination of all of these capabilities is referred to as the Board Support Package (BSP). BSPs must span all the abstraction levels of the system PCB, so today they are often “cobbled” together from a base of tools, with the information sitting dynamically on various disparate websites. Today, PLM systems may or may not capture the broad design chain implied by BSP systems. Also, BSP components move at the rate of SW and must be managed within that operational domain.
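The sketch below restates that staged bring-up sequence in Python purely to make the ordering and gating explicit. The stage names and check functions are hypothetical; a real BSP spreads this logic across firmware, debug tools, and vendor utilities.

```python
# Illustrative bring-up sequence for a system PCB (hypothetical stage/check names,
# not any specific vendor's Board Support Package).

BOOT_STAGES = [
    ("DC_OK",         "Power rails stable after power-up"),
    ("SELF_TEST",     "Component self-test (BIST) passes"),
    ("PROGRAM_LOGIC", "Microcontroller/FPGA images loaded from non-volatile memory"),
    ("BOOT_OS",       "Live operating system boots"),
]

def bring_up(checks):
    """Run each stage's check in order; stop at the first failure so the board
    can be debugged at that abstraction level (memory lookup, bus injection, ...)."""
    for stage, description in BOOT_STAGES:
        if not checks[stage]():
            return f"FAILED at {stage}: {description}"
    return "Board up: OS running"

# Example run with stubbed checks (all passing).
print(bring_up({stage: (lambda: True) for stage, _ in BOOT_STAGES}))
```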

Overall, from a PLM point of view, the most critical parts of the current electronics design flow are handled in an unstructured design process. This is a problem for all products, but especially for the class of non-consumer system designs with long lifecycle properties. Let’s discuss these now…

Electronics LLC Markets:

Long Lifecycle (LLC) products are typically defined as products with an expected “active” life in excess of 5 years. LLC markets include Aerospace and Defense (A&D), Energy, Industrial, Medical Devices, Transportation Infrastructure and more.  Even mid-volume markets such as networking and auto exhibit LLC properties. Table 1 below outlines the critical differences between short and long cycle products.

Short Lifecycle Products (SLC) | Long Lifecycle Products (LLC)
Useful life 1-2 years | Useful life 5+ years
Short warranty model | Significant maintenance commitment
Fast technology adoption/transition/disposal | Slow technology adoption/transition/disposal
Focus on time-to-market, performance, features, price | Focus on lifetime revenue, reliability, supply chain
Maintenance = replacement | Maintenance = repair

 

Table 1:  Market Segmentation Comparison

From a design point of view, this manifests itself in three specific additional requirements:

  1. Obsolescence:  Consumer market activity/churn can often lead to dramatic dropoff in demand for particular semiconductors. The result leads to semiconductor obsolescence events which negatively impact the LLC product supply chains.  In effect, an LLC product owner has to deal with managing a tsunami of activity from the SLC space to maintain product shipments.
  2. Reliability:   Semiconductors for the consumer market are optimized for consumer lifetimes. For LLC markets, the longer product life in non-traditional environmental situations often leads to product reliability and maintenance issues.
  3. Future Function:  LLC products often have the characteristic of being embedded in the environment. In this situation, upgrade costs are typically very high. A classic example is one of a satellite where the upgrade cost is prohibitively high.  PLM and electronics design systems must account for this reality.

Interestingly, PLM is the perfect application to help manage these issues. However, gaps in functionality to handle electronics prevent it from being effective.

Differentiated Issues/Gaps in current PLM Systems for LLC Product Teams

Since the PLM systems are dealing with electronics primarily at the mechanical level,  the only structured information available within the PLM systems consists of physical design abstractions.  However, this representation misses key aspects of the whole product for electronics.  These include:

  1. System Design Data with associated intent
  2. Meta-Product information on the various abstractions above the pseudo-mechanical chip (AI or SW stacks)
  3. Associated design chains (compilers, debuggers, analysis tools)

The lack of the capture, management, and communication of this information handicaps PLM systems from helping solve the significant issues for LLC markets. Examples include:

  • Obsolescence: Supply chain teams struggle downstream to somehow manage part availability (through secondary and other channels) once the respective components are served with discontinuation notices.
    • Part Replacement: Can I replace the obsolete part with an equivalent? What was the system designer’s intent? Is this a key semiconductor or one just needed for “plumbing”? Further, the design team is often no longer available at the time of this event.
    • Is there sufficient captured system design information to support re-spins with EOL parts appropriately replaced and/or new features added to meet competitive requirements?
  • Reliability: It is not unusual for system-specific environmental conditions to generate reliability profiles wildly divergent from the semiconductor datasheets.
    • How does the field organization “debug” reliability issues without a clear view of system design intent for the parts?
    • How do the learnings of the field organization get back into the next system design process?
  • Future Function: Increasingly, field-embedded electronics require the flexibility to support derivative design function WITHOUT hardware updates. How does one design for this capability, and how does marketing understand the band of flexibility available when defining new products?

How does one fill the gaps in the current PLM systems?

Fig. 3 below delineates the critical features in PLM systems that could help LLC product designers deal with the above issues; none of the standard PLM products from the top five vendors (or others) support these:

  1. Capturing of design “intent” upstream during the design phase. Design intent could capture the operating conditions, the expected life cycle of the product being designed, the domain and application the product is intended for, and expectations on the software and AI stack for the end application (e.g., the ability to perform basic facial expression recognition on the device, and the availability of existing models and support for OSes and frameworks like TensorFlow or PyTorch). A minimal illustration of such a record follows Fig. 3 below.
  2. Visibility/awareness of supply chain (distributors, vendors, pricing etc) upstream during the design phase in a strategic manner with a view towards lifecycle costs (vs immediate costs).
  3. The design intent captured above and awareness of the supply chain could potentially allow the PLM tools to provide “smart search” and iteratively arrive at an optimal part selection upstream during the design phase itself, and then once again during the downstream process of respinning the design due to the imperative of replacing an EOL part (if any).
  4. The field data captured in the PLM systems can be an excellent source for building accurate reliability models for the key components. This is even more relevant since, besides the limited accuracy of the reliability numbers (# hours) provided by semiconductor vendors (and available in Silicon Expert), reliability differs under different operating conditions. Hence the field data in PLM is an excellent source for building these models. These models can be made available upstream to allow for more optimal selection of parts during the design phase itself vis-a-vis the design intent.
  5. Marketing data on potential derivatives can inform the flexibility built within the hardware systems for “over-the-air” updates.

Fig. 3 Gaps in existing PLM Systems
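As a concrete illustration of the first feature above, the following sketch shows the kind of structured design-intent record a PLM system could hold for a selected component and how it might be reused when searching for a replacement part. All field names and values are hypothetical; no PLM vendor schema is implied.

```python
# Hypothetical "design intent" record for a key component selection.
# Field names/values are illustrative only; no PLM vendor schema is implied.

design_intent = {
    "component": "edge-ai-soc-x",              # invented part name
    "selected_for": "on-device facial expression recognition",
    "operating_conditions": {"temp_c": (-40, 85), "supply_v": 3.3},
    "expected_product_life_years": 10,          # LLC requirement driving the choice
    "target_domain": "industrial",
    "sw_ai_stack_expectations": {
        "os": "linux",
        "frameworks": ["TensorFlow", "PyTorch"],
        "pretrained_models_available": True,
    },
    "supply_chain_notes": {"second_source": False, "last_time_buy_risk": "medium"},
}

def replacement_candidates(parts, intent):
    """Filter a parts list to those that still satisfy the captured intent,
    e.g. when the original part receives a discontinuation notice."""
    frameworks = set(intent["sw_ai_stack_expectations"]["frameworks"])
    return [p for p in parts
            if p["life_years"] >= intent["expected_product_life_years"]
            and frameworks <= set(p["frameworks"])]

# Example: only the first (invented) alternative meets the captured intent.
candidates = replacement_candidates(
    [{"name": "alt-soc-1", "life_years": 12, "frameworks": ["TensorFlow", "PyTorch"]},
     {"name": "alt-soc-2", "life_years": 5,  "frameworks": ["TensorFlow"]}],
    design_intent)
print([p["name"] for p in candidates])   # -> ['alt-soc-1']
```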

Conclusion:

PLM systems have shown incredible value for integrating the enterprise with a single view of the product design, deployment, maintenance, and end-of-life processes. However, the massive infusion of electronics embedded in nearly every system design is creating a situation where the core value statement of PLM systems is rupturing.

The solution? PLM systems must integrate with Smart System Design (SSD) electronics EDA platforms. These platforms would hold abstractions of the system at all interesting levels, such as hardware components, software components, AI stacks, and more. With this representation, critical processes such as design intent, field feedback, derivative design, and predictive maintenance functions can all be integrated within PLM systems.

Acknowledgements: Special thanks to Anurag Seth for co-authoring this article.

Related Information:
Also Read

DFT Moves up to 2.5D and 3D IC

Siemens EDA Discuss Permanent and Transient Faults

Analyzing Clocks at 7nm and Smaller Nodes


WEBINAR: Flash Memory as a Root of Trust
by Bernard Murphy on 10-09-2022 at 4:00 pm


It should not come as a surprise that the vast majority of IoT devices are insecure. As an indication, one survey estimates that 98% of IoT traffic is unencrypted. It’s not hard to understand why. Many such devices are cost-sensitive, designing security into a product is hard, buyers aren’t prepared to pay a premium for security and there haven’t been any meaningful barriers to insecure products.

REGISTER HERE

Overcoming our human inability to understand low-percentage risks isn’t going to happen, so the burden falls on regulations, which are now starting to develop teeth. The EU will require security certificates for all connected devices by 2023. In the US, NIST is working on cybersecurity regulations which are expected to appear in a year or two and will carry penalties for non-compliance. Automotive markets will self-police security by expecting ISO 21434 documentation on processes and risk. Still, many product builders will try to dodge the problem unless solutions are easy. Winbond has an intriguing approach with their secure flash.

Roots of Trust (RoT)

This concept is familiar to anyone with a moderate understanding of security. A root of trust in a system is a core component the system can always trust for security purposes – authentication, cryptography and so on. The goal is to minimize the attack surface around essential security functions, rather than distributing these across the system. All other services must turn to the RoT when making a security-related request. A hardware-based RoT is essential in such implementations.

The standard approach to an RoT is processor-centric – Apple T2 and Google Titan chips are a couple of examples. Such chips use on-board flash memory for support, but with limited size. This is necessary to keep cost down and because the combination of embedded flash and processor logic on a single chip limits memory size. That limitation is a problem for IoT applications which need to support complex stacks for NB-IoT and other advanced communications protocols. There are workarounds outside the RoT, but those again increase the attack surface.

Winbond has an intriguing approach in which they make the flash memory a complement to an MCU root of trust, allowing for much more spacious storage. This is an active security component, not just a larger memory.

The Winbond W77Q secure flash memory

The W77Q is a smart flash memory with an emphasis on security. A single-use key must sign write and erase commands. The device verifies boot code integrity on reset and allows for secure boot (XIP) directly from flash, without the need to first upload to DRAM. It supports fallback, allowing boot from an alternative code space if an integrity problem is detected. It protects against rollback attacks, where a hacker attempts to install a correctly signed older version of code with known bugs.
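As a purely conceptual illustration of rollback protection, the sketch below accepts an update only if it is both authentic and at least as new as a stored monotonic minimum version. This is not the W77Q command set or firmware; the names and logic are invented to show the idea.

```python
# Conceptual anti-rollback check (not the W77Q command set; illustrative only).
import hmac, hashlib

SECRET_KEY = b"device-unique-key"      # placeholder for a provisioned device key

def signature_valid(image: bytes, tag: bytes) -> bool:
    """Stand-in for cryptographic verification of a signed image."""
    expected = hmac.new(SECRET_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def accept_update(image: bytes, tag: bytes, image_version: int, min_version: int):
    """Accept an update only if it is authentic AND not older than the stored
    monotonic minimum version, so a correctly signed but buggy old image is rejected."""
    if not signature_valid(image, tag):
        return False, min_version
    if image_version < min_version:
        return False, min_version          # rollback attempt: refuse
    return True, image_version             # advance the monotonic counter

# Example: a correctly signed but older image (version 3 < stored 5) is rejected.
img = b"firmware-v3"
tag = hmac.new(SECRET_KEY, img, hashlib.sha256).digest()
print(accept_update(img, tag, image_version=3, min_version=5))   # -> (False, 5)
```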

W77Q handles over-the-air updates directly, without need for MCU support. A remote trusted authority can force a clean boot using an authenticated watchdog timer. And it supports secure storage in separately protected partitions in the same device.

Pretty neat for a serial NOR flash pin-compatible with a conventional device, yet certified secure to a number of relevant standards. You can watch a detailed webinar HERE.

Also Read:

WEBINAR: Taking eFPGA Security to the Next Level

WEBINAR: How to Accelerate Ansys RedHawk-SC in the Cloud

Webinar: Semifore Offers Three Perspectives on System Design Challenges