
CEO Interview: Dr. Shafy Eltoukhy of OpenFive 

CEO Interview: Dr. Shafy Eltoukhy of OpenFive
by Daniel Nenni on 03-05-2021 at 6:00 am

Shafy Eltoukhy

Dr. Shafy Eltoukhy has over 35 years of experience in the semiconductor industry. He served as VP and BU manager of the Analog Mixed Signal Group at Microsemi. He was the VP of Operations and Technology Development at Open-Silicon. He was the VP of Technology at Lightspeed Semiconductor where he joined the founding team that invented structured ASIC technology. Shafy was the Director of Technology Development at Actel Corporation, where he participated in the development of the first generation of FPGA products. He also held engineering positions at Intel Corporation. Shafy holds a Ph.D. from the University of Waterloo, Canada.

What brought you to semiconductors?
As an electrical engineering student studying for my master’s, I attended a class on semiconductor physics. I found the course to be very exciting and it sparked the passion I now have for the semiconductor industry. Once I earned my Ph.D. from the University of Waterloo, Canada, I became increasingly enthusiastic about semiconductors and their impact on the future. I was offered a job as a professor at the university but turned it down because I really wanted to be active in the industry.

My first job was at Intel where I was a device engineer working on DRAM technology. After a couple of years with Intel, some of my colleagues and I came up with an idea for a startup company, and Actel was born (now known as Microsemi). We launched it as an FPGA company, which was a new technology at that time. This was a turning point in my career, where I came to the realization that I loved to work for startup companies. It’s important to note that the fabless model was nonexistent at that time, so we talked to Chartered Semiconductor (now Global Foundries) and a few others about being our foundry. After the launch and success of our first product, the company went public. A few years later, I decided to start up another company, Lightspeed Semiconductor, which focused on ASIC technology. Since then, I’ve remained very active in ASICs and recently spent time as the general manager at Microsemi where I focused on analog mixed-signal product development.

The semiconductor industry is very exciting to me because every day there are changes and new advancements. ASICs have been especially exciting because of the collaboration with many different customers who have varied and innovative ideas, as well as many different target applications in a variety of vertical markets.

I can sense the excitement when you talk about startups. There are not that many people who are still in the startup arena and going strong.
That is true. I’ve found that I really enjoy small companies because, unlike larger ones, it’s much easier to implement key decisions and also to change the organization’s direction very quickly. It’s a fast-paced environment and the energy in the company is contagious. I think it’s one of the main reasons I still enjoy working for startups to this day.

What is OpenFive’s back story? Why was the business unit formed?
It started when Open-Silicon was acquired by SiFive, Inc. in 2018. SiFive was focused on processor cores based on the RISC-V ISA. The addition of custom silicon capabilities to the SiFive portfolio helped us accelerate the IP integration and SoC design cycles and bring silicon to the market at a faster pace. The custom silicon BU built a successful business model that combined customization of SoCs with RISC-V cores. To drive further business growth, we launched the OpenFive brand and expanded into providing custom silicon solutions with differentiated IP, while being agnostic to processor architecture. This distinct OpenFive brand provides clarity on our ability to produce custom SoCs, from spec-to-silicon design, customizable IP, and manufacturing. The current emphasis in the industry on scalable silicon architectures makes Die-to-Die and Chip-to-Chip interconnects integral for disaggregated die and chiplet-based SoC solutions, and has created strong requirements for experience in advanced packaging, test, and production in leading-edge process nodes such as 5nm. This need and opportunity drove the creation of OpenFive as you see it today.

What do you mean when you say OpenFive is processor agnostic? What is your core processor strategy?
OpenFive is very neutral as to which processor is used because we are an independent silicon business unit and we’re ultimately measured based on how many acres of silicon we sell. We have expertise in implementing SoCs with all relevant ISAs.

What customer challenges and business models is OpenFive addressing?
The challenges always depend on the type of customer, and we strive to offer each customer an optimized solution.  Let’s take for example a system company; they may not be familiar with the design process of a chip or with the manufacturing part of it, but they really want to get to the market quickly. OpenFive offers them a complete solution from spec-to-silicon. A lot of these customers are not semiconductor experts, and they simply want a chip that works based on their unique specifications. OpenFive also has customers that have their own design teams and front-end architecture. This is their bread and butter, and they know how to proceed, but they don’t have a physical design team or the tools to do the physical design. OpenFive supports these customers by using a netlist or RTL handoff model to deliver working silicon to them. This group doesn’t need to be concerned with acquiring physical design tools such as those from Cadence or Synopsys, and they don’t have to be concerned about their foundries, as OpenFive will take care of this for them. The third type of customer, which is also a sizable portion of our business, has a team that already has a chip that they have developed as a prototype. However, they are not experienced in working with the supply chain and dealing with the foundry and all the things that come with the operational side of it. They come to us with their design, and we handle the testing, production ramp, supply chain management, and so on. At OpenFive we are committed to offering customers our end-to-end expertise from SoC design, IP and manufacturing to deliver high-quality silicon in advanced nodes down to 5nm.

What do the next 12 months have in store for OpenFive? 
OpenFive’s goal with all of our customers is to add more value through our engagement, and with that in mind, we are moving in two major directions. The first is spec-to-silicon, where OpenFive will focus on a few vertical markets where we can add more value to the customer and take advantage of the platforms that OpenFive builds to reduce time-to-market and the solution cost for the customer. The second goal is to establish more investment in delivering IP for these vertical markets.

For example, we are delivering a lot of HBM solutions for the high performance computing market, going down to 7nm, 5nm, 3nm and so on. We’re also  staying ahead of the game by investing in More-than-Moore solutions with die-to-die (D2D) interfaces, chiplet technology and 2.5D packaging. By mixing and matching different technologies, we can offer chiplets that enable partitioning of the design into different functions, and the option to choose a process optimized for that particular function. The overall cost of the solution will be lower than going to a finer geometry process node that is very expensive. This area is very important to us moving forward. In the coming months, you will see many exciting new initiatives from OpenFive ranging from AI-enabled sub-systems to customizable D2D IP and chiplets with advanced 2.5D packaging, and we look forward to enabling customers to create domain-specific SoCs that are highly optimized for power, performance and cost.

Also Read:

CEO interview: Graham Curren of Sondrel

CEO Interview: Mark Williams of Pulsic

CEO Interview: Sathyam Pattanam


Perforce Embedded DevOps Summit 2021 and the Path to Secure Collaboration on the Cloud

Perforce Embedded DevOps Summit 2021 and the Path to Secure Collaboration on the Cloud
by Mike Gianfagna on 03-04-2021 at 10:00 am


Perforce recently held their virtual Embedded DevOps Summit. There were a lot of great presentations across many disciplines. Of particular interest to me, and likely to the SemiWiki readership as well, was a presentation by Warren Savage entitled Secure Collaboration on a Cloud-based Chip Design Environment. I’ll provide a quick overview of the event and then I’ll dive into Warren’s presentation to illuminate the Perforce Embedded DevOps Summit 2021 and the path to secure collaboration on the cloud.

I’ve been to A LOT of virtual events this past year. I’m sure you have as well. After attending so many you start to get a sense of what works best. Live presentations and a robust way to interact with the speakers are two elements I find appealing. The Perforce DevOps Summit had both. The presentations were live, and each session had a live moderator to keep things moving. This makes the event a lot more interesting in my experience. Interaction with the speakers was done through Slack, a robust and reliable platform. All good.

Warren Savage

Back to secure collaboration on the cloud. I’ve known the presenter, Warren Savage, for a long time. Warren has a very relaxed and effective style. He seems to be able to explain any complex topic in a way that everyone can understand. Warren is a Silicon Valley veteran who hails from places like Fairchild, Tandem Computers and Synopsys. In 2004 he founded IPextreme with the goal of simplifying IP access. Silvaco acquired the company in 2016 and Warren has now been recruited by DARPA to work on cybersecurity.

The organization he’s working with is called ARLIS (Applied Research Laboratory for Intelligence and Security). The organization has a broad charter in research and development for artificial intelligence, information engineering and human systems. Warren’s talk had two parts – one that outlined the significant security risks that exist in the devices and networks we use every day and one that detailed the work DARPA is doing with Perforce to address these risks. Let’s take a look at both parts.

Vulnerabilities – Be Worried

Warren began with an overview of the incredibly connected world around us – how we got there. Most of us know this story quite well, so let’s skip to what’s wrong with it. If we look at all the interconnected devices around us, they communicate with the cloud and with each other. IoT is a good example. The issue with this massive network of devices is the uneven security that exists across the spectrum. Simple IoT devices can have weak security. Warren used the digital picture frame you give your grandmother so you can stream photos of her grandchildren to her as an example. This sounds innocent enough, but it turns out these devices present a meaningful attack vector.

Warren recounted the incident in 2016 when a denial-of-service attack on a major DNS server took the internet down on the east coast of the US and Europe for about a day. This attack was accomplished with a weapon called the Mirai botnet. Essentially a massive network of vulnerable devices (like that picture frame) all co-opted to perform a specific attack protocol. These botnets can be assembled in a hierarchical fashion, creating formidable processing power.  The recent SolarWinds hack is another example of how large the scope of these efforts can be. 

DARPA has created an Attack Surface Reference Model that catalogs the various methods to misappropriate chip technology. There are four primary vectors:

  • Side channel: extraction of sensitive data by observing external chip characteristics, such as power consumption or network traffic
  • Reverse engineering: extraction of algorithms and design details from illegally obtained representations
  • Malicious hardware: insertion of secretly triggered disruptive functions in the device
  • Supply chain: cloning, counterfeiting or re-marking devices

It turns out the semiconductor supply chain represents an enormous attack surface; some of the opportunities are detailed in the figure below.

Semiconductor Supply Chain Vulnerabilities

The Work Ahead – Be Less Worried

Warren described a rather ambitious program underway at DARPA to address these threats. Called Automatic Implementation of Secure Silicon (AISS), its goal is to embed security capabilities into the design flow. New tools and new IP will play a part. There are already approaches that address some of the threats mentioned. EDA tools can add additional logic to a chip that modifies its behavior; only by entering a key can the chip’s original function be restored. This makes reverse engineering very difficult. The program’s goal is to add security as a fourth parameter to the familiar power, performance and area (PPA) metrics.
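The key-based protection described above is often called logic locking. As a rough sketch (not the AISS implementation, and the circuit and key here are made up for illustration), XOR "key gates" are inserted on internal wires at synthesis time; the circuit only computes its original function when the correct key is applied:

```python
# Illustrative logic-locking sketch. The netlist below was "synthesized"
# assuming key == (1, 0): an inverter was absorbed into the first key
# gate, so only that key value restores the original function.

def original_circuit(a: int, b: int, c: int) -> int:
    """Example combinational function: (a AND b) OR c."""
    return (a & b) | c

def locked_circuit(a: int, b: int, c: int, key) -> int:
    # Key gate 1: XOR on the AND output, with an absorbed inverter.
    # key[0] == 1 cancels the inversion; any other value corrupts it.
    w1 = (a & b) ^ key[0] ^ 1
    # Key gate 2: XOR on the final output; key[1] == 0 leaves it intact.
    return (w1 | c) ^ key[1]
```

With the correct key `(1, 0)` the locked netlist matches the original for every input; with a wrong key, outputs are corrupted for some input patterns, which is what frustrates reverse engineering.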

Warren went into some advanced work using blockchain technology to track the chain of custody of material in the semiconductor supply chain. This will help to close many attack surfaces. Back to AISS: there is a mandate that this system must operate entirely in the cloud. There is a lot of work going on to host a familiar-looking, easy-to-use design flow in the cloud to deliver new security technology.
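The chain-of-custody idea can be sketched as a hash-chained log: each record commits to the previous one, so any tampering breaks verification downstream. This is a minimal illustration in the spirit of the blockchain tracking described above, not the actual DARPA/Perforce design:

```python
# Minimal hash-chained custody log: each record's hash covers its event
# and the previous record's hash, so edits anywhere break verification.
import hashlib
import json

def _digest(event: str, prev: str) -> str:
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def add_record(chain: list, event: str) -> list:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"event": event, "prev": prev, "hash": _digest(event, prev)})
    return chain

def verify(chain: list) -> bool:
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev or rec["hash"] != _digest(rec["event"], rec["prev"]):
            return False
        prev = rec["hash"]
    return True
```

For example, appending "wafer fabbed", "packaged at OSAT", "shipped to distributor" yields a chain that verifies; rewriting any earlier event makes `verify` return `False`.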

This design flow is where Perforce technology is used. A key goal of the system is to facilitate controlled access of assets from multiple companies who participate in a design project. Certain assets need access by specific users in multiple companies. Something like a crossbar switch is required to implement a system like this, and the Helix Core from Perforce is an excellent match for this need. The architecture of the system is shown below.

Perforce Helix Core Asset Control

The complete agenda of the Embedded DevOps Summit 2021 can be found here. I suspect there will be an opportunity to watch replays of the event. Keep watching here to get more information on the Perforce Embedded DevOps Summit 2021 and the path to secure collaboration on the cloud.

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Also Read:

Single HW/SW Bill of Material (BoM) Benefits System Development

A Brief History of Perforce

Conference: Embedded DevOps


Maximizing ASIC Performance through Post-GDSII Backend Services

Maximizing ASIC Performance through Post-GDSII Backend Services
by Kalar Rajendiran on 03-04-2021 at 6:00 am

Alchip – HPC ASIC Manufacturing Done Your Way

ASICs by definition are designed to meet their respective applications’ requirements. ASIC engineers deploy various design techniques to maximize performance, minimize power and reduce chip size. But is there more that can be done after the GDSII is taped out? A recent press release from Alchip Technology dated Feb 4, 2021 claims “High Performance Computing Demand Puts Premium on Backend Engineering Expertise.” The subheading of the same press release states “Once Mundane Service Now Prized for Squeezing Out Last nth of Performance.” From the subheading it is clear that the backend services offered by Alchip Technology are not new; what is worth understanding is why and how these same services have become prized. Is it just a temporary phenomenon due to fluctuating market demand, or is it a permanent shift in how these services are and will continue to be valued? What criteria would make one company better than another at rendering these services? The following is an attempt to answer these questions by taking a look at the evolution of the industry, the technologies and the supply chain ecosystem.

In the beginning semiconductor companies were vertically integrated and had their own foundries. There were dedicated departments that handled design, layout optimizations for process, packaging, test and manufacturing related aspects. With that vertical integration breaking down over the years (EDA tools, packaging, test, foundries), these capabilities needed to move out as well. Subsequently these capabilities started getting highly specialized with advances in the respective technologies.

The following diagram captures what is involved in backend services. With the introduction of every new process node, packaging technology and substrate technology come opportunities for higher ASIC performance. But along with the opportunities come complexities and challenges too. The result is an increase in the outsourcing of packaging, test, assembly and production responsibilities to companies that are far more experienced in these capabilities.

To master today’s advanced manufacturing technologies and fully extract the performance, power and area benefits offered by them, specialists need to be deployed. To use an analogy from the software domain, an optimized code written in a high-level language may be further optimized at the assembly language level by an expert in that assembly language. And an optimized piece of code at the assembly language level may be further optimized at the machine code level by that machine code expert. Backend specialists are like machine code experts of the semiconductor domain.

As an example, let’s look at packaging technology. This is an area where there have been tremendous advances that directly impact the performance of a semiconductor application in terms of speed, signal and power integrity. Chip-on-Wafer-on-Substrate (CoWoS®) is one technology that enables increased performance bandwidth, reduced power consumption and smaller form factor.

Following are some excerpts from the press release.

“Packaging isn’t packaging anymore,” declares Leo Cheng, Senior Vice President of Engineering at Alchip. “With today’s design complexity, packaging has become the most cost/efficient route to increasing performance, lowering power consumption and meeting real estate constraints.”

Alchip has elevated its packaging capabilities to include Chip-on-Wafer-on-Substrate (CoWoS®), first developed by TSMC, and this spring is expected to announce a true 2.5D InFO capability.

Alchip’s CoWoS process runs on dedicated tooling and demonstrates IP performance equivalent to that of an original design. The process also includes online debugging and active thermal control. The company’s in-house substrate design capabilities assure compliance with all system requirements and establish the framework for the critical foundry-to-final-test flow.

Packaging is just one of the many areas within backend services. There is value to be maximized by customers within each and every area of backend services by leveraging a specialist service provider.

Alchip, with its HQ in Taipei, a dedicated team in Hsinchu and well-honed backend services, is well positioned to bring tremendous value to its customers. It’s understandable that demand for Alchip’s post-GDSII backend services has increased exponentially across high-performance computing ASIC applications. Any customer looking to squeeze out the last nth of performance from their semiconductor device may want to have exploratory discussions with Alchip.

Also Read:

Alchip at TSMC OIP – Reticle Size Design and Chiplet Capabilities

Alchip moves from TSMC 7nm to 5nm!

Alchip Delivers Cutting Edge Design Support for Supercomputer Processor


NetApp Enables Secure B2B Data Sharing for the Semiconductor Industry

NetApp Enables Secure B2B Data Sharing for the Semiconductor Industry
by Mike Gianfagna on 03-03-2021 at 10:00 am

NetApp approach to security

Data sharing between semiconductor companies and EDA software companies has been critical to the advancement of the industry.  But it’s had security issues and associated loss of trust along the way.  For instance, there have been cases of customer designs shared as a testcase finding their way into a product demo without the consent of the customer. How did this happen? There was no malicious intent. The primary cause was that the shared data was not controlled within a secure vault and there was no tracking of how the data was used and by whom.  There was also no clear way to return the data that was sent or ensure that all instances of the data were deleted. This has led to major B2B trust issues which then leads to longer bug fix cycles because data is not easily shared. A new approach is needed. Read on to see how NetApp is working to improve secure B2B data sharing for the semiconductor industry.

Why the Industry Needs Secure and Trusted B2B Data Sharing

As I have shared in previous articles, data is the ever-growing lifeblood of semiconductor design.  Double digit data growth between 7, 5 and 3nm design nodes is straining design infrastructure.  At the same time the value of that data is increasing. Data once deleted after successful or failed analysis is being saved so AI/ML models can train or learn from past design runs. Data shared for the joint development of AI/ML models is just one example of the importance of robust secure B2B data sharing solutions.

Let’s examine some of the key reasons for B2B data sharing in the semiconductor industry. These items won’t necessarily make big headlines, but they represent a crucial process to advance chip design. The following points highlight some scenarios of interest.

EDA vendor debug

EDA vendors will always require access to customer designs for software debug – this need will never go away. Concerns around sharing testcase data result in delays in gaining access to the data, creating longer debug and resolution times. I have even heard stories of EDA teams trying to guess the cause of a problem when access to data was not an option. Rapid access to data is critical for fast resolution of issues and for meeting time-to-market goals.

AI development

EDA tools are rapidly building AI-enabled solutions. Machine learning (ML)/deep learning (DL) can reduce algorithm complexity, increase design efficiency and improve design quality. Training complex ML and DL models requires massive amounts of data. And in most cases, it is data EDA vendors don’t have. The data EDA vendors need is their customer’s design data. Secure data sharing is critical to the rapid advancement of AI in the semiconductor industry. The volume and proprietary nature of the data further complicate sharing.

NDA compliance

We have an NDA in place, so we’re covered, right? Most data sharing NDAs require that data be returned and/or deleted once it is no longer needed. Verifying that all copies of sensitive data were fully deleted in compliance with an NDA is difficult at best.

Collaboration

Modern chip design is a team sport.  IP providers, library vendors, tool vendors and design services teams all work together to meet critical design timelines and design goals.  Secure data sharing to facilitate collaboration is critical for this process to work.

Can we change the way we think about secure data sharing?

Let’s talk about the roles and responsibilities of Data Owners and Data Users. 

  • Data Owners should be able to share data into a data user’s secure walled-off datacenter while still retaining complete visibility and control over WHO can access the data and WHAT systems can access the data. There should be visibility into how often the data is accessed with the ability to highlight anomalous data access patterns. Data Owners should be able to monitor the security attributes of the systems that have access to the data.

Data Owners should also be able to securely revoke (or even securely wipe) the data from the system including removing key access.  Data Owners should not find data sitting on a data user’s system unused or after the terms of use have expired or the data has turned cold.  Data Owners should have full visibility of their data at any time even when it is in the Data Users’ datacenter or cloud environment.

  • Data Users should be able to use or share data in their own secure walled-off datacenter where they have access to their own resources and tools. They should be able to access the data for approved processes such as test case debug, AI model development and design collaboration. Data sets are often so large that it is impractical to expect the Data Owners to host the compute and storage resources for development. So, it is often critical to have access to the data in the Data User’s own datacenter.
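The Data Owner / Data User split described above can be modeled in a few lines: the owner grants per-user access, every access attempt lands in an audit trail, and the owner can revoke (effectively wipe) the share at any time. This is a toy sketch of the roles, not a NetApp API; the class and method names are invented for illustration:

```python
# Toy model of owner-controlled data sharing: grants, audit, revocation.
# Purely illustrative; not based on any real storage product's API.

class SharedDataset:
    def __init__(self, owner: str):
        self.owner = owner
        self.allowed = set()     # users the owner has granted access to
        self.audit = []          # (user, outcome) trail visible to the owner
        self.revoked = False     # owner-side "wipe": blocks all future reads

    def grant(self, user: str) -> None:
        self.allowed.add(user)

    def revoke_all(self) -> None:
        self.revoked = True
        self.allowed.clear()

    def read(self, user: str) -> bool:
        ok = (not self.revoked) and user in self.allowed
        self.audit.append((user, "read" if ok else "denied"))
        return ok
```

The essential property is that visibility and control stay with the owner even though the data physically sits in the user's environment: every read (or denied attempt) is logged, and revocation is immediate.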

The NetApp Approach

NetApp’s ONTAP storage operating system is used by all of the top semiconductor and EDA companies. ONTAP is also used in all of the 3-letter acronym government facilities today for data sharing. This means that secure B2B data sharing is most likely already a possibility. Because NetApp’s ONTAP storage operating system runs in all of the commercial clouds, B2B data sharing can be done datacenter-to-datacenter, datacenter-to-cloud or cloud-to-cloud, all with the same controls and monitoring. You can learn more about ONTAP from this prior post.

You can also get a broad view of NetApp’s approach to security here. There is a very useful technical report available from NetApp. A link is coming.

First, let’s take a look at some of the capabilities that allow NetApp to enable secure B2B data sharing for the semiconductor industry.

  • Support for Zero-Trust security architectures
  • Storage Virtual Machine (SVM) – this enables data to be walled off on a shared storage system, effectively a secure multi-tenant data storage environment. SVM allows for role-based, controlled access so Data Owners can monitor the storage environment, even inside the Data User’s datacenter, for real-time auditing
  • Secure data transfer via SnapMirror or FlexCache means no more downloading and untar’ing data. Data is automatically transferred from one ONTAP filer to another, with data encryption both at rest and in flight. An added benefit is the data is always up to date in the case of rapidly changing data sets
  • Data encryption is supported with either encrypted or unencrypted drives, using an external key manager
  • Secure data shredding is supported
  • NFS and SMB security with Kerberos is supported
  • Military grade data security credentials are supported. ONTAP is EAL 2+ and FIPS 140-2 certified
  • File-level granular event monitoring, with integration with security information and event management (SIEM) partners, is available and supports:
    • Log management and compliance reporting
    • Real-time monitoring and event management. This provides visibility of WHO is accessing the data, what systems are accessing the data and how often the data is being accessed.
  • Integration into third party security tools like:
    • Splunk-based system monitoring to report changes to the system
  • Cloud Secure technology also monitors for anomalous access patterns alerting the Data Owners of suspicious access patterns
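The anomalous-access alerting in the last bullet can be illustrated with a simple baseline check: flag a user whose access count today far exceeds their trailing average. The threshold logic here is entirely made up for the sketch; the actual detection logic of a product like Cloud Secure is not described in this article:

```python
# Illustrative anomalous-access check: alert when today's access count
# exceeds a multiple of the user's trailing daily average (with a floor
# so quiet users aren't flagged on trivial counts). Thresholds invented.
from statistics import mean

def anomalous(daily_counts, today, factor=3.0, floor=10):
    baseline = mean(daily_counts) if daily_counts else 0
    return today > max(factor * baseline, floor)
```

A user averaging ~5 accesses a day who suddenly makes 50 would be flagged; 7 accesses would not. Real systems would also weigh what was accessed and from which systems, per the WHO/WHAT monitoring described above.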

The B2B Data Owner has the ability to securely transmit data, revoke data, monitor the usage and access pattern of data, monitor and alert when the secure Zero-Trust infrastructure has been changed, etc. 

I’ve only scratched the surface here. NetApp offers a lot of capability to create a trusted, secure environment. NetApp is working to improve secure B2B data sharing for the semiconductor industry.

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Also Read:

NetApp’s FlexGroup Volumes – A Game Changer for EDA Workflows

Concurrency and Collaboration – Keeping a Dispersed Design Team in Sync with NetApp

NetApp: Comprehensive Support for Moving Your EDA Flow to the Cloud


Semiconductor Shortage – No Quick Fix – Years of neglect & financial hills to climb

Semiconductor Shortage – No Quick Fix – Years of neglect &amp; financial hills to climb
by Robert Maire on 03-03-2021 at 8:00 am

Tamagotchi Semiconductor shortage

– Semi Situation Stems from long term systemic neglect
– Will require much more than money & time than thought
– Fundamental change is needed to offset the financial bias
– Auto industry is just the hint of a much larger problem

Like recognizing global warming when the water is up to your neck

The problem with the semiconductor industry has finally been recognized, but only after it stopped the production of the beloved F-150 pickup truck and Elon’s Tesla. Many analysts and news organizations wrongly blame the Covid pandemic and its many consequences and assume this is just another example of the Covid fallout. Wrong! This has been a problem decades in the making. It’s not new. The fundamental reasons have been in the works for years. The only thing the pandemic did was to bring the issue to the surface more quickly.

The issue could have been brought to the surface just as easily and with worse consequences by a conflict between China and Taiwan. Or perhaps another trade spat between Japan and Korea.

The semiconductor industry is perhaps not as robust as one would think, given that it hasn’t been tested by a significant problem before.

The reality is that the “internationalization” of both the industry and its supply chain have opened it up to all manner of disruption coming at any point along that long chain.

The consolidation has further concentrated the points of failure into a small handful of players and perhaps one, TSMC, that is 50+% of the non memory chip market.

Tamagotchi Toys were the Canary in a Coal Mine

Most people may not remember those digital pets called Tamagotchi that were a smash hit in the late ’90s. Many in the semiconductor industry in Taiwan do remember them. In the summer of 1997 they sucked up a huge amount of semiconductor capacity in Taiwan and whacked out the entire chip industry for the whole summer, causing delays and shortages of all types of chips.

Tamagotchi Tidal Wave Hits Taiwan

In essence, a craze over a kids’ toy created shortages of critical semiconductor chips. Semiconductor capacity is much greater now than it was 20 years ago, but the industry remains vulnerable to demand spikes and slowdowns.

The memory industry is an example of the problem

Perhaps the best example of the chip industry’s vulnerability is the memory semiconductor market. The market lives on the razor’s edge of supply and demand and the balance maintained between the two.

Too much demand and not enough supply and prices skyrocket….too little demand and excess supply and prices collapse.

The memory industry is clearly the most cyclical and volatile in the semiconductor universe. One fab going offline for even a short while, due to a power outage or similar, causes the spot market for memory chips to jump.

Kim Jong-Un should buy memory chip futures

All it would take is one “accidentally” fired artillery round from North Korea that hit a Samsung fab in South Korea and took it out of commission. Memory prices would go through the roof for a very long time, as the rest of the industry could never hope to make up for the shortage in any reasonable amount of time.

Other industries, such as oil, do not have the same problem

When you look at other industries in which the product is a commodity, as memory is, you do not see the same production problem. The oil industry, which also lives on a razor’s-edge balance between supply and demand, does not have the same issue, as there is a huge amount of excess capacity ready to come on line at a moment’s notice.

The cost of oil pumps and derricks sitting around idle waiting to be turned on is very very low as compared to the commodity they pump. This means the oil industry can flex up and down as needed by demand and easily make up for the shortage if someone goes off line (like Iran).

Imagine if the oil industry kept every new oil field pumping at full output, never slowing. That is essentially how chip fabs operate.

In the semiconductor industry the capital cost is essentially the whole cost, so fabs never go offline, as the incremental cost to produce more chips is quite low. This means there is no excess capacity of any consequence in the chip industry, and fabs run 24/7. Capacity is booked out months in advance, and capacity planning is a science (perfected by TSMC).

The semiconductor industry has all the maneuverability of a supertanker that takes many miles to slow down or speed up… you just can’t change capacity that easily.

There is no real fix to the capacity issue due to financials

To build capacity that could be brought on line in a crisis or time of high demand would require an “un-natural” act: spending billions to build a fab only to have it sit there unused, waiting for the capacity to be needed. This scenario is not going to happen… even the government isn’t dumb enough to spend billions on a “standby” factory that needs constant spending to keep up with Moore’s Law.

It’s just not going to happen

Moving fabs “on shore” just reduces supply risk, not demand risk

Rebuilding fabs in the US would be a good thing, as it would mean fabs that are no longer an artillery shell away from a crazy northern neighbor or an hour’s boat ride away from a much bigger threat that still claims to own you.

That will certainly help reduce the supply side risk assuming we don’t build the new fabs on fault lines or flood zones. The demand side variability will still exist but could be managed better.

Restarting “Buggy Whip” manufacturing

The other key thing that most people do not realize is that most semiconductors used in cars, toys and even defense applications are made in very old fabs. All those older fabs that used to make 386 and 486 chips and 1 megabit memory parts have long ago been sold for scrap by the pound and shipped off to Asia (China) and are now making automotive and toaster oven chips.

Old fabs never die… they just keep making progressively lower-value parts. As I mentioned in a prior note, you don’t make a 25-cent microcontroller for a car in a $7B, 5nm fab… the math simply doesn’t work. This ability to keep squeezing value out of older fabs has worked because demand for trailing edge has not exceeded capacity.
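A rough back-of-envelope calculation makes the point. The $7B fab cost is from the text; the depreciation schedule, wafer throughput and die-per-wafer figures below are illustrative assumptions, not foundry data:

```python
# Back-of-envelope check of the "25-cent part in a $7B fab" math.
# All numbers other than the fab cost are illustrative assumptions.

FAB_COST = 7e9            # leading-edge fab, USD (from the text)
DEPRECIATION_YEARS = 5    # assumed straight-line schedule
WAFERS_PER_YEAR = 1.2e6   # assumed ~100k wafer starts/month
CHIPS_PER_WAFER = 5000    # assumed small microcontroller die

# Depreciation charge alone, per chip, before materials/labor/yield:
dep_per_wafer = FAB_COST / (DEPRECIATION_YEARS * WAFERS_PER_YEAR)
dep_per_chip = dep_per_wafer / CHIPS_PER_WAFER

print(f"depreciation per wafer: ${dep_per_wafer:,.0f}")
print(f"depreciation per chip:  ${dep_per_chip:.3f}")
# Depreciation alone eats nearly the whole $0.25 selling price,
# before a single cent of materials or labor -- hence old,
# fully depreciated fabs make these parts.
```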

For a typical chip company, the leading edge fab makes the highest value CPU, the next generation older fab maybe makes a GPU, the next older fab maybe some I/O chips or comms chips, the older fab makes consumer chips and the oldest fabs make chips for TV remotes.

In bleeding-edge fabs the equipment costs are the vast majority, with labor being a rounding error. In older fabs, with fully depreciated equipment, labor starts to become a factor, so many older fabs are better suited to being packed up and shipped off to a low-labor-cost country.

The biggest problem is that demand for older chip technology seems to have exceeded the amount of older capacity in the world as chips are now in everything and IOT doesn’t need bleeding edge.

Equipment makers for the most part don’t make 6-inch (150mm) tools anymore; some still make their old 8-inch (200mm) tools, some don’t. As we have previously mentioned, demand for 200mm now exceeds its previous peak.

Old Tools are being Hoarded

Summary
Fixing the shortage issue, and the risk issue behind it, will take not only a lot of time but a lot of money. The problem is systemic, dictated by financial math that has incentivized what we currently have in place.

In order to change the behavior of anyone who runs a chip company and can do math, we need to put in place financial incentives, legal decrees and legislative incentives, and pull multiple levers to change the current dynamics of the industry.

Even with all the written motivation in place, it will still take years for the physical implementation of the incentivized changes.

TSMC has been under enormous pressure for years about a fab in the US. Now they are planning one in Arizona that is still years away, will be old technology when it comes on line and will barely be a rounding error… all that from a multi-billion-dollar effort… but it’s a start.

A real effort is likely to be well north of $100B and 10 to 20 years in the making before we could get back to where the US was in the semiconductor industry 20 years ago.

The Stocks
As the saying goes, buying semiconductor equipment company stocks is like buying a basket of the semiconductor industry. They can also be viewed as the “arms merchants” in an escalating war.

It doesn’t matter who wins or loses in the chip industry but building more chip factories is obviously good for the equipment makers, in general.

In the near term, foreign makers such as Tokyo Electron, ASM International, Nova Measuring and others may make for an interesting play.

There is plenty of time: we are sure that no matter what happens, we will see zero impact from government-sponsored activities in 2021, and it will likely take a very long time for the money to trickle down, so beware of “knee jerk” reactions that may drive the stocks near term.

Also Read:

“For Want of a Chip, the Auto Industry was Lost”

Will EUV take a Breather in 2021?

New Intel CEO Commits to Remaining an IDM


TECHTALK: Hierarchical PI Analysis of Large Designs with Voltus Solution

TECHTALK: Hierarchical PI Analysis of Large Designs with Voltus Solution
by Bernard Murphy on 03-03-2021 at 6:00 am


Power integrity analysis in large chip designs is especially challenging thanks to the huge dynamic range the analysis must span. At one end, EM estimation and IR drop through interconnect and advanced transistor structures require circuit-level insight—very fine-grained insight but across a huge design. At the other, activity modeling requires system-level insight and rolling EM-IR analytics up to the full-chip power delivery network (PDN). Watch this CadenceTECHTALK on March 11 to learn about a new approach to hierarchical PI analysis that meets this need. REGISTER NOW to make sure you don’t miss the webinar.

The need

These are real design problems today, found in the giant AI chips you are likely to see in hyperscaler installations, or perhaps in a CPU cluster together with eight giant GPUs. Such designs are already far too big to run full-flat EM-IR analysis across the whole chip. Yet these analyses are very important to get right, because marketable implementations depend on finding the narrow window between under-design and over-design: between a design that may fail on timing and/or reliability in production because critical areas of the PDN were not sufficiently sized up, and a design in which, to overcompensate for an uncertain analysis, too much of the network was sized up, pushing chip area outside a profitable bound.

Cadence has introduced a hierarchical analysis methodology in the Voltus IC Power Integrity Solution, which is particularly well suited to large designs with multiple repeated elements like those GPUs. (Come to think of it, this may well cover most super-large designs. After all, who is going to build such a design purely out of unique functions?) This latest release will generate models for IP blocks that can stand in for those blocks in full-chip analysis. These models have an order-of-magnitude-lower memory demand yet preserve accuracy within a few percent of a full-flat analysis—a very practical approach to managing EM-IR analysis across huge designs.

Summary: Hierarchical PI Analysis of Large Designs with Voltus Solution

Memory requirements and runtime for full-chip EM-IR analysis have become a major challenge at advanced nodes. It is not uncommon to see designs with hundreds of millions of cells, and some even in the multi-billion range. Running a flat analysis requires multiple terabytes of memory over a distributed network. To mitigate these issues, the Cadence® Voltus™ IC Power Integrity Solution enables designers to run hierarchical analysis using IP modeling technology. It helps designers create xPGV models for their IP blocks, accurately capturing the demand current and electrical parasitics. These xPGV models are an order of magnitude smaller than the fully extracted block and, when used in chip-level analysis, can significantly reduce runtime and memory. The modeling methodology used in the Voltus IC Power Integrity Solution ensures minimal difference in results relative to a fully flat analysis. This TECHTALK will cover the generation of xPGV models, including the package model, and their use in IC-level analysis.

Attend this CadenceTECHTALK to learn how to:

  • Run your largest designs much faster with lower memory
  • Perform very accurate sub-chip analysis, including impact of chip-level demand current and parasitics
  • Reuse IP models in different designs or for multiple instantiations within a design

Also Read

Finding Large Coverage Holes. Innovation in Verification

2020 Retrospective. Innovation in Verification

ML plus formal for analog. Innovation in Verification


USB4 Makes Interfacing Easy, But is Hard to Implement

USB4 Makes Interfacing Easy, But is Hard to Implement
by Tom Simon on 03-02-2021 at 10:00 am

USB4 Verification IP

USB made its big splash by unifying numerous connections into a single cable and interface. At the time there were keyboard ports, mouse ports, printer ports and many others. Over the years USB has delivered improved performance and greater functionality. However, as serial interfaces became more popular and started being used for what were previously parallel interfaces, there was a proliferation of new serial cables and protocols. The latest version of USB, referred to as USB4, makes a new bold move to unify many of these different interfaces. USB4 naturally works for USB data streams, but it also can tunnel PCIe, Thunderbolt3, and DisplayPort data streams.

USB4 supports 20 Gbps and can go up to 40 Gbps. It specifies use of the USB Type-C connector, which further simplifies the user experience. And unlike its predecessors, it makes power management via USB PD mandatory. It offers one connector for device interfaces, storage, peripherals and display output. However, with this unification comes complexity under the hood: many legacy and new features are included in the host and device specifications for USB4.

One of the hallmarks of the USB interface is its backward compatibility, and so USB4 is USB 2 and USB 3 compatible, as one might expect. USB4 is a multi-lane interface, with support for lane bonding and pipelining. Higher data rates call for more sophisticated encoding and error-correction algorithms, and layers of abstraction for routing and tunneling add complexity. Indeed, the list of features inside a properly functioning USB4 interface is lengthy.

Implementing USB4 is not a trivial task. At each stage of development, it is essential to be able to verify that everything conforms to the specification and is implemented properly. It is imperative to have a verification environment that can exercise all the functionality and provide designers with information to help isolate and pin down issues. Last summer Truechip, a leading provider of verification IP (VIP), announced customer shipment of its USB4 and eUSB VIP.

USB4 Verification IP

Truechip has a truly impressive offering of VIP for nearly every category of design, including storage, bus & interfaces, USB, automotive, memory, PCIe, networking, MIPI, AMBA, display, RISC-V, and defense & avionics. Their VIP includes coverage, assertions, BFMs, monitors, scoreboards and testcases. The host and device models include bus functional models and agents for the electrical, logical, transport, configuration and protocol adapter layers. Their VIP works on a wide range of platforms: UVM, OVM, VMM and Verilog.

Truechip’s USB4 VIP is fully compliant with the v1.0 specification. It includes backward compatibility with USB 2.0. As expected, it also includes Power Delivery for USB 3.0 and Type-C v2.0. Truechip’s VIP also supports all logical layer ordered sets, and provides 64b/66b, 128b/132b and Reed-Solomon FEC encoding and decoding. In reality, the list of features it supports is too long to cover here.

The deliverables for the USB4 VIP are also comprehensive. It comes with a monitor and scoreboard, and with test suites for basic and directed protocol tests. It has low-power tests, error-scenario tests, stress tests, random tests and compliance tests.

Truechip’s USB4 VIP is highly configurable and contains everything needed to verify any portion of a USB4 interface design. With it designers can be assured that their finished products will fully conform to the specification and will work reliably in silicon. For more information on this VIP check out the Truechip website.

Also read:

Bringing PCIe Gen 6 Devices to Market

PCIe Gen 6 Verification IP Speeds Up Chip Development

TrueChip CXL Verification IP

Webinar Replay on TileLink from Truechip



Features of Resistive RAM Compute-in-Memory Macros

Features of Resistive RAM Compute-in-Memory Macros
by Tom Dillinger on 03-02-2021 at 8:00 am


Resistive RAM (ReRAM) technology has emerged as an attractive alternative to embedded flash memory storage at advanced nodes.  Indeed, multiple foundries are offering ReRAM IP arrays at 40nm nodes, and below.

ReRAM has very attractive characteristics, with one significant limitation:

  • nonvolatile
  • long retention time
  • extremely dense (e.g., 2x-4x density of SRAM)
  • good write cycle performance (relative to eFlash)
  • good read performance

but with

  • limited endurance (limited number of ‘1’/’0’ write cycles)

These characteristics imply that ReRAM is well-suited for the emerging interest in compute-in-memory architectures, specifically for the multiply-accumulate (MAC) computations that dominate the energy dissipation in neural networks.

To implement a trained NN for inference applications, node weights in the network would be written to the ReRAM array, and the data inputs would be (spatially or temporally) decoded as the word lines accessing the array weight bits.  The multiplicative product of the data/wordline = ‘1’ and the stored weight_bit = ‘1’ would result in significant memory bitline current that could be readily sensed to denote the bit product output – see the figure below.
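As a rough illustration of this scheme, here is a toy model of a single bitline performing a binary dot product, with data bits asserting wordlines and weight bits stored as low/high cell resistance. The read voltage and resistance values are invented for illustration and are not from the paper:

```python
# Toy model of a ReRAM compute-in-memory dot product along one bitline.
# Resistance and voltage values are illustrative assumptions.

V_READ = 0.2        # small read voltage (V)
R_LRS = 10e3        # low-resistance state, stored weight '1' (ohm)
R_HRS = 1e6         # high-resistance state, stored weight '0' (ohm)

def bitline_current(data_bits, weight_bits):
    """Sum the cell currents for every active wordline (data bit = 1)."""
    i = 0.0
    for d, w in zip(data_bits, weight_bits):
        if d:  # wordline asserted by this data input
            r = R_LRS if w else R_HRS
            i += V_READ / r
    return i

data    = [1, 1, 0, 1]
weights = [1, 0, 1, 1]
i_bl = bitline_current(data, weights)

# Ideal binary dot product the sensed current should encode:
dot = sum(d & w for d, w in zip(data, weights))
print(f"bitline current: {i_bl*1e6:.1f} uA, encodes dot product ~{dot}")
# Each true '1 x 1' product contributes ~20 uA here; the HRS cell
# contributes only ~0.2 uA, so sensing recovers the dot product.
```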

At the recent International Solid-State Circuits Conference (ISSCC), researchers from Georgia Tech and TSMC presented results from an experimental compute-in-memory design using TSMC’s 40nm ReRAM macro IP. [1]  Their design incorporates several unique features – this article summarizes some of the highlights of their presentation.

Background

As the name implies, ReRAM technology is based on the transitions of a thin film material between a high-resistance and low-resistance state.  Although there are a large number of different types of materials (and programming sequences) used, a typical metal-oxide thin-film implementation is depicted in the figure below.

The metal oxide thin film material shown incorporates the source and transport of oxygen ions/vacancies under an applied electric field of high magnitude.  (The researchers didn’t elaborate on the process technology in detail, but previous TSMC research publications on ReRAM development did utilize a TiO-based thin film programming layer.  Multiple metal-oxide thin film materials are also used.)

As depicted in the figure above, an initial “filament forming” cycle is applied, resulting in transport of oxygen ions in the thin film.  In the Reset state (‘0’), a high electrical resistance through the metal-oxide film is present.  During the application of a Set (‘1’) write cycle, oxygen ion migration occurs, resulting in an extension of the filament throughout the thin film layer, and a corresponding low electrical resistance.  In the (bipolar operation) technology example depicted above, the write_0 reset cycle breaks this filament, returning the ReRAM cell to its high resistance state.

The applied electric field across the top thin film for the (set/reset) write operation is of necessity quite large;  the applied “read” voltage to sense the (low or high) bitcell resistance utilizes a much smaller electric field.

There are several items of note about ReRAM technology:

  • the bitcell current is not a strong function of the cell area

The filamentary nature of the conducting path implies that the cell current is not strongly dependent on the cell area, offering opportunities for continued process node scaling.

  • endurance limits

There is effectively a “wearout” mechanism in the thin film for the transition between states – ReRAM array specifications include an endurance limit on the number of write cycles (e.g., 10**4 – 10**6).  Commonly, there is no limit on the number of read cycles.

The endurance constraints preclude the use of ReRAM as a general-purpose embedded “SRAM-like” storage array, but they are acceptable for an eFlash replacement, and for a compute-in-memory offering where pre-calculated weights are written and updated very infrequently.

  • resistance ratio, programming with multiple write cycles

The goal of ReRAM technology is to provide a very high ratio of the high resistance to low resistance states (HRS/LRS).  When the cell is being accessed during a read cycle – i.e., data/wordline = ‘1’ – the bitline sensing circuit is simplified if i_HRS << i_LRS.

Additionally, it is common to implement a write to the bitcell using multiple iterations of a write-read sequence, to ensure the resulting HRS or LRS cell resistance is within the read operation tolerances.  (Multiple write cycles are also initially used during the forming step.)

  • HRS drift, strongly temperature dependent

The high-resistance state is the result of the absence of a conducting filament in the top thin film, after the oxygen ion transport during a write ‘0’ operation.  Note in the figure above the depiction of a high oxygen vacancy concentration in the bottom metal oxide film.  Any time a significant material concentration gradient is present, diffusivity of this material may occur, accelerated at higher temperatures.  As a result, the HRS resistance will drift lower over extended operation (at high temperature).

Georgia Tech/TSMC ReRAM Compute-in-Memory Features

The researchers developed a ReRAM-based macro IP for a neural network application, with the ReRAM array itself providing the MAC operations for a network node, and supporting circuitry providing the analog-to-digital conversion and the remaining shift-and-add logic functionality.  The overall implementation also incorporated three specific features to address ReRAM technology issues associated with:  HRS and LRS variation; low (HRS/LRS) ratio; and, HRS drift.

low HRS/LRS ratio

One method for measuring the sum of the data inputs to the node multiplied by a weight bit is to sense the resulting bitline current drawn by the cells whose data/wordline = ‘1’.  (Note that unlike a conventional SRAM block with a single active decoded address wordline, the ReRAM compute-in-memory approach will have an active wordline for each data input to the network node whose value is ‘1’.  This necessitates considerable additional focus on read-disturb noise on adjacent, unselected rows of the array.)  However, for a low HRS/LRS ratio, the bitline current contribution from cells where data = ‘1’ and weight = ‘0’ needs to be considered.  For example, if (HRS/LRS) = 8, the cumulative bitline current of eight (data = ‘1’ X weight = ‘0’) products will be equivalent to one LRS current (‘1’ X ‘1’) – a binary multiplication error.
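The arithmetic behind that error is easy to check numerically (the resistance and voltage values are illustrative assumptions, not from the paper):

```python
# Numeric illustration of the low-ratio problem: with HRS/LRS = 8,
# eight "data=1 x weight=0" cells draw as much bitline current as
# one true "1 x 1" product. Values are illustrative assumptions.

V_READ = 0.2
R_LRS = 10e3
R_HRS = 8 * R_LRS          # a poor HRS/LRS ratio of only 8

i_lrs = V_READ / R_LRS     # one real '1 x 1' product
i_hrs = V_READ / R_HRS     # one spurious '1 x 0' contribution

print(f"8 HRS cells: {8*i_hrs*1e6:.1f} uA vs 1 LRS cell: {i_lrs*1e6:.1f} uA")
# 8 * i_hrs equals i_lrs exactly: a full binary multiplication error.
```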

The researchers chose to use an alternative method.  Rather than sensing the bitline current (e.g., charging a capacitor for a known duration to develop a readout voltage), the researchers pumped a current into the active bitcells and measured Vbitline directly, as illustrated below.

The effective resistance is the parallel combination of the active LRS and HRS cells.  The unique feature is that the current source value is not constant, but is varied with the number of active wordlines – each active wordline also connects to an additional current source input.  Feedback from Vbitline to each current source branch is also used, as shown below.

This feedback loop increases the sensitivity of each current source branch to Reffective, amplifying the resistance contribution of each (parallel) LRS cell on the bitline and reducing the contribution of each (parallel) HRS cell.  The figure below illustrates how the feedback loop fanout to each current branch improves the linearity of the Vbitline response with an increasing number of LRS cells accessed (and thus, parallel LRS resistances contributing to Rtotal).

LRS/HRS variation

As alluded to earlier, multiple iterations of write-read are often used, to confirm the written value into the ReRAM cell.

The technique employed here to ensure a tight tolerance on the written HRS and LRS value evaluates the digital value read after the write, and increases/decreases the pulse width of the subsequent (reset/set) write cycle iteration until the (resistance) target is reached, ending the write cycle.
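The write-verify loop can be sketched as follows; the cell response model, the pulse-width growth factor and the target window are invented for illustration, not taken from the paper:

```python
# Sketch of an iterative write-verify loop: after each write pulse,
# read back the cell resistance and widen the next pulse until the
# target resistance window is met. The cell model is invented.

def write_verify(cell_read, cell_write, target, tol, pulse_ns=10, max_iters=20):
    """Apply write pulses of growing width until resistance is in-window."""
    for _ in range(max_iters):
        r = cell_read()
        if abs(r - target) <= tol:
            return True, r                # verified within tolerance
        cell_write(pulse_ns)              # another (set/reset) pulse
        pulse_ns = int(pulse_ns * 1.5)    # widen subsequent pulses
    return False, cell_read()

# Fake cell: each pulse moves resistance a fraction of the way to target.
state = {"r": 1_000_000.0}               # starts in HRS
def read():  return state["r"]
def write(pulse_ns):
    state["r"] += (10_000.0 - state["r"]) * min(1.0, pulse_ns / 100.0)

ok, final_r = write_verify(read, write, target=10_000.0, tol=500.0)
print(ok, round(final_r))
```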

HRS drift

The drift in HRS resistance after many read cycles is illustrated below (measured at high operating conditions to accelerate the mechanism).

To compensate for the drift, each bitcell is periodically read – any HRS cell value which has changed beyond a pre-defined limit will receive a new reset write cycle to restore its HRS value.  (The researchers did not discuss whether this “mini-reset” HRS write cycle has an impact on the overall ReRAM endurance.)
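A sketch of that periodic refresh scan, with an invented drift threshold and array contents:

```python
# Sketch of the periodic HRS refresh: scan the array and re-apply a
# reset write to any HRS cell whose resistance has drifted below a
# threshold. Threshold and contents are illustrative assumptions.

R_HRS_NOMINAL = 1_000_000.0
REFRESH_THRESHOLD = 600_000.0   # assumed pre-defined drift limit

def refresh_hrs(cells):
    """Return the number of drifted HRS cells restored by a reset write."""
    restored = 0
    for cell in cells:
        if cell["state"] == "HRS" and cell["r"] < REFRESH_THRESHOLD:
            cell["r"] = R_HRS_NOMINAL    # "mini-reset" write cycle
            restored += 1
    return restored

array = [
    {"state": "HRS", "r": 950_000.0},   # healthy, left alone
    {"state": "HRS", "r": 420_000.0},   # drifted low -> refreshed
    {"state": "LRS", "r": 10_000.0},    # weight '1', untouched
]
n = refresh_hrs(array)
print(n)
```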

Testsite Measurement Data

A micrograph of the ReRAM compute-in-memory testsite (with specs) is shown below.

Summary

ReRAM technology offers a unique opportunity for computing-in-memory architectures, with the array providing the node (data * weight) MAC calculation.  The researchers at Georgia Tech and TSMC developed a ReRAM testsite with additional features to address some of the technology issues:

  • HRS/LRS variation:  multiple write-read cycles with HRS/LRS sensing are used
  • low HRS/LRS ratio:  a Vbitline voltage-sense approach is used, with a variable bitline current source (with high gain feedback)
  • HRS drift:  bitcell resistance is read periodically, and a reset write sequence applied if the read HRS value drops below a threshold

I would encourage you to review their ISSCC presentation.

-chipguy

References

[1]  Yoon, Jong-Hyeok, et al., “A 40nm 64kb 56.67TOPS/W Read-Disturb-Tolerant Compute-in-Memory/Digital RRAM Macro with Active-Feedback-Based Read and In-Situ Write Verification”, ISSCC 2021, paper 29.1.

 


It’s Energy vs. Power that Matters

It’s Energy vs. Power that Matters
by Lauri Koskinen on 03-02-2021 at 6:00 am

Lauri at the white board

In tiny devices, such as true wireless headphones, the battery life of the device is usually determined by the chips that execute the device’s functions. Professor Jan Rabaey of UC Berkeley, who wrote the book on low power, also coined the term “energy frugal” a number of years ago, and this term is even more valid today with the proliferation of true wireless devices.

When optimizing battery lifetime, power and energy are often used interchangeably. However, they are not interchangeable: the device’s battery stores energy, and reducing power can actually increase the energy consumed. Techniques that reduce energy by reducing voltage are being deployed more broadly as demand takes off for true wireless products. In this blog, I’m going to illustrate what’s behind this trend through several examples that demonstrate the relationship between energy, power and voltage.

Let’s start by reviewing the basic equations for energy and power, shown below in Figure 1. They look similar, but there are a few critical takeaways: 1) energy consumption cannot be reduced by reducing frequency, 2) leakage cannot be reduced without reducing VDD (excluding process options), and 3) because of the quadratic relationship, VDD is by far the most effective lever for reducing energy.

Figure 1: Basic equations for energy and power
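Figure 1 itself is not reproduced in this text version, but the standard CMOS forms it refers to (with α the activity factor, C the switched capacitance, f the clock frequency, and t the active time, for a fixed workload) are:

```latex
E_{dyn} = \alpha\, C\, V_{DD}^{2} \qquad\qquad E_{leak} = I_{leak}\, V_{DD}\, t

P_{dyn} = \alpha\, C\, V_{DD}^{2}\, f \qquad\qquad P_{leak} = I_{leak}\, V_{DD}
```

Note that f appears only in the power equations, not the energy ones, which is exactly why takeaway 1) holds, and the quadratic VDD term is behind takeaway 3).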

Let’s look at the takeaways with some examples. For takeaway 1), the example is simple: reducing frequency by 10% increases Eleak by 10% (as t increases 10%) while Edyn remains unchanged. This “fallacy” is mostly seen in “run-to-complete” strategies. For example, let’s say that your processor consumes 90% dynamic energy and 10% leakage energy at its nominal voltage. If you run to complete (i.e. run the processor as fast as you can) and then let it leak (i.e. no power gating), neither dynamic energy nor leakage changes (see the equations). But the fallacy shows up if you try to run faster for the sake of shutting down earlier – for example, a 10% VDD increase for a 10% frequency increase, to run to complete 10% faster. Your new energy consumption is E = 0.9*(1.1)² + 0.1*1.1*0.9 = 119%. Clock gating doesn’t change this equation, as it affects all dynamic energy cases equally, but let’s look at power gating’s effect. If your power gating switches super-fast and doesn’t cost active energy, the theoretical maximum you can save is the leakage energy (10%). How about running as fast as you can and then power gating? The dynamic energy increase is quadratic and the leakage linear, so you can’t win. For the 10% frequency increase case above, you would still end up consuming more energy (0.9*(1.1)² + 0 = 109%).
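The run-to-complete arithmetic can be checked with a few lines of Python (the 90/10 energy split and 10% scaling factors are the ones used in the example above):

```python
# Reproducing the run-to-complete arithmetic: 90% dynamic / 10%
# leakage split, with a 10% VDD increase for a 10% frequency increase.

DYN, LEAK = 0.9, 0.1
v = 1.1                 # +10% VDD
t = 0.9                 # ~10% shorter active time (as in the text)

# No power gating: dynamic energy scales with V^2, leakage with V * t.
e_run_fast = DYN * v**2 + LEAK * v * t
# Ideal power gating afterwards: the leakage term is removed entirely.
e_power_gated = DYN * v**2

print(f"run fast, no gating: {e_run_fast:.0%}")    # ~119%
print(f"run fast, then gate: {e_power_gated:.0%}") # ~109%
```

Either way the result exceeds 100%, confirming that running faster to shut down earlier cannot win.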

For takeaways 2) and 3) above, let’s turn to examples that employ reduced voltage. These are not hypothetical examples, as we are working with companies to deploy solutions based on reduced voltage today. I’ll need to state a few assumptions first. Assume that your computation delay scales inversely with VDD (a realistic assumption up to a point). Let’s say that this is a slow operating mode (you also have modes that use more of the clock cycle), so your processor (at the same 90% dynamic / 10% leakage energy split as above) finishes in 50% of the clock cycle. Let’s use the remaining 50% of the clock cycle to reduce VDD (i.e. halve VDD). This results in a huge reduction in energy. For those interested in the exercise: E = 0.9*(0.5)² + 0.1*0.5*2 = 32.5%. It gets even better, as Ileak reduces exponentially with voltage. Let’s say that Ileak drops by 90% when VDD is halved as above. Your energy is then reduced further, to only 23.5% (E = 0.9*(0.5)² + 0.1*0.1*(0.5*2) = 23.5%).

In case you are thinking that I’m writing this from an ivory tower: there are also cases where reducing voltage does not make sense when looking at the total chip. Let’s say that you have an old PLL which consumes as much energy as your processor but which can be shut off with no leakage. Then the 50% VDD drop case from above would end up consuming more energy (2*0.5 + 0.5*(0.9*(0.5)² + 0.1*0.1*(0.5*2)) = 112%). It’s not an uncommon story in the IC industry that overhead ends up cancelling out the gains, and in upcoming blogs I’ll show you how to avoid that with dynamic voltage and frequency scaling (DVFS) systems, based on our experience working with design teams building true wireless devices.
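The reduced-voltage cases, including the PLL overhead, check out the same way (all factors are the ones stated in the examples above):

```python
# Reproducing the reduced-voltage arithmetic: halving VDD in a mode
# with 50% slack, first with unchanged Ileak, then with Ileak reduced
# by 90%, and finally adding the fixed-energy PLL overhead.

DYN, LEAK = 0.9, 0.1
v, t = 0.5, 2.0                       # half VDD, doubled active time

e_half = DYN * v**2 + LEAK * v * t                   # 32.5%
e_half_low_leak = DYN * v**2 + LEAK * 0.1 * v * t    # 23.5%

# The PLL consumes as much energy as the processor at nominal and
# cannot scale, so it runs twice as long at full power; processor and
# PLL each contribute half of the chip's baseline energy.
e_chip = 0.5 * 2.0 + 0.5 * e_half_low_leak           # ~112%

print(f"{e_half:.1%}  {e_half_low_leak:.1%}  {e_chip:.1%}")
```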

https://minimaprocessor.com/


Webinar: Achronix and Vorago Deliver Innovation to Address Rad-Hard and Trusted SoC Design

Webinar: Achronix and Vorago Deliver Innovation to Address Rad-Hard and Trusted SoC Design
by Mike Gianfagna on 03-01-2021 at 10:00 am


Radiation hardening is admittedly not a challenge every SoC design team faces. Methods to address it typically involve a new process technology, a new library or both. Trusted, secure design is something more design teams worry about, and that number is growing as our interconnected world creates new and significant attack surfaces. This challenge typically requires the introduction of new IP, new process tweaks or both. There is a webinar coming on SemiWiki that explains how to deal with both of these challenges with minimal perturbation to the IP and process strategy. The work here is significant. Read on to learn how Achronix and Vorago deliver innovation to address rad-hard and trusted SoC design.

The webinar presents the collaboration of two companies. Achronix brings embedded FPGA technology to the table and Vorago brings a unique and low-impact approach to radiation hardened design. Together, these two companies solve a lot of rather difficult problems in an elegant way. First, a bit about the speakers.

Dr. Patrice Parris

The webinar begins with a presentation by Dr. Patrice Parris, chief technology officer at Vorago Technologies. With several degrees in EE, CS and physics from MIT and a diverse career in innovative work at NXP, Freescale and Motorola, Patrice provides a comprehensive overview of radiation hardened design that is easy to follow. He describes Vorago’s unique and patented capabilities to provide technology solutions to address radiation hardening and extreme temperature requirements. More on this technology in a moment.

The next speaker is Raymond Nijssen, vice president and chief technologist at Achronix. Raymond has deep background in ASIC/FPGA design as well as EDA product development. He is driving both the software systems to support Achronix FPGAs as well as key aspects of its FPGA architectures. Both of these gentlemen hold multiple patents. The depth of their technical understanding is substantial. More relevant for the webinar is that they both are able to explain complex concepts in ways that are easy to understand.

Raymond Nijssen

If the topics of radiation hardening or trusted, secure design are of interest, I highly recommend this webinar. You will come away with new tools and new insights. I will provide an overview of the topics covered in the webinar and then provide a link to register.

We’ll start with Vorago. The company provides an innovative technology called HARDSIL® that adds radiation hardening cost-effectively within existing production fab capability. The approach adds a small number of mask steps and implants to achieve rad-hard performance, without impacting transistor performance or yield, so there is minimal impact on the design flow and IP. If this sounds too good to be true, watch the webinar: you will be treated to a very comprehensive overview of how this all works, including SEM photos. Patrice also does a great job explaining the various types of circuit events that occur when semiconductors receive radiation doses. There are several, with different implications for the short- and long-term performance of the circuit. I thought I understood these issues; I wound up learning some new and interesting concepts.

Throughout the webinar, Patrice and Raymond interleave their presentations to build the complete story. Achronix is a unique company that provides both stand-alone and embedded FPGA solutions. I previously covered the offerings of Achronix in this post. There are many other excellent posts about Achronix on its SemiWiki page. Raymond provides an overview of the threats that exist in the semiconductor supply chain. There are many opportunities for theft, tampering and reverse engineering. A trusted flow is daunting for sure. What is quite interesting are the benefits of using embedded FPGA technology in chip design. You need to see Raymond unfold the benefits in detail, but the primary point is that the function and implementation of a circuit are separated in an FPGA and that makes a big difference regarding security.

Raymond and Patrice also describe how HARDSIL is being applied to the Achronix embedded FPGA technology to complete the picture. There is a lot of very useful information presented in this webinar. The tight collaboration between Achronix and Vorago comes across quite well. This webinar will be presented on Tuesday, March 9, 2021 at 10AM Pacific time. You can register for the webinar here. I highly recommend you attend and see how Achronix and Vorago deliver Innovation to address rad-hard and trusted SoC design.


The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.