
Podcast EP134: The Impact of Using a Physically Aware NoC with Charlie Janac

by Daniel Nenni on 01-20-2022 at 10:00 am

Dan is joined by Charlie Janac, president and CEO of Arteris IP. Charlie’s career spans 20 years in multiple industries, including design automation, semiconductor capital equipment, nanotechnology, industrial polymers, and venture capital.

Charlie discusses the benefits of using network-on-chip, or NoC IP on several types of design projects. He also discusses the substantial benefits of using physical awareness for the NoC and how to set up this valuable capability.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Forty Four Billion Reasons Why TSMC Remains Dominant

by Robert Maire on 01-20-2022 at 6:00 am

-Chips for America is better than nothing, but not much
-TSMC $40-44B Capex crushes competition (Intel & Samsung)
-Additional efforts other than handouts are necessary
-Could/should TSMC be “adopted” as a US company?

Competition can’t keep up with $40-$44B of Capex

TSMC’s recent announcement of $40-$44B of capex spending is a mind-blowing number, a large increase from last year’s already huge figure, and makes them far and away the biggest spender, period.

Neither Intel nor Samsung can hope to spend anywhere near that number.
Even if we assume some of that is for expansion of existing non-leading-edge capacity, the majority is aimed at the leading edge, which means a dominant share of capacity and a very dominant share of EUV and other advanced enabling tool sets.

$52B Chips for America is barely a rounding error

When you consider that the Chips for America act is a one-time, one-shot disbursement spread over a number of years and a number of companies, it becomes clear that it’s not much against TSMC’s spend.

It also does not compare to what China as a whole is spending on semiconductor technology.

Basically the US is being outclassed and outgunned by both China and Taiwan (probably part of China in the not-too-distant future).

Even if Intel got the whole $52B it still couldn’t keep up as the spend would be over several years. Never mind that only $10B of the $52B is for fab projects with a $3B limit per project. Essentially the $52B will be spread so thin as to be ineffective versus the focused sharp spend of TSMC.

Can the US fabs being built make a difference?

Intel announced two fabs in Arizona at $10B each, along with TSMC announcing a 5nm fab in Arizona which, by the time it’s operational, will be a drop-in-the-bucket trailing-edge fab, perhaps meant to mollify the US.

Samsung has announced a $17B fab in Texas in addition to its existing facilities there.
It looks like Intel has chosen Ohio for its “megafab” project, and Micron is eyeing North Carolina.

While details are scarce, it sounds like the Intel Ohio and Samsung Texas fabs are the most impactful on the US. Samsung would be somewhat less impactful as we assume that bleeding edge technology R&D will continue to be done in Korea making the Texas fab a “fast follower” much as the existing Samsung fab in Texas is today. That leaves Intel Ohio as the only trail blazing R&D facility in the US.

It also remains to be seen if the brain trust in Portland can either be moved or shared with Ohio or if Portland remains the R&D center with Ohio for production.

Excellent CNET article

All this raises the question: why can’t we get TSMC in the US?
(In a real way)

Obviously TSMC’s brain trust is in Taiwan, but could it be re-created and/or moved to some extent with the right incentives? Rather than a token trailing-edge fab in the US, a real leading-edge, state-of-the-art fab complete with R&D?

Maybe grant citizenship to those employees willing to move, to escape a potential China takeover or at least hedge against it.
Throw a lot of money at them, along with a nice life in the US.

It might be a much more reliable and expedient way of fixing the US’s chip problem than trying to rebuild Intel or taking hand-me-downs from Samsung.

Right now time is not on the US’s side and we need chip capacity and expertise fast. We cannot wait ten years for Intel to build a fab and get critical employee mass in Ohio starting with a literal greenfield.

We need to think outside the box or get left behind

Incentives and disincentives are needed in addition to Chips for America

The cost of a semiconductor fab is similar around the world, as the majority is the cost of the equipment, which costs the same wherever you are (or should it?).

Could/should the US put export taxes on US semiconductor equipment (whether made in the US or by a foreign subsidiary, but with US technology)?
Maybe a 50% tax on tools going to China and 25% on those going to Taiwan. Penalize those companies who would try to trans-ship equipment to China. Use the received funds to support the US chip industry.

Incentivize all the companies, Intel, Micron, TSMC and others to build fabs in the US because the primary cost, equipment, would be cheaper here.

We could also go back to prior export restrictions that limited US tool exports to equipment that was several generations old. This technology export restriction worked well with SMIC and China for years but was removed over the years as relations improved (even though relations are now souring again).

The US seems to forget that the majority of semiconductor tools have their basis in US technology (including EUV made by ASML).

Is it better to throw money at the problem as compared to controlling our own fate through our own technology?

Where are the non-handout protections and incentives in the Chips for America act?

In a zero-sum game equipment makers don’t get hurt, consumers pay a little more for US made…

Since the world needs a certain amount of semiconductors, and it takes a certain amount of equipment to make them, equipment makers would sell the same amount of equipment, just to different end locations.

Any chip maker, foreign owned or US owned would benefit by having a fab in the US.

End consumers would likely have to pay a little more for chips made in the US rather than China, but I would rather pay a little more and be able to buy a car and a computer when I need one than be held hostage.

Removing an attractive target- “Broken Nest”

From a political and global strategic perspective, it is the existence of TSMC in Taiwan that likely creates additional instability in the region vis-à-vis China. If Taiwan were a backward island of farmers, China would not be so focused on getting hold of it and would likely not be doing as much saber rattling.

It is the mere fact that Taiwan contains exactly what China desperately needs right now, semiconductor dominance, that makes it such an attractive target, worthy of a lot of effort and bloodshed to obtain.

Remove semiconductors from Taiwan and suddenly the target’s value is greatly diminished and maybe not worth starting a war over.

This is not unlike the Middle East, whose supply of oil made it so strategically important. If it were just a desert with camels, no one would care.

Could Taiwan be a less attractive target if taking it didn’t mean getting TSMC? Could that cool the current tensions?

Broken Nest: Deterring China from invading Taiwan

The stocks

The stocks have lost a little steam as the rapid run-up has lost some momentum, with less new news and concerns about some sort of peak.
The TSMC capex news underscores that 2022 will continue to be a very good year for chips, likely better than 2021.

Although there is the possibility of slowing, it appears that current momentum remains strong enough to make it a very good year.

We remain most positive on the arms merchants…the semiconductor equipment stocks, as it’s clear that “wartime” spend in the chip industry continues to increase, benefiting the tool makers.

While TSMC stands to make a huge amount of money from the continued supply/demand imbalance, it seems to be plowing most of that money back into capex, so it’s unclear exactly how much will result in increased profitability.
This is even more the case with Intel, which has more spending projects than it currently has money for. First, Intel has to figure out where the money is coming from. It looks as if they have already committed their entire market cap in spend across all the projects they have announced.

Even if the Chips for America act is a “gift” to Intel, they are still billions short of what they need.

Samsung will not be left out and will likely double down as well.
The main risk we see, at least in the near term, remains supply chain issues limiting tool makers, along with other kinks in the supply chain like fires in Berlin and new variants of Covid locking down China again.

The usually seasonally weak Q1 is not

Perhaps one of the best signs we see is that Q1 is usually the weakest quarter of the year for chips, as we are in a post-partum depression after the selling frenzy of the holiday season, coupled with a couple of weeks off for Chinese New Year. Q1 is also usually the low point for memory pricing, again related to slower demand.

We are not seeing as much of the normal “seasonal” slowing in the industry. Demand for chips seems undaunted. We had previously thought that we would start to see an easing of the crunch in 1H of 2022 but it seems to be longer lasting.

We still question how long it lasts as the industry remains a cyclical one by its very nature but demand remains overwhelming for the time being.

We may see some psychological weakness as investors are concerned about a cyclical peak or Covid news. Even though business increases may be as good as 2021, we don’t think the stocks will move quite as quickly as they did last year and may prove more volatile on the way.

For now, memory seems stable, but that is usually the first thing to fall. Memory makers seem to be maintaining a rational spend and technology pattern that continues to support profitability. If anything, Micron is probably more undervalued given the current circumstances.

Chip equipment companies need to be razor-focused on execution, as the demand is there if they can make product.

Also Read:

“Too Big To Fail Two” – Could chip failure take down tech & entire economy?

Semicon West is Semicon Less

Supply Chain Breaks Under Strain Causes Miss, Weak Guide, Repairs Needed


Embedded Logic-NVM Solutions for AI Chips

by Kalar Rajendiran on 01-19-2022 at 8:00 pm

What is Analog NVM

Last month, eMemory Technology hosted a webinar titled “eMemory’s Embedded Logic-NVM Solution for AI Chips.” While the purpose was to present their embedded Logic-NVM solution, the webinar nicely set the stage by highlighting Analog NVM’s value as it relates to neural networks. Of course, the algorithms of neural networks are core to implementing AI chips, especially for weight storage. Dr. Chen, the presenter, is a manager of one of eMemory’s many technology development departments. Following is a synthesis of the salient points I gathered from the webinar.

Market Trends

There is a massive migration of AI processing from the cloud to the edge, enabled by emerging AI algorithms. Fast-growing AI applications are many: data inference, image and voice processing and recognition, autonomous driving, cybersecurity, augmented reality, etc. In order to develop efficient AI chips for these applications, it is important to implement various types of AI processing elements (PEs) with low power consumption and high computing performance.

Neural Networks

Artificial Intelligence (AI) is about emulating the human brain. The human brain, of course, consists of many neural networks, and neurons are the structural and functional units of these networks. It is neurons that perceive changes in the environment and transmit information to other neurons through current conduction. Neurons can be divided into four parts: the receptive zone, the trigger zone, the conducting zone and the output zone. The basic architecture of an electronic equivalent of the human neural network must include corresponding zones/layers.

Analog NVM

Implementing a neural network electronically is achieved through a Multi-Layer Perceptron (MLP) structure. The MLP consists of an input layer, hidden layers, and an output layer, all connected via weights (the electronic equivalent of synapses). The input layer is mapped to data inputs and weights, the hidden layers to the net-input function, and the output layer to the activation function and output.
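The layered structure described above can be sketched in a few lines of plain Python. This is a toy forward pass only; the layer sizes, weight values, and ReLU activation below are all illustrative, not taken from the webinar:

```python
def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases):
    # Each output neuron: weighted sum of inputs (net-input function) + activation
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Toy 3-input -> 2-hidden -> 1-output MLP (all values invented for illustration)
x = [0.5, -1.0, 0.25]
W1 = [[0.2, -0.4, 0.1], [0.7, 0.3, -0.2]]   # input -> hidden weights
b1 = [0.0, 0.1]
W2 = [[0.6, -0.5]]                          # hidden -> output weights
b2 = [0.05]

hidden = layer(x, W1, b1)
output = layer(hidden, W2, b2)
print(output)
```

In the analog NVM version, each of these weighted sums becomes a current summed on a bitline rather than a software loop.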

Interestingly, this kind of mapping architecture is comparable to a non-volatile memory array architecture. Refer to Figure below.

For an NVM array, the NN’s data input maps to the wordline (WL) data input, the weights are stored in the NVM cells to perform a multiply-accumulate (MAC) operation, and the output data is generated through the activation function (ADC). These outputs can be used for decision making or passed on to the next PE. The trick of emulating weight behavior is accomplished with different current levels, leveraging the NVM cell’s data retention capability.
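As a rough illustration of the MAC operation, here is a sketch in plain Python where 4-bit quantized values stand in for the discrete cell current levels. The weight values and bit width below are invented for illustration:

```python
def quantize(w, bits, w_max=1.0):
    # Map a weight in [-w_max, w_max] to one of 2**bits discrete "cell current" levels
    levels = 2 ** bits - 1
    step = 2 * w_max / levels
    return round((w + w_max) / step) * step - w_max

def mac(inputs, weights, bits):
    # Bitline current ~ sum of (WL input x stored cell level)
    return sum(x * quantize(w, bits) for x, w in zip(inputs, weights))

inputs  = [1.0, 0.0, 1.0, 1.0]        # WL data inputs (binary for simplicity)
weights = [0.33, -0.8, 0.51, -0.12]   # ideal trained weights

exact  = sum(x * w for x, w in zip(inputs, weights))
approx = mac(inputs, weights, bits=4)
print(exact, approx)   # the 4-bit version lands close to the ideal result
```

The gap between `exact` and `approx` is the quantization error that the 4-5 bit cell accuracy discussed later has to keep small.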

Why Analog NVM

Refer to Figure below for two different architectures for designing AI Inference chips.

The Von Neumann architecture approach consumes a lot of power due to its SRAM-based processing elements. As high power consumption cannot be tolerated by edge computing applications, the in-memory computing architecture is the preferred approach for now. By leveraging the analog NVM characteristics, this approach can lower power consumption and simplify the implementation at the same time.

The power consumption savings on a data inference application could be 10x-1000x using an in-memory computing architecture implemented with analog NVM. The fast-growing AI applications mentioned earlier in the market trends section can all benefit from lower power consumption.

eMemory’s Analog NVM IP Offering

eMemory’s Analog Memory is floating-gate based and is built on an embedded logic-compatible memory process that uses a single poly layer. It allows precise current controllability, using a smart PV function circuit to support multi-level cell currents with 4-5 bits of accuracy. The NVM IP demonstrates good data retention and a very low error rate in eMemory’s special analog IP design.

Refer to Figure below for details about eMemory’s NVM IP.

 

This was developed in collaboration with one of their customers. It is important to note that no extra masks were needed and the manufacturing followed the foundry’s baseline process.

Realizing CIM with eMemory’s Analog NVM IP

eMemory’s team built neural network (NN) processors using floating gate-based NVM to emulate an MLP, and modeled compute-in-memory (CIM) in TensorFlow. eMemory’s analog NVM demonstrated excellent control of current, with low standard deviation and error rate.

 

The four major steps for realizing CIM using eMemory’s Analog NVM IP are as follows:

  • Choose the appropriate NN model for the application
  • Use software such as the open-source TensorFlow to build the application-specific NN model as well as the training model for the AI chip
  • Design the memory cell/array to fit the type of weights and the degree of accuracy needed, such as 2, 4, 5 or more bits
  • Build the peripheral circuits, such as the smart PV function, activation function, and precision ADC/DAC
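The four steps above can be sketched end to end in a few lines of plain Python standing in for TensorFlow. Everything here is illustrative: a one-neuron model, an AND-gate toy dataset, classic perceptron training, 4-bit weight quantization for the memory array, and a hard threshold standing in for the ADC/activation stage:

```python
# Steps 1-2: choose and train a tiny "NN model" (one neuron, AND-gate toy data)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x, w, b):
    # Step 4: activation (hard threshold standing in for the ADC stage)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

for _ in range(20):                     # classic perceptron training loop
    for x, target in data:
        err = target - predict(x, w, b)
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

# Step 3: quantize the trained weights to 4-bit levels, as the array would store them
levels = 2 ** 4 - 1
qw = [round(wi * levels) / levels for wi in w]

print(all(predict(x, qw, b) == t for x, t in data))
```

In a real flow, step 2 would be a TensorFlow training graph and step 4 a precision ADC/DAC design, but the division of labor across the four steps is the same.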

Summary

The flow presented in the webinar is one way to implement CIM using analog NVM in an AI application. Alternate flows can be implemented for one’s specific application through collaboration between software and hardware engineers. The Q&A session after the webinar provides some guidance on handling different types of neural networks and answers common questions you may have. You can access a recording of the entire webinar from eMemory’s Resources page.

For more information about their IP offering, you can contact eMemory Technology.



Conquering the Impossible with Aspiration and Attitude

by Daniel Nenni on 01-19-2022 at 10:00 am


Cornami is an interesting company. Their leader is also an interesting person. I’ve interviewed Wally Rhines many times on various topics. All of them have been big hits. Cornami is focused on enabling fully homomorphic encryption (FHE) in a commercially viable and widespread manner. Many say this isn’t possible, but Wally and several other high-profile supporters of Cornami believe otherwise.  I covered some of the technical details behind Cornami here. This discussion isn’t about what Cornami is doing. Rather it’s about the people and the process that facilitate conquering the impossible.

Cornami recently released a white paper on the backstory of how it is doing what most say can’t be done. It’s written by a good friend and long-time collaborator, Mike Gianfagna. Mike digs in to understand why some very prominent folks got behind Cornami and its mission. These are folks who judge the efficacy of business plans and technical ideas for a living. Mike explores why they bought in to the vision. Mike has seen his share of business plans in his own career, so I wanted to understand what he discovered. The story takes a few unexpected turns and delivers a few surprises along the way.

If you are interested in the benefits of FHE, or if you just wonder what it takes to do the impossible, you’ll want to read this white paper. Here is a summary of some of the twists and turns in the story and the key stars of the show. Don’t worry, I won’t spoil the story. My aim is to shine a light on how unpredictable life can be when you aim to do the impossible.

Why Should I Care?

About FHE, that is. The answer has to do with data security. Most will understand the importance of protecting data; compromises can cause all sorts of havoc. The actual value of data is growing exponentially, so protecting that value is another aspect of the problem. One of the stars of this story coined the term “data is the new oil”. There’s a lot that goes with that. You can dig into more details about data security. Just remember that FHE is the most bullet-proof way to protect data. Even quantum computing can’t break it.

An Evolving Collaboration

Gordie Campbell founded Cornami years before Wally joined as CEO. Gordie’s story could occupy a white paper all by itself. Gordie founded and became chairman and CEO of CHIPS and Technologies, the world’s first fabless semiconductor company. That innovation drew the famous comment from Jerry Sanders of AMD, telling Gordie publicly that “real men have fabs”. The rest, as they say, is history.

Years later, Gordie and Wally’s paths would cross. The reason had to do with software discounts. You’ll need to read the white paper to learn more. After Wally left Mentor/Siemens, he was doing some work for the Defense Advanced Research Projects Agency (DARPA). The agency had a particular interest in, you guessed it, FHE. The Department of Defense (DoD) was convinced this was a critical technology for the future. After a lot of research, Wally concluded that no one was working on FHE. It would be at least ten years before anything remotely resembling a solution could be in hand.

How did Wally go from “no way” to CEO of Cornami? There are other star performers involved. You’ll have to read the white paper.

Smart Money

My overview will end with Eric Chen of Softbank. Eric led the most recent funding round for Cornami. Eric has a background in physics and a PhD in electrical engineering from Stanford University. He is first and foremost a technologist. He has fueled that passion as both an entrepreneur and an investor. Way before meeting with Gordie and Wally to discuss Cornami’s funding round, Eric had done substantial research on data, its impact on the economy and how to protect it. He knew a lot about, you guessed it, FHE. Why did he invest?  You’ll have to read the white paper to find out.

To Learn More

That’s a quick synopsis of a rather engaging white paper on what’s involved in conquering the impossible. I highly recommend it. You can get your copy here.

Also read:

CEO Interview: Wally Rhines of Cornami

I Have Seen the Future – Cornami’s TruStream Computational Fabric Changes Computing

Podcast EP65: Trust But Verify – The Backstory of Applied Materials and Cornami with Wally Rhines


2021 Retrospective. Innovation in Verification

by Bernard Murphy on 01-19-2022 at 6:00 am


As we established last year, we will use the January issue of this blog to look back at the papers we reviewed last year. We lost Jim Hogan and the benefit of his insight early last year, but we gained a new and also well-known expert in Raúl Camposano (another friend of Jim). Paul (GM, Verification at Cadence), Raúl (Silicon Catalyst, entrepreneur, former Synopsys CTO) and I are ready to continue this series through 2022 and beyond. As always, feedback welcome.

The 2021 Picks

These are the blogs in order, January to December. All got good hits. The hottest of all was the retrospective, suggesting to me that you too wanted to know what others found most interesting 😀. This year, “Finding Large Coverage Holes” and “Agile and Verification” stood out, followed by “Side Channel Analysis” and “Instrumenting Post Silicon Validation”. Pretty good indicators of where you are looking for new ideas.

2020 Retrospective

Finding Large Coverage Holes

Reducing Compile Time in Emulation

Agile and Verification, Validation

Fuzzing to Validate SoC Security

Neural Nets and CR Testing

Instrumenting Post-Silicon Validation

Side Channel Analysis at RTL

An ISA-like Accelerator Abstraction

Memory Consistency Checks at RTL

Learning-Based Power Modeling

Scalable Concolic Testing

Paul’s view

I am really enjoying this blog; I can’t believe it’s been 2 years already. It is amazing to me how Bernard seems to find something new and interesting every month. Our intention when we launched this blog was just to share and appreciate interesting research, but in practice the papers have directly influenced Cadence’s roadmap in verification. Which I think is the ultimate show of appreciation.

The biggest theme I saw in our 2021 blogs was raising abstraction. As has been the case for the last 30 years, this continues to be the biggest lever to improve productivity. Although I should probably qualify that as domain-specific abstraction. Historically, abstractions have been independent of application – polygon to gate to netlist to RTL. Now the abstractions are often fragmenting – ISA to ILA for accelerator verification in the September blog. Mapping high-level behavioral axioms to SystemVerilog for memory consistency verification in the October blog. Verilog to Chisel for agile CPU verification in the April blog. Assertions generalizing over sets of simulations for security verification in the May blog. And then of course, some abstractions continued to be domain-agnostic: gate-level to C++ for system-level power modeling in the November blog. Coverage to text tagging in the February blog.

The other theme which continued to shine through is how innovation emerges at intersections of different skills and perspectives. The February blog on leveraging document classification algorithms to find coverage holes is one great example this year. Early ML methods from the 1980s were rediscovered and reapplied to CPU verification in the June blog. Game theory was used to optimize FPGA compile times in emulation in the March blog. It’s been great to see Bernard take this principle into our own paper selection this year, in a few months diverting away from “functional verification” into topics like power, security, and electrical bugs. It’s helping us do our own connecting of dots between different domains.

Looking forward to continuing our random walk through verification again this year!

Raúl’s view

Without focusing on any particular area, from June to December we touched on many interesting topics in verification. The two most popular ones were Embedded Logic to Detect Flipped Flops (hardware errors) and Assessing Power Side-Channel Leakage at the RTL Level. Another RTL-level paper dealt with memory consistency. At an even higher level, we looked at Instruction-Level Abstractions for verification. We also had the obligatory papers on ML/NN, one to generate better pseudo-random tests, the other to build accurate power models of IP. Finally, our December pick on concolic testing to reach hard-to-activate branches also deals with increasing test coverage.

One of the areas we focus on in this blog is marketability; methodology papers, foundational papers, extensions of existing approaches, and overly small niches all fail to qualify, for different reasons. This has of course little to do with technical merit. Some of the presented research is ripe for adoption, e.g., the use of ML/NN to improve different tasks in EDA. A few are around methodology, e.g., an emulation infrastructure; some are more foundational, such as higher-level abstractions. Others are interesting niches, for example side-channel leakage. But it is all worthy research, and reading the papers was time well spent!

My view

We three had a lively discussion on what principle (if any) I am following in choosing papers. Published in a major forum, certainly. As Paul says, it has been something of a random walk through topics. I’d like to get suggestions from readers to guide our picks. Based on hits there are a lot of you, but you are evidently shy about sharing your ideas. Maybe a private email to me would be easier – info@findthestory.net.

  • I’m especially interested in hard technical problems you are facing constantly
  • If you can (not required), provide a reference to a paper on the topic. This could be published in any forum.
  • I’m not as interested in solved problems – how you used some vendor tool to make something work in your verification flow. Unless you think your example exhibits some fundamentally useful capability that can be generalized beyond your application.

Meantime we will continue our random walk, augmented by themes we hear continue to be very topical – coherency checking, security, abstraction.

Also Read

Methodology for Aging-Aware Static Timing Analysis

Scalable Concolic Testing. Innovation in Verification

More Than Moore and Charting the Path Beyond 3nm


ESDA Reports Double-Digit Q3 2021 YOY Growth and EDA Finally Gets the Respect it Deserves

by Mike Gianfagna on 01-18-2022 at 10:00 am


It’s that time again. ESDA has recently released the numbers for Q3 2021. Industry revenue increased 17.1% year-over-year, from $2,953.9 million to $3,458.1 million, in Q3 2021. The four-quarter moving average, which compares the most recent four quarters to the prior four, rose 16.1%. Further, the companies tracked in the report employed 51,182 people globally in Q3 2021, an 8.7% increase over the Q3 2020 headcount of 47,087 and up 2.4% compared to Q2 2021. Another banner quarter and another upbeat outlook. The numbers also roll up in an interesting way. Read on to see how EDA finally gets the respect it deserves.
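As a quick sanity check on the headline arithmetic, using only the figures quoted above:

```python
def yoy_growth(current, prior):
    # Year-over-year percentage change between two reported figures
    return (current / prior - 1) * 100

rev_growth = yoy_growth(3458.1, 2953.9)   # Q3 2021 vs Q3 2020 revenue, $M
hc_growth = yoy_growth(51182, 47087)      # Q3 2021 vs Q3 2020 headcount
print(round(rev_growth, 1), round(hc_growth, 1))   # matches the reported 17.1 and 8.7
```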

A Look at the Numbers

This report reminded me of last quarter’s report. According to Wally Rhines, executive sponsor of the SEMI Electronic Design Market Data report, “Geographically, all regions reported double-digit growth, with product categories CAE, Printed Circuit Board and Multi-Chip Module, SIP, and Services also showing double-digit growth.”

Revenue by Product and Application Category – Year-Over-Year Change

  • CAE revenue increased 13.7% to $1,054.7 million. The four-quarter CAE moving average increased 11.8%.
  • IC Physical Design and Verification revenue increased 0.7% to $612.6 million. The four-quarter moving average for the category rose 16%.
  • Printed Circuit Board and Multi-Chip Module (PCB and MCM) revenue increased 14.5% to $298.3 million. The four-quarter moving average for PCB and MCM increased 10.9%.
  • SIP revenue rose 30.6% to $1,373.3 million. The four-quarter SIP moving average grew 22.1%.
  • Services revenue increased 12.5% to $119.1 million. The four-quarter Services moving average increased 9.2%.

Revenue by Region – Year-Over-Year Change

  • The Americas, the largest reporting region by revenue, purchased $1,494.5 million of electronic system design products and services in Q3 2021, a 14.3% increase. The four-quarter moving average for the Americas rose 15.8%.
  • Europe, Middle East, and Africa (EMEA) revenue increased 22.6% to $451.7 million. The four-quarter moving average for EMEA grew 11.9%.
  • Japan revenue increased 11.8% to $259.8 million. The four-quarter moving average for Japan rose 3.3%.
  • Asia Pacific (APAC) revenue increased 19.7% to $1,252.1 million. The four-quarter moving average for APAC increased 21.3%.

A Look Behind the Numbers

Dan and I had the opportunity to chat with Wally about the report. Wally always has vast amounts of information and deep perspectives on the data. This time was no different. What are some trends that are noteworthy?

First of all, emulation had a very strong showing at 32% growth. This reflects the complexity of the designs being done. Highly complex designs need hardware/software verification, and emulation is the best approach for that. The types of companies driving this growth are noteworthy as well. We have seen many system companies in the information processing arena enter the semiconductor market as new “chip companies”.

Wally pointed out that we’re now seeing the same thing happen in the automotive sector. Of course, Tesla has always been there. But now, companies like VW, Hyundai, Ford, and GM (among others) have all announced their intention of becoming more active in chip design. These are large entities with potentially big appetites for silicon.

We asked Wally if there were any weak spots of note. IC layout was a bit weak this quarter, as were services. When you look at the rolling averages, however, none of that has a large impact. IP was also noted as a large segment with excellent results. So, the general view is that the train continues to run strong and fast.

A final note. Wally sees EDA approaching a $14B industry. For many years, his view was that if you took two percent of semiconductor revenue, that would be EDA revenue. With semiconductor revenue north of $550B, we can clearly see EDA’s percentage growing beyond 2 percent. So perhaps EDA finally gets the respect it deserves.

For information about SEMI market research reports, visit the SEMI Market Research Reports and Databases Catalog.

Also read:

ESD Alliance Reports Double-Digit Growth – The Hits Just Keep Coming

Is EDA Growth Unstoppable?

The Juggernaut Continues as ESD Alliance Reports Record Revenue Growth for Q4 2020


Unlocking the Future with Robotic Precision and Human Creativity

by Mike Gianfagna on 01-18-2022 at 6:00 am


From the perspective of all recorded human history, the last 300 years (a blink on that time scale) have seen incredible, life-changing and world-changing advances. Water- and steam-driven machines first showed up in the 1700s; this is often called Industry 1.0. Powered assembly lines in the late 1800s became Industry 2.0. Automation and computers started Industry 3.0 in the late 1960s. Artificial Intelligence (AI) is now driving Industry 4.0. This timeline should be familiar to most; a lot has been written about it. What is interesting is what comes next. That is the topic of a recent white paper from Achronix. I found the perspective offered in this piece to be quite enlightening and fresh. Without disclosing too much of its content, let’s examine how robotic precision and human creativity will write the next chapter.

How Does AI Fit?

AI is a rather broad term. It refers to a branch of computer science that aims to emulate human behavior algorithmically. Focusing a bit more, we see a lot of references to machine learning (ML), a subset of AI that uses statistical models derived from data. Focusing further, deep learning (DL) utilizes neural networks to perform inferencing. These systems can also be adaptive, i.e., they can learn.

Achronix’s white paper treats the various regimes of AI and puts them in perspective regarding a timeline of innovation. As we approach the present day, an important observation/prediction is made.

Adaptive AI algorithms, especially DL algorithms, will not only learn on their own, but also interpret real-time inputs from human beings. This ability to adapt in real time with minimal latency will be essential.

This observation puts a great perspective on AI going forward. It’s not about replacing humans. It’s about leveraging their insights to create better results.

Keeping Up with the Data

Achronix then discusses the environment, ecosystem and technology needed for successful AI deployment. We all know that 5G networks and pervasive IoT devices are creating an explosion of data. Keeping up with the processing demands of that data is the key to success. But performing this rather daunting task in a commercially viable way is far easier said than done.

Throwing more servers at the problem is one approach. This has been the plan of record for many years. One size doesn’t fit all, and this approach will drive CAPEX and OPEX so high that commercial viability will no longer be within reach. Specialization is the key to taming this challenge. Enter data accelerators. The white paper points out that, depending on the data accelerator type and the workload, the computational ability of a single data accelerator on one server can do the work of as many as 15 servers, drastically cutting down the CAPEX and OPEX.
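The consolidation claim is easy to model. A minimal sketch follows, where the 15:1 replacement ratio comes from the white paper; the baseline server count and the per-server costs are purely illustrative assumptions:

```python
import math

# Illustrative consolidation model. The white paper's claim: one data
# accelerator on one server can do the work of as many as 15 servers.
# All cost and count figures below are hypothetical, for illustration only.
baseline_servers = 300        # CPU-only servers needed for a workload (assumed)
consolidation_ratio = 15      # servers replaced per accelerator (from the paper)
server_cost = 10_000          # CPU-only server CAPEX, USD (assumed)
accel_server_cost = 25_000    # accelerator-equipped server CAPEX, USD (assumed)

# Servers needed after adding accelerators, rounded up to whole machines.
accel_servers = math.ceil(baseline_servers / consolidation_ratio)
baseline_capex = baseline_servers * server_cost
accel_capex = accel_servers * accel_server_cost

print(f"Accelerated servers needed: {accel_servers}")
print(f"CAPEX: ${baseline_capex:,} -> ${accel_capex:,}")
```

Even with a much higher per-server price for the accelerated machines, the fleet shrinks so dramatically that both CAPEX and the ongoing OPEX (power, cooling, rack space) fall sharply.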

This is clearly the way forward. If we consider CPUs as the baseline approach to data crunching, there exist three architectures to take us to the next level: GPU, FPGA and ASIC. Each of these approaches occupies a spectrum of programmability, customizability and cost, as shown in the figure below.

Data Accelerator Architectures

A Surprise Ending

Here is where the white paper takes a very interesting turn. What if you could achieve the efficiency and optimality of an ASIC with an off-the-shelf device, costing far less? It turns out Achronix is delivering on this dream. A combination of FPGAs, embedded FPGA IP, a unique 2D network on chip (NoC) architecture and a large array of very high-speed interfaces put this goal within reach.

I learned a lot about AI, its deployment and some unique ways to implement it in this white paper from Achronix. If AI is part of your next design project, I highly recommend reading it. You can get your copy here. You’ll learn how to implement AI in a cost-effective manner. You’ll also learn what the future may look like as robotic precision and human creativity write the next chapter.


DAC 2021 – Embedded FPGA IP from Menta

DAC 2021 – Embedded FPGA IP from Menta
by Daniel Payne on 01-17-2022 at 10:00 am

Menta min

I’ve followed the enthusiastic market acceptance of FPGA chips over the decades; even large semiconductor companies have bought in, with Intel acquiring Altera and AMD now trying to acquire Xilinx. The idea of field-programmable logic makes a lot of sense for use in systems designs today, and it was inevitable that a company like Menta would offer both soft and hard IP for embedding an FPGA into an ASIC design. At the #58thDAC I met with Yoan Dupret, the Managing Director & CTO at Menta, who has also done stints at Altis Semiconductor, Infineon Technologies, CSR, Samsung and DelfMEMS.

Menta at #58thDAC

Q&A

Q: The FPGA architecture has been around for a while, so are there any patent issues with your approach?

A: Patents aren’t much of an issue now; we have already applied for patents covering our pure standard-cell IP approach.

Q: What is the embedded FPGA design flow like for your customers?

A: The design flow is a standard ASIC flow, and we provide setup and guidelines to make regular structures for the best density. With our soft IP approach to embedded FPGAs, we train and guide our customers to be most successful.

Q: How did Menta get started?

A: Our company started back in 2007 as a university spin-out. Two generations of IP have already been completed. We started out privately financed; then in 2015 a new investor came in and we changed to a standard-cell approach.

Q: Where can your embedded FPGAs be used?

A: With our soft IP product any foundry with standard cells can be used.

Q: What kind of customers are using Menta technology?

A: We have multiple space and defense customers in the US and Europe – since 2015, in fact – and also edge customers, including 5G.

Q: Do you sell IP generators or instances of FPGAs?

A: We don’t sell IP generators, although we do use our own internal generators, then deliver the FPGA instances. We also help our customers to use the IP in their SoC. Menta has chip architects in house to advise customers.

Q: When did you announce the eFPGA as soft IP?

A: We just announced our eFPGA as soft IP this week at DAC.

Q: What is the process to get an eFPGA as hard IP?

A: For hard IP we start out with the specification, and the time to reach a physical implementation is about 1-5 months, depending on the foundry and the size of the FPGA.

Q: How many process nodes have you used for eFPGA IP?

A: We’ve implemented our IP on more than 10 technologies so far, ranging from 180nm to 12nm, with even smaller nodes now in progress. We have a custom IP delivery model, so our technology can be tailored to fit the requirements most efficiently by combining LUTs, DSPs, multipliers, adders, filters and memory inside an FPGA instance.

eFPGA

Q: How long does it take to get an eFPGA using soft IP?

A: With our new soft IP technology, we can go from IP spec in to verified RTL out in just a few days.

Q: What are some typical application areas?

A: Anywhere that the electronic specifications or standards are still changing, like 5G, AI, cryptography and telecom. With the RISC-V community, they always want ISA extensions. Even a micro-controller chip can be adapted with new functions by using an eFPGA.

Q: Where is Menta physically located?

A: We have three offices: France, New York and Armenia. Our company is in growth mode, so I expect our staff to double in the next 6 months.

Q: Who are you partnered with at DAC this year?

A: At this DAC we have several in-booth partners (Codasip, Andes, Secure IC). At the IP Track session we presented with Secure IC. There’s a poster session with Andes and Codasip. We also brought a demo board in our booth, which is running an image-filtering algorithm.

Q: What makes your eFPGA soft IP different?

A: The fact that it is soft IP already makes a large difference. We also completely own our software, which allows our customers to integrate and redistribute it within their SDK. Yield, reliability, test, the flexibility to deliver on any node, and delivery time are all differentiators.

Q: How experienced is the management team at Menta?

A: There’s an average of 22 years of experience within the management team at Menta.

Q: How long have you been attending DAC?

A: I’ve been attending DAC for about 10 years now, and I joined Menta in 2016.

Q: Where did the name Menta come from?

A: There is a book and movie called Dune; part of the plot features people called Mentats, who perform logic, computation and cognitive thinking.

Related Blogs


CMOS Forever?

CMOS Forever?
by Asen Asenov on 01-16-2022 at 6:00 am

CMOS Forever

Today, CMOS chip manufacturing is the pinnacle of human technology, defining the economy, society and perhaps us as modern humans. This was highlighted by the recent chip shortage, followed by the ‘shocking’ realization that more than 80% of all chips are manufactured in the Far East.

Western governments need to take important decisions regarding the future of CMOS technology. When contemplating such decisions, some of the ‘post-CMOS’ or ‘beyond-CMOS’ mythology from the recent past needs to be re-examined.

Things were looking good for CMOS technology development in the West at the beginning of the century, with Intel leading advanced CMOS technology by two generations and Europe making significant contributions led by ST and LETI and complemented by IMEC. In such circumstances it was easy to take wrong strategic decisions. For example, the NSF in the US decided that the semiconductor industry no longer needed academic research support and that the time had come to move on to the next ‘big thing’. This thinking suited the UK very well too, since by that time we had already lost advanced CMOS manufacturing anyway. The notion that the UK would compensate for this loss by inventing the next ‘big thing’ was politically appealing and financially manageable.

Hence, CMOS was de-prioritized by EPSRC, and calls for proposals for the next ‘big thing’ in the ‘post-CMOS’ era started to appear. In no particular order, these included carbon nanotubes, graphene, 2D materials, and various incarnations of ‘quantum’, including quantum computing. Despite the fascinating intellectual challenges associated with the corresponding research, the realization that none of these has the potential to replace CMOS technology is slowly sinking in.

This is nothing new for the big chip manufacturers, including Intel, Samsung and TSMC. Until 2013 the International Technology Roadmap for Semiconductors (ITRS) was ‘the bible’ of the semiconductor industry. With every new ITRS edition, released every two years, everybody read the emerging technology section first. My takeaway from reading this section was that nothing ‘emerging’ on the horizon was capable of replacing CMOS. Not surprisingly, the investments of the biggest semiconductor players in ‘post-CMOS’ technologies are a minute fraction of their CMOS R&D budgets.

In my humble opinion, there is still nothing on the horizon today with the potential to replace CMOS. However, all the evidence suggests the maturing and consolidation of the semiconductor industry. Nothing new with this either: just look at the history of the commercial aircraft industry. After approximately 80 years of rapid technology development, it is now a mature industry with only Boeing and Airbus remaining as major players, one in the US and the other in Europe. Unfortunately, in the semiconductor industry’s case, most of the potential winners in the CMOS end game are in the Far East, with China emerging as a strong contender.

Asen Asenov (FIEEE, FRSE) is the James Watt Professor in Electrical Engineering and the leader of the renowned Glasgow Device Modelling Group. He directs the development of quantum, Monte Carlo and classical models and tools and their application in the design of advanced and novel CMOS devices. He was also the founder and CEO of Gold Standard Simulations (GSS) Ltd., acquired in 2016 by Synopsys. He is currently also CEO of Semiwise, a semiconductor IP and services company, and a director of Surecore and Ngenics.


Podcast EP57: A Perspective of 2021 and 2022 with Malcolm Penn

Podcast EP57: A Perspective of 2021 and 2022 with Malcolm Penn
by Daniel Nenni on 01-14-2022 at 10:00 am

Dan is joined by Malcolm Penn, long-term semiconductor industry veteran and founder of Future Horizons. Dan and Malcolm review their last discussion on 2021 forecasts, which produced aggressive numbers many said were too optimistic. Their predictions turned out to be on the mark.

They also explore 2022 – what will this year look like, and what will be the drivers and risks going forward? Malcolm also mentions his yearly forecast event, coming up on January 18. He has graciously offered a 40% discount on the event ticket for SemiWiki subscribers. You can register for the event here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.