2021 Retrospective. Innovation in Verification
by Bernard Murphy on 01-19-2022 at 6:00 am

As we established a year ago, we use the January issue of this blog to look back at the papers we reviewed over the past year. We lost Jim Hogan and the benefit of his insight early last year, but we gained a new and also well-known expert in Raúl Camposano (another friend of Jim). Paul (GM, Verification at Cadence), Raúl (Silicon Catalyst, entrepreneur, former Synopsys CTO) and I are ready to continue this series through 2022 and beyond. As always, feedback is welcome.

The 2021 Picks

These are the blogs in order, January to December. All got good hits. The hottest of all was the retrospective, suggesting to me that you too wanted to know what others found most interesting 😀. This year, “Finding Large Coverage Holes” and “Agile and Verification” stood out, followed by “Side Channel Analysis” and “Instrumenting Post Silicon Validation”. Pretty good indicators of where you are looking for new ideas.

2020 Retrospective

Finding Large Coverage Holes

Reducing Compile Time in Emulation

Agile and Verification, Validation

Fuzzing to Validate SoC Security

Neural Nets and CR Testing

Instrumenting Post-Silicon Validation

Side Channel Analysis at RTL

An ISA-like Accelerator Abstraction

Memory Consistency Checks at RTL

Learning-Based Power Modeling

Scalable Concolic Testing

Paul’s view

I am really enjoying this blog; I can’t believe it’s been two years already. It is amazing to me how Bernard seems to find something new and interesting every month. Our intention when we launched this blog was just to share and appreciate interesting research, but in practice the papers have directly influenced Cadence’s verification roadmap, which I think is the ultimate show of appreciation.

The biggest theme I saw in our 2021 blogs was raising abstraction. As has been the case for the last 30 years, this continues to be the biggest lever to improve productivity, although I should probably qualify that as domain-specific abstraction. Historically, abstractions have been independent of application – polygon to gate to netlist to RTL. Now the abstractions are often fragmenting: ISA to ILA for accelerator verification in the September blog. Mapping high-level behavioral axioms to SystemVerilog for memory consistency verification in the October blog. Verilog to Chisel for agile CPU verification in the April blog. Assertions generalizing over sets of simulations for security verification in the May blog. And then of course, some abstractions continued to be domain-agnostic: gate-level to C++ for system-level power modeling in the November blog. Coverage to text tagging in the February blog.

The other theme which continued to shine through is how innovation emerges at intersections of different skills and perspectives. The February blog on leveraging document classification algorithms to find coverage holes is one great example this year; a sketch of that flavor of idea follows below. Early ML methods from the 1980s were rediscovered and reapplied to CPU verification in the June blog. Game theory was used to optimize FPGA compile times in emulation in the March blog. It’s been great to see Bernard take this principle into our own paper selection this year, diverting in a few months away from “functional verification” into topics like power, security, and electrical bugs. It’s helping us do our own connecting of dots between different domains.
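To give a flavor of that intersection (this is emphatically not the February paper’s algorithm, and the coverage points below are made up), here is a minimal Python sketch of applying off-the-shelf text clustering to coverage-point descriptions and flagging mostly-unhit clusters as candidate holes:

```python
# Not the paper's actual method -- just the general flavor: treat
# coverage-point descriptions as documents, embed them, cluster them,
# and look for clusters whose members are mostly unhit.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

cover_points = {  # hypothetical (description, hit) pairs
    "fifo overflow during burst write": True,
    "fifo underflow during burst read": True,
    "cache eviction while snoop pending": False,
    "cache fill while snoop pending": False,
    "cache writeback while snoop pending": False,
    "reset during dma transfer": True,
}
docs = list(cover_points)
X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for c in set(labels):
    members = [d for d, l in zip(docs, labels) if l == c]
    hit_rate = sum(cover_points[d] for d in members) / len(members)
    if hit_rate < 0.5:
        print("possible coverage hole around:", members)
```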

Looking forward to continuing our random walk through verification again this year!

Raúl’s view

Without focusing on any particular area, from June to December we touched on many interesting topics in verification. The two most popular were Embedded Logic to Detect Flipped Flops (hardware errors) and Assessing Power Side-Channel Leakage at the RTL Level. Another RTL-level paper dealt with memory consistency. At an even higher level, we looked at instruction-level abstractions for verification. We also had the obligatory papers on ML/NN, one to generate better pseudo-random tests, the other to build accurate power models of IP. Finally, our December pick on concolic testing to reach hard-to-activate branches also deals with increasing test coverage.

One of the criteria we focus on in this blog is marketability; methodology papers, foundational papers, extensions of existing approaches and overly narrow niches all fail to qualify, for different reasons. This has of course little to do with technical merit. Some of the presented research is ripe for adoption, e.g., the use of ML/NN to improve different tasks in EDA. A few are around methodology, e.g., an emulation infrastructure; some are more foundational, such as higher-level abstractions. Others are interesting niches, for example side-channel leakage. But all of it is worthy research, and reading the papers was time well spent!

My view

We three had a lively discussion on what principle (if any) I am following in choosing papers. Published in a major forum, certainly. As Paul says, it has been something of a random walk through topics. I’d like to get suggestions from readers to guide our picks. Based on hits there are a lot of you, but you are evidently shy about sharing your ideas. Maybe a private email to me would be easier – info@findthestory.net.

  • I’m especially interested in hard technical problems you face constantly.
  • If you can (not required), provide a reference to a paper on the topic, published in any forum.
  • I’m less interested in solved problems – how you used some vendor tool to make something work in your verification flow – unless you think your example exhibits some fundamentally useful capability that can be generalized beyond your application.

Meantime we will continue our random walk, augmented by themes we hear continue to be very topical: coherency checking, security and abstraction.

Also Read

Methodology for Aging-Aware Static Timing Analysis

Scalable Concolic Testing. Innovation in Verification

More Than Moore and Charting the Path Beyond 3nm


ESDA Reports Double-Digit Q3 2021 YOY Growth and EDA Finally Gets the Respect it Deserves
by Mike Gianfagna on 01-18-2022 at 10:00 am


It’s that time again. ESDA has recently released the numbers for Q3 2021. Industry revenue increased 17.1% year-over-year, from $2,953.9 million to $3,458.1 million. The four-quarter moving average, which compares the most recent four quarters to the prior four, rose 16.1%. Further, the companies tracked in the report employed 51,182 people globally in Q3 2021, an 8.7% increase over the Q3 2020 headcount of 47,087 and up 2.4% compared to Q2 2021. Another banner quarter and another upbeat outlook. The numbers also roll up in an interesting way. Read on to see how EDA finally gets the respect it deserves.
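The arithmetic behind these headline figures is easy to check. Here is a quick Python sanity check of the year-over-year number, plus an illustration of how the four-quarter moving-average comparison works (the quarterly figures in the second part are hypothetical, for illustration only):

```python
# Sanity check of the headline figures (revenue in $M):
q3_2020, q3_2021 = 2953.9, 3458.1
print(f"YoY growth: {(q3_2021 / q3_2020 - 1) * 100:.1f}%")   # -> 17.1%

# The four-quarter moving average compares the trailing four quarters
# to the four before them. With hypothetical quarterly figures:
last_four = [3100.0, 3200.0, 3300.0, 3458.1]
prior_four = [2700.0, 2800.0, 2900.0, 2953.9]
print(f"4Q avg growth: {(sum(last_four) / sum(prior_four) - 1) * 100:.1f}%")
```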

A Look at the Numbers

This report reminded me of last quarter’s. According to Wally Rhines, executive sponsor of the SEMI Electronic Design Market Data report, “Geographically, all regions reported double-digit growth, with product categories CAE, Printed Circuit Board and Multi-Chip Module, SIP, and Services also showing double-digit growth.”

Revenue by Product and Application Category – Year-Over-Year Change

  • CAE revenue increased 13.7% to $1,054.7 million. The four-quarter CAE moving average increased 11.8%.
  • IC Physical Design and Verification revenue increased 0.7% to $612.6 million. The four-quarter moving average for the category rose 16%.
  • Printed Circuit Board and Multi-Chip Module (PCB and MCM) revenue increased 14.5% to $298.3 million. The four-quarter moving average for PCB and MCM increased 10.9%.
  • SIP revenue rose 30.6% to $1,373.3 million. The four-quarter SIP moving average grew 22.1%.
  • Services revenue increased 12.5% to $119.1 million. The four-quarter Services moving average increased 9.2%.

Revenue by Region – Year-Over-Year Change

  • The Americas, the largest reporting region by revenue, purchased $1,494.5 million of electronic system design products and services in Q3 2021, a 14.3% increase. The four-quarter moving average for the Americas rose 15.8%.
  • Europe, Middle East, and Africa (EMEA) revenue increased 22.6% to $451.7 million. The four-quarter moving average for EMEA grew 11.9%.
  • Japan revenue increased 11.8% to $259.8 million. The four-quarter moving average for Japan rose 3.3%.
  • Asia Pacific (APAC) revenue increased 19.7% to $1,252.1 million. The four-quarter moving average for APAC increased 21.3%.

A Look Behind the Numbers

Dan and I had the opportunity to chat with Wally about the report. Wally always has vast amounts of information and deep perspectives on the data. This time was no different. What are some trends that are noteworthy?

First of all, emulation had a very strong showing at 32% growth. This reflects the complexity of the designs being done. Highly complex designs need hardware/software verification, and emulation is the best approach for that. The types of companies driving this growth are noteworthy as well. We have seen many system companies in the information processing arena enter the semiconductor market as new “chip companies”.

Wally pointed out that we’re now seeing the same thing happen in the automotive sector. Of course, Tesla has always been there. But now, companies like VW, Hyundai, Ford, and GM (among others) have all announced their intention of becoming more active in chip design. These are large entities with potentially big appetites for silicon.

We asked Wally if there were any weak spots of note. IC layout was a bit weak this quarter, as was services. When you look at the rolling averages, however, none of that has a large impact. IP was also noted as a large segment with excellent results. So, the general view is that the train continues to run strong and fast.

A final note: Wally sees EDA approaching a $14B industry. For many years, his view was that EDA revenue would be roughly two percent of semiconductor revenue. With semiconductor revenue north of $550B, $14B works out to about 2.5 percent, so we can clearly see EDA’s share growing beyond 2 percent. So perhaps EDA finally gets the respect it deserves.

For information about SEMI market research reports, visit the SEMI Market Research Reports and Databases Catalog.

Also read:

ESD Alliance Reports Double-Digit Growth – The Hits Just Keep Coming

Is EDA Growth Unstoppable?

The Juggernaut Continues as ESD Alliance Reports Record Revenue Growth for Q4 2020


Unlocking the Future with Robotic Precision and Human Creativity
by Mike Gianfagna on 01-18-2022 at 6:00 am

Against the scale of all recorded human history, the last 300 years (a blink on that time scale) have seen incredible, life-changing and world-changing advances. Water- and steam-driven machines first showed up in the 1700s; this is often called Industry 1.0. Powered assembly lines in the late 1800s became Industry 2.0. Automation and computers started Industry 3.0 in the late 1960s. Artificial intelligence (AI) is now driving Industry 4.0. This timeline should be familiar to most; a lot has been written about it. What is interesting is what comes next. That is the topic of a recent white paper from Achronix. I found the perspective offered in this piece to be quite enlightening and fresh. Without disclosing too much of its content, let’s examine how robotic precision and human creativity will write the next chapter.

How Does AI Fit?

AI is a rather broad term, referring to the branch of computer science that aims to emulate human behavior algorithmically. Focusing a bit more, we see a lot of references to machine learning (ML), a subset of AI that uses statistical models derived from data. Focusing further, deep learning (DL) utilizes neural networks to perform inferencing. These systems can also be adaptive, i.e., learn.
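For readers who like to see the hierarchy concretely, here is a minimal sketch of the “DL” layer of that taxonomy: a two-layer neural network doing pure inferencing. The weights are random stand-ins; a trained model would have learned them from data:

```python
import numpy as np

# A two-layer neural network doing inference (forward pass only) --
# the "DL" layer of the AI -> ML -> DL hierarchy described above.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # random stand-in weights
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def infer(x):
    h = np.maximum(0, W1 @ x + b1)        # hidden layer with ReLU
    logits = W2 @ h + b2
    z = logits - logits.max()             # stabilized softmax
    return np.exp(z) / np.exp(z).sum()    # class probabilities

print(infer(rng.normal(size=8)))          # e.g. probabilities over 3 classes
```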

Achronix’s white paper treats the various regimes of AI and puts them in perspective regarding a timeline of innovation. As we approach the present day, an important observation/prediction is made.

Adaptive AI algorithms, most certainly DL algorithms, will not only learn on their own, but also interpret real-time inputs from human beings. This ability to adapt in real time with minimal latency will be essential.

This observation puts a great perspective on AI going forward. It’s not about replacing humans. It’s about leveraging their insights to create better results.

Keeping Up with the Data

Achronix then discusses the environment, ecosystem and technology needed to build successful AI deployment. We all know that 5G networks and pervasive IoT devices are creating an explosion of data. Keeping up with the processing demands of that data is the key to success. But performing this rather daunting task in a commercially viable way is far easier said than done.

Throwing more servers at the problem is one approach. This has been the plan of record for many years. One size doesn’t fit all, and this approach will drive CAPEX and OPEX so high that commercial viability will no longer be within reach. Specialization is the key to taming this challenge. Enter data accelerators. The white paper points out that, depending on the data accelerator type and the workload, the computational ability of a single data accelerator on one server can do the work of as many as 15 servers, drastically cutting down the CAPEX and OPEX.

This is clearly the way forward. If we consider CPUs as the baseline approach to data crunching, there exist three architectures to take us to the next level: GPU, FPGA and ASIC. Each of these approaches occupies a spectrum of programmability, customizability and cost, as shown in the figure below.

Data Accelerator Architectures

A Surprise Ending

Here is where the white paper takes a very interesting turn. What if you could achieve the efficiency and optimality of an ASIC with an off-the-shelf device costing far less? It turns out Achronix is delivering on this dream. A combination of FPGAs, embedded FPGA IP, a unique 2D network-on-chip (NoC) architecture and a large array of very high-speed interfaces puts this goal within reach.

I learned a lot about AI, its deployment and some unique ways to implement it from this Achronix white paper. If AI is part of your next design project, I highly recommend reading it. You can get your copy here. You’ll learn how to implement AI in a cost-effective manner. You’ll also learn what the future may look like as robotic precision and human creativity write the next chapter.


DAC 2021 – Embedded FPGA IP from Menta
by Daniel Payne on 01-17-2022 at 10:00 am

I’ve followed the enthusiastic market acceptance of FPGA chips over the decades; even major semiconductor companies have bought in, with Intel acquiring Altera and AMD now acquiring Xilinx. The idea of field programmable logic makes a lot of sense in today’s system designs, and it was inevitable that a company like Menta would offer both soft and hard IP for embedding an FPGA into an ASIC design. At the #58thDAC I met with Yoan Dupret, Managing Director & CTO at Menta, who has also done stints at Altis Semiconductor, Infineon Technologies, CSR, Samsung and DelfMEMS.

Menta at #58thDAC

Q&A

Q: The FPGA architecture has been around for awhile, so are there any patent issues with your approach?

A: Patents aren’t much of an issue now; we have already applied for patents on our pure standard-cell IP approach.

Q: What is the embedded FPGA design flow like for your customers?

A: The design flow is a standard ASIC flow, and we provide setup and guidelines to make regular structures for the best density. With our soft IP approach to embedded FPGAs, we train and guide our customers to be most successful.

Q: How did Menta get started?

A: Our company started back in 2007 as a university spin-out. Two generations of IP have already been done; we started out privately financed, and then in 2015 a new investor came in and we changed to a standard-cell approach.

Q: Where can your embedded FPGAs be used?

A: With our soft IP product any foundry with standard cells can be used.

Q: What kind of customers are using Menta technology?

A: We have multiple Space & Defense customers in the US and Europe – indeed since 2015 – and also edge customers, including 5G.

Q: Do you sell IP generators or instances of FPGAs?

A: We don’t sell IP generators, although we do use our own internal generators, then deliver the FPGA instances. We also help our customers to use the IP in their SoC. Menta has chip architects in house to advise customers.

Q: When did you announce the eFPGA as soft IP?

A: We just announced our eFPGA as soft IP this week at DAC.

Q: What is the process to get an eFPGA as hard IP?

A: For hard IP we start with the specification, and the time to reach a physical implementation is about 1-5 months, depending on the foundry and the size of the FPGA.

Q: How many process nodes have you used for eFPGA IP?

A: We’ve implemented our IP on more than 10 technologies so far, ranging from 180nm to 12nm, with even smaller nodes now in progress. We have a custom IP delivery model. Our technology must be the most efficient fit for the requirements, combining LUTs, DSPs, multipliers, adders, filters and memory inside an FPGA instance.

eFPGA

Q: How long does it take to get an eFPGA using soft IP?

A: With our new soft IP technology, we can go from  IP Spec in, to RTL verified out in just a few days.

Q: What are some typical application areas?

A: Anywhere that the electronic specifications or standards are still changing, like 5G, AI, cryptography and telecom. With the RISC-V community, they always want ISA extensions. Even a micro-controller chip can be adapted with new functions by using an eFPGA.

Q: Where is Menta physically located?

A: We have three offices: France, New York and Armenia. Our company is in growth mode, so I expect our staff to double in the next 6 months.

Q: Who are you partnered with at DAC this year?

A: At this DAC we have several in booth partners (Codasip, Andes, Secure IC). At the IP Track session we presented with Secure IC. There’s a Poster session with Andes and Codasip. We also brought a demo board in our booth, which is running an algorithm and filtering images.

Q: What makes your eFPGA soft IP different?

A: The fact that it is soft IP already makes a large difference. We also completely own our software, which allows our customers to integrate and re-distribute it within their SDK. Yield, reliability, test, flexibility to deliver on any node, and delivery time are all differentiators.

Q: How experienced is the management team at Menta?

A: Our management team at Menta has an average of 22 years of experience.

Q: How long have you been attending DAC?

A: I’ve been attending DAC for about 10 years now, and I joined Menta in 2016.

Q: Where did the name Menta come from?

A: There is a book and movie called Dune, and part of the plot has people called Mentats, who performed logic, computing and cognitive thinking.


CMOS Forever?
by Asen Asenov on 01-16-2022 at 6:00 am

Today, CMOS chip manufacturing is the pinnacle of human technology, defining the economy, society and perhaps us as modern humans. This was highlighted by the recent chip shortage, followed by the ‘shocking’ realization that more than 80% of all chips are manufactured in the Far East.

Important decisions need to be taken by Western governments regarding the future of CMOS technology. When contemplating such decisions, some of the ‘post-CMOS’ or ‘beyond-CMOS’ mythology from the recent past needs to be re-examined.

Things were looking good for CMOS technology development in the West at the beginning of the century, with Intel leading advanced CMOS technology by two generations and Europe making significant contributions led by ST and LETI and complemented by IMEC. In such circumstances it was easy to take wrong strategic decisions. For example, the NSF in the US decided that the semiconductor industry no longer needed academic research support and that the time had come to move on to the next ‘big thing’. This thinking suited the UK very well too, as by that time we had already lost advanced CMOS manufacturing anyway. The notion that the UK would compensate for this loss by inventing the next ‘big thing’ was politically appealing and financially manageable.

Hence, CMOS was de-prioritized by EPSRC, and calls for proposals for the next ‘big thing’ in the ‘post-CMOS’ era started to appear. In no particular order, these included carbon nanotubes, graphene, 2D materials, and various incarnations of ‘quantum’, including quantum computing. Despite the fantastic intellectual challenges associated with the corresponding research, the realization that none of these has the potential to replace CMOS technology is slowly coming home.

This is nothing new for the big chip manufacturers, including Intel, Samsung and TSMC. Until 2013 the International Technology Roadmap for Semiconductors (ITRS) was ‘the bible’ of the semiconductor industry. In every new ITRS edition, published every two years, everybody read the emerging technology section first. My take from reading this section was that nothing was ‘emerging’ on the horizon capable of replacing CMOS. Not surprisingly, the investments of the biggest semiconductor players in ‘post-CMOS’ technologies are a minute fraction of their CMOS R&D budgets.

In my humble opinion, there is still nothing on the horizon today with the potential to replace CMOS. However, all the evidence suggests the maturing and consolidation of the semiconductor industry. Nothing new with this either; just look at the history of the aviation industry: after approximately 80 years of rapid technology development, it is now a mature industry with only Boeing and Airbus remaining as major players, one in the US and the other in Europe. Unfortunately, in the semiconductor industry’s case, most of the potential winners in the CMOS end game are in the Far East, with China emerging as a strong contender.

Asen Asenov (FIEEE, FRSE) is the James Watt Professor in Electrical Engineering and the leader of the renowned Glasgow Device Modelling Group. He directs the development of quantum, Monte Carlo and classical models and tools and their application in the design of advanced and novel CMOS devices. He was also founder and CEO of Gold Standard Simulations (GSS) Ltd., acquired in 2016 by Synopsys. He is currently also CEO of Semiwise, a semiconductor IP and services company, and a director of Surecore and Ngenics.


Podcast EP57: A Perspective of 2021 and 2022 with Malcolm Penn
by Daniel Nenni on 01-14-2022 at 10:00 am

Dan is joined by Malcolm Penn, long-term semiconductor industry veteran and founder of Future Horizons. Dan and Malcolm review their last discussion on 2021 forecasts, which produced aggressive numbers many said were too optimistic. Their predictions turned out to be on the mark.

They also explore the topic of 2022 – what will this year look like, and what will be the drivers and the risks going forward? Malcolm also mentions his yearly forecast event, coming up on January 18. He has graciously offered a 40% discount on the event ticket for SemiWiki subscribers. You can register for the event here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Webinar: Investing in Semiconductor Startups
by Mike Gianfagna on 01-14-2022 at 6:00 am

Investing in semiconductor startups is something Silicon Catalyst knows a lot about. During a time when venture funding for chip companies all but disappeared, this remarkable organization built a robust incubator, ecosystem, support infrastructure and funding source. Silicon Catalyst has assembled a top-notch management team and an extensive, world-class advisor network. You can learn more about this remarkable organization here. Silicon Catalyst also has a great track record for putting on compelling events with A-list participants. You can read SemiWiki coverage of their most recent event here. So, when Silicon Catalyst announces a webinar on investing in semiconductor startups, you must take notice.

Chips are Popular Again

It appears the rest of the world is now seeing what Silicon Catalyst saw all along. As stated by Silicon Catalyst:

Following a remarkable year of over 25% year on year growth, the global semiconductor industry is poised to experience strong growth in 2022. World-wide sales for this year are projected to reach in excess of $600 billion in what many are calling the golden era of semiconductors.

Chips are indeed hot again. Another interesting fact courtesy of Silicon Catalyst:

Nuvia is a great example, having taken their first round of money in April 2019 at a post-money valuation of $16 million and being acquired by Qualcomm in March of 2021 for over $1.2 billion.

Examples like this are truly remarkable. They also don’t happen every day. For every home run, there are many more failures. Understanding the trends and developing insight to spot the companies that are correctly leveraging those trends is the focus of the upcoming webinar. As usual, Silicon Catalyst has assembled an all-star cast to discuss these topics. We can all learn a lot from these folks, so I highly recommend you attend this event. More information is coming.

An All-Star Cast Weighs In

First, let’s look at the panel lineup. A stellar group from around the world.

Moderator

  • Cliff Hirsch, Semiconductor Times. Cliff has extreme depth and breadth in semiconductors and related technologies, communications, data/telecom network infrastructure, and open-source web technology. He has analyzed more than 4,000 private and public companies in the semiconductor and comm/IT space. Check out the latest news on semiconductor startups here.

Panelists

  • Rajeev Madhavan, North America, Clear Ventures. Rajeev is a founder and General Partner of Clear, where he focuses on early-stage technology investments. His notable career exits include Apigee (IPO), YuMe (IPO), Virident (acquired), Magma (IPO), Groupon (IPO), VxTel (acquired), LogicVision (IPO) and Ambit (acquired). Rajeev has the uncanny ability to deeply understand what entrepreneurs are trying to do, and to steer them onto a successful path. I know Rajeev. He truly has the golden touch.
  • Emily Meads, EU, Speedinvest. Emily passionately supports Deep Tech companies and the Deep Tech ecosystem, and always strives to give scientific credibility to the VC side of the table. Before joining Speedinvest, Emily worked at Fraunhofer IZM, as well as at a software engineering startup where she first caught the startup bug. She then worked at Spin Up Science, where she supported innovators on their Deep Tech commercialization journeys.
  • Dov Moran, Israel: Grove Venture Capital. Dov Moran is one of Israel’s most prominent hi-tech leaders, entrepreneurs and investors. He is known as a pioneer of several flash memory technologies, most notably as the inventor of the USB flash drive. Dov was a founder and CEO of M-Systems (NSDQ: FLSH), a world leader in the flash data storage market. Under Dov’s leadership, M-Systems grew to $1B revenue, and was acquired by SanDisk Corp (NSDQ: SNDK) for $1.6B.
  • Owen Metters, UK, Foresight Williams Technology Funds. Dr. Metters is an Investment Manager at Williams Advanced Engineering (WAE). He worked at Oxford University Innovation, the technology transfer organization for the University of Oxford, supporting academics in the commercialization of university IP and helping form several successful spin-out companies which have since raised over £20m of VC funding. He holds a PhD in Inorganic & Materials Chemistry from Bristol University.

And a special presentation: Semi Industry Trends and Market Opportunities for 2022, presented by:

  • Junko Yoshida, Editor in Chief, The Ojo-Yoshida Report. Junko has always been a “roving reporter” in the most literal sense. After logging 11 years of international experience at a Japanese consumer electronics company, Junko pursued journalism, breaking stories, securing exclusives, and filing incisive analyses from Tokyo, Silicon Valley, Paris, New York, and China. During her three decades at EE Times, Junko rose through the ranks from Tokyo correspondent to West Coast bureau chief, European bureau chief, news editor, and editor-in-chief.

I know Junko and I find this part quite exciting. She is someone who will always find the hidden truth in every story. Her insights are legendary. I can’t wait to hear her perspectives in her new role. She will be joined by Bolaji Ojo, Publisher and Managing Editor @The Ojo-Yoshida Report.

Junko has offered some comments about the upcoming event. Consider this a sample of what’s to come:

“Semiconductors are the lifeblood of today’s economy. It is pouring into every economic sector, at different speeds and vigor. This means there are huge investment opportunities yet to be tapped in semiconductors using new products and old ones that are finding new applications. Finding where to strategically put investment dollars in semiconductors should be a passion for every investor because this process will endure for a while. The Ojo-Yoshida Report identifies certain technology segments and market applications investors should be paying attention to.”

How to Attend the Webinar

The webinar will be held on Zoom and is open to the public. Attendees will be able to submit questions to the panel and they will be addressed as time permits.

January 19, 2022, 09:00 AM in Pacific Time (US and Canada)

You can register for the webinar on investing in semiconductor startups here.

Also Read:

Silicon Catalyst Hosts an All-Star Panel December 8th to Discuss What Happens Next?

Silicon Startups, Arm Yourself and Catalyze Your Success…. Spotlight: Semiconductor Conferences

WEBINAR: Maximizing Exit Valuations for Technology Companies


CES is Back – Partially
by Bill Jewell on 01-13-2022 at 2:00 pm

CES (formerly the Consumer Electronics Show) returned to Las Vegas, Nevada last week. In 2021, CES was remote due to the COVID-19 pandemic. On April 28, 2021, the Consumer Technology Association (CTA), the sponsor of CES, announced CES 2022 would be held in Las Vegas. On the date of the announcement new COVID cases in the U.S. were less than 60,000 per day. On the day CES 2022 opened, January 5, 2022, new COVID cases in the U.S. were over 700,000 per day as the new omicron variant spread rapidly. Nevertheless, the show went on with COVID protocols including proof of vaccination, wearing masks indoors, social distancing, and optional on-site testing.

CTA stated CES 2022 live attendance was over 45,000 people, about a quarter of the more than 175,000 attendees at the last live event, CES 2020. Over 2,300 companies exhibited at CES 2022, about half the 4,500 companies at CES 2020. We at Semiconductor Intelligence elected to attend CES 2022 virtually.

In conjunction with CES 2022, CTA released its forecast for U.S. consumer electronics in 2022. Total U.S. consumer electronics are projected at $293 billion, up 1.8% from 2021. Smartphones and computing are the two largest segments at about $75 billion. Video, Smart Home and Automotive are each in the $23B to $25B range.

Most categories of consumer electronics are expected to grow in the low- to mid-single-digit range in 2022. However, three emerging categories with high growth rates are virtual reality eyewear, connected exercise equipment and electric bikes.

At CES 2022, keynote presentations were given by Samsung Electronics, General Motors, and Abbott. Interestingly, only one of the three keynotes was from an electronics company.

Samsung Electronics’ keynote was led by Jong-Hee (JH) Han, Vice Chairman & CEO. The emphasis was not on products but on demonstrating commitment to the environment through a more eco-conscious product life cycle. Samsung plans to have zero standby power usage in its TVs and smartphones by 2025. Older smartphones will be repurposed for IoT applications. Samsung TVs will have solar powered remote controls to reduce battery waste.

Samsung did introduce some new products in its keynote. The Freestyle portable projector can be controlled with voice commands or wirelessly with a smartphone; it can project images up to 100 inches and includes a smart speaker. The Samsung Gaming Hub will offer access to video games directly from a Samsung smart TV. The Odyssey Ark is a curved 55-inch gaming monitor which can be oriented either horizontally or vertically. Samsung also created the Home Connectivity Alliance (HCA) with other appliance makers to increase interoperability between products, ensure safety and data security, and increase energy efficiency.

Samsung Freestyle Projector

Samsung Odyssey Ark Monitor

General Motors’ keynote address was led by chair and CEO Mary Barra. She stated GM is transforming from an automaker to a platform innovator through electrification, software-enabled services, and autonomous driving. GM will have 30 electric vehicle (EV) models by 2025, and all new vehicle models will be electric by 2035.

GMC Hummer EV Pickup

Abbott’s keynote was led by president and CEO Robert B. Ford. The keynote focused on electronic implants to improve health and health monitoring. Abbott’s Freestyle Libre glucose monitoring system uses a small sensor on the back of the arm and presents data on a smartphone app. Its Heartmate 3 heart pump is implanted as a blood pump for people with advanced heart failure. Abbott’s neuromodulation devices alter nerve activity through electrical impulses to treat movement disorders such as Parkinson’s disease. Abbott introduced Lingo, a line of bio-wearable devices to track glucose, ketones, lactate and alcohol in order to improve diets and athletic performance.

Abbott Lingo Biosensor

Pepcom’s Digital Experience at CES 2022 introduced many innovative products as shown below.

Labrador Systems demonstrated its Labrador Retriever personal robot which can help move large loads or deliver trays. It is controlled through voice commands or a smartphone app.

Labrador Retriever

In a sign of our times, the MaskFone includes built in earphones and a microphone to enable users to talk on their smartphones in public without removing their masks.

MaskFone

Altis introduced what it claims is the world’s first artificial intelligence (AI) personal trainer. The device consists of a soundbar-sized console which uses a computer vision neural network and an exercise science deep learning model to personally instruct the user.

Altis AI Personal Trainer

Vuzix Shield smart glasses are safety glasses which include video projectors, stereo sound, and noise-cancelling microphones. The Vuzix Shield glasses connect to smartphones and other devices using Wi-Fi and Bluetooth.

Vuzix Shield Smart Glasses

Hopefully CES 2023 can return to the scope of previous CES shows. Seeing in-person demonstrations of new consumer electronics is far preferable to watching videos. A live CES enables people to see, touch and even use many new devices and to talk directly to representatives of the companies. CES also brings worldwide media attention to the electronics industry.


It’s Now Time for Smart Clock Networks
by Tom Simon on 01-13-2022 at 10:00 am

By now most SoC designers are pretty familiar and comfortable with the use of Network on Chip (NOC) IP for interconnecting functional blocks. Looking at the underlying change that NOCs represent, we see the use of IP to supplant the use of tools for implementing a critical part of the design. The idea that ‘smart’ things are better than just structural implementation is a ubiquitous theme in our lives. Smart bulbs, smart appliances, smart thermostats, smart doorbells all make for better performance and functionality once the technology became available. The time has now come for on-chip clocking to take advantage of a smart approach through the use of IP and a new architecture to replace fixed clock trees and meshes found in previous generations of designs.

Clock networks have always been a challenging area for IC design. While they are often regarded as an unseen part of any design, they consume a significant percentage of chip power and take up considerable real estate. On top of this, they are a critical factor in proper chip functionality and performance. Yet for years they have been a neglected area in design flows. Movellus, a provider of clock solutions, is taking a fundamentally new approach to clock design. Instead of building a fixed clock tree out of unintelligent buffers, wires and PLLs, they use a set of intelligent IP blocks to handle the major issues encountered in clock design: skew, gating, OCV, power integrity and more.

The capabilities of the Movellus Maestro Clock Network solution are nicely summarized in a paper authored by Linley Gwennap, Principal Analyst, and Aakash Jani, Senior Analyst, of the Linley Group. The paper, titled “Movellus Maestro: An Intelligent Clock Network”, explains the motivation for applying an IP-based solution to clock generation and covers the benefits that result.

Historically, clock networks have been clock trees, meshes or a hybrid of the two. Each has its own advantages and trade-offs: clock trees tend to use less power but are subject to clock skew, while meshes reduce skew but come with an increase in power consumption. Maestro blends the two with the addition of intelligent IP that monitors skew caused by a variety of factors such as supply voltage fluctuations, OCV and temperature. A toy model of this trade-off follows below.

Movellus Maestro Clock Network
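To build some intuition for the tree-versus-mesh trade-off, here is a toy Monte Carlo model (all numbers are illustrative assumptions, not Movellus data): in a tree, each sink accumulates independent per-buffer delay variation, while an idealized mesh shorts drivers together so sinks mostly see the average delay:

```python
import numpy as np

rng = np.random.default_rng(0)
trials, sinks, stages = 10000, 64, 8
sigma = 5.0   # hypothetical per-buffer delay variation, ps

# Tree: each sink's delay accumulates independent per-stage variation
tree_delays = rng.normal(0, sigma, (trials, sinks, stages)).sum(axis=2)
tree_skew = tree_delays.max(axis=1) - tree_delays.min(axis=1)

# Mesh (idealized): shorting drivers together averages their variation,
# so each sink mostly sees the common average delay
mesh_common = tree_delays.mean(axis=1, keepdims=True)
mesh_delays = 0.9 * mesh_common + 0.1 * tree_delays
mesh_skew = mesh_delays.max(axis=1) - mesh_delays.min(axis=1)

print(f"mean tree skew: {tree_skew.mean():.1f} ps")
print(f"mean mesh skew: {mesh_skew.mean():.1f} ps")  # ~10x smaller here
```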

By virtually eliminating skew and PVT effects, higher operating frequencies can be obtained. Movellus cites examples where usable clock periods have increased by up to 44%, allowing much higher Fmax. Another phenomenon that Maestro manages is voltage droop when blocks are toggled on and off to conserve system power. Normally, as blocks are switched on, there is a latency period while the power rails recover from the additional load. The Maestro Adaptive Workload Module (AWM) reduces this latency by managing clock speeds, resulting in higher system performance.
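Here is a heavily simplified sketch of the adaptive-clocking idea (this is not the actual AWM algorithm; the voltage-to-frequency model and all numbers are assumptions): when the rail droops, run at the frequency the sagged voltage can still support instead of stalling until the rail recovers:

```python
# Toy model of adaptive clocking under a supply droop. Numbers illustrative.
v_nom, v_thresh, f_nom = 0.80, 0.35, 2.0e9   # volts, volts, Hz

def fmax(v):
    # Crude model: usable frequency roughly tracks the voltage overdrive
    return f_nom * (v - v_thresh) / (v_nom - v_thresh)

for v in [0.80, 0.74, 0.70, 0.76, 0.80]:     # a droop and its recovery
    print(f"V={v:.2f} V -> keep running at {fmax(v) / 1e9:.2f} GHz")
```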

Maestro reduces the effects of OCV and power jitter on the clock by constantly monitoring and adjusting the clock network. This is especially important at the near-threshold voltages found in IoT devices. With proper management of OCV and jitter, margins can be reduced to improve performance and power. Maestro also employs a clever system that distributes the operation of the clock subsystems across different phases, spreading out the simultaneous-switching IR impact of clock operation. This reduces overall power consumption and allows for improved performance.
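A toy model shows why spreading clock phases helps (the pulse shape and counts below are made up): staggering the edges of several clock regions across the cycle leaves total charge unchanged but sharply reduces peak current, and hence IR drop:

```python
import numpy as np

# Each clock region draws a triangular current pulse at its clock edge.
pulse = np.array([0, 2, 4, 2, 0], dtype=float)   # hypothetical pulse, amps
regions, period = 8, 40                          # samples per clock period

aligned = np.zeros(period)
staggered = np.zeros(period)
for i in range(regions):
    aligned[0:len(pulse)] += pulse               # all edges fire at t=0
    off = i * (period // regions)                # edges spread over the cycle
    staggered[off:off + len(pulse)] += pulse

print("peak current, aligned:  ", aligned.max())    # 8x the single pulse
print("peak current, staggered:", staggered.max())  # ~1x the single pulse
```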

The Linley paper covers additional details and other features of the Movellus Maestro Clock Network. It’s about time that clocks became an area for innovation. Traditionally, the major players in EDA have not devoted resources to radically rethinking this crucial component of all SoCs. In a way that is surprising, given the hugely important role that clock distribution plays. The paper is available as a download from the Movellus website.

Also Read:

Advantages of Large-Scale Synchronous Clocking Domains in AI Chip Designs

CEO Interview: Mo Faisal of Movellus

Performance, Power and Area (PPA) Benefits Through Intelligent Clock Networks


AI at the Edge No Longer Means Dumbed-Down AI
by Bernard Murphy on 01-13-2022 at 6:00 am

One aspect of received wisdom on AI has been that all the innovation starts in the big machine learning/training engines in the cloud, and that some of that innovation might eventually migrate in a reduced or limited form to the edge. In part this reflected the newness of the field. Perhaps in part it also reflected the need for prepackaged, one-size-fits-many solutions for IoT widgets, where designers wanted the smarts in their products but weren’t quite ready to become ML design experts. But now those designers are catching up. They read the same press releases and research we all do, as do their competitors. They want to take advantage of the same advances, while sticking to power and cost constraints.

Facial Recognition

AI differentiation at the edge

It’s all about differentiation within an acceptable cost/power envelope. That’s tough to get from pre-packaged solutions; competitors have access to the same solutions after all. What you really want is a set of algorithm options modeled in the processor as dedicated accelerators ready to be utilized, with the ability to layer on your own software-based value-add. You might think there can’t be much you can do here, outside of some admin and tuning. Times have changed. CEVA recently introduced their NeuPro-M embedded AI processor, which allows optimization using some of the latest ML advances, deep into algorithm design.

OK, so more control of the algorithm, but to what end? You want to optimize performance per watt, but the standard metric – TOPS/W – is too coarse. Imaging applications should be measured in frames per second (fps) per watt. For security applications, automotive safety, or drone collision avoidance, recognition times per frame are much more relevant than raw operations per second. So a platform like NeuPro-M, which in principle can deliver up to thousands of fps/W, will handle realistic rates of 30-60 frames per second at very low power. That’s a real advance on traditional pre-packaged AI solutions.
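The relationship between the two metrics is simple arithmetic, which is exactly why TOPS/W alone is too coarse: fps/W also depends on how many operations one frame costs and how much of the peak rating a real network actually sustains. A back-of-envelope sketch, with all numbers hypothetical:

```python
# Converting a raw TOPS/W rating into fps/W for a given network.
tops_per_watt = 10.0    # hypothetical accelerator efficiency rating
ops_per_frame = 5e9     # hypothetical ops for one inference pass
utilization = 0.4       # hypothetical fraction of peak actually achieved

fps_per_watt = (tops_per_watt * 1e12 * utilization) / ops_per_frame
print(f"{fps_per_watt:.0f} fps/W")   # ~800 fps/W under these assumptions
```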

Making it possible

Ultimate algorithms are built by dialing in the features you’ve read about, starting with a wide range of quantization options. The same applies to data-type diversity in activations and weights across a range of bit sizes. The neural multiplier unit (NMU) optimally supports multiple bit-width options for activations and weights, such as 8×2 or 16×4, and will also support variants like 8×10.
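As a generic illustration of mixed bit-width quantization (this is standard symmetric uniform quantization, not a model of CEVA’s NMU), here is a short Python sketch of an 8-bit-activation by 4-bit-weight matrix-vector product:

```python
import numpy as np

def quantize(x, n_bits):
    """Symmetric uniform quantization of a tensor to n_bits."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max() / qmax if np.abs(x).max() > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q.astype(np.int32), scale

w = np.random.randn(64, 64).astype(np.float32)
a = np.random.randn(64).astype(np.float32)

# Mixed precision: 8-bit activations x 4-bit weights
qa, sa = quantize(a, 8)
qw, sw = quantize(w, 4)

# Integer matmul, then rescale back to real values
y = (qw @ qa).astype(np.float32) * (sw * sa)
y_ref = w @ a
print("mean abs error:", np.abs(y - y_ref).mean())
```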

The processor supports Winograd transforms for efficient convolutions, providing up to a 2X performance gain and reduced power with limited precision degradation. Add the sparsity engine to the model for up to 4X acceleration, depending on the quantity of zero values (in either data or weights). Here, the neural multiplier unit also supports a range of data types: fixed point from 2×2 to 16×16, and floating point (including Bfloat) from 16×16 to 32×32.
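Winograd’s minimal filtering algorithm is public textbook material, so the mechanism can be sketched exactly; the F(2,3) variant below computes two outputs of a 3-tap filter with 4 multiplies instead of 6 (how NeuPro-M applies it internally is of course CEVA’s implementation detail):

```python
import numpy as np

# Winograd F(2,3): 2 outputs of a 3-tap filter from 4 inputs,
# using 4 elementwise multiplies instead of the direct method's 6.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

d = np.array([1.0, 2.0, 3.0, 4.0])   # input tile
g = np.array([0.5, -1.0, 2.0])       # filter weights

m = (G @ g) * (BT @ d)               # the 4 multiplies
y_winograd = AT @ m

# Direct 3-tap correlation for comparison
y_direct = np.array([d[0:3] @ g, d[1:4] @ g])
assert np.allclose(y_winograd, y_direct)
```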

Streaming logic provides options for fixed-point scaling, activation and pooling. The vector processor allows you to add your own custom layers to the model. “So what, everyone supports that”, you might think, but see below on throughput. There is also a set of next-generation AI features including vision transformers, 3D convolution, RNN support, and matrix decomposition.

Lots of algorithm options, all supported by network optimization for your embedded solution through the CDNN framework, to fully exploit the power of your ML algorithms. CDNN is a combination of a network-inferencing graph compiler and a dedicated PyTorch add-on tool. This tool prunes the model, optionally compresses it through matrix decomposition, and adds quantization-aware re-training.
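As a hedged sketch of those two compression ideas in their generic form (not CDNN’s actual implementation), magnitude pruning and low-rank matrix decomposition look like this:

```python
import numpy as np

w = np.random.randn(256, 256).astype(np.float32)

# Magnitude pruning: zero out the smallest 70% of weights
thresh = np.quantile(np.abs(w), 0.70)
w_pruned = np.where(np.abs(w) >= thresh, w, 0.0)

# Matrix decomposition: low-rank compression via truncated SVD;
# stores 2*256*32 values instead of 256*256
u, s, vt = np.linalg.svd(w, full_matrices=False)
rank = 32
w_lowrank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]

print("sparsity:", (w_pruned == 0).mean())
print("low-rank reconstruction error:",
      np.linalg.norm(w - w_lowrank) / np.linalg.norm(w))
```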

Throughput optimization

In most AI systems, some of these functions might be handled in specialized engines, requiring data to be offloaded and the transform to be loaded back when completed. That’s a lot of added latency (and maybe power compromises), completely undermining performance in your otherwise strong model. NeuPro-M eliminates that issue by connecting all these accelerators directly to a shared L1 cache, sustaining much higher bandwidth than you’ll find in conventional accelerators.

As a striking example, the vector processing unit, typically used to define custom layers, sits at the same level as the other accelerators. Your algorithms implemented in the VPU benefit from the same acceleration as the rest of the model. Again, no offload and reload needed to accelerate custom layers. In addition, you can have up to 8 of these NPM engines (all the accelerators, plus the NPM L1 cache). NeuPro-M also offers a significant level of software-controlled bandwidth optimization between the L2 cache and the L1 caches, optimizing frame handling and minimizing need for DDR accesses.
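A toy latency budget shows why that shared L1 matters (the bandwidths, sizes and times below are assumptions for illustration only, not CEVA figures):

```python
# Custom layer on a detached engine (offload to DRAM and back)
# vs. one sharing the accelerator's L1. All numbers illustrative.
bytes_moved = 2 * 1_000_000          # hypothetical: activations out and back
dram_bw, l1_bw = 10e9, 200e9         # bytes/s, hypothetical bandwidths
compute_time = 50e-6                 # seconds, the same either way

detached = compute_time + bytes_moved / dram_bw     # ~250 us
shared_l1 = compute_time + bytes_moved / l1_bw      # ~60 us
print(f"detached engine: {detached * 1e6:.0f} us, "
      f"shared L1: {shared_l1 * 1e6:.0f} us")
```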

Naturally, NeuPro-M will also minimize data and weight traffic. For data, accelerators share the same L1 cache. A host processor can communicate data directly with the NeuPro-M L2, again reducing the need for DDR transfers. NeuPro-M compresses and decompresses weights on-chip in transfers with DDR memory. It can do the same with activations.

The proof in fps/W acceleration

CEVA ran standard benchmarks using a combination of algorithms modeled in the accelerators, from native, through Winograd, to Winograd+sparsity, to Winograd+sparsity+4×4. The benchmarks showed performance improvements of up to 3X, with power efficiency (fps/W) improving by around 5X for an ISP NN. The NeuPro-M solution delivered smaller area, 4X the performance and a third of the power compared with the earlier-generation NeuPro-S.

There is a trend I am seeing more generally toward getting the ultimate in performance by combining multiple algorithms, which is what CEVA has now made possible with this platform. You can read more HERE.

Also Read:

RedCap Will Accelerate 5G for IoT

Ultra-Wide Band Finds New Relevance

Low Power Positioning for Logistics – Ultimate Tracking