Can Generative AI Recharge Phone Markets?
by Bernard Murphy on 10-09-2023 at 10:00 am

Consensus on smartphone markets hovers somewhere between slight decline and slight growth, indicating a lack of obvious drivers for more robust growth. As a business opportunity this unappealing state is somewhat offset by sheer volume ($500B in 2023 according to one source), but we’re already close to peak adoption outside of China, so the real question for phone makers must be “what is the next killer app that could move the needle?”

We consumers are a fickle lot and entertainment seems to rank high on our list of must-haves. Arm is betting on mobile gaming. Another possibility might be generative AI for image creation and manipulation. Qualcomm has already demonstrated a phone-based capability, while others including Apple are still focused on large language model apps. For me it’s worth looking closer at the image side of generative AI, simply to be a little more knowledgeable if and when this takes off. For fun I generated the image here using Image Creator from Microsoft Bing.

Diffusion-based generation

I am going to attempt to explain the concept by comparing with an LLM. LLMs train on text sequences, necessarily linear. Lots of them. And they work on tokenized text, learning which tokens commonly follow a given sequence of tokens. That is great for text but not for images, which are 2D and not naturally tokenizable, so the training approach must be different. In diffusion-based training, noise is progressively added to training images (forward diffusion), while the network is trained by denoising the corrupted images to recover each original image (reverse diffusion). Sounds messy, but apparently the denoising method (solving stochastic differential equations) is well-defined and robust. The Stable Diffusion model, as one example, is publicly available.
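
To make the forward/reverse idea concrete, here is a toy NumPy sketch of the forward (noising) process. The linear noise schedule, step count, and image size are arbitrary choices for illustration, not what Stable Diffusion actually uses.

    import numpy as np

    # Toy forward diffusion: corrupt an image x0 with Gaussian noise over T steps.
    T = 1000
    betas = np.linspace(1e-4, 0.02, T)        # per-step noise variance (illustrative schedule)
    alphas_bar = np.cumprod(1.0 - betas)      # cumulative fraction of original signal retained

    def forward_diffuse(x0, t, rng=np.random.default_rng(0)):
        """Return a noised version x_t of image x0 at step t (0 <= t < T), plus the noise used."""
        noise = rng.standard_normal(x0.shape)
        x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise
        return x_t, noise

    # Training (reverse diffusion) shows a network (x_t, t) and asks it to predict the
    # added noise; generation then starts from pure noise and repeatedly applies the
    # learned denoising step until an image emerges.
    x0 = np.zeros((64, 64, 3))                # stand-in for a normalized training image
    x_t, added_noise = forward_diffuse(x0, t=500)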

It is then possible to generate new images from this trained network, starting from a random noise image. Now you need a method to guide what image you want to generate. DALL-E 2, Midjourney, and Stable Diffusion can all take text prompts. These depend on training with text labels provided along with the training images. Inference then folds the prompt information into the attention process on the path to the final image. Like LLMs, these systems also use transformers, which means that support for this capability requires new hardware.
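
For readers who want to poke at this themselves, here is a minimal sketch of text-prompted generation using the openly released Stable Diffusion weights through the Hugging Face diffusers library. The checkpoint name, prompt, and GPU assumption are our choices for illustration; an on-phone deployment would target a mobile NPU runtime instead.

    # pip install diffusers transformers torch
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")  # a phone deployment would swap in an NPU-targeted runtime

    # The prompt is tokenized and encoded, then injected into the cross-attention
    # layers that steer the denoising loop from random noise toward a finished image.
    image = pipe("a robot recharging a smartphone, watercolor").images[0]
    image.save("robot.png")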

Generation is not limited to creating images from scratch. A technique called inpainting can be used to improve or replace portions of an image. Think of this as an AI-based version of the image editing already popular on smartphones. Not just basic color, light balance, cropping out photobombs, etc but fixing much more challenging problems or redrafting yourself in cosplay outfits – anything. Now that I can see being very popular.
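
As a sketch of how inpainting looks in practice, again using the open Stable Diffusion inpainting weights via diffusers; the file names and prompt here are placeholders.

    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from PIL import Image

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    photo = Image.open("selfie.png").convert("RGB")
    mask = Image.open("mask.png").convert("RGB")   # white marks the region to regenerate

    # Only the masked pixels are re-synthesized by the denoising loop; the rest of the
    # photo is preserved, e.g. removing a photobomb or swapping in a cosplay outfit.
    result = pipe(prompt="astronaut costume", image=photo, mask_image=mask).images[0]
    result.save("selfie_edited.png")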

Will generative AI move the needle?

I have no idea – see above comment on fickle consumers. Then again, visual stimulus, especially around ourselves, and play appeals to almost everyone. If you can do this on your phone, why not? AI is a fast-moving domain which seems to encourage big bets. I certainly wouldn’t want to bet against this possibility.

I should also mention that generative imaging already has more serious applications, especially in the medical field where it can be used to repair a noisy CAT scan or recover details potentially blocked by bone structure. I can even imagine this technology working its way into the forensics toolkit. We’ve all seen the TV shows – Abby or Angela fill in missing details in a photograph by extrapolating with trained data from what is visible. Generative imaging could make that possible!


SPIE- EUV & Photomask conference- Anticipating High NA- Mask Size Matters- China
by Robert Maire on 10-09-2023 at 6:00 am

Conference EUV Lithography

– SPIE EUV & Photomask conference well attended with great talks
– Chip industry focused on next gen High NA EUV & what it impacts
– Do big chips=big masks? Another Actinic tool?
– AI & chip tools, a game changer- China pre-empting more sanctions

The SPIE EUV & Photomask conference in Monterey California

Both the weather and the crowds were great at the conference with what appeared to be record attendance amidst excellent presentations. For an industry that is in the middle of an ugly down cycle, there were a lot of people there in a very positive mood. It seemed more than double from Covid lows.

It was all about anticipation of High-NA EUV – due by end of year

Much of the conference, presentations and discussion was about the soon-to-be-shipped High-NA EUV tool from ASML (rumored to cost about $400M): what it means for the industry, both its promise and its problems, and what is being done to prepare for it.

Unlike the rollout of the first generation of EUV tools, which seemed to take forever, was mired in problems and was somewhat anticlimactic when it finally arrived, it feels like the industry may have a better handle on it this time around.

We also think that ASML is clearly doing a better job, with saying less perhaps working better.

Of course we haven’t yet had one shipped let alone installed and working. Even though it is significantly different from first gen tools, there is likely enough commonality to smooth the way a bit.

Are bigger Masks better? and needed?

One of the key problems with High-NA EUV tools is that the new optics limit the size of the print area on the wafer (the die).

This means that chip size is limited to half that of current EUV & DUV scanners. Unfortunately the industry is moving in the opposite direction, with ever larger chips needed for compute-hungry applications like AI. Many current Nvidia chips could not be printed as is with High-NA scanners.

The fix that is most often talked about is “stitching” together two fields/two prints to print one whole chip. Imagine trying to print a single photograph from two adjacent negatives to produce a seamless picture – it’s really, really hard.

Now try doing it with nanometer scale, atomic precision so that electronic circuitry lines up seamlessly- not at all easy- but needs to be done due to the limits of High NA.

Obviously the move to chiplets works well as a solution, but not everything lends itself to that approach.

The pre-game show

Sunday, before the conference, a large semiconductor manufacturer gathered its key suppliers in a room to convince them, push them, and get commitments from them to adopt bigger photomasks, which would allow High-NA scanners to print bigger chips, thus helping to fix (somewhat) the High-NA small-chip problem.

The proposal is to double the current 6X6-inch photomask (the “negative”) to a 6X12-inch size that would work in a High-NA scanner.

This is not as easy as it sounds but would obviously be the most elegant fix of the High NA small print problem. Essentially the entire photomask industry supply chain would have to change.

Probably easiest for mask writers and inspection tools from Lasertec and KLA but harder on “blank” makers who produce the blank photomasks.

This certainly has generated quite a bit of controversy as neither stitching nor bigger masks are easy but one is certainly more elegant.

Another Actinic Mask inspection tool to compete with Lasertec monopoly? But not from KLA….

Rumor at the conference has it that Zeiss (the famous maker of all of ASML’s lenses) will be making an actinic (EUV wavelength) mask inspection tool.

This seems to make sense, as they have been making an “AIMS” EUV mask “review” tool, which finally seems to be acceptable to the industry after a difficult start. Zeiss obviously knows how to make critical EUV lenses, and the industry would like more than one supplier.

But what about KLA?……Crickets….

KLA has been radio silent about its long lost/overdue actinic tool. While Lasertec had a nice presentation at the conference about their actinic tool there wasn’t anything from KLA.

The industry is clearly not fully satisfied with e-beam mask inspection, DUV-based inspection, or “print and pray” using wafer inspection; they want the “real thing”…actinic patterned mask inspection (APMI).

AI’s impact on semiconductor equipment tools

We have been wondering where the AI revolution will have an impact on semiconductor tool makers.

Metrology and inspection tools made by companies like KLA, AMAT, ONTO, Nova and many others consist of a light source to illuminate a target, optics to capture the image and millions of lines of code to analyze the image to provide useful information to the user.

While the illumination source and optics are difficult and complex, in many cases they are likely not the competitive “moat” that millions of lines of image analysis code written over decades is. It seems much if not most of the value of measuring and/or inspecting semiconductors lies in determining what’s in the picture of the chip, not in taking the picture of the chip.

As an example, doing a “die to die” comparison of a known good chip to a chip under question gets a lot easier with today’s AI.
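
As a deliberately oversimplified sketch of the die-to-die idea (real inspection tools add sub-pixel alignment, illumination normalization, and learned defect classification; the function name and threshold here are our own):

    import numpy as np

    def die_to_die_candidates(reference_die, test_die, threshold=0.15):
        """Flag pixel locations where a die under test differs from a known-good die.

        Inputs are grayscale images (2D float arrays) assumed to be already aligned.
        """
        ref = reference_die / (reference_die.max() + 1e-9)
        tst = test_die / (test_die.max() + 1e-9)
        diff = np.abs(ref - tst)
        candidates = np.argwhere(diff > threshold)   # candidate defect coordinates
        return candidates, diff

    # A trained classifier (increasingly, a general-purpose vision model) then scores
    # each candidate as nuisance vs. real defect, which is where modern AI shrinks the
    # burden of hand-written analysis code.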

As both the chips and the photomasks they are printed from get more complex, more advanced AI is changing the industry.

But it is also likely democratizing it. With AI tools it is likely a lot easier to analyze these highly complex images; you don’t need millions of lines of custom code written over decades. From what we have heard, many chip makers, especially the large ones, have put significant effort into this and may in some cases rival what is available from tool makers.

It also allows new startups, especially those with AI expertise (such as in China), to develop new tools more quickly, or to replace existing or sanctioned tools.

If a chip maker says to a tool maker “just give me an image, I’ll do my own analysis”, what does that do to the value of tools?

Is China preempting new sanctions? Gina is pissed!

While China continues to buy whatever they can get their hands on, they seem to be planning for losing more access to US tools and are getting ahead of the problem. As an example, we have heard that while China continues to buy KLA inspection tools, they have also been buying less capable non-US tools which they might not otherwise have bought, perhaps assuming they will be able to get those for longer than the US tools.

It seems more than blatantly clear that new sanctions are coming, on or about the one-year anniversary of last October’s tool sanctions.

China all but spit in Gina Raimondo’s face by announcing a 7NM chip while she was visiting China. The timing suggests China was certainly daring her to put more restrictions in place, and she will clearly oblige them.

Gina Raimondo said yesterday that she needs more “tools” (read that as sanctions) to control China chips. She said the progress that China has obviously made in the face of sanctions was “incredibly disturbing”.

The only thing that may hold back nuclear Armageddon chip sanctions is Biden meeting with Xi in November.

5NM is next for China…will they get to 3NM?

As we pointed out in a prior report, we were not surprised that China got a 7NM chip out. We also fully expect them to put out a 5NM chip. The sanctions on EUV are clearly inadequate and any other sanctions are clearly very porous. ASML still has many DUV tools in the pipeline destined for China and AMAT, LRCX and KLAC still have China as their best customer.

Maybe the US should stop what’s in the pipeline before it gets shipped. Tool makers will scream bloody murder and they will double down on expensive lobbyists in Washington to press for relief.

It’s unclear how far things will go, but it’s safe to say there will be more sanctions than we have now; otherwise the US should just surrender and ship anything China wants.

Getting to 5NM is a foregone conclusion and an embarrassment. The open question is whether they can keep going, and how far. There are some clear indications that 3NM is not entirely out of reach with their existing DUV tools, a lot of effort and a lot of cost.

Where there’s a will, there’s a way……..

Sanctions have forgotten about all the existing tools in China

All we seem to hear about are sanctions on shipping new tools to China. But what about all those US manufactured tools and technology that are currently pumping out 7NM chips in China?

Maybe sanctions should and will contain language about service, spare parts, upgrades and all things that keep the offending tools working.

What about sending US people to China to fix & service tools and process problems? Hopefully, those in Washington will figure out that it’s not just new tools but all the tools that were previously shipped (in such large volumes). As we have seen in past examples, when service & support is withdrawn, fabs collapse quickly.

The stocks

We think semiconductor equipment stocks are in for a rough earnings season. The downcycle is clearly going deeper into 2024. Memory still sucks. TSMC is slowing Arizona, not due to labor or other false excuses but because demand is weak. Utilization rates are low for TSMC and way worse for second tier players like GloFo.

Equipment companies are going to face some sort of increased sanctions, anywhere from a total cutoff to strong tightening. Other customers, such as Taiwan and Korea, will certainly not make up for any near-term loss in China.

The only question at this point is how bad.

We think the stocks, even though they have been off, have been holding up better than they should have, given the current and expected state of the industry.

That will likely not be the case after quarterly reports that don’t talk about an end being in sight, while potentially being forced to talk about the impact of yet-to-be-known sanctions.

We would certainly lighten up ahead of the quarterly reports as the risk profile has increased beyond what is tolerable.

Companies with higher than average exposure to China obviously could see significant impact.

It’s going to be a bumpy next few weeks no matter what…..

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Micron Chip & Memory Down Cycle – It Ain’t Over Til it’s Over Maybe Longer and Deeper

Has U.S. already lost Chip war to China? Is Taiwan’s silicon shield a liability?

ASML-Strong Results & Guide Prove China Concerns Overblown-Chips Slow to Recover


Podcast EP186: The History and Design Impact of Glass Substrates with Intel’s Dr. Rahul Manepalli
by Daniel Nenni on 10-06-2023 at 10:00 am

Dan is joined by Dr. Rahul Manepalli. Rahul is an Intel Fellow and Sr. Director of Module Engineering in the Substrate Package Technology Development Organization. Rahul and his team are responsible for developing the next generation of materials, processes and equipment for Intel’s package substrate pathfinding and development efforts. He has been with Intel for over 23 years.

Intel Ushers a New Era of Advanced Packaging with Glass Substrates

Rahul recounts the R&D efforts that Intel invested in glass substrates over the past decade. He details the challenges of bringing this material to the mainstream, with reliable handling being a major focus. The performance, flexibility and scaling benefits of glass substrates are also discussed, along with a forward view of production deployment.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


McKinsey & Company Shines a Light on Domain Specific Architectures
by Mike Gianfagna on 10-06-2023 at 8:00 am

When we hear McKinsey & Company, we may think of one of the “Big Three” management consultancies. While that’s true, this firm has a reach and impact that goes far beyond management consulting. According to its website, the firm accelerates sustainable and inclusive growth. While this is an inspirational statement, the purpose of the company really gets my attention – to help create positive, enduring change in the world. Silicon Catalyst felt the same way recently when they invited one of the partners from McKinsey to discuss the future of software and semiconductors. The comments made illustrated a solid understanding of the trends around us and their potential to revolutionize results. Read on to see how McKinsey & Company shines a light on domain specific architectures.

The Event and the Speaker

Silicon Catalyst hosted a networking event recently that focused on the future of compute. Presenting was Rutger Vrijen, Ph.D., a partner in McKinsey’s global Semiconductor and Advanced Electronics practice.

He serves semiconductor and other advanced-electronics clients on a range of topics, including growth strategy and transformation, cross-border M&A, pricing excellence, supply-chain performance diagnostics and transformation, as well as effective, efficient product development. With two patents and over 30 articles in globally leading scientific journals, Rutger had some insightful comments to share with the group.

The Discussion

Rutger began his talk with an overview of the forces that got us to domain specific architectures: essentially, the slowing of transistor density scaling (Moore’s Law) and the rising energy cost per operation as Dennard scaling has ended.

While Moore’s Law is still quite important, process innovation is no longer enough to address the innovation requirements of advanced products. We must turn to architectural innovation and here is where domain specific architectures (DSAs) become relevant.

Rutger focused on the impact of DSAs across four main markets – HPC & AI, IoT, blockchain, and automotive. The impact is broader than this, but these markets exhibit some high-impact results from a workload-specific approach. Rutger reports that across these four domains, DSAs represent an estimated 2026 market of $89B, with HPC & AI leading the pack at $46B.

According to Rutger, the DSA movement has attracted around $18B in venture funding since 2012 and there are about 150 DSA startups today. The movement is real. Critical enablers for DSA innovators are becoming increasingly available, which will further accelerate changes.  These enablers include:

  • Increasing manufacturing leadership of foundries, providing universal access to leading manufacturing capabilities
  • A mature cloud market, delivering fast routes to customers and applications for chip startups that are integrated in cloud infrastructures
  • Increasing maturity of licensed and open-source hardware and software IP to democratize chip design and software stacks
  • Advanced semiconductor packaging and heterogeneous integration to interconnect DSAs with low latency and high bandwidth
  • Material innovations including paradigms beyond CMOS (e.g., photonic, neuromorphic)

Rutger went on to discuss the incredible impact DSAs and AI in general will have on design and manufacturing. The discussion was quite exciting. I’ll be sharing a link where you can get a lot of this detail in a moment. But first I want to convey one interesting use of AI that McKinsey was behind – the design of racing sailboats. 

It turns out this sport has some real challenges. The design of the boat is done over many months, with the need for a lot of simulation time. This requires actual humans to spend time in the simulator to debug the design. These are the same humans who are conducting press interviews and promotion for the event – a difficult balance. Something else I didn’t know – the actual race boat with final hydrofoil designs is physically available only a few weeks before the race. Talk about pressure.

McKinsey had a different idea. In 2021, they partnered with the New Zealand team to build a reinforcement learning approach to sailboat design. The software was able to adjust 14 different boat controls simultaneously, a task that typically takes three Olympic medalist sailors. This approach to extreme and continuous optimization paid off – New Zealand won the America’s Cup that year. This is another example of how McKinsey is quietly changing the world.

To Learn More

You can read the entire story about domain-specific architectures and the future of compute in the  McKinsey Insight piece here. McKinsey is also collaborating with the SEMI organization  for an event dedicated to DSAs and the future of compute. There is a who’s who lineup for this event. It will take place at SEMI headquarters in Milpitas, CA. Here are some of the presenters:

Startups

  • Cerebras – Dhiraj Mallick
  • SiMa – Gopal Hegde
  • Recogni – Marc Bolitho
  • Tenstorrent – Keith Witek, Aniket Saha
  • Etched.ai – Gavin Uberti

Investors

  • Silicon Catalyst – Pete Rodriguez
  • Cambium Capital – Bill Leszinski
  • ModularAI – Chris Lattner
  • Simon Segars

Ecosystem

  • Synopsys – Antonio Varas
  • Rescale – Joris Poort, Edward Hsu
  • GlobalFoundries – Jamie Schaeffer
  • TSMC – Paul Rousseau
  • LAM – David Fried
  • Advantest – Ira Leventhal
  • ASE – Calvin Cheung
  • Intel – Satish Surana

You can register for this event here. And that’s how McKinsey & Company shines a light on domain specific architectures.


CEO Interview: Sanjeev Kumar – Co-Founder & Mentor of Logic Fruit Technologies
by Daniel Nenni on 10-06-2023 at 6:00 am

Sanjeev is a renowned technopreneur in the semiconductor industry. With more than 20 years of experience, he is known for his enormous resilience and deep tech knowledge, which set him apart from others in the industry.

Sanjeev started his career as a hardware designer and then forayed into the FPGA domain due to his love for configurable hardware technologies. After acquiring a deep understanding of FPGAs and implementing various high-speed serial protocol controllers, he joined hands with Anil Nikhra (co-founder) to start Logic Fruit Technologies.

Prior to starting his entrepreneurial journey in 2009, Sanjeev was associated with Agilent Technologies as an FPGA expert and with NeoMagic as an FPGA/board R&D lead. He is an EE graduate from IIT Kanpur and is passionate about sports and fitness. He is an avid badminton player and cricketer.

Tell us about your company?

Logic Fruit Technologies (LFT) has completed 13 years of operations, with a niche in developing FPGA- and CPU-centric complex systems built on Logic Fruit’s organically growing IP portfolio. Even though Logic Fruit develops many RTL IPs, such as PCIe, CXL, JESD204C and ARINC 818, we are not an IP company but a systems and solutions company.

We are preferred partners/vendors to key enterprise customers such as AMD, Intel, Keysight, Siemens, Lattice and Achronix, and to Indian research PSUs like ISRO and DRDO, for investigation and feasibility studies and architecture development, followed by complete development of the various systems and solutions.

Logic Fruit has developed and delivered 100+ hardware- and software-centric solutions for diverse industries like Test and Measurement, Telecom, Aerospace & Defense and Semiconductors, using our capabilities and IPs in hardware and software engineering.

Logic Fruit Technologies targets the US, Europe and India.

What problems are you solving?

Being an R&D-focused product engineering company, our forte lies in providing customized solutions to our customers. With its client-centric approach, LFT aims to deliver tailored solutions aligned to each customer’s specific challenges and objectives, utilizing our existing expertise and IPs.

Owing to confidentiality, we are not in a position to divulge any information regarding our ongoing projects. However, as a showcase, we would like to tell you about an ARINC 818-based single-board computer that has been developed with the Government of India under the TDFS scheme.

Logic Fruit takes pride in sharing that we were the first to deliver a project under this scheme.

What application areas are your strongest?

Logic Fruit Technologies has been primarily working in Aerospace & Defense, Test & Measurement, Semiconductors and Telecommunications. We use our skill set around system architecture, hardware design, RTL development and software design to develop complex heterogeneous systems that handle various kinds of applications requiring real-time, high-compute processing power.

What keeps your customers up at night?

Time to market, missing capabilities around FPGA technologies, and finding a team that can deliver.

What does the competitive landscape look like and how do you differentiate?

Logic Fruit Technologies is one of the leading product engineering companies, with expertise in the most advanced global technologies, especially FPGAs and heterogeneous systems.

Our key differentiators are over 20 years of expertise working with FPGAs and the more than 50 RTL IPs we have built over that time. These help us develop end-to-end solutions for our customers, reducing their overall development time and increasing their confidence that proven technology is in the end product.

What new features/technology are you working on?

Logic Fruit Technologies’ vision is to be a leader in deep technology solutions, with IPs around high-speed interfaces and programmable hardware, while delivering reliable, efficient and scalable solutions to its global clients using our ever-growing IP portfolio and engineering capabilities.

We also look forward to being an effective contributor to the Indian Government’s “Make in India” initiative, specifically in the aerospace domain, to Indianize complex systems.

How do customers normally engage with your company?

There are multiple channels through which customers can engage with us: online through our website (https://www.logic-fruit.com), social media (LinkedIn, Twitter) and email (info@logic-fruit.com), and offline through referrals, in-person events, tech conferences, etc. As we are in the R&D profession, we have very strong NDAs in place and keep project privacy a top priority, which is why we have been able to work with the defense and aerospace sectors for more than a decade now.

Also Read:

CEO Interview: Stephen Rothrock of ATREG

CEO Interview: Dr. Tung-chieh Chen of Maxeda

CEO Interview: Koen Verhaege, CEO of Sofics


proteanTecs On-Chip Monitoring and Deep Data Analytics System
by Kalar Rajendiran on 10-05-2023 at 10:00 am

State-of-the-art electronics demand high performance, low power consumption, small footprint and high reliability from their semiconductor products. While this imperative is true across many different market segments, it is critical for applications such as automotive/autonomous driving and data centers. As electronic devices become more intricate and compact, the margin for error shrinks, necessitating innovative approaches to ensure optimal performance and longevity.

While traditional methods struggle to help deliver on this imperative, proteanTecs offers a comprehensive framework for enhancing product performance and reliability through the integration of deep data analytics. By combining data from specialty chip telemetry agents (monitoring IP) with machine learning (ML) algorithms, their solutions are deployed via cloud and embedded software to provide insights and visibility throughout the system lifecycle. These agents provide parametric design profiling, margin monitoring, power and reliability management, I/O channel health monitoring and in-field predictive monitoring. This groundbreaking solution is the proteanTecs On-Chip Monitoring and Deep Data Analytics System, and the company recently published a whitepaper on the subject. The whitepaper is an excellent read for everyone involved in the development of modern-day applications that demand high performance and reliability. Following are some excerpts from that whitepaper.

On-Chip Monitoring and Analytics Platform

The success of the proteanTecs platform relies on two fundamental pillars, namely, comprehensive monitoring and data-driven analytics. By integrating these two elements into the chip design process, the company offers a holistic approach to optimization that covers the entire product lifecycle.

 

Ease of Implementing On-Chip Monitoring

The key is the platform’s ability to meticulously monitor critical chip parameters in real-time. This is achieved through a network of proprietary monitors, called agents, that are strategically placed within the chip architecture. These agents continuously gather data on parameters such as voltage, temperature, timing margins, and interconnect quality, all while the chip is in operation. This dynamic monitoring unveils a wealth of insights, unearthing vital information about a chip’s behavior under various workloads, conditions, and over its operational lifetime.

The hardware IP system from proteanTecs includes an extensive array of monitors for gathering data and a Full Chip Controller (FCC) that serves as the central hub. The FCC interfaces with the various monitors and relays the gathered data to the firmware (FW), edge software (SW) and the cloud platform via standard interfaces like JTAG, APB and I2C.
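
As a purely hypothetical sketch of what collecting such telemetry over a standard interface might look like: the I2C address, register names and field names below are invented for illustration and are not proteanTecs’ actual interface.

    import json, time
    from smbus2 import SMBus   # generic Linux I2C access

    FCC_I2C_ADDR = 0x42        # hypothetical bus address for the Full Chip Controller
    REG_TEMP     = 0x10        # hypothetical register: local thermal sensor code
    REG_MARGIN   = 0x12        # hypothetical register: timing-margin agent code

    bus = SMBus(1)

    def read_agent(reg):
        """Read one 16-bit monitor value from the (hypothetical) FCC register map."""
        return bus.read_word_data(FCC_I2C_ADDR, reg)

    while True:
        sample = {"ts": time.time(),
                  "temp_code": read_agent(REG_TEMP),
                  "margin_code": read_agent(REG_MARGIN)}
        # Embedded/edge software would batch and forward samples to the cloud
        # analytics platform; printing stands in for that here.
        print(json.dumps(sample))
        time.sleep(1.0)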

Ease of Incorporating Data-Driven Analytics

The data collected by the on-chip agents become the foundation for the second pillar of the proteanTecs platform which is deep data analytics. The platform boasts a sophisticated cloud-based analytics infrastructure that ingests, processes, and interprets the data. This complex ecosystem deploys advanced algorithms and machine learning techniques to dissect the intricate relationships between the monitored parameters and the chip’s performance, reliability, and power consumption.

The proteanTecs Software (SW) analytics platform operates in the cloud, acting as an interface to gather data from various sources like wafer probe, Automated Test Equipment (ATE) vendors, and the system during both productization and normal operation. The platform excels in Agent fusion and leverages Agent measurements to offer a comprehensive understanding of both the chip and the system.

Benefits from the proteanTecs Solution

During the New Product Introduction (NPI) phase, the platform’s Process Classification Agent (PCA) and Design Profiling Agent (DPA) collaborate to provide a comprehensive view of process characteristics and design sensitivity. This helps with process tuning, optimal voltage-frequency binning, and power management strategies.

As chips move from development to mass production, proteanTecs’ Timing Margin Agents (MA) come into play. These agents enable the accurate measurement of timing margins during functional operation, offering insights into actual system behavior that generic ring oscillators or critical path replicas cannot replicate. This leads to better understanding and control of the system’s performance, power consumption, and reliability.

Workload Agents (WLA) enable visibility into the application workloads and their impact on the hardware. They serve as a proxy for how much voltage and temperature stress the chip has been exposed to, during normal operation. This is important to determine the remaining useful life of a product and for efficient power and performance management.

The voltage droop sensor (VDS) and local voltage and thermal sensors (LVTS) enable real-time power and temperature management during the operational phase of a chip’s life. This not only maximizes performance but also extends the chip’s longevity by preventing excessive thermal degradation.
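
A toy sketch of the kind of control loop this enables; the sensor reads, frequency steps and thresholds are placeholders for whatever the platform firmware actually exposes, not proteanTecs APIs.

    import time

    # Toy power/thermal management loop driven by on-chip sensor readings.
    def manage_power(read_temperature, read_droop_events, set_frequency,
                     f_levels_mhz=(800, 1200, 1600), t_limit_c=95.0):
        level = len(f_levels_mhz) - 1                       # start at the highest step
        while True:
            if read_temperature() > t_limit_c or read_droop_events() > 0:
                level = max(0, level - 1)                   # back off to protect margins and longevity
            elif read_temperature() < t_limit_c - 10:
                level = min(len(f_levels_mhz) - 1, level + 1)   # reclaim performance headroom
            set_frequency(f_levels_mhz[level])
            time.sleep(0.1)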

The solution’s impact extends into in-field monitoring and lifetime assessment, with embedded on-board software, in-chip monitoring and cloud analytics. By continuously monitoring key parameters, including those susceptible to performance degradation and latent defects, proteanTecs enables early detection of abnormal behavior that may lead to eventual failures. This preemptive capability is invaluable in critical applications, such as data centers and automotive systems, where reliability is paramount.

Summary

By integrating proteanTecs on-chip monitoring and deep data analytics solutions into their products, manufacturers can enhance their systems’ longevity, resilience, safety, and performance. The system offers unparalleled insights into chip behavior, performance optimization, and reliability enhancement. The end-to-end health monitoring fosters optimal reliability, performance, power efficiency, and cost-effectiveness across a broad spectrum of applications, ranging from automotive to data centers and beyond.

You can download the entire whitepaper from here. To learn more about proteanTecs technology and solutions, visit www.proteanTecs.com.

Also Read:

Predictive Maintenance in the Context of Automotive Functional Safety

Semico Research Quantifies the Business Impact of Deep Data Analytics, Concludes It Accelerates SoC TTM by Six Months

Maintaining Vehicles of the Future Using Deep Data Analytics


Pairing RISC-V cores with NoCs ties SoC protocols together
by Don Dingee on 10-05-2023 at 6:00 am

Designers have many paths for differentiating RISC-V solutions. One path launches into various RISC-V core customizations and extensions per the specification. Another focuses on selecting and assembling IP blocks in a complete system-on-chip (SoC) design around one or more RISC-V cores. A third is emerging: interconnecting RISC-V cores and other IP blocks with a network-on-chip (NoC) instead of a simple bus structure. And it’s not just at the high end – pairing RISC-V cores with NoCs answers many SoC design challenges where data must flow efficiently in any workload using any on-chip protocol.

Performance tiers changing with advanced interconnect schemes

Simply counting gates, cores, and peripheral blocks no longer describes the performance potential of an SoC design. Interconnect schemes now define the lines between SoC performance tiers, according to Semico Research, and a new tier has opened where interconnects change from simple bus structures to more sophisticated schemes.

Semico’s updated definition recognizes three forces at work: the pervasiveness of multicore designs, a higher bar for what is considered a complex design, and the subsequent blurring line between “microcontroller” and “SoC.” In Semico’s latest view, the notion of gate counts as a metric disappears since one modern processor core can drag many gates with it. Complexity becomes a function of interconnects, varying with subsystems and diverse IP blocks.

SoC performance tiers, image courtesy Semico Research Corp.

Where a simple bus will do, likely a part with a single processor core and low-duty-cycle peripherals that aren’t continuously contending for the bus, Semico sees a commodity controller tier. Anything above that becomes an SoC, presumably with at least some peripherals fighting for on-chip bandwidth and attention from the processor core(s). Higher SoC tiers have multiple cores and multiple IP subsystems, each with tuned interconnect technology.

NoCs pick up more protocols and subsystems

RISC-V has quickly moved up these performance tiers as more powerful cores appear, with no less applicability at the lower end of the Semico scale. However, RISC-V designers may have less experience in complex interconnect schemes seen in the higher tiers. “TileLink may be the first thought for RISC-V interconnect, but it can be difficult to use in more complex scenarios,” says Frank Schirrmeister, VP of Solutions and Business Development for Arteris.

A NoC’s superpower is its ability to connect subsystems using different protocols, and SoC designers are likely to run into several protocols at even moderate complexity. AXI leveled the playing field for simple IP block connections. Multicore solutions with co-processing blocks demand cache-coherence, giving rise to the CHI protocol. I/O memory sharing helped shape the faster CXL interconnect. “When it’s time to co-optimize compute and transport with various subsystems and protocols in play, a NoC is a better solution,” continues Schirrmeister.
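
To illustrate the concept, and only the concept, here is a toy transaction-level model in which initiators speaking different protocols reach shared targets through a protocol-neutral packet fabric. This is not how Arteris (or any commercial NoC) is implemented; the class and function names are our own.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Packet:                 # protocol-neutral transport unit inside the fabric
        src: str
        dst: str
        addr: int
        data: Optional[int]       # None signals a read request

    class NocFabric:
        def __init__(self):
            self.targets = {}                             # target name -> handler(addr, data)
        def attach_target(self, name, handler):
            self.targets[name] = handler
        def route(self, pkt: Packet):
            return self.targets[pkt.dst](pkt.addr, pkt.data)

    def axi_adapter(noc, src, dst):
        """Wrap an AXI-style master's reads/writes into fabric packets."""
        def write(addr, data): return noc.route(Packet(src, dst, addr, data))
        def read(addr):        return noc.route(Packet(src, dst, addr, None))
        return write, read

    # Usage: a RISC-V core behind an AXI adapter reaches a memory-controller target.
    noc, memory = NocFabric(), {}
    noc.attach_target("smc", lambda a, d: memory.get(a) if d is None else memory.__setitem__(a, d))
    write, read = axi_adapter(noc, "riscv0", "smc")
    write(0x1000, 42)
    assert read(0x1000) == 42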

What can pairing RISC-V cores with NoCs look like? Arteris customer Tenstorrent provides a glimpse into the possibilities. Their recent focus is creating a reusable chiplet combining RISC-V cores, machine-learning acceleration IP, and standard peripherals found in many edge AI applications. At scale, a single-die implementation could look like the following diagram, using the Arteris Ncore cache-coherent interconnect and several segments of the Arteris FlexNoC non-coherent interconnect.

image courtesy Arteris

A Smart Memory Controller (SMC) provides a high-performance, server-grade memory connection in memory-intensive applications. The unnamed “chiplet link” could be UCIe, a relatively new specification optimized for tighter chiplet integration. When new subsystem interconnects emerge, adapting a section of the NoC is more manageable than ripping up the entire chip-wide structure.

Pairing RISC-V cores with NoCs lowers risk and time-to-market

If that diagram looks complex, and granted, maybe most RISC-V applications aren’t that complex right now, consider this: chiplets are already driving integration much higher. Today’s advanced RISC-V multicore part will be next year’s value SoC as innovation picks up pace.

Arteris Ncore and Arteris FlexNoC development tools output RTL for implementation, providing several advantages. Physical NoC estimation is straightforward in an EDA workflow. NoC parameter adjustments, such as the number of pipeline stages, are also a few clicks away in EDA tools. The modifications mentioned above for adding a subsystem protocol are also readily accomplished. “At the high end, users gain immediate access to our NoC expertise,” says Schirrmeister. “At the low end, our tools are easy to use for first-pass success and provide a growth path for more ambitious future projects with complex interconnects.”

Pairing RISC-V cores with NoCs lowers the risk of one more IP block entering a design and triggering a ripple of interconnect redesign across the chip. It also reduces time-to-market for complex SoC designs compared to do-it-yourself interconnect structures. We haven’t discussed the other benefits of NoCs here, such as bandwidth and power management, but the case for NoCs in RISC-V designs is strong just considering a diverse protocol mix.

Visit the Arteris website for more information on NoCs and other products.


The Quest for Bugs: “Verification is a Data Problem!”
by Bryan Dickman on 10-04-2023 at 10:00 am

Verification Data Analytics

Hardware Verification is a highly data-intensive or data-heavy problem. Verification Engineers recognise this and spend much of their time dealing with large and complex datasets arising from verification processes.

In “The Dilemmas of Hardware Verification” we explored the key challenges around verification of complex hardware IP and systems. The “completeness dilemma” leads engineering teams to be heavily dependent on data and data analytics, to make incomplete processes measurable and bounded and allow product development teams to make data-driven decisions about signoff quality for product releases.

So, in fact, one of the many core skillsets of a good Verification Engineer is data analytics.

Great Verification Engineers need to be great Data Analysts.

Engineers deal with huge volumes of data: suites of tests, test results, coverage results, resource planning and utilisation data, version control data, waiver analysis, bugs and defect tracking and ongoing optimisation and continuous improvement through trend analysis and root causes analytics.

In so doing, Verification Engineers utilise many different data sources to ensure projects are on track and progressing towards project goals whilst ensuring accurate information is available to support signoff decisions at key quality milestones occurring during the product development lifecycle.

Verification data also presents a huge opportunity to optimise and streamline verification workflows.

The ROI of the final delivered product is heavily determined by the development costs, and it’s been well documented that 70% or more of these costs are attributable to verification activities. So, care must be taken to ensure that verification activities are effective and efficient and not wasteful.

Of course, a healthy degree of paranoia is helpful from a Verification Engineer’s perspective, as there is a strong compulsion to run more and more verification cycles: a bug escape that reaches the customer or end user can be extremely costly, impactful, and potentially reputationally damaging! See “The Cost of Bugs”, where we explore the balance between the “cost of finding bugs” (the verification costs) versus the “cost of not finding bugs” (the impact costs of bug escapes).

Insights from Data

The value of verification data is realised when it yields key insights.

Think of insights as questions.

An insight might be a high-level question that an Engineering Manager is asking of the engineering team to understand how effective or efficient the product development process is. It could also be a question asked by the senior leadership team, the quality team or the sales and revenue team.

Insights can also drive a strategy of continuous improvement enabled by an understanding of effectiveness and efficiency.

In some cases, insights can be unpredictable or unexpected. Curiosity and an analytical approach to cleaning, understanding, exploring, and validating the data, and reviewing the analytical views can reveal observations that were not previously available. These unexpected insights present opportunities to challenge the status quo sometimes and re-think established practices. However, care must be taken to challenge and validate the assumptions.

Beware that it’s sometimes possible to make the analytics fit the narrative rather than the other way round.

It’s useful to think of insights in the context of a data-value stack, as illustrated in Figure 1, The Analytics Inverse Pyramid.

Insights enable Data-Driven Decision making.

Insights are made possible by good Data Analytics, which are in turn enabled by Data Models constructed from the Data Sources loaded into the Data Lake. The point is to figure out what Data-Driven Decisions are required by the business first and let this drive the data capture, the data pipelines, and the data analytics, not vice-versa!

Figure 1 The Analytics Inverse Pyramid

The raw data at the base of the pyramid has little value unless it is clean and accurate and is fed through a data pipeline into powerful analytics that drive high-value insights.
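
As a minimal sketch of such a pipeline in pandas; the file and column names are invented for illustration, with a bug-tracker export and a regression database standing in for real sources.

    import pandas as pd

    bugs = pd.read_csv("bugs_export.csv", parse_dates=["found_date"])       # bug tracker dump
    runs = pd.read_csv("regression_runs.csv", parse_dates=["run_date"])     # test execution data

    # Clean: drop duplicates and rows missing the keys we join on.
    bugs = bugs.drop_duplicates(subset="bug_id").dropna(subset=["unit", "found_date"])
    runs = runs.dropna(subset=["unit", "run_date", "sim_cycles"])

    # Data model: correlate bugs with the verification effort spent on each unit.
    effort = runs.groupby("unit")["sim_cycles"].sum().rename("total_cycles")
    found = bugs.groupby("unit")["bug_id"].count().rename("bugs_found")
    model = pd.concat([effort, found], axis=1).fillna(0)

    # Analytics: cycles spent per bug found, a first-cut effectiveness view.
    model["cycles_per_bug"] = model["total_cycles"] / model["bugs_found"].clip(lower=1)
    print(model.sort_values("cycles_per_bug", ascending=False).head())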

The anatomy of data and why we should care…

If we follow Exec-Level care-abouts driving verification excellence all the way through to verification engineering reality – daily activities – we can better describe what is happening at each stage.

From the CFO’s and CEO’s viewpoints there are multiple issues to worry about, but when it relates to engineering development of the company’s all-important revenue-bearing products, it boils down to these.

Figure 2 Cost, Quality, Delivery

Customers want the same outcomes from their supplier, meaning the verification effort you put in must be effective and efficient, to drive cost-effective solutions for them. To achieve this, your design and verification processes must be well instrumented to avoid the so-called “black box” syndrome, whereby products arrive without a clear idea of just how good the verification effort has been at finding bugs, and perhaps without a good handle on costs or project timescales.

Excellence depends on good data and an engineering culture that knows how to exploit it.

Figure 3 Data Pipelines, below, indicates the importance of analytics to provide insights into the verification effort to assess effectiveness and efficiency. Useful analytics require the correlation of information from various data sets generated by the daily activity of design and Verification Engineers.

Figure 3 Data Pipelines

It’s a useful thought-experiment to measure where your verification effort sits in relation to the questions in orange, at each of the data pipeline stages above. Perhaps surprisingly, not all engineering teams have a good enough handle on what data they have, where it’s located, how clean it is and how to exploit it. Later in the paper we explore creating the culture of curiosity and the competences necessary to make this transition possible.

Figure 4 Data Challenges, below, illustrates some of the challenges teams are likely to encounter when developing the analytics needed for good decision making, to drive important improvements in verification processes and to indicate necessary investments in tools and hardware.

Figure 4 Data Challenges

These challenges are not unique to hardware verification but must be overcome to reach basic levels of analytics capability.

Deriving analytics from diverse data sets can be extremely complex, particularly when it comes to correlating them. A simple example would be to illustrate bug discovery at different stages of the product life cycle phases so you can assess progress against your Verification Plan.

Other insight questions require more complex data engineering to provide the information required. In smaller companies this task could fall to the engineering team, or it might be outsourced. As good “data engineers”, the verification team need to be comfortable with thinking around these problems.

Larger teams may have the luxury of internal data engineering/analyst resource to make these developments in-house. In both cases, Verification teams need to be fluent with the data challenges, to ensure they get what is needed if analytics are to be developed, or improved. See Step1: Train your engineers to think like Data Analysts.

The data quality, data volume trap…

Our focus for this white paper is to discuss “Data Analytics” in the context of organising, automating, cleaning, and visualising verification datasets that most teams already have. However, you can’t discuss this topic without raising the question: –

What about AI? Can I use it?

Everyone is aware of the potential offered by Machine Learning (ML) currently being embedded in EDA tools (see Step2: Exploit Advanced EDA Tooling), as well as the opportunities offered by data science to improve the targeting of coverage and parsing of data to make for easier analysis. Although this paper will touch on these subjects, it is primarily focussed on how to make the best use of data to drive better insights into the verification process.

 

Figure 5 Low quality, small datasets are barriers to developing analytics or successfully deploying advanced ML/AI techniques.

Although there are no publicly available numbers showing how many engineering teams have successfully implemented ML and AI, it is likely many will have encountered problems with data quality or size of datasets.

In their thought-provoking article, “A survey of machine learning applications in functional verification”, Yu, Foster and Fitzpatrick asserted, “Due to the lack of large datasets, much research has to settle for relatively primitive ML techniques that demand only small training datasets with hundreds of samples. The situation has prevented advanced ML techniques and algorithms from being applied”.

In Figure 5 (Data Quality), above, small amounts of unreliable verification data are difficult to analyse with any degree of confidence – you find yourself in Trap 1. In this case, the plausible option is to invest in cleaning up your data and developing excellent analytics – there is no easy jump to ML/AI in Level 2 from Trap 1.

Large amounts of low-quality information may be very difficult to manage and understand, making it unsuited to ML or AI techniques, let alone any necessary data engineering needed to produce good analytics – This is Trap 2. As with smaller data sets, a large-scale clean-up operation is indicated. For these reasons, bad data quality and smaller data sets present significant challenges for companies wishing to move to ML enabled EDA tools and more advanced AI techniques.

A more plausible and necessary step for many organisations is to make better use of the data they do have to create useful analytics to enable great decision making and continuous improvement.

Although “just” creating good analytics may seem less exciting than going straight to ML/AI in Level 2, they may still be difficult to implement until your data has been cleaned and some of the challenges explored in Figure 4 have been overcome.

Assuming you have your data engineering organised, built on a foundation of high-quality data and stunning analytics to shine the light into those dark and bug-rich corners, it’s time to think about what insights to look for.

Insight: “Is my Verification Effective?”

Returning to “Insights”, many insights from verification datasets can be classified as effectiveness and efficiency insights. Let’s start with effectiveness. What does that mean for a verification team, and who else wants to know about it?

Effectiveness can be described as a function in the following way: –
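
One plausible framing, our sketch rather than a definitive formula, reads effectiveness as verification progress per unit of effort:

    \mathrm{Effectiveness} \;\approx\; \frac{\text{bugs found} \,+\, \text{coverage gained}}{\text{verification effort (engineer time, compute cycles, licenses)}}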

Each of the variables in the formula is quite probably captured in a separate database and is described by a set of data schemas.

The richness of the data schemas used to collect the data has a direct impact on the quality of analytics that can be generated from it.

A “data model” connects these sources using primary keys to allow correlation of the data. Once the team have identified what analytics are required, there may be a need to elaborate the data schemas.

The effectiveness insight requires analytics that show testbench effectiveness in terms of ability to make verification progress, defined as increasing coverage and/or finding bugs. If a testbench is not advancing coverage or finding bugs, then it might be ineffective, unless verification goals are already fully met.

The utility of good analytics is the ability to analyse testbench effectiveness in a visual fashion so that development teams can make targeted improvements to testbench implementations. Continuous improvements are achieved through iterations of code refactoring, performance optimisations, or re-architecting, with a view to increasing the ability of the testbench to hit bugs and coverage with fewer seeds or cycles. Analytics are used at each stage to demonstrate actual improvements.

INSIGHT: “Does my Testbench find Bugs?”

For this, we need data schemas that enable analytics to visualise and drill into the bug curve over time. We expect to see a cumulative bug curve that flattens and saturates over time; or a bug rate curve that peaks and then falls towards zero.

Better still is to correlate these bug curves with verification effort to give a true indication of verification effort versus bugs found.

And with a hierarchy of verification such as Unit->Sub-system->Top->System, the analytics need to be able to present the bugs versus effort data at each level and enable users to see how different levels and different units or sub-systems compare. Such analysis capability offers insights into which verification environments are effective and which are apparently ineffective. From this, teams can make decisions about where to invest engineering effort for the greatest return.

What does that mean in terms of data?

To do this we need to join the bug data with the verification results data so that we can explore how many cycles of verification are running between finding bugs – and to look at this over the product development lifecycle since it will vary according to what stage of development the product is at.
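
A sketch of that join in pandas, with illustrative column names:

    import pandas as pd

    bugs = pd.read_csv("bugs_export.csv", parse_dates=["found_date"])
    runs = pd.read_csv("regression_runs.csv", parse_dates=["run_date"])

    # Weekly verification effort and weekly bug discoveries over the lifecycle.
    weekly_cycles = runs.resample("W", on="run_date")["sim_cycles"].sum()
    weekly_bugs = bugs.resample("W", on="found_date")["bug_id"].count()

    trend = pd.concat([weekly_cycles, weekly_bugs], axis=1).fillna(0)
    trend.columns = ["cycles", "bugs"]
    trend["cycles_per_bug"] = trend["cycles"] / trend["bugs"].clip(lower=1)
    trend["cumulative_bugs"] = trend["bugs"].cumsum()

    # A flattening cumulative_bugs curve alongside a rising cycles_per_bug is the
    # classic saturation signal discussed above.
    print(trend.tail())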

INSIGHT: “Does my Testbench increase Coverage?”

The analytics also need to correlate coverage data with verification effort data. If the analytics are revealing that the bug curve is saturated and the coverage is saturated, the engineering team can use this information to make decisions about what to do next; Run more cycles? Run less cycles? Improve the verification environment?

Further, with bug and coverage data collected across the whole product development lifecycle and all verification methodologies applied, you can reason about the relative effectiveness of each methodology. i.e., you must consider the effectiveness in the context of the whole verification lifecycle and the stage you are at. For example, Unit testing might appear to be ineffective (does not find many bugs) due to earlier top-level or formal verification doing a good job of cleaning out most bugs. So, you must consider the whole lifecycle of verification and the order that is chosen to execute various methodologies.

Insight: “Is my Verification Efficient?”

The second most important question relates to efficiency. You may have effective stimulus and checking, but can verification be done with the minimum amount of human and platform resources, and can it be delivered in the shortest possible time?

Efficiency is a function of the following: –
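
A plausible framing, again a sketch rather than a definitive formula, treats efficiency as a function of the factors examined below:

    \mathrm{Efficiency} \;\approx\; f(\text{testbench performance},\ \text{regression-workflow optimality},\ \text{platform capacity},\ \text{platform and tool performance})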

To understand efficiency, you must look at the details of:

  • Individual testbenches to understand if they have been architected and implemented in an optimal way for the highest performance with the given methodology.
  • Regression workflows to understand if they are running jobs optimally and not wasting resources by needlessly re-running full regression sets when more targeted runs are more efficient.
  • The available platform capacities which may be shared across multiple teams. Is there a shortage of resources that leads to inefficiencies in utilisation?
  • The performance of the platform, both the hardware (compute, storage, and network) and the EDA tools that are running the verification workloads.

This insight tells us how efficiently simulation testbenches are implemented. If a testbench is very slow, it will consume much greater levels of compute and simulation license resources. Slow testbenches might need to be re-implemented to make them run faster. This question relates to Unit or Sub-system testbench architecture and methodology.

Efficiency insights require analytics that reveal relative performances of verification environments tracked over time so that changes to performances can be identified and outliers can be observed and investigated. Since testbenches will vary by architecture and implementation, some degree of performance variability is to be expected, but having good analytics dashboards available to monitor these environments enables early detection of performance impacts that may arise from bad coding practices or platform/environment/tools degradations. When teams can see this data – they can fix these problems.
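
A sketch of that kind of monitoring in pandas, with an illustrative throughput metric and an arbitrary outlier threshold; any measure such as simulated cycles per CPU-hour or tests per license-hour works the same way.

    import pandas as pd

    runs = pd.read_csv("regression_runs.csv", parse_dates=["run_date"])
    runs["cycles_per_cpu_hour"] = runs["sim_cycles"] / runs["cpu_hours"]

    # Weekly median throughput per testbench, tracked over time.
    perf = (runs.groupby(["testbench", pd.Grouper(key="run_date", freq="W")])
                ["cycles_per_cpu_hour"].median().reset_index())

    def dropped_sharply(group, drop_threshold=0.7):
        """True if the latest weekly throughput fell well below the testbench's own history."""
        baseline = group["cycles_per_cpu_hour"].iloc[:-1].median()
        latest = group["cycles_per_cpu_hour"].iloc[-1]
        return latest < drop_threshold * baseline

    # Flag likely victims of a bad code change or a platform/tool regression.
    suspects = perf.groupby("testbench").filter(lambda g: len(g) > 4 and dropped_sharply(g))
    print(suspects["testbench"].unique())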

Without analytics, teams are flying blind regarding efficiency. 

Collecting bug data is the most important step towards Level 1 analytics capability!

We have discussed the value of bugs in The Quest for Bugs series of articles, but it is worthwhile to restate here why Bug Data is one of the richest sources of verification data and can drive the most useful insights such as verification effectiveness.

Bugs are a fantastic source of insights and learning, BUT only if you collect them!

…and the collection of good quality bug data is the challenging bit.

With enough accurate bug data, you can glean insights into both the effectiveness of your verification strategies and the quality of your design (Level 1). If you look across the entire design, do some units or functions yield more bugs than others, and if so, what is the cause? Maybe steps can be taken to reduce the number of bugs being introduced into the code? Does the bug data point at hotspots of complexity? What is the underlying root cause of these bugs, and can bugs be avoided in the first place? From a verification effectiveness perspective, which methodologies are the most effective at finding bugs quickly? Are you spending vast resources running verification cycles that do not find bugs?

Can you “shift-left” and find those bugs earlier in the product development lifecycle and saturate the bug curve sooner, meaning release the product sooner?

To answer these questions, you need to ensure you are collecting enough bug data and that you have an adequate bug schema that captures the right information about bug discovery, bug impacts, and bug root causes. If you have a rich bug dataset, you will be able to drill into the bug analytics to answer many of these questions and perhaps expose some unexpected insights. Welcome to Level 1 Analytics!
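To make this concrete, here is a minimal sketch of the kind of Level 1 drill-down that becomes possible once bug data is captured consistently. The field names (design_unit, found_by_methodology, severity, root_cause) are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of Level 1 bug analytics over a hypothetical bug-tracker export.
import pandas as pd

bugs = pd.read_csv("bug_export.csv", parse_dates=["found_date"])

# Which design units yield the most bugs (possible complexity hotspots)?
print(bugs["design_unit"].value_counts().head(10))

# Which methodologies find bugs, and how severe are the bugs they find?
print(pd.crosstab(bugs["found_by_methodology"], bugs["severity"]))

# Root-cause breakdown -- a starting point for "can these bugs be avoided?"
print(bugs["root_cause"].value_counts(normalize=True).round(2))
```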

The challenge is often persuading your engineering teams to get the bug-logging habit.

It’s an engineering practices or engineering culture thing. Some teams just do this as a natural part of their job; other teams are less willing and see bug-logging as an overhead to making forward progress.

Engineering teams need to see concrete value from the bug analytics as a motivation to collect the data. But of course, it’s a “chicken and egg” problem; no bug data or poor-quality bug data = no analytics or low value analytics.

When is the right time to start bug-logging? How do you ensure that the bug data is complete and accurate?

There are 3 key motivators for bug-logging: –

  1. Teamwork and communication: the task list to develop complex products (hardware or software) is long and likely to involve multiple people. Unless bugs are diligently logged and tracked, there is a risk of bugs slipping through due to poor practice. It’s often the case that the bug reporter and the bug solver are not the same person, so you need to record and track the bug communications (triage, analysis, and solutions) to ensure nothing slips through the net.
  2. Progress tracking and sign-off: as the project transitions through the product development lifecycle, there is a need to understand what the bug curve looks like at any point in time. What is the current bug rate? How many bugs are outstanding at each sign-off point? Is the bug curve trending in the expected direction? How many critical bugs do we have versus major and minor bugs? (A short sketch of these progress metrics follows this list.)
  3. Continuous Improvement: By analysing the bug discovery data and the bug root causes, we can use these insights to improve the effectiveness and efficiency of our design and verification methodologies. This is where continuous learning from bugs, both within a project and between projects, can really reduce costs, improve quality, and reduce time-to-market for complex products.
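Picking up the second motivator above, the progress-tracking view might look like the following sketch, again over a hypothetical bug-tracker export that records when each bug was found and closed and how severe it is.

```python
# Minimal sketch of progress-tracking analytics from a hypothetical bug export.
import pandas as pd

bugs = pd.read_csv("bug_export.csv", parse_dates=["found_date", "closed_date"])

# Weekly bug discovery rate and the cumulative bug curve -- is discovery saturating?
weekly_rate = bugs.set_index("found_date").resample("W").size()
bug_curve = weekly_rate.cumsum()
print(bug_curve.tail())

# How many bugs are still open at this point in time, and how severe are they?
open_bugs = bugs[bugs["closed_date"].isna()]
print(open_bugs["severity"].value_counts())
```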

If you can collect bug data accurately and consistently, then many of the above insights will be available to you. Furthermore, if you can join this bug data with other interesting data sources such as test execution data, project milestone data, or resource consumption data, then additional powerful insights become possible that illuminate the cost-benefit of your engineering efforts.

Step 1: Train your engineers to think like Data Analysts

In Figure 5 we described routes out of the data/volume traps towards Level 1 and 2 capabilities. We can also identify three more specific steps that need to be taken to make progress.

As we mentioned, data analysis is a core skill for Verification Engineers, whether they realise it or not. Sometimes, however, the basics of data fluency are not there, and this is something you can train your engineers in. Often, data analysis can be quite basic; maybe a static extract of data that is visualised as an Excel table or, better, as an Excel chart. These basic analytics are static views of data that may need to be updated manually and regularly, and are presented as snapshots in time for project reporting or progress tracking.

Live and fully automated analytics is the way to go. Engineers and managers need to be able to access data analytics at any time and trust that what they are seeing is the latest complete and accurate data. They need to be able to self-serve these analytics and not rely on engineers or data-analysts to refresh and serve the analytics on request. This requirement leads to the need to deliver user-friendly visualisations underpinned by automated data pipelines that consume data at source and clean and transform that data into reliable data models upon which interactive visualisations can be built.

So, more skills are required here than a basic competence with spreadsheets and charts.

We advocate training some core data skills for engineers that will enable them to understand and present their data in a way that leads to powerful insights. Some of these activities can be outsourced to trained data analysts, but a core knowledge in this area ensures that Verification Engineers gather and analyse the right datasets and understand what data is needed and how to interpret it. It also engenders a data perspective (or data fluency) where engineers start to understand how to read data, how to manipulate and transform it, and how to be wary of pitfalls that can produce misleading results, such as many-to-many relationships between data elements.

  • Data Capture: Where is your data coming from? What is the provenance of the data, and is it all being collected? This usually entails some instrumentation of verification workflows to capture data and send it to a Data Lake. In turn, that means that you need to figure out the correct data schema that will capture all the required fields needed to support the analytics. This should be an automated process so that data capture is on by default. Capture the data, regardless of whether you then need to filter and sample it later for the analytics.
  • Data Cleaning: Most raw data needs some level of cleaning or processing: removing nulls or duplicates, correcting errors or bad entries, or backfilling data gaps, for example. This can be done interactively but is best automated as batch processing wherever possible. Data cleaning can be scripted with Python’s NumPy and pandas libraries, for example, where powerful operations can be performed on data frames in just a few steps (see the short sketch after this list). Many Verification Engineers will already be using Python for verification workflow scripting and processing, so the addition of these data analysis libraries and the concepts around data frame manipulation should not be a difficult step.
  • Data Engineering: This is the step where data is transformed and manipulated into a format suitable for data visualisation. This may involve joining and merging different data sources so that important correlations are possible that will deliver key insights from the data. See Figure 4 Data Challenges. Sometimes called the data model, it is the schema that controls how different data tables are joined, using common elements (primary keys) that link them together. It may also involve pivots, aggregations, summarisations, or the generation of derived or calculated data elements. For example, verification teams might want to correlate simulation testbench execution result data with bug tracking data to understand how effective different testbenches are at finding bugs in the RTL. Additionally, data engineering competence might extend to databases – how to set up structured databases such as MySQL, or unstructured databases (or Data Lakes) such as MongoDB or Hadoop, for example. There is much to learn in this domain, and it’s an area where data engineers and data analysts will specialise, so as a Verification Engineer or Design Engineer, it may be good to understand this discipline but to outsource the data engineering work to data specialists.
  • Data Querying: This may be more of a data engineering skill set, but some basic SQL capability may be useful to support early exploration of datasets, before full data visualisations are available. Exploring datasets is a key competence when presented with new data and prior to establishing any automated analytics. SQL is a core competence for most Data Analysts.
  • Data Visualisation: Finally, the bit that delivers results and key insights is where the data is visualised and the end user can interact with it. This is sometimes referred to as “Business Intelligence” since it presents intelligence or insights into the state of the business (or the state of a product development project). The importance of good data visualisation skills should not be underestimated, and there are multiple good tooling options that are fun to learn and can deliver impressive visualisations very quickly, e.g., PowerBI or Tableau. Learning to use these tools effectively generates real interest and excitement around data, so it’s a worthwhile core skill to add to the Design or Verification Engineer’s skillset.
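The sketch below illustrates the cleaning and engineering steps described above in a few lines of pandas. The file names and columns are hypothetical, chosen only to show a typical clean-and-join between test-result data and bug data on a shared design_unit key.

```python
# Minimal sketch: clean two hypothetical datasets and join them on design_unit
# so that verification effort can be correlated with bugs found.
import pandas as pd

results = pd.read_csv("test_results.csv")   # e.g. test_id, design_unit, status, sim_time_s
bugs = pd.read_csv("bugs.csv")              # e.g. bug_id, design_unit, severity

# Data cleaning: drop duplicates, remove rows missing the join key,
# and normalise inconsistent status strings.
results = (results.drop_duplicates()
                  .dropna(subset=["design_unit"])
                  .assign(status=lambda df: df["status"].str.strip().str.lower()))

# Data engineering: aggregate per design unit, then join the two sources.
effort = results.groupby("design_unit").agg(
    tests_run=("test_id", "count"),
    fails=("status", lambda s: (s == "fail").sum()),
    sim_hours=("sim_time_s", lambda s: s.sum() / 3600.0),
)
bug_counts = bugs.groupby("design_unit").size().rename("bugs_found")

per_unit = effort.join(bug_counts, how="left").fillna({"bugs_found": 0})
per_unit["bugs_per_sim_hour"] = per_unit["bugs_found"] / per_unit["sim_hours"]
print(per_unit.sort_values("bugs_per_sim_hour", ascending=False).head())
```

In a production pipeline, the same transformations would typically run as an automated batch job feeding a data model behind a PowerBI or Tableau dashboard, rather than as an ad hoc script.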

Step 2: Exploit Advanced EDA Tooling

The EDA industry has been working for several years on ways to exploit AI and ML to enhance its tool offerings. This is enabled both by the large volumes of data generated by many EDA verification tools and by the emergence and maturing of general ML algorithms that suit many verification data problems. These capabilities are often offered as new versions of existing tools that can be licensed, or as enhancements of existing tools where performance and efficiency are improved thanks to ML under the hood. The end user may not need to know that ML is being utilised by the tools, or change the way they use them, but the tools will perform better. This presents a low barrier to adopting more advanced tooling, should your verification teams choose to do so, and without the need to train as data scientists or learn ML. We are not going to discuss the specific offerings of the EDA vendors or attempt to survey the market here. Our point is this:

Verification teams should be encouraged to explore and evaluate the available offerings…

… to see if the cost-benefit is there for their workflows and their product development lifecycles. The EDA industry is constantly evolving, and verification tooling has been an area of high innovation for some time. It is the responsibility of the verification team to keep abreast of the latest developments and engage with the EDA vendors to ensure their current and future requirements can be met by the vendors’ technology roadmaps.

Some of the ways (but not all) that ML is enhancing EDA tool offerings are in the following areas: –

  • Debug acceleration using automated failure clustering and signature analysis (the general idea is illustrated in the sketch after this list).
  • Execution optimisation to ensure optimal tool settings are used for simulation runs.
  • Optimisation of formal engine selections for formal verification.
  • Coverage closure acceleration by test selection ranking and optimisation.
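None of this requires verification teams to build ML themselves, but to make the first bullet tangible, here is a minimal sketch of the general idea behind failure clustering, using TF-IDF and DBSCAN from scikit-learn on a few made-up failure signatures. Commercial EDA tools use their own, more sophisticated methods; this only shows the concept of grouping similar failures so that one representative per cluster is debugged first.

```python
# Minimal sketch: cluster regression failure messages by textual similarity so
# that only one representative failure per cluster needs to be debugged first.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

failures = [
    "UVM_ERROR @ 1200ns: axi_scoreboard data mismatch id=3",
    "UVM_ERROR @ 1450ns: axi_scoreboard data mismatch id=7",
    "UVM_FATAL @ 900ns: watchdog timeout waiting for response",
    "UVM_ERROR @ 2100ns: axi_scoreboard data mismatch id=9",
]

# Vectorise the messages and cluster them by cosine distance.
tfidf = TfidfVectorizer().fit_transform(failures)
labels = DBSCAN(eps=0.7, min_samples=2, metric="cosine").fit_predict(tfidf)

for label, msg in zip(labels, failures):
    print(label, msg)   # label -1 marks an unclustered (unique) failure
```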

You can think about verification workflows as a set of data inputs and data outputs, as shown below. Both input data sets and the generated output data sets can be candidates for ML opportunities. We know how much effort can be expended on coverage analysis and parsing of test results. Even small improvements in efficiency and effectiveness in these key areas can yield worthwhile savings in cost, quality, and time to market.

Figure 6 ML for EDA tooling

Step 3: Train your engineers to think like Data Scientists

So far, we have talked about the core skills required to perform competent data analytics, but there is a whole branch of data analytics, often referred to as Data Science, that is exciting and appealing because it offers opportunities to exploit our data in different ways and to yield further insights that may not be achievable with data visualisations alone. Often referred to as ML, or Machine Learning, it is a well-established discipline that is accessible to all with a little more training. There are libraries of ready-made algorithms available; you can find many of these conveniently bundled in Python’s scikit-learn library, for example. Curious Verification Engineers love to innovate and problem-solve around verification efficiency and effectiveness. These are engaging and challenging problems, and solving them by learning and applying new ML skills can be highly motivating. Learning these new skills is also fun, and there are many excellent on-line learning platforms that can take you from zero to hero in a very short time, e.g., DataQuest, DataCamp, Udemy, Coursera, or Codecademy, to name a few.

If your engineering team has mastered basic data analytics and visualisation skills, your data pipeline is clean and accurate, and you are collecting enough data, then there are many optimisation problems in verification that may be ripe for an ML approach – e.g., regression set reduction and optimisation, prediction modelling for resource demands, coverage closure optimisation etc.
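As one illustration of the regression-optimisation idea (a sketch under assumed data, not a recommended recipe), a simple scikit-learn classifier trained on historical test outcomes can rank candidate tests by predicted failure risk so the riskiest tests run first. The feature names here are hypothetical.

```python
# Minimal sketch: rank regression tests by predicted failure risk for a new
# code change, using a classifier trained on hypothetical historical outcomes.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

history = pd.read_csv("test_history.csv")
features = ["files_touched", "hist_fail_rate", "days_since_last_fail", "test_runtime_s"]

X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["failed"], test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Score the candidate tests for the next regression and run the riskiest first.
candidates = pd.read_csv("next_change_tests.csv")
candidates["fail_risk"] = model.predict_proba(candidates[features])[:, 1]
print(candidates.sort_values("fail_risk", ascending=False).head(20))
```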

Beyond this, there is much excitement about AI today, especially the application of Generative AI to problems such as test generation or code writing. We are not going to explore that topic here but, when Verification Engineers start to think and act like data scientists, there may be many opportunities to make tangible improvements to the way that complex designs are verified, using fewer resources, in a shorter time, and delivering higher quality products.

Summary

Hardware Verification is a data-heavy problem.

Verification Engineers have known this for some time, and their day-to-day work involves gathering, processing, and reporting on some large datasets. The reason it is a data-heavy problem is that verification is intrinsically an open-ended problem. Engineering teams need insightful analytics to make this open-ended process measurable and finite. Some engineering teams are still working with spreadsheet-level analysis and visualisation, often using static snapshots of data and manual data manipulations that are time-consuming to update. There may be many different data sources contained in many different systems, which makes it difficult to join data and make insightful correlations.

For many, the challenge is how to exploit verification data with data analytics that will reveal significant opportunities to improve hardware verification.

There are mature disciplines available to assist with this, especially in the areas of data engineering, data analytics, and data visualisation. Engineering teams need to either up-skill in modern data analytics, or engage professional data engineers, data analysts, and data scientists to bring these capabilities to the product development process. The end point is a set of interactive, real-time analytics that are intuitive, accessible, accurate, and self-service. Consumers of analytics should no longer need to raise a request to see an updated report. They should access the visualisations themselves and understand how to drill down and filter to the view they require, which they can save or embed as a favourite view, knowing that this is real-time data and trusting that it is accurate. Report generation becomes a less onerous task when you have live analytics at your fingertips. The improved availability and accessibility mean analysis is devolved to those who need the data and, what’s more, curiosity should reveal previously unknown insights when the data is so much easier to see and explore.

If you do nothing else, refine your bug data capture behaviours and processes…

… because bug analytics can reveal insights that can be acted on in the near term.

That’s the baseline verification data analytics to aim for. Do this first. Establish a clean, accurate, and complete data pipeline where the end point is fantastic, explorable data visualisations. Beyond that, there are further possibilities to explore datasets more deeply and exploit more advanced techniques such as ML or AI to find previously unseen patterns in data and build feedback loops into processes and workflows to optimise and reduce time, effort, and cost. We note that all the mainstream EDA verification tool vendors are already building ML under the hood of many of their advanced tool offerings. These can be exploited today without the need to train your engineers as data scientists. Most verification activities involve some sort of iteration or refinement towards a result, and you may be able to get there with an acceptable level of accuracy in a fraction of the time using ML/AI. More advanced teams, or teams who engage trained data scientists, may be able to realise these gains as data maturity grows and engineering teams adopt a strong data culture.

Authors:
Bryan Dickman, Valytic Consulting Ltd.,
Joe Convey, Acuerdo Ltd.

Also Read:

The Quest for Bugs: “Shift-Left, Right?”

The Quest for Bugs: “Correct by Design!”

The Quest for Bugs: Bugs of Power

The Quest for Bugs: “Deep Cycles”

The Quest for Bugs: Dilemmas of Hardware Verification


Samtec Increases Signal Density Again with Analog Over Array™

Samtec Increases Signal Density Again with Analog Over Array™
by Mike Gianfagna on 10-04-2023 at 6:00 am


Samtec is well-known for its innovative signal channel solutions. Whether the application requires copper or fiber, Samtec can deliver incredible performance and flexibility. The quality of the company’s models, eval kits and design support are well-known. There is another aspect of Samtec’s innovation. I touched on it in last month’s post when I discussed how Samtec can turn bulky waveguides into flexible cables. Beyond performance, this approach has a big impact on form factor and signal density – two very important topics these days. In this post, I’ll explore another innovation from Samtec that mixes analog and digital signals in the same connection. Read on to see how Samtec increases signal density again with Analog Over Array™.

What is Analog Over Array?

Samtec Analog Over Array connectors are dense, high-frequency connectors supporting digital and analog differential or single-ended signaling. An open-pin-field design provides the flexibility to simultaneously run differential pairs, single-ended signals, and power through the same interconnect. The figure at the top of the post shows what these connectors look like.

Samtec’s high-density array connectors are already proven in high-speed, high-performance digital applications. For RF applications, Samtec adds industry-leading differential crosstalk and return loss performance beyond 8 GHz. You get performance and a denser form factor in one package.

Features of the technology include:

  • Open-pin-field design with maximum routing and grounding flexibility
  • Analog and digital signals (differential pairs and/or single-ended) plus power through the same interconnect
  • Differential ground pattern supports RF SoCs
  • Single-ended ground pattern

The approach can be used in a wide range of applications, including 5G/LTE, remote PHY, digital phased-array radar, test and measurement, low/medium Earth orbit satellites, and RF SoCs. If you are working in any of these areas, you should check out what Analog Over Array can do for your project.

There are more details and references coming, but here is a top-level summary of performance:

  • 50 Ohm system impedance (single-ended); 100 Ohm system impedance (differential)
  • Return loss (maximum): -12 dB up to 4 GHz; -10 dB up to 8 GHz
  • Crosstalk isolation between channels (minimum): -69 dBc to 4 GHz; -53 dBc to 8 GHz

How Can I Get It?

Analog Over Array capability is available through several Samtec products:

NOVARAY® 112 GBPS PAM4 ARRAY, EXTREME DENSITY ARRAYS

Features

  • 112 Gbps PAM4 per channel
  • 4.0 Tbps aggregate data rate – 9 IEEE 400G channels
  • PCIe® 6.0 capable
  • Innovative, fully shielded differential pair design enables extremely low crosstalk (beyond 40 GHz) and tight impedance control
  • Two points of contact ensure a more reliable connection
  • 92 Ω solution addresses both 85 Ω and 100 Ω applications
  • Analog Over Array™ capable

ACCELERATE® HP HIGH-PERFORMANCE ARRAYS

Features

  • 0.635 mm pitch open-pin-field array
  • 56 Gbps NRZ/112 Gbps PAM4 performance
  • Cost optimized solution
  • Low-profile 5 mm and up to 10 mm stack heights
  • Up to 400 total pins available; roadmap to 1,000+ pins
  • Data rate compatible with PCIe® 6.0 and 100 GbE
  • Analog Over Array™ capable

SEARAY™ 0.80 MM PITCH ULTRA HIGH-DENSITY ARRAYS

Features

  • 0.80 mm (.0315″) pitch grid
  • 50% board space savings versus .050″ (1.27 mm) pitch arrays
  • 28 Gbps NRZ/56 Gbps PAM4 performance
  • Rugged Edge Rate® contact system
  • Up to 500 I/Os
  • 7 mm and 10 mm stack heights
  • Solder charge terminations for ease of processing
  • Samtec 28+ Gbps Solution
  • Final Inch® certified for Break Out Region trace routing recommendations
  • Analog Over Array™ capable

LP ARRAY™ LOW PROFILE OPEN-PIN-FIELD HIGH-DENSITY ARRAY

Features

  • 4 mm, 4.5 mm, 5 mm stack heights
  • Up to 400 I/Os
  • 4, 6 and 8 row designs
  • .050″ (1.27 mm) pitch
  • Dual beam contact system
  • Solder crimp termination for ease of processing
  • 28 Gbps NRZ/56 Gbps PAM4 performance
  • Analog Over Array™ capable

To Learn More

Samtec will soon offer complete evaluation kits to test-drive Analog Over Array™ technology. Stay posted for details about these kits here. A detailed characterization report on a SEARAY design is available here. Other Samtec product families mentioned in the article are on the roadmap. A White Paper on the technology is available here. And that’s how Samtec increases signal density again with Analog Over Array™.


Transformers Transforming the Field of Computer Vision

Transformers Transforming the Field of Computer Vision
by Kalar Rajendiran on 10-03-2023 at 10:00 am

The Structure of a Transformer: Attention

Over the last few years, transformers have been fundamentally changing the nature of deep learning models, revolutionizing the field of artificial intelligence. Transformers introduce an attention mechanism that allows models to weigh the importance of different elements in an input sequence. Unlike traditional deep learning models, which process data sequentially or hierarchically, Transformers can capture dependencies between elements in parallel. This makes it possible to train much larger models more efficiently.
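For readers who like to see the mechanism, here is a minimal NumPy sketch of the scaled dot-product self-attention computation that underlies this parallelism. Real Transformers add learned query/key/value projections, multiple heads, and positional encodings on top of this.

```python
# Minimal sketch of scaled dot-product attention: every element attends to every
# other element in parallel, weighted by query-key similarity.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                         # weighted mix of values

# Toy example: 4 tokens (or image patches), embedding dimension 8, self-attention.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)   # (4, 8)
```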

While originally developed for natural language processing (NLP), Transformers have started to gain prominence and adoption in a number of different applications. One such application is computer vision, the field that enables machines to interpret and understand visual information from the real world.

Computer vision has evolved over the years, from techniques based on handcrafted features to the recent surge in deep learning models. The advent of deep learning, fueled by the availability of large datasets and powerful GPUs, has revolutionized the field. Deep learning models have surpassed human-level performance in tasks such as image classification, object detection, and image generation. This field has long relied on convolutional neural networks (CNNs) for its deep learning architecture. Researchers have realized that Transformers could be adapted to tackle spatial data too, making it a promising candidate for computer vision applications. This is the context for a talk given at the 2023 Embedded Vision Summit by Tom Michiels, Principal System Architect at Synopsys.

Why Transformers for Vision

Computer vision tasks, such as image classification, object detection, image segmentation, and more, have traditionally relied heavily on CNNs. While CNNs are effective at capturing spatial hierarchies and local patterns in images, Transformers excel at capturing long-range dependencies and global contextual information within an image. This is essential for understanding relationships between distant image regions, making them suitable for complex vision tasks. Transformers process all elements in parallel, eliminating the sequential nature of CNNs. This parallelization significantly accelerates training and inference times, making large-scale vision models more feasible. Transformers can be scaled horizontally by adding more layers and parameters, allowing them to handle a wide range of vision tasks. They can also be scaled vertically to handle larger or smaller input images, from whole-image classification to fine-grained object detection. Vision tasks often involve multiple modalities, such as images and text. Transformers are inherently multimodal, making them suitable for tasks that require understanding and reasoning about both visual and textual information. This versatility extends their applicability to areas like image captioning and visual question answering.

Transformers also tend to produce more interpretable representations compared to some other deep learning models. The attention maps generated by Transformers provide insights into which parts of the input are weighted more for making predictions. This interpretability is invaluable for debugging models and gaining insights into their decision-making processes.

Applications of Transformers in Computer Vision

Models like DETR (DEtection TRansformer) have demonstrated remarkable performance in object detection tasks, outperforming traditional CNN-based approaches. DETR’s ability to handle variable numbers of objects in an image without the need for anchor boxes is a game-changer. Transformers have also shown significant promise in semantic and instance segmentation tasks. Models like Swin Transformer and Vision Transformer (ViT) have achieved competitive results in these areas, offering improved spatial understanding and feature extraction. Transformer-based models, such as DALL-E, are capable of generating highly creative and context-aware images from textual descriptions, opening up new opportunities for content generation and creative applications. Last but not least, Transformers can generate descriptive captions for images, enriching the accessibility of visual content.

Hybrid Models

While ViTs are excellent at image classification and beat CNNs in accuracy and training time, CNNs beat ViTs in inference time. And while Transformers are more helpful for recognizing complex objects, the inductive bias of a convolution is more helpful for recognizing low-level features. Training large-scale Transformer models for computer vision also often requires extensive datasets and computational resources.

As such, a vision processing application may utilize both CNNs and Transformers for greater efficiency. Combining the strengths of Transformers with other architectures like CNNs is a growing area of research, as hybrid models seek to leverage the best of both worlds.

Synopsys ARC® NPX6 NPU IP

The Synopsys ARC® NPX6 NPU IP is an example of an AI accelerator that can handle CNNs and transformers. It leverages a convolution accelerator for matrix-matrix multiplications, as well as a tensor accelerator for transformer operations and activation functions. The IP delivers up to 3,500 TOPS performance and exceptional power efficiency of up to 30 TOPS/Watt. Design teams can also accelerate their application software development with the Synopsys ARC MetaWare MX Development Toolkit. The toolkit provides a comprehensive software programming environment that includes a neural network software development kit and support for virtual models.

To learn more, visit the product page.

Summary

The surprising rise of Transformers in computer vision spotlights a significant shift in the field’s landscape. Their unique capabilities, including the attention mechanism, parallel processing, and scalability, have challenged the dominance of CNNs and opened up exciting possibilities for computer vision applications. They offer unparalleled versatility and performance across a wide range of tasks, transforming how we interact with visual data. As researchers continue to refine and adapt Transformer models for vision tasks, we can expect further breakthroughs that will lead to smarter, more capable vision systems with broader real-world applications.

Also Read:

Power Analysis from Software to Architecture to Signoff

WEBINAR: Why Rigorous Testing is So Important for PCI Express 6.0

Synopsys Expands Synopsys.ai EDA Suite with Full-Stack Big Data Analytics Solution