Tag: ML
Machine learning (ML) is finding its way into many of the tools in silicon design flows to shorten run times and improve the quality of results. Logic simulation seemed an obvious target for ML, though it resisted the apparent benefits for a while. I suspect this was because we all assumed the obvious application should be to use ML to refine… Read More
Intellectual Abilities of Artificial Intelligence (AI)
To understand AI’s capabilities and abilities, we need to recognize the different components and subsets of AI. Terms like Neural Networks, Machine Learning (ML), and Deep Learning need to be defined and explained.
In general, artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed… Read More
Intelligently Optimizing Constrained Random
“Who guards the guardians?” This is a question from Roman times that occurred to me as relevant to this topic. We use constrained random to get better coverage in simulation. But what ensures that our constrained random testbenches are not wanting, maybe over-constrained or deficient in other ways? If we are improving with a faulty… Read More
HLS in a Stanford Edge ML Accelerator Design
I wrote recently about Siemens EDA’s philosophy on designing quality in from the outset, rather than trying to verify it in. The first step is moving up the level of abstraction for design. They mentioned the advantages of HLS in this respect and I refined that to “for DSP-centric applications”. A Stanford group recently presented… Read More
Podcast EP81: The Future of Neural Processing with Quadric’s Steve Roddy
Dan is joined by Steve Roddy, chief marketing officer of Quadric, a leading processor technology intellectual property (IP) licensor. Roddy brings more than 30 years of marketing and product management expertise across the machine learning (ML), neural network processor (NPU), microprocessor, digital signal processor… Read More
Webinar: AMS, RF and Digital Full Custom IC Designs need Circuit Sizing
I started my career designing DRAM circuits at Intel, where we manually sized every transistor in the entire design to get the optimum performance, power and area. Yes, it was time-consuming, required lots of SPICE iterations and was a bit error-prone. Thank goodness times have changed, and circuit designers can work smarter … Read More
Machine Learning Applied to IP Validation, Running on AWS Graviton2
I recall meeting with Solido at DAC back in 2009 and learning about their Variation Designer tool, which allowed circuit designers to quickly find out how their designs performed under the effects of process variation, in effect finding the true corners of the process. Under the hood, the Solido tool was using Machine Learning (ML) techniques… Read More
An FPGA-Based Solution for a Graph Neural Network (GNN) Accelerator
Earlier this year, Achronix made a product announcement about shipping the industry’s highest-performance Speedster7t FPGA devices. The press release included a lot of details about the architecture and features of the device and how that family of devices is well suited to satisfy the demands of the artificial intelligence … Read More
Cerebrus, the ML-based Intelligent Chip Explorer from Cadence
Electronic design automation (EDA) has come a long way from its beginnings. It has taken chip engineers from specifying designs directly in layout format in the early days to today’s capture in RTL format. Every advance in EDA has made the task of designing a chip easier and increased design team productivity, enabling… Read More
Webinar: Real-time In-Chip Monitoring to Boost multi-core AI, ML, DL Systems
During the COVID-19 pandemic, I’m using Zoom and attending more webinars to stay up to date on semiconductor industry trends, and one huge trend is the importance of AI applied to SoCs. Using more cores to handle ML and DL makes sense, but then how do you keep the chips within their power and reliability limits while at the same … Read More