Tuesday for lunch at #56DAC I caught up with the AI/ML experts at the panel discussion hosted by Cadence. Our moderator was the affable and knowledgeable Prof. Andrew Kahng from UC San Diego. Attendance was good, and interest was quite high as measured by the number of audience questions. I learned that EDA tools that use heuristics and predictions are great candidates for AI/ML upgrades.
Vishal Sarin, Analog Inference
CEO, founder, neural network processor ICs
Andrew Bell, Groq (ex Google TPU team)
SW defined compute and ML platforms.
Haoxing Ren, Nvidia
Paul Penzes, Qualcomm
VP Engineering, handles Design Technology Co-Optimization, ex-Broadcom.
Venkat Thanvantri, Cadence
VP R&D, AI/ML in digital products
Q: What makes AI/ML relevant now, after 30 years of research?
Paul – in EDA we’re running out of traditional problem-solving techniques. In the short term, ML can help solve new problems. Lots of new design data is now available that can go into ML systems. Speed-ups from HW accelerators now allow faster solutions than ever; Qualcomm has HW accelerators in use now. We also see use of ML in autonomous driving.
Vishal – I graduated with a focus on neuromorphic computing, but at the time there was not much HW available. Only now are we seeing use cases with ML and HW scaling abilities that are enabling progress.
Q: Do you see different architectures for AI/ML workloads?
Vishal – yes, very many architectures, and they’re different for training or inference. We’re not just speeding up MACs. Increasing compute power requires higher memory bandwidth. Dennard scaling has limits, so how do we move memory closer to computation? GPUs can be used to accelerate many tasks, but general purpose CPUs tend to be too slow. Spiking Neural Nets look promising. Analog neural networks have the lowest energy use and work with traditional neural networks.
Haoxing – the GPU approach has been good for AI/ML workloads so far. The best architecture for workloads is still being figured out.
Q: For EDA software, where does ML get applied for better silicon?
Venkat – NP-hard problems use heuristics and predictions, and these are well suited for ML applications, like reaching lower power. I’m excited to apply ML to our EDA heuristics and predictions. We’re also focused on supervised and reinforcement learning applied to EDA problems.
Q: Do GPUs change the EDA tool architectures?
Haoxing – There are two needs in EDA that a GPU can help with: computing speed and the ML approach. EDA tools have changed to support multi-threading. 125 TFLOPS are possible with a GPU today, which is way more than general-purpose CPUs. A GPU-based placer looks promising. ML running on a GPU is useful. EDA tools are moving toward the cloud, and GPUs in the cloud are ready.
Q: Do EDA users see GPUs emerging for new tools?
Andrew – not seen yet to any big degree.
Venkat – the availability of GPUs will help with training ML models.
Q: For IC implementation like synthesis and physical design, where does ML in EDA tools come in?
Andrew – we’ve seen ML used effectively in silicon validation, finding bugs, and correlation. Some power data mining with ML is useful, but not too effective. Floorplanning results with AI are way behind what a human can achieve today. Could floorplanning be modeled as a game, and reinforcement learning thus applied?
Venkat – some support of macro placement today in Cadence tools, heuristically.
Andrew – Bayesian optimization can tune ML models; it’s like running an EDA tool today where you sweep parameters. Small changes in RTL can take hours or days to evaluate; can EDA make this ripple take only minutes? EDA tools should be using statefulness to remember previous run results.
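Andrew’s point about statefulness is easy to picture in code. Here’s a minimal sketch (all names and the QoR formula are made up for illustration, not any real tool’s API): a parameter sweep that caches previous tool runs, so re-sweeping the same grid after a small change costs nothing for points already seen.

```python
# Hypothetical stand-in for an expensive EDA tool run: returns a
# quality-of-results score for an (effort, utilization) parameter pair.
def run_tool(effort: int, utilization: float) -> float:
    run_tool.calls += 1  # count real (expensive) invocations
    return effort * 10 - abs(utilization - 0.7) * 100
run_tool.calls = 0

class StatefulSweeper:
    """Remembers previous run results so repeated sweeps skip known points."""
    def __init__(self):
        self.history = {}  # (effort, utilization) -> score

    def evaluate(self, effort: int, utilization: float) -> float:
        key = (effort, utilization)
        if key not in self.history:      # only pay for unseen points
            self.history[key] = run_tool(effort, utilization)
        return self.history[key]

    def best(self):
        return max(self.history, key=self.history.get)

sweeper = StatefulSweeper()
grid = [(e, u) for e in (1, 2, 3) for u in (0.6, 0.7, 0.8)]
for e, u in grid:
    sweeper.evaluate(e, u)
# A second sweep over the same grid triggers zero extra tool runs.
for e, u in grid:
    sweeper.evaluate(e, u)
```

A Bayesian optimizer would go one step further and use the cached history to decide which unseen point to try next, rather than sweeping the whole grid.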
Paul – if you find yourself doing the same flow over and over again, like power optimization, that is well suited for an ML application. In DRC checking we need to track the rate of improvement to cut overall runtimes; some adaptiveness would help. During library characterization the data varies very slowly, and with a smooth trend you could infer the answers without so much simulation.
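Paul’s characterization idea can be sketched in a few lines. This is a toy, not a real characterization flow: the `spice_delay` function and its coefficients are invented stand-ins for a slow simulator, and the point is simply that when delay varies smoothly with input slew, simulating a sparse grid and interpolating can replace most of the SPICE runs.

```python
# Invented stand-in for a slow SPICE run; assumes delay grows smoothly
# with input slew (a fabricated linear relationship for illustration).
def spice_delay(slew_ps: float) -> float:
    return 20.0 + 0.5 * slew_ps

# Simulate only a sparse grid of slew points...
simulated = {s: spice_delay(s) for s in (10.0, 50.0, 100.0)}

def infer_delay(slew_ps: float) -> float:
    """...and linearly interpolate between the nearest simulated points."""
    pts = sorted(simulated)
    lo = max(p for p in pts if p <= slew_ps)
    hi = min(p for p in pts if p >= slew_ps)
    if lo == hi:
        return simulated[lo]
    t = (slew_ps - lo) / (hi - lo)
    return simulated[lo] * (1 - t) + simulated[hi] * t
```

Real characterization tables are multi-dimensional (slew × load × corner), but the same idea scales: simulate the corners, infer the interior.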
Q: What kinds of new EDA tools or methodologies would you write a big check for today? What are you missing?
Vishal – we do field-programmable neural nets with analog techniques. Our architecture requires us to model neurons based on the networks you need. We are struggling to save on digital power. On the edge we need much smaller power numbers. We need reliable timing-closure tools. I don’t know how much we are leaving on the table by not having an AI tool. Inferencing needs to be on the edge with very low power.
Silicon is getting expensive as we scale down, so how can AI improve my yields to lower my costs?
Andrew – HLS is appealing to explore the architectural space.
Q: You have a DAC paper this year, so which EDA tools should be using ML but aren’t yet?
Haoxing – EDA tools are just scratching the surface with ML now; it’s mostly supervised learning approaches. We need to apply deep learning and reinforcement learning, or try unsupervised learning, in EDA. Many problems have no analytical solution. Analyzing the self-heating of a FinFET SoC with 10 billion transistors takes too long in SPICE, so what about an ML model that doesn’t require SPICE? We want to see DL added to EDA tools. We want to improve P&R with ML techniques.
Q: The digital EDA flow is well established now, will new ML products change the EDA flows?
Venkat – I’m seeing many new customers designing AI/ML chips. We’re starting to use ML inside some EDA tools. ML outside the tools is making designers more productive, giving them better recommendations and fewer iterations. Having a big data platform provides an interface layer to all digital tools to save metadata, allowing analytic applications. We want EDA customers to do their own analytics.
Q: How will ML affect the job function of digital design engineers?
Andrew – I’m a data nerd, and extract lots of data per tool run, and it’s very important. What about alternative HDLs beyond SystemC and SystemVerilog? Engineers will need to learn more programming and solvers, like in formal. Data science becomes an even more important discipline now.
Q: Will these new skill sets come from ML or inference?
Andrew – yes, the skill sets will be different. Admit that you don’t know very much.
Q: Do you see multi-die approaches being used for ML designs?
Vishal – yes, in a big way. Memory needs to be near compute, so why not integrate it. How about using RRAM or phase change memory. The logic process isn’t the same as the memory process, so multi-die makes more sense. The highest performance is made by using multi-die approach.
Q: What kind of metrics and data analytics are you using?
Andrew – during timing closure we extract data for every path and use tools like Python to help reach timing closure outside of the EDA tools.
Venkat – looking at different timing cones is possible using our big data platform.
Haoxing – some open source AI/ML tools are out there now, the challenge is getting data out of the EDA tools.
Q: I do functional verification, how would I deploy ML to improve bug prediction or coverage closure?
Haoxing – Nvidia has used ML for functional verification, and there are papers from DVCon. We can predict coverage closure with ML; this is an active research area for us.
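For readers wondering what "helping coverage closure" can look like in practice, here is a deliberately simple, classical sketch of the same goal (reaching closure with fewer regression runs) rather than Nvidia’s actual ML approach: given which coverage bins each test historically hits (all names and numbers invented), greedily rank tests by how many not-yet-hit bins each one adds.

```python
# Hypothetical coverage database: coverage bins hit by each regression test.
test_coverage = {
    "t_rand_1": {1, 2, 3},
    "t_rand_2": {3, 4},
    "t_direct": {5},
    "t_long":   {1, 2, 3, 4},
}

def rank_tests(coverage: dict) -> list:
    """Greedily order tests by incremental coverage gain, so a shortened
    regression reaches the same coverage in fewer runs (greedy set cover)."""
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Pick the test adding the most not-yet-covered bins.
        name = max(remaining, key=lambda n: len(remaining[n] - covered))
        if not remaining[name] - covered:   # no test adds anything new
            break
        covered |= remaining.pop(name)
        order.append(name)
    return order
```

Here `rank_tests` keeps only two of four tests while covering every bin; an ML predictor would go further by estimating the coverage of tests that have never been run.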
Venkat – ML is a top initiative within Cadence, and we have lots of resources in place, both analog and digital improvements have been reported and we expect even more to come.
Paul – Qualcomm invests in HW to enable and accelerate ML for us. We’re looking at where ML is an approach for new problems.
Haoxing – GPUs will continue to accelerate ML tasks. Using ML doesn’t replace digital designers.
Andrew – look out for Groq. Don’t expect HW jobs to go away any time soon, because human insight cannot be replaced.
Vishal – new DL silicon is exciting, but not general purpose AI.