How should we differentiate between deep learning and machine learning, given the many ways of describing them? A simple definition of these terms can be found here. Start with Artificial Intelligence (AI), a term coined back in 1956 and defined as human intelligence exhibited by machines. Machine learning is an approach to achieving AI, and deep learning is a technique for implementing a subset of machine learning.
During last year's 30-Year Anniversary TSMC Forum, Nvidia CEO Jen-Hsun Huang described two concurrent dynamics disrupting the computer industry today: how software development is done, by means of deep learning, and how computing is done, through the growing adoption of GPUs as a replacement for single-threaded/multi-core CPUs, which no longer scale to satisfy today's increased computing needs. The following charts illustrate his message.
At this month's DesignCon 2018 in Santa Clara, multiple well-attended sessions (two panels and one workshop) addressed machine learning advances in electronic design. Panelists from three different areas (EDA, industry, and academia) highlighted successful snapshots of ML applied to design optimization, along with its potential consequences, such as how we should handle the generated models and methodologies.
From the industry:
Chris Cheng, a Distinguished Engineer at HPE, presented a holistic view of ML's potential use, coupled with test instruments, as a substitute for software-model-based channel analysis. He also projected ML being used for more proactive failure prediction of signal buses or of complicated hardware such as solid-state drives.
Ken Wu, a Google Staff Hardware Engineer, shared his work on applying ML to channel modeling. He proposed using ML to predict a channel's eye-diagram metrics for signal integrity analysis; the learned models can circumvent the need for complex and expensive circuit simulations. He believes ML opens an array of opportunities for channel modeling, such as extending it to analyze four-level pulse amplitude modulation (PAM-4) signaling and using Deep Neural Networks for Design of Experiments (DOE).
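To make the idea concrete, here is a minimal sketch of training a model to predict an eye-diagram metric from channel parameters so that the trained model can stand in for slow simulations. Everything in it is invented for illustration: the "simulator" is a closed-form stand-in, the parameters (trace length, via count) and coefficients are made up, and a production flow would train on real simulation output (and likely use a neural network rather than least squares).

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_channel_sim(trace_len_in, via_count):
    """Stand-in for an expensive SI simulation; returns eye height in mV."""
    return 400.0 - 12.0 * trace_len_in - 25.0 * via_count

# Build a training set from the stand-in simulator, with some noise.
trace_len = rng.uniform(1.0, 10.0, 200)
vias = rng.integers(0, 5, 200).astype(float)
eye_height = fake_channel_sim(trace_len, vias) + rng.normal(0.0, 2.0, 200)

# Fit a linear surrogate by least squares.
X = np.column_stack([np.ones_like(trace_len), trace_len, vias])
coef, *_ = np.linalg.lstsq(X, eye_height, rcond=None)

# Predict eye height for a new channel without rerunning the simulator.
pred = np.array([1.0, 5.0, 2.0]) @ coef
```

Once trained, each prediction is a single dot product, which is the appeal: the cost of the expensive simulation is paid only while generating training data.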
Dale Becker, IBM Chief Engineer of Electronic Packaging Integration, alluded to a potential dilemma posed by ML: does it supersede today's circuit/channel simulation techniques, or is it synergistic with them? With current design methodologies still involving heavy human intervention (in channel diagnostics, evaluation, optimization, and physical implementation), ML presents an opportunity for exploration. On the other side of the equation, we need to be ready to address standardization, information sharing, and IP protection.
From the EDA world, both Synopsys and Cadence were represented:
The Cadence team of David White (Sr. Group Director), Kumar Keshavan (Sr. Software Architect), and Ken Willis (Product Engineering Architect) highlighted Cadence's contributions to advancing ML adoption. David shared what Cadence has achieved with ML over the years in the Virtuoso product and raised the crucial challenge of productizing ML. For more in-depth coverage of David's similar presentation on ML, please refer to another Wiki article, TSMC EDA 2.0 With Machine Learning – Are We There Yet? Kumar delved into the Artificial Neural Network (ANN) concept and suggested applying it to DOE for an LPDDR4 bus. Ken Willis moderated the afternoon panel and highlighted the recently introduced IBIS ML versus AMI model, as well as the impact of ML on solution space analysis.
Sashi Obilisetty, Synopsys R&D Director, pointed out that the EDA ecosystem (comprising academic research, technology availability, and industry interest) is ready and engaged. What we need now is a robust, scalable, high-performance, and near-real-time data platform for ML applications.
Several academic researchers also shared their progress under the auspices of the Center for Advanced Electronics through Machine Learning (CAEML), formed in 2016:
Prof. Paul Franzon discussed how ML could shorten the IC physical design step through the use of surrogate models. The concept is to train a fast-to-evaluate global model from multiple evaluations of a detailed model that is slow to evaluate. For an SoC design requiring 40 minutes per route iteration, the team needed about 50 runs to build the Kriging-based model overnight. Using this model, an optimal design can be obtained in 4 iterations, where 20 iterations would otherwise be required. The design has 18K gates, derived from a Cortex-M0, with a 10 ns cycle time in a generic 45 nm process.
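The surrogate-model concept can be sketched in a few lines. The following is a hedged illustration, not the team's actual flow: an invented one-dimensional quadratic objective stands in for the slow detailed model, and Kriging is approximated by a simple zero-mean Gaussian-process interpolator with an RBF kernel.

```python
import numpy as np

def slow_detailed_model(x):
    """Stand-in for an expensive evaluation (e.g., a 40-minute route)."""
    return (x - 0.3) ** 2

def rbf(a, b, ls=0.15):
    """Squared-exponential covariance between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

# A handful of expensive runs train the surrogate (the talk cited ~50).
x_train = np.linspace(0.0, 1.0, 8)
y_train = slow_detailed_model(x_train)

K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

def surrogate(x_query):
    """Cheap Kriging-style mean prediction."""
    return rbf(x_query, x_train) @ alpha

# Optimize over the cheap surrogate instead of the detailed model.
grid = np.linspace(0.0, 1.0, 1001)
x_best = grid[np.argmin(surrogate(grid))]
```

The payoff mirrors the numbers quoted above: once the surrogate is built, candidate designs can be screened at negligible cost, so far fewer slow iterations of the detailed model are needed.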
Prof. Madhavan Swaminathan presented another ML-based application of surrogate models, this time to channel performance simulation.
His view: Engineer (thinker) + ML (enabler) + Computers (doers) = enhanced solution. Extending ML into design optimization through active learning may ensure convergence to global optima while minimizing the required CPU time.
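The active-learning idea can be sketched as follows, again as a hedged illustration with an invented objective rather than the presented method: a Gaussian-process surrogate is refined by repeatedly evaluating the expensive function where the model is most uncertain, so CPU time concentrates where it helps.

```python
import numpy as np

def expensive_eval(x):
    """Stand-in for a costly channel-performance simulation."""
    return (x - 0.7) ** 2

def rbf(a, b, ls=0.2):
    """Squared-exponential covariance between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

x_train = np.array([0.0, 0.5, 1.0])   # small initial design
y_train = expensive_eval(x_train)
grid = np.linspace(0.0, 1.0, 201)

for _ in range(5):
    K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
    K_inv = np.linalg.inv(K)
    k_star = rbf(grid, x_train)
    # GP posterior variance: large where the model knows least.
    var = 1.0 - np.einsum("ij,jk,ik->i", k_star, K_inv, k_star)
    x_next = grid[np.argmax(var)]          # query the most uncertain point
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, expensive_eval(x_next))

# After a few adaptive samples, the surrogate's minimum tracks the optimum.
K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)
x_best = grid[np.argmin(rbf(grid, x_train) @ alpha)]
```

A production flow would use a richer acquisition function (e.g., expected improvement) to balance exploring uncertain regions against exploiting promising ones; pure variance-chasing is the simplest case.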
With the increased design activity and research effort in ML/DL applications, we should anticipate more coverage of such implementations throughout 2018. The next question is whether ML will create synergy and enhance design efforts through retooling and methodology adjustments, or whether it will create disruption that changes the human designer's role at different junctures of design capture. We shall see.