On July 9, 2019, I attended the TechTALK session hosted by Dave Kelf of Breker Systems, Inc. titled, “Applied AI in Design-to-Manufacturing.” I was happy to hear what Dave had put together for this since it is a topic I am keenly interested in and because I have known Dave personally through music and charitable activities we have worked on together.
Dave’s intro was brief before he gave way to his first speaker, Aki Fujimura, Chairman and CEO of D2S, Inc. Again, I had the pleasure of listening to a good friend, as Aki and I were two of the co-founders of Tangent Systems when we were both relatively fresh out of college. Back then, we were working together on timing-driven place and route. We have both broadened technically since then; Aki has immersed himself in the manufacturing side of semiconductors since founding D2S in 2007. Aki’s title was a straightforward description of his talk, “Everything Needs a Digital Twin in the Deep Learning Era.” As I understood it, since there are relatively few defects to be found in actual mask designs, it is difficult to train a deep learning neural network to find them. Aki’s proposed solution is to create a digital twin, a simulated version of the real wafer or mask, which can supply as much labeled training data as the network needs. My first reaction was that the idea was deep, weird, and brilliant. I often had trouble keeping up with Aki and Steve Teig (another Tangent co-founder) when we worked together, as they are both outstanding engineers. The concept of a digital twin is something I want to follow up on and study more.
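To make the training-data point concrete, here is a toy sketch of the idea as I understood it — not D2S’s actual methodology. All names, pattern shapes, and the defect model are invented for illustration: a simulated “mask” generator produces clean patches, a defect injector flips a small region, and the result is a balanced labeled dataset of a kind that rare real-world defects cannot provide.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulated_mask_patch(size=64):
    """Generate a clean synthetic mask patch: a simple line/space pattern.
    (A stand-in for a real lithography simulation; purely illustrative.)"""
    patch = np.zeros((size, size), dtype=np.float32)
    patch[:, ::8] = 1.0  # vertical "lines" every 8 pixels
    return patch

def inject_defect(patch):
    """Inject a synthetic defect by inverting a small random region
    (a crude stand-in for a bridge or pinhole)."""
    defective = patch.copy()
    y, x = rng.integers(4, patch.shape[0] - 4, size=2)
    defective[y - 2:y + 2, x - 2:x + 2] = 1.0 - defective[y - 2:y + 2, x - 2:x + 2]
    return defective

# Build a balanced labeled training set -- something scarce real defect
# data cannot give you, but a digital twin can generate on demand.
clean = [simulated_mask_patch() for _ in range(100)]
defective = [inject_defect(p) for p in clean]
X = np.stack(clean + defective)
y = np.array([0] * 100 + [1] * 100)  # 0 = clean, 1 = defective
print(X.shape)  # (200, 64, 64)
```

From here, `X` and `y` could feed any standard image classifier; the point is only that the simulator, not the fab, is the source of the labeled examples.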
Next up was Jan Rabaey of both UC Berkeley and IMEC, speaking on “The Cognitive Edge.” We have heard multiple interpretations of what Cloud and IoT mean for a few years now, and to me, the term used most inconsistently has been the Edge. What Jan adeptly pointed out is that the huge number of sensors and other devices in the world generates an overwhelming amount of data, and it is neither practical nor efficient to send all of that data to “the Cloud” for processing. Adding intelligence at the edge of the IoT network is therefore required. What to do about it? That will be for several future blogs — I appreciated Rabaey’s presentation, and I intend to talk much more about “the EDGE” in them.
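A minimal sketch of why edge intelligence pays off, using invented numbers (this is my illustration, not Jan’s): an edge device keeps a local statistical baseline and forwards only readings that deviate sharply from it, rather than streaming every sample upstream.

```python
import random
from statistics import mean, stdev

def edge_filter(readings, z_threshold=3.0):
    """Toy edge-side filter: forward only readings that deviate strongly
    from the local baseline, instead of streaming everything to the cloud."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [r for r in readings if abs(r - mu) / sigma > z_threshold]

# 10,000 simulated sensor samples with a couple of anomalies mixed in.
random.seed(1)
samples = [random.gauss(20.0, 0.5) for _ in range(10_000)]
samples[1234] = 45.0   # injected anomaly
samples[9876] = -3.0   # injected anomaly
to_cloud = edge_filter(samples)
print(f"sent {len(to_cloud)} of {len(samples)} samples upstream")
```

Even this crude filter forwards a tiny fraction of the raw stream while still catching both injected anomalies — the data-reduction argument for the edge in miniature.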
Finally, we get to the entertainment portion of the event, Jim Hogan’s panel titled, “Are we Experiencing a Renaissance in Chip Design and EDA?” From an investment point of view, EDA has seen a decrease in start-up investment since 2004, when Mike Fister told the world that Cadence was not going to rely on buying start-ups anymore. The statement was not prophetic, because Cadence indeed completed some very significant acquisitions after that. But in the long run, it was a self-fulfilling prophecy: investors started walking away from a market where successful exits were going to be significantly harder to achieve, given the reduced competition for acquisitions once Cadence stepped back as a buyer of EDA start-ups. Still, a lot of new technology has been coming out of a smaller number of EDA start-ups, and they are affecting the EDA marketplace substantially (see Verification 3.0). There was also a strong panel of experts from outside the EDA Big 3 there to discuss it.
The panel, which of course was moderated by Jim Hogan (Vista Ventures), consisted of (alphabetically) Simon Butler (Methodics), Joe Costello (Montana Systems), Simon Davidmann (Imperas), Adnan Hamid (Breker), and Doug Letcher (Metrics). Jim’s premise, which was not refuted by the panel, was that three macro trends were powering this renewal in EDA:
- A new computing platform – infinite scalable compute capacity in the cloud along with cloud-native development tools like GitHub and Kubernetes, which enables a new SaaS business model that lets customers purchase exactly the amount of software cycles they need
- A customer demand for much higher simulation throughput, driven by the increased size and complexity of chip designs, coupled with high tape-out costs that mean one must achieve the highest possible verification coverage to avoid test escapes
- A new chip design opportunity – application- or domain-specific processors that, coupled with the cloud platform, will enable startups and system companies to build specialized processors, rather than leaving chip design to the handful of giant companies building enormously expensive general-purpose processors.
Each of these business leaders spoke supportively on these premises for a while before Joe Costello lit the fuse and declared that the big EDA companies were slowing this whole thing down by dragging their feet on truly addressing cloud-based licensing models. We need to remember that Joe was leading Cadence when it moved from perpetual licenses with maintenance fees to time-based license fees. He saw what he felt his customers needed, he did it, and the rest of the industry followed. The issue today is how to provide the opportunity to scale compute power and licenses on an as-needed basis – to pay only for what you use, on demand. In Joe’s opinion, the EDA industry is not moving very quickly toward this new model, or at least, the big EDA companies are not.
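The economics behind the on-demand argument can be sketched in a few lines. All figures here are hypothetical and mine, not the panel’s: a verification team that bursts to 50 simulator licenses for two regression-heavy weeks, but otherwise runs 5, must license for the peak under a fixed seat model, while a metered model charges only for hours actually consumed.

```python
def annual_cost_on_demand(license_hours, rate_per_hour):
    """Pay only for the license-hours actually consumed (cloud/SaaS model)."""
    return license_hours * rate_per_hour

def annual_cost_fixed(num_seats, seat_price):
    """Traditional seat licenses: pay for peak capacity all year."""
    return num_seats * seat_price

# Hypothetical numbers, for illustration only.
burst_hours = 50 * 14 * 24    # two weeks of heavy regression at 50 licenses
steady_hours = 5 * 351 * 24   # 5 licenses the rest of the year
on_demand = annual_cost_on_demand(burst_hours + steady_hours, rate_per_hour=2.0)
fixed = annual_cost_fixed(num_seats=50, seat_price=20_000)
print(f"on-demand: ${on_demand:,.0f} vs peak-capacity seats: ${fixed:,.0f}")
```

The exact crossover depends entirely on the (made-up) rates, but the bursty usage pattern is what makes pay-per-use attractive to customers — and threatening to incumbent license models.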
I have not heard much pushback against Joe’s remarks, although there were many people in attendance. Maybe the conversation will start in the forum below this article? Feel free to post your remarks below.
As was mentioned above, Dave Kelf is VP, Marketing at Breker Systems. Breker Systems is holding a webinar in August titled “Eliminating Hybrid Verification Barriers Through Test Suite Synthesis.” You can read the blog about the webinar here, or go straight to the registration page.