No one could accuse Badru Agarwala, GM of the Mentor/Siemens Calypto Division, of being tentative about high-level synthesis (HLS). Then again, he and a few others around the industry have been selling this story for quite a while, apparently to a small and not always attentive audience. But times seem to be changing. I’ve written… Read More
Tag: nvidia
Big Data Analytics and Power Signoff at NVIDIA
While it’s interesting to hear a tool-vendor’s point of view on the capabilities of their product, it’s always more compelling to hear a customer/user point of view, especially when that customer is NVIDIA, a company known for making monster chips.
A quick recap on the concept. At 7nm, operating voltages are getting much closer… Read More
Webinar: High-Capacity Power Signoff Using Big Data
Want to know how NVIDIA signs off on power integrity and reliability on mega-chips? Read on.
Over-designing for PPA increases product cost and risks missed schedules, with no guarantee of product success. Advanced SoCs pack more functionality and performance, resulting in higher power density, but traditional… Read More
Electronic Design for Self-Driving Cars Center-Stage at DVCon India
The fourth installment of DVCon India took place in Bangalore, September 14-15. As is customary, it was hosted in the Leela Palace, a luxurious and tranquil resort in the center of Bangalore and an excellent venue for the popular event.
As reported in my previous DVCon India trip reports, the daily and evening traffic in Bangalore… Read More
Nvidia’s Pegasus Putsch!
There hasn’t been this much excitement in Munich since the 1920s. Nvidia’s great pivot was on display at the GPU Technology Conference Munich 2017. Digital dashboards are out and robotaxis are in as Nvidia narrows its focus on the tip of the automotive industry disruption spear.
To be clear, Nvidia is triangulating on the automotive… Read More
Semiconductor and EDA 2017 Update!
It really is an exciting time in semiconductors. The benchmarks on the new Apple A11 SoC and the Nvidia GPU are simply amazing. Even though Moore’s Law is slowing, the resulting chips are improving well beyond expectations.
As I have mentioned before, non-traditional chip companies such as Apple, Amazon,… Read More
EDA Machine Learning from the Experts!
Traditionally, EDA has been a brute-force methodology: we buy more software licenses and more CPUs and keep running endless jobs to keep up with increasing design and process complexity. Take SPICE simulation, for example: when I meet chip designers (which I do quite frequently), I ask them how many simulations they run for a … Read More
Virtualizing ICE
The defining characteristic of in-circuit emulation (ICE) has been that the emulator is connected to real circuitry – a storage device perhaps, or PCIe and Ethernet interfaces. The advantage is that you can test your emulated model against real traffic and responses, rather than against an interface model that may not fully capture… Read More
HLS update from Mentor about Catapult
I recall when logic synthesis tools were first commercialized back in the late 1980s: at first they could read in a gate-level netlist from one foundry and output an optimized netlist back to the same foundry. Next, they could migrate your gate-level netlist from Vendor A over to Vendor B, giving design companies some… Read More
Machine Learning in EDA Flows – Solido DAC Panel
At DAC this year you could learn a lot about hardware design for AI and machine-learning (ML) applications. We are all familiar with the massively parallel hardware being developed for autonomous vehicles, cloud computing, search engines and the like. This includes, for instance, hardware from Nvidia and others that enables ML … Read More