Discussion of machine learning (ML) and hardware design has been picking up significantly in two fascinating areas: how ML can advance hardware design methods, and how hardware design methods can advance the building of ML systems. Here I’ll talk about the latter, particularly about architecting ML-enabled SoCs. This approach is … Read More
Machine Learning with Prior Knowledge
I commented recently on limitations in deep learning (DL), one of which is the inability to incorporate prior knowledge, like basic laws of mathematics or physics. Typically, understanding in DL must be inferred from the training set, which in general cannot practically cover prior knowledge. Indeed, one of the selling… Read More
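The post itself doesn’t include code, but as a minimal sketch of what “incorporating prior knowledge” can mean in practice, the snippet below fits a quadratic model to noisy free-fall data while penalizing deviation from the known law y″ = −g. The model form, the penalty weight lam, and all constants are illustrative assumptions, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
g = 9.81  # known physical constant (m/s^2)

# Noisy observations of free-fall height: y(t) = y0 - 0.5*g*t^2
t = np.linspace(0.0, 2.0, 30)
y = 20.0 - 0.5 * g * t**2 + rng.normal(0.0, 0.5, t.shape)

# Model: y_hat = a + b*t + c*t^2.  Prior knowledge: y'' = 2c should equal -g.
a, b, c = 0.0, 0.0, 0.0
lam = 10.0   # weight on the physics penalty (illustrative choice)
lr = 1e-2    # gradient-descent step size

for _ in range(20000):
    err = a + b * t + c * t**2 - y
    # Gradients of the combined loss: mean(err^2) + lam * (2c + g)^2
    grad_a = 2.0 * err.mean()
    grad_b = 2.0 * (err * t).mean()
    grad_c = 2.0 * (err * t**2).mean() + 4.0 * lam * (2.0 * c + g)
    a, b, c = a - lr * grad_a, b - lr * grad_b, c - lr * grad_c

print(f"fitted acceleration: {2 * c:.2f} m/s^2 (true: {-g})")
```

Setting lam to zero reduces this to an ordinary least-squares fit, so the comparison isolates what the physics prior buys you when the data are noisy or sparse.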
Deep Learning Fueling the AI Revolution with Interlaken IP Subsystem
AI is revolutionizing and transforming virtually every industry in the digital world. Advances in computing power and deep learning have enabled AI to reach a tipping point toward major disruption and rapid advancement. However, these applications demand much higher performance and bandwidth, requiring new kinds of IP and… Read More
Cadence Selected to Support Major DARPA Program
When DARPA plans programs, they’re known for going big – really big – which is what they are doing again with their Electronics Resurgence Initiative (ERI). Abstracting from their intro, this is a program “to ensure far-reaching improvements in electronics performance well beyond the limits of traditional scaling”. This isn’t… Read More
Maximize Bandwidth in Your Massively Parallel AI SoCs?
Artificial Intelligence is one of the most talked-about topics on the conference circuit this year, and I don’t expect that to change anytime soon. AI is also one of the trending topics on SemiWiki, with organic search bringing us a wealth of new viewers. You may also have noticed that AI is a hot topic for webinars like the one I am writing… Read More
Platform ASICs Target Datacenters, AI
There is a well-known progression in the efficiency of different platforms for certain targeted applications such as AI, as measured by performance and performance/Watt. The progression is determined by how much of the application can be run with specialized hardware-assist rather than software, since hardware can be faster… Read More
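The article doesn’t quantify this progression, but a standard way to reason about it is Amdahl’s law applied to hardware offload: the achievable speedup is bounded by the fraction of the application that can actually run on the specialized hardware. The function name and the numbers below are an illustrative sketch, not figures from the post.

```python
def offload_speedup(f: float, s: float) -> float:
    """Amdahl's-law speedup when a fraction f of the work runs on
    specialized hardware that is s times faster than software."""
    return 1.0 / ((1.0 - f) + f / s)

# Even a 100x-faster accelerator helps little unless most of the
# application can be offloaded (fractions here are illustrative).
for f in (0.5, 0.9, 0.99):
    print(f"offloadable fraction {f:.2f}: {offload_speedup(f, 100.0):.1f}x")
```

This is why the CPU-to-GPU-to-FPGA-to-ASIC progression tracks how much of the workload each platform can absorb, not just the raw speed of the specialized logic.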
Deep Learning: Diminishing Returns?
Deep learning (DL) has become the oracle of our age – the universal technology we turn to for answers to almost any hard problem. This is not surprising; its strengths in image and speech recognition, language processing and multiple other domains amaze and shock us, to the point that we’re now debating AI singularities. But then,… Read More
Leveraging AI to Help Build AI SoCs
When I first started working in the semiconductor industry back in 1982, I realized that there was a race going on between the complexity of the systems being designed and the capabilities of the tools and technology used to design them. The technology used to design the next generation of hardware was always lagging… Read More
Looking Ahead: What is Next for IoT
Over the past several years, the number of devices connected via the Internet of Things (IoT) has grown exponentially, and that number is expected to only continue growing. By 2020, 50 billion connected devices are predicted to exist, thanks to the many new smart devices that have become standard tools for people and businesses… Read More
Being Intelligent about AI ASICs
The progression from CPU to GPU, FPGA and then ASIC affords an increase in throughput and performance, but comes at the price of decreasing flexibility and generality. Like most new areas of endeavor in computing, artificial intelligence (AI) began with implementations based on CPUs and software. And, as have so many other applications,… Read More
Flynn Was Right: How a 2003 Warning Foretold Today’s Architectural Pivot