AI (Artificial Intelligence)

Architecting an ML Design

by Bernard Murphy on 08-14-2018 at 7:00 am

Discussion of machine learning (ML) and hardware design has been picking up significantly in two fascinating areas: how ML can advance hardware design methods, and how hardware design methods can advance the building of ML systems. Here I’ll talk about the latter, particularly architecting ML-enabled SoCs. This approach is …


Machine Learning with Prior Knowledge

by Bernard Murphy on 08-09-2018 at 7:00 am

I commented recently on limitations in deep learning (DL), one of which is the inability to incorporate prior knowledge, like basic laws of mathematics or physics. Typically, understanding in DL must be inferred from the training set, which in a general sense cannot practically cover prior knowledge. Indeed one of the selling…


Deep learning fueling the AI revolution with Interlaken IP Subsystem

by Daniel Nenni on 07-30-2018 at 7:00 am

AI is transforming virtually every industry in the digital world. Advances in computing power and deep learning have brought AI to a tipping point of major disruption and rapid advancement. However, these applications demand much higher performance and bandwidth, requiring new kinds of IP and…


Cadence Selected to Support Major DARPA Program

by Bernard Murphy on 07-26-2018 at 7:00 am

When DARPA plans programs, they’re known for going big – really big – and that is what they are doing again with their Electronics Resurgence Initiative (ERI). Abstracting from their intro, this is a program “to ensure far-reaching improvements in electronics performance well beyond the limits of traditional scaling”. This isn’t…


Maximize Bandwidth in your Massively Parallel AI SoCs?

by Daniel Nenni on 07-20-2018 at 12:00 pm

Artificial Intelligence is one of the most talked-about topics on the conference circuit this year, and I don’t expect that to change anytime soon. AI is also one of the trending topics on SemiWiki, with organic search bringing us a wealth of new viewers. You may also have noticed that AI is a hot topic for webinars like the one I am writing…


Platform ASICs Target Datacenters, AI

by Bernard Murphy on 07-17-2018 at 7:00 am

There is a well-known progression in the efficiency of different platforms for certain targeted applications such as AI, as measured by performance and performance/Watt. The progression is determined by how much of the application can be run with specialized hardware-assist rather than software, since hardware can be faster…


Deep Learning: Diminishing Returns?

by Bernard Murphy on 07-12-2018 at 7:00 am

Deep learning (DL) has become the oracle of our age – the universal technology we turn to for answers to almost any hard problem. This is not surprising; its strengths in image and speech recognition, language processing and multiple other domains amaze and shock us, to the point that we’re now debating AI singularities. But then,…


Leveraging AI to help build AI SOCs

by Tom Simon on 06-25-2018 at 12:00 pm

When I first started working in the semiconductor industry back in 1982, I realized that there was a race going on between the complexity of the systems being designed and the capabilities of the tools and technology used to design them. The technology used to design the next generation of hardware was always lagging…


Looking Ahead: What is Next for IoT

by Ahmed Banafa on 06-13-2018 at 12:00 pm

Over the past several years, the number of devices connected via the Internet of Things (IoT) has grown exponentially, and that number is expected only to continue growing. By 2020, 50 billion connected devices are predicted to exist, thanks to the many new smart devices that have become standard tools for people and businesses…


Being Intelligent about AI ASICs

by Tom Simon on 06-06-2018 at 12:00 pm

The progression from CPU to GPU, FPGA and then ASIC affords an increase in throughput and performance, but comes at the price of decreasing flexibility and generality. Like most new areas of endeavor in computing, artificial intelligence (AI) began with implementations based on CPUs and software. And, as have so many other applications,…