
AI Hardware Summit, Report #1: Doing More to Cost Less

by Randy Smith on 09-26-2019 at 10:00 am

I recently had the pleasure of attending the AI Hardware Summit at the Computer History Museum in Mountain View, CA. This two-day conference brought together many companies involved in building artificial intelligence solutions. Though the focus was on building hardware, there was naturally much discussion of software and applications as well. The first session I want to summarize was presented by Dr. Carlos Macián, Senior Director, AI Strategy and Products at eSilicon.

When I saw an eSilicon presentation on the agenda, naturally I assumed it would be about their recently announced neuASIC™ IP platform. If you don’t know about that yet, you may want to read about their AI IP platform first. Instead, we were treated to a much broader presentation on controlling the total cost of ownership (TCO) of an AI hardware solution. The presentation was quite insightful and showcased just how much depth and experience eSilicon has when it comes to building these types of ASIC products.

TCO is an important concept. When deciding how to address the challenges of building a hardware solution for a specific AI application, one needs to understand how each decision affects the total cost of the product. Decisions carry costs in area (die cost), yield (die cost), effort (person-hours), quality (sales, reputation, returns, etc.), power (packaging and other costs), and many other factors; the full list of traits and their associated costs is quite long. Since most companies already have a grasp of the common TCO drivers, this presentation focused on the key items to consider for state-of-the-art AI products.
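To make the area and yield entries in that cost list concrete, here is a minimal sketch of the standard textbook relationship between die area, yield, and cost per good die, using a simple Poisson yield model. This is not from the presentation; the wafer cost, defect density, and die sizes below are hypothetical:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Approximate gross dies per wafer (common approximation that
    subtracts a correction term for partial dies at the wafer edge)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(defect_density_per_cm2: float, die_area_mm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-D0 * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)

def cost_per_good_die(wafer_cost: float, wafer_diameter_mm: float,
                      die_area_mm2: float, d0_per_cm2: float) -> float:
    """Cost of one *good* die: wafer cost spread over yielding dies only."""
    gross = dies_per_wafer(wafer_diameter_mm, die_area_mm2)
    return wafer_cost / (gross * poisson_yield(d0_per_cm2, die_area_mm2))

# Hypothetical inputs: 300 mm wafer, $10k wafer cost, D0 = 0.2 defects/cm^2.
big = cost_per_good_die(10_000, 300, 600, 0.2)    # one large 600 mm^2 die
small = cost_per_good_die(10_000, 300, 150, 0.2)  # a 150 mm^2 die
# Four small dies cost noticeably less than one big die of the same total
# area, because yield falls off exponentially with die area.
```

Because yield sits in the denominator and decays exponentially with area, growing the die makes the cost per good die rise faster than linearly, which is why area and yield both show up as die-cost drivers in the TCO list above.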

From the slide above, you can see that AI designs for data centers have some familiar drivers that are exacerbated by the need to move to massive parallelism – hyperscale. Hyperscale computing refers to systems and architectures in distributed computing environments that must scale efficiently from a few servers to thousands of servers, and it underpins environments such as big data and cloud computing – today's massive data centers.

Carlos clearly explained the biggest challenges to AI hyperscale implementation, along with the enabling technologies that have been rolled out at several companies now. Recent announcements, such as Intel’s announcement at HotChips of their Lakefield processor built using Foveros 3D technology, are a clear sign that these technologies are available now. The challenge is to find a partner who understands all of these enabling technologies, something that eSilicon has already demonstrated.

The presentation then went on to focus on an example of solving these AI design challenges by utilizing one of the enabling technologies – 3D memory overlays. It demonstrated that stacking parts of the solution vertically (e.g., xRAM, SRAM+IO, and compute) on separate die in the same package offers huge efficiencies. One dramatic gain is yield: manufacturing several smaller die that can be stacked improves yield dramatically. In the example shown at the event, yield improved from 15.7% to 68.6%. That improvement provides a tremendous decrease in production cost and therefore a dramatic improvement in TCO.
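A rough sense of why splitting one large die into stacked smaller die helps yield comes from the same Poisson model. This is a sketch with made-up numbers, not eSilicon's actual data: the probability that a die is defect-free falls exponentially with its area, so small, individually tested die yield far better than one monolithic die of the same total area:

```python
import math

def poisson_yield(d0_per_cm2: float, area_mm2: float) -> float:
    """Probability a die of the given area is defect-free: Y = exp(-D0 * A)."""
    return math.exp(-d0_per_cm2 * area_mm2 / 100.0)

d0 = 0.3            # defects/cm^2 (hypothetical)
total_area = 620.0  # total silicon in mm^2 (hypothetical)

monolithic = poisson_yield(d0, total_area)   # one big 2D die
per_die = poisson_yield(d0, total_area / 4)  # same logic split across 4 stacked die

# Each small die is tested before stacking ("known good die"), so only
# passing die are assembled; per-die yield, not the product of all four,
# is what drives the silicon cost (assembly yield omitted for simplicity).
print(f"monolithic: {monolithic:.1%}, per stacked die: {per_die:.1%}")
```

With these made-up inputs the monolithic yield comes out around 16% and the per-die yield around 63% – the same order of improvement as the 15.7% to 68.6% example from the talk, though the actual figures depend on the real defect density and partitioning.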

Despite the difficulties some will encounter in getting these hyperscale AI designs to function at a reasonable cost, I think eSilicon has shown it has the expertise to get them across the finish line. They also disclosed that they are already working with suppliers on the next set of challenges as the degree of scaling increases – new die bonding technologies, vertical signal density, thermal density, combined yield, and many others. I will be eager to hear more on these items when eSilicon is ready to discuss them.

eSilicon seems well prepared to deliver AI hardware designs. You can learn more about their neuASIC AI capabilities here, and about their 2.5D/HBM2 packaging solutions here. As I have mentioned before, when I was an IP vendor I referred my licensees to eSilicon, and their success helped get our clients to volume quickly. That is why I recommend them highly.
