Linley Spring Processor Conference Kicks Off – Virtually
by Mike Gianfagna on 04-13-2020 at 10:00 am

The popular Linley Processor Conference kicked off its spring event at 9AM Pacific on Monday, April 6, 2020. The event began with a keynote from Linley Gwennap, principal analyst and president at The Linley Group. Linley’s presentation provided a great overview of the application of AI across several markets. Almost all of the conference is focused on AI.

Before getting into Linley’s keynote, I want to comment on the overall event. Delivering a live event through the internet is challenging. Holding the audience’s attention, dealing with network glitches and capturing the spontaneous interaction between the speaker and the audience are not easy to accomplish. I suspect there are a lot of newly minted web meeting aficionados these days, so you know what I mean.

Simply put, the Linley Processor Conference appears to be doing a thoughtful and well-planned job of delivering the closest thing to a live, in-person event. Each presentation is followed by a relatively short Q&A, with questions queued from written submissions from the audience. This works much better than opening everyone’s audio and hoping you can hear just one person at a time. Beyond the short Q&A, there are separate break-out meetings with each speaker at the end of each day. These tend to be smaller meetings, and some speakers do open up audio for them to foster an interactive discussion.

Mike Demler, senior analyst at The Linley Group, moderated several presentations on ultra-low power AI during the first day. Each presentation was quite engaging, using slides, real-time demos and full-motion video of the speaker. I dropped in on all of the break-out sessions. All had good attendance (with Linley having the largest audience). These Q&A sessions were less formal than the presentations.

Thanks to the strong presenters and highly engaged audience, these sessions touched on all sorts of relevant and useful topics. I particularly liked the way Jonathan Tapson, chief scientific officer at GrAI Matter Labs, demonstrated how his company achieves sparse processing with a real-time self-driving car demo. There are also breaks sprinkled throughout the event with slide shows from the various sponsors, a good time to check out these technologies or get another cup of coffee. The sessions run from 9AM to 12:45PM over four days, another good move since a full-day web meeting is too much for most attendees.

If you weren’t able to register for the event, keep watching the Linley site. The Linley Group will develop presentation materials and videos of the conference and make them available sometime after the event concludes.

Back to Linley’s keynote. The topics covered include:

  • Deep learning trends
  • AI in the data center
  • AI in automotive
  • AI at the edge
  • Ultralow power AI

I won’t attempt to capture all the information presented here. You can catch the replay of Linley’s keynote for that. I will offer a few nuggets presented on each topic.

Deep learning trends: Model growth is exploding. Image processing models are growing at 2X per year – increased accuracy means increased size. The same is true for natural language processing; some models have 17 billion parameters. That’s not a typo. Architectures range from large numbers of simple processors (hundreds of thousands per chip) to a smaller number of complex processors, and the decision of which way to go depends on the workload and your business plan. Convolution accelerators, systolic arrays, sparse computing, in-memory computing, binary neural networks, analog computing and more were all touched on.
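
As a side note, here is a minimal Python sketch of the “sparse computing” idea mentioned above (and demonstrated by GrAI Matter Labs earlier in the day): activations in a trained network are often mostly zero, so hardware that skips the zero-input multiply-accumulates does far less work for the same answer. This is my own illustration, not material from the keynote; the 80% zero rate, the vector length and the function names are assumptions made for the example.

```python
# Illustrative only: dense vs. zero-skipping ("sparse") dot product.
import random

def dense_dot(weights, activations):
    """Multiply-accumulate over every element, zeros included."""
    total, macs = 0.0, 0
    for w, a in zip(weights, activations):
        total += w * a
        macs += 1
    return total, macs

def sparse_dot(weights, activations):
    """Skip positions where the activation is zero."""
    total, macs = 0.0, 0
    for w, a in zip(weights, activations):
        if a != 0.0:
            total += w * a
            macs += 1
    return total, macs

random.seed(0)
n = 10_000
weights = [random.uniform(-1.0, 1.0) for _ in range(n)]
# Assume ~80% of activations are zero, as a ReLU-heavy network might produce.
activations = [random.uniform(0.0, 1.0) if random.random() < 0.2 else 0.0
               for _ in range(n)]

dense_result, dense_macs = dense_dot(weights, activations)
sparse_result, sparse_macs = sparse_dot(weights, activations)
print("results match:", abs(dense_result - sparse_result) < 1e-9)
print(f"MACs: dense={dense_macs}, sparse={sparse_macs} "
      f"({100 * sparse_macs / dense_macs:.0f}% of the work)")
```

Real accelerators exploit the same effect with compressed storage and zero-skipping datapaths rather than a software loop, but the arithmetic savings are the point.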

Data center: NVIDIA is still the leader, but there is a lot of competition in this multi-billion-dollar market. What will the new announcements from NVIDIA be? Competitors discussed in this market include Cerebras, Intel (with its Habana acquisition replacing Nervana), Huawei, Graphcore, Groq, Xilinx, SambaNova, Alibaba, Google, Microsoft, Amazon and Baidu. The challenges of developing a new software stack were discussed as well.

Automotive: Autonomous driving deployment is taking longer than expected. Limited Level 3 capability is available now. Level 4 is next, likely implemented as commercial fleets (taxis, trucking, etc.). Vendors discussed include GM, Tesla, Waymo, Intel/Mobileye and NVIDIA.  Will Level 5 ever happen?  Listen to the keynote.

The edge: The general move from the cloud to the edge, motivated by factors like power, latency, scalability, reliability and privacy, was discussed. The edge is really a hierarchy of processing capability. AI accelerators in smartphones and AI for embedded applications were also discussed. The barrier to entry here is lower, so this is a potential area of large growth. There is a long list of companies mentioned.

Ultralow power: Power optimization was discussed throughout the presentation, as were the TinyML Foundation and the TinyML Summit. Much of this work focuses on embedded applications.

That’s a quick overview of Linley’s keynote. If you missed it, I highly recommend you watch the replay. All event proceedings and video replays are available here.
