
AI Hardware Update (GTC 2019 Impressions, Xilinx CNN IP)

Al Gharakhanian

[h=2]GTC 2019 Observations[/h]Last week Nvidia held its annual GPU Technology Conference (GTC) at the San Jose Convention Center. On display was a whole host of products and technologies spanning many domains, including Automotive, Gaming, Deep Learning, and Hyperscale Computing. While there is no shortage of articles and posts recounting the gory details of the new announcements, I have chosen to focus on less obvious nuances and insights that, in my opinion, have dramatic implications.
[h=3]Making Things “AI Capable” is Surprisingly Simple and Inexpensive[/h]NVIDIA announced the availability of Jetson Nano, a production-ready System on Module (SOM) for embedded edge computing applications. The module is based on an older GPU architecture (Maxwell, containing 128 CUDA cores) and will sell for $129 (a development kit is going for $99). It delivers 472 GFLOPS of compute power with 4 GB of memory while burning 5 watts. Jetson Nano is not the only inexpensive Deep Learning-capable module on the market; a number of other companies (the likes of Intel-Movidius, Google . . .) have made such modules available as well. The key takeaway is that the availability of such self-contained embedded solutions makes it relatively easy and inexpensive to make any device, instrument, or appliance “AI Capable”. Utilizing such tiny modules, any gadget with a few watts to spare and Wi-Fi or 4G/LTE connectivity can be enabled to do some serious local inferencing. Needless to say, there are ample pre-trained Deep Learning models with footprints small enough to run on such constricted form factors.
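As a concrete illustration, here is a minimal sketch of what local inferencing on such a module can look like, assuming a PyTorch/torchvision build with CUDA support is installed on the device; the model choice (MobileNetV2) and the synthetic input are mine, not from the announcement:
[CODE]
# Minimal local-inference sketch for an embedded module such as Jetson Nano.
# Assumes a PyTorch + torchvision build with CUDA support on the device.
import torch
import torchvision.models as models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# MobileNetV2 is one of the low-footprint pre-trained models alluded to above.
model = models.mobilenet_v2(pretrained=True).to(device).eval()

# Stand-in for a camera frame: one 224x224 RGB image tensor.
frame = torch.rand(1, 3, 224, 224, device=device)

with torch.no_grad():
    scores = model(frame)               # 1000 ImageNet class scores
    print(scores.argmax(dim=1).item())  # index of the predicted class
[/CODE]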
[h=3]It Is Not the “Chip”, Stupid[/h]I expect NVIDIA’s dominance in Deep Learning, Graphics, and Hyperscale Computing to grow and to be long-lived, for the following reasons:
1. NVIDIA’s CUDA-X software acceleration libraries are so pervasive and popular that migration to a new hardware platform is costly and painful, to say the least (see the sketch after this list)
2. NVIDIA has been a master at forming a vibrant ecosystem of companies offering a plethora of hardware, software, tools, services, and intellectual property revolving around its many generations of GPUs. A vibrant ecosystem is what it takes to deliver "value longevity" to end customers, and that assures brand loyalty
3. NVIDIA is amazingly vertically integrated. They sell chips, modules, servers, and workstations. Strength at one end of the spectrum fuels the rest of the pipeline, and vice versa
4. They have an excellent position in all three pillars of their business (Graphics, AI, and Hyperscale Computing), and modern data centers have to support all three. In other words, NVIDIA’s core GPUs are the technological common denominator for the three key initiatives that modern data centers must support
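To make point 1 concrete, consider how little code it takes to weld a workload to CUDA. The sketch below uses CuPy as a stand-in for the broader CUDA-X stack; its array calls dispatch to CUDA libraries such as cuRAND and cuBLAS under the hood (the example itself is mine, not from the conference):
[CODE]
# A NumPy-style workload that is quietly tied to NVIDIA hardware:
# CuPy dispatches these calls to CUDA libraries (cuRAND, cuBLAS).
import cupy as cp

a = cp.random.rand(4096, 4096, dtype=cp.float32)  # generated via cuRAND
b = cp.random.rand(4096, 4096, dtype=cp.float32)

c = a @ b              # matrix multiply lowered to cuBLAS
print(float(c.sum()))  # result copied back to the host

# Porting this to a non-CUDA accelerator means swapping the library,
# re-validating numerics, and re-tuning performance -- the migration
# pain described in point 1.
[/CODE]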

Despite such dominance, the history of high technology is filled with examples of upstarts with disruptive technology and flawless execution unseating dominant forces.
[h=3]How About Other Deep Learning Chip Companies?[/h]I made a point of visiting the booths of most of the leading server companies to see if there was any awareness of other Deep Learning chip solutions (e.g. Graphcore, Wave Computing, SambaNova, Habana, . . . .). To my surprise, I found none. Admittedly, folks with booth duty at shows such as GTC 2019 are probably not the right individuals to be privy to advanced developments in their companies, but I still expected to see some awareness or recognition.
[h=3]The Term “Edge Computing” is Vague and Overused[/h]“Edge computing” is used to describe a wide variety of applications, including robots, drones, autonomous vehicles, and surveillance cameras, among many others. All of these have vastly different performance requirements and power and cost budgets. I think it is time to find ways to subcategorize the term “Edge Computing”. I would be lying if I claimed to be the one with bright ideas here.
I was particularly impressed by the products from the following vendors:
1. Aetina (www.aetina.com/): Edge AI computing platforms for medical, robotics, drone, and self-driving applications
2. ADLINK (www.adlinktech.com): IoT Edge Devices
3. NEXCOM (www.nexcom.com): Mobile Transportation Edge Devices
[h=2]Xilinx CNN IP[/h]Xilinx joined the ranks of companies such as CEVA, Videantis, Cadence, and Imagination by announcing the availability of its Deep Learning Processor Unit (DPU) IP, a programmable engine for CNN applications. The DPU IP can be integrated as a block in the programmable logic (PL) of selected Zynq®-7000 SoCs and Zynq UltraScale+™ MPSoCs. The DPU supports popular models such as VGG, ResNet, GoogLeNet, YOLO, SSD, MobileNet, FPN, etc.; a rough sketch of the host-side programming model appears below.
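For a feel of that programming model, here is a sketch in the style of the DNNDK runtime (the n2cube module) that Xilinx shipped alongside the DPU around this time. Treat it as an assumption-laden illustration rather than a verified recipe: the kernel name and tensor node names below belong to a hypothetical compiled ResNet-50 model, and exact call details should be checked against Xilinx's documentation.
[CODE]
# Rough host-side sketch of driving the DPU, in the style of Xilinx's
# DNNDK runtime (n2cube). Kernel and node names are hypothetical; in
# practice they are produced by the DNNDK compiler for a specific model.
import numpy as np
from dnndk import n2cube

n2cube.dpuOpen()                           # attach to the DPU driver
kernel = n2cube.dpuLoadKernel("resnet50")  # hypothetical compiled kernel
task = n2cube.dpuCreateTask(kernel, 0)

frame = np.zeros(224 * 224 * 3, dtype=np.float32)  # stand-in input image
n2cube.dpuSetInputTensorInHWCFP32(task, "conv1", frame, frame.size)

n2cube.dpuRunTask(task)  # CNN layers execute in the PL fabric

size = n2cube.dpuGetOutputTensorSize(task, "fc1000")
scores = n2cube.dpuGetOutputTensorInHWCFP32(task, "fc1000", size)

n2cube.dpuDestroyTask(task)
n2cube.dpuDestroyKernel(kernel)
n2cube.dpuClose()
[/CODE]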

Al Gharakhanian
al@cogneefy.com
 
I read something about Xilinx, Arm, and TSMC having a processor for edge computing.
 
Hello Portland,
Thanks for your feedback. I am aware of the IPs from Xilinx and Arm, but I am unsure what TSMC has in this domain. Update me if you come across a blurb in this regard.
Thanks
 