
AI Hardware Update - Fracture in Semis - Innovation from OpenAI

Al Gharakhanian

AI Hardware Update
Google’s Edge-TPU Development Board (for only $150)
The most significant development in the world of AI hardware last week was the availability of a deep learning development kit from Google. The board is based on Google’s Edge-TPU machine learning accelerator chip, which was introduced last July. The Edge-TPU is a lower-cost, optimized variation of Google’s third-generation TPU and, unlike its parent, is intended only for edge inference applications. The development board can be purchased for $150 and is able to run state-of-the-art mobile vision models such as MobileNet v2 at 100+ fps in a power-efficient manner.
So, what does $150 buy you? In addition to the Edge TPU, the board contains an NXP i.MX SoC, plenty of Flash and DRAM, 2x2 MIMO WiFi, Bluetooth 4.1, USB ports (2.0/3.0), a Gigabit Ethernet port, and various other interfaces including video and audio. Don’t get bogged down by the technical minutiae. There are two key takeaways here. The first is that the masses now have access to a very affordable development platform with serious processing horsepower, and they can easily incorporate deep learning capabilities into just about any edge application. My second conclusion is that the added hardware cost of doing serious deep learning at the edge is (or soon will be) zero.
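To give a flavor of how little code it takes, here is a minimal sketch of image classification on the board using the Edge TPU Python API that shipped with it; treat the exact calls and the model/image file names as assumptions that may differ across software releases.

# Minimal sketch: run a quantized MobileNet v2 classifier on the Edge TPU.
# Assumes the Edge TPU Python API preinstalled on the dev board; the model
# and image paths below are placeholders.
from edgetpu.classification.engine import ClassificationEngine
from PIL import Image

engine = ClassificationEngine('mobilenet_v2_1.0_224_quant_edgetpu.tflite')
image = Image.open('parrot.jpg')

# Returns the top-k (label_id, confidence) pairs; the inference itself
# runs on the Edge TPU ASIC, not on the i.MX CPU.
for label_id, score in engine.classify_with_image(image, top_k=3):
    print(label_id, score)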
A New ReRAM (Resistive RAM) Player in Deep Learning Domain
There are a number of fabless vendors utilizing various flash-based technologies to build deep learning inference chips, Mythic (www.mythic-ai.com) being the most prominent. The promise of flash-based devices is significant power savings compared to traditional approaches, which comes in very handy in power-restricted, battery-operated applications. Crossbar seems to be the latest entrant in this domain. Crossbar (www.crossbar-inc.com), a memory IP (ReRAM) company, announced a partnership with Gyrfalcon, mtes Neural Networks, and Robosensing. These four companies are the founding members of a consortium called SCAiLE (SCalable AI for Learning at the Edge), with the end goal of providing system-level solutions for IoT edge devices. The four members of this consortium certainly have what it takes to build complete low-power IoT edge devices. Crossbar and Gyrfalcon have the enabling chip technology, while mtes Neural Networks and Robosensing are the purveyors of IoT hardware and its respective software stack. It is not clear whether Crossbar’s IP plays a role in the actual implementation of the neural nets or merely serves as area- and power-efficient on-chip weight storage in conventional accelerators such as the ones from Gyrfalcon.
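To make the power argument concrete, here is a purely conceptual sketch (my illustration, not Crossbar’s or Gyrfalcon’s actual design) of why a memory array can compute: if weights are stored as cell conductances and inputs are applied as voltages, each bit line sums currents, and a matrix-vector product falls out of Ohm’s and Kirchhoff’s laws with no data movement between memory and a separate MAC unit.

# Conceptual sketch of analog in-memory multiply-accumulate (illustrative
# only; real ReRAM/flash arrays quantize conductances and need DACs/ADCs).
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 8))   # cell conductances ~ weights
V = rng.uniform(0.0, 0.5, size=8)        # applied voltages ~ activations

# Each row's bit-line current is sum_j G[i, j] * V[j] (Kirchhoff's current
# law), i.e. the matrix-vector product happens inside the memory array.
I = G @ V
print(I)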

A Few Ominous Signs for the Semiconductor Industry

DRAM contract pricing in Q1 of 2019 is nearly 30% lower than a year ago, the largest decline since 2011. The troubling aspect is that such a dramatic drop has taken place while inventory levels are at a record low; typically, lower inventories translate into higher prices. Aside from DRAM, let us examine a few other chip end markets. There are corroborating indications that cloud hardware spending is also slowing dramatically. Don’t get me wrong, these end markets are still growing, but at a much slower rate. As for the handset market, the story of saturated unit volumes remains. Hopefully the emergence of 5G and its plethora of use cases will reverse this trend. As for IoT, unit shipments are growing, but at a slower rate than originally estimated. It is not all gloom and doom, though; there are bright spots in the market. Unit shipments of AI chips are growing at an extremely healthy clip. Last but not least, the market for high-performance analog devices is (and has been for the last 20 years) red hot.

Latest from OpenAI

OpenAI released a new transformer-based language model called GPT-2 that achieves outstanding performance on a variety of natural language processing tasks (question answering, machine translation, reading comprehension, and summarization). The novelty of this approach is that the model is trained in an unsupervised fashion, eliminating the need for large labelled datasets. The sole objective of the model is to predict the next word given the previous words within some text. This is a very promising approach for building language processing systems that learn to perform a given task from its naturally occurring demonstrations. The model has 1.5 billion parameters and was trained on a dataset of 8 million web pages. A simple way of looking at this innovation is that it is now possible to have a language processing model perform very specific tasks merely by exposing it to a huge amount of unstructured and unlabeled text during training.
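The training objective really is that simple. As a sketch, the snippet below asks for the most likely next token using the smaller, publicly released GPT-2 weights via the third-party Hugging Face transformers package (an assumption on my part, not OpenAI’s own release; the full 1.5-billion-parameter model was withheld at announcement time).

# Sketch: GPT-2's sole objective, predicting the next token from context.
# Uses the Hugging Face 'transformers' package and the small public
# 'gpt2' checkpoint, not OpenAI's full 1.5B-parameter model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2').eval()

ids = tokenizer.encode('The semiconductor industry is', return_tensors='pt')

with torch.no_grad():
    logits = model(ids).logits          # shape: (1, seq_len, vocab_size)

# A distribution over the next token is all the model ever learns.
next_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_id]))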
Al
al@cogneefy.com
@cogneefy
 