
AI Hardware Update | A Quick Look At AI-Enabled Edge Servers

Al Gharakhanian

[h=2]A Definition First[/h]
Edge servers are servers that are not located in the cloud or at a data center. They are typically mounted in closets on factory floors, in airports, train and bus stations, trains, buses, autonomous vehicles, oil rigs, and hundreds of other settings. At one end they are connected to a large number of “intelligent and connected things” such as sensors, actuators, cameras, pumps, factory equipment, and smartphones, while maintaining an uplink to the corporate data center. Edge servers have the uninspiring but critical task of collecting, analyzing, and acting upon the massive amounts of data generated by the connected “Things”. They also have the added responsibility of converting the raw collected data into “Operational Insights” and forwarding them to the upper layers.

[h=2]Where Does AI Fit in This Picture?[/h]
The real value of edge servers is realized only when they can process the collected data locally and make real-time decisions and predictions with no reliance on remote resources. This can only happen if edge servers are able to host pre-trained deep learning models and have the computational resources to perform real-time inference locally. In most circles such servers are referred to as “Smart Edge Servers”. Latency and locality are key factors at the edge, since data transport latencies and upstream service interruptions are intolerable in mission-critical applications. As an example, a small traffic camera on a lamp post should be able to detect a speeding car without relying on computational resources in the cloud.
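To make the local-inference scenario concrete, here is a minimal sketch using ONNX Runtime, one of several runtimes deployed on edge hardware. The model file name, input shape, and output interpretation are hypothetical placeholders, not anything from a specific product.

[code]
# Minimal sketch of edge-side inference with ONNX Runtime.
# "speed_detector.onnx" and the 1x3x224x224 input shape are hypothetical.
import numpy as np
import onnxruntime as ort

# Load a pre-trained, exported model from local storage --
# no cloud round trip is involved at inference time.
session = ort.InferenceSession("speed_detector.onnx")
input_name = session.get_inputs()[0].name

# A single camera frame, already resized and normalized
# to the shape the model expects.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference locally; the outputs might encode detected
# vehicles and their estimated speeds.
outputs = session.run(None, {input_name: frame})
print(outputs[0].shape)
[/code]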

[h=2]Unique Characteristics of Edge Servers[/h]
Edge servers are an extension of an organization’s IT infrastructure. They must be able to run the same workloads that run in data centers; this includes virtual machines, containers, databases, and software-defined storage, just like other servers. Any deviation from this scenario becomes very costly in terms of IT resource management and logistics. In a way, smart edge servers are designed to support enterprise-class compute, management, security, and storage in a single enclosure that can handle harsh environmental conditions. This forces vendors to build much smaller units installable in a variety of locations (not just server racks). Another common denominator among edge servers is their ability to support a wealth of wireless technologies such as Wi-Fi, 4G/LTE, Bluetooth, and a variety of IoT-centric wireless technologies (LoRaWAN, LTE-M, etc.). To support legacy industrial applications, some also need to support wireline interfaces such as Ethernet, USB, CAN bus, and the like. This is typically done natively or with the help of plug-in modules.
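As a sketch of the legacy wireline bridging described above, the snippet below reads raw frames from a CAN bus using the python-can library; the interface name “can0” and the frame count are assumptions for illustration.

[code]
# Sketch: an edge server ingesting legacy CAN bus traffic via python-can.
# Assumes a Linux SocketCAN interface named "can0" (hypothetical setup).
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Collect raw frames from factory equipment; in a real deployment these
# would be decoded, aggregated, and forwarded upstream as insights.
for _ in range(10):
    msg = bus.recv(timeout=1.0)  # returns None if no frame arrives in time
    if msg is not None:
        print(f"id=0x{msg.arbitration_id:X} data={msg.data.hex()}")

bus.shutdown()
[/code]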
As for support for deep learning tasks, most existing smart edge servers rely on either PCIe accelerator cards or carrier cards, largely powered by various flavors of NVIDIA GPUs. It is also noteworthy that AI-enabled edge servers are not the only intelligent edge devices; many vendors build standalone AI-enabled appliances as well as embedded boards that are used strictly at the edge.

[h=2]Opportunity for AI Chip Vendors[/h]
The leadership boundaries for deep learning data center accelerator chips are pretty much drawn (at least for now). Clearly NVIDIA has cornered the bulk of this market, and there are solid indications that solutions from the likes of Graphcore, Intel (Nervana), and Wave Computing are gaining some ground. The market for AI accelerator chips for edge applications, on the other hand, is wide open and desperately seeking fresh ideas. Existing AI-enabled edge servers rely almost entirely on costly modules and plug-ins from NVIDIA, or from a handful of third-party vendors using NVIDIA GPUs. The market for edge servers is estimated to grow at a rate of 35% annually over the next five years. Such a dramatic growth rate will invariably create fierce competition among suppliers, all of whom will have to compete on price, performance, and power dissipation. This is when BOM cost will be front and center, and the high cost of existing accelerator modules will stick out like a sore thumb. Fancy packaging, fan cooling, and general-purpose implementations will have to go. There will be a need for cheaper, lower-power, and optimized accelerator chips from newcomers, and that is where I see opportunities. New entrants will also be able to add value by building application-specific features optimized for particular use cases such as computer vision or voice processing.
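For a sense of scale, a quick compounding check of that projection (the 35% figure is from the paragraph above; the arithmetic is only illustrative):

[code]
# 35% annual growth compounded over five years implies a market
# roughly 4.5x its current size.
annual_growth = 0.35
years = 5
multiple = (1 + annual_growth) ** years
print(f"Market multiple after {years} years: {multiple:.2f}x")  # ~4.48x
[/code]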
 
Placing functionality in an edge server saves you a few milliseconds of round trip compared to reaching a server in a regional data center (roughly 100 km of fiber per millisecond of round trip); most of us are within 10 ms of one these days. The question then becomes what functionality you want from the server. Many services in the cloud are actually composites that depend on large collections of computers within microseconds of each other, so it is worth the 10 ms to get access to that kind of resource. Things at the edge are mostly limited to caching or stand-alone functions. Any stand-alone function is liable to be optimized so that it can run at the point of need, especially if it is needed in a vehicle, which can carry complex chips onboard. Mobility is also a complicating factor, since your service needs to migrate to follow you. In the end there is just a small sliver of stuff that needs to be in servers but can't tolerate the extra 10 ms.
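That rule of thumb falls out of the speed of light in fiber, roughly 200,000 km/s; a quick sketch of the arithmetic, ignoring switching and queuing delays:

[code]
# Round-trip propagation delay over fiber, ignoring switching/queuing.
# Light in fiber travels at roughly 200,000 km/s.
SPEED_IN_FIBER_KM_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

print(round_trip_ms(100))   # ~1 ms  -> the "100 km per ms" rule of thumb
print(round_trip_ms(1000))  # ~10 ms -> a regional data center
[/code]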
 
[quote]Placing functionality in an edge server saves you a few milliseconds of round trip compared to reaching a server in a regional data center (roughly 100 km of fiber per millisecond of round trip); most of us are within 10 ms of one these days.[/quote]

I don't think the main reason for edge servers is latency; it's reducing the bandwidth needed to the upstream server. If you had a bunch of high-bandwidth IoT devices in your network, latency would go up anyway.
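A rough illustration of that bandwidth argument; the device count and bitrates below are hypothetical:

[code]
# Many raw IoT streams aggregated at the edge versus forwarding
# everything upstream. All numbers are hypothetical.
num_cameras = 50
raw_stream_mbps = 8        # e.g., one 1080p video feed per camera
event_summary_kbps = 16    # compact detections/alerts per camera

raw_uplink_mbps = num_cameras * raw_stream_mbps
edge_uplink_mbps = num_cameras * event_summary_kbps / 1000

print(f"Raw uplink needed:   {raw_uplink_mbps} Mbps")   # 400 Mbps
print(f"With edge filtering: {edge_uplink_mbps} Mbps")  # 0.8 Mbps
[/code]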
 