Pooling GPU compute at the edge translates 5G latency gains into user experience

Edge compute is seen as a key piece of delivering monetizable, enterprise-facing 5G services. But what kind of compute, in service of which applications, remains an open question: carriers are still in the early days of 5G network deployment and haven’t articulated how that capex will translate into new service revenues. Based on a new trial, Verizon sees graphics processing units (GPUs) distributed to the edge of the network as a way to support latency-sensitive augmented, cinematic, mixed and virtual reality use cases.

According to Verizon, the operator put together a home-brewed “GPU-based orchestration system” that could “enable the development of scalable GPU cloud-based services.” In a statement, Verizon said its team “developed a prototype using GPU slicing and management of virtualization that supports any GPU-based service and will increase the ability for multiple user-loads and tenants.” Tests focused on computer vision and a gaming service; in both cases, the new tech significantly increased the number of concurrent users.
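Verizon has not published the internals of its orchestration prototype, but the core idea it describes, carving a pool of edge GPUs into slices so multiple tenants can share the same hardware, can be sketched in miniature. The following toy allocator is purely illustrative: the class names, the memory-only slicing model, and the first-fit placement policy are all assumptions, not details of Verizon's system.

```python
from dataclasses import dataclass, field

@dataclass
class GPU:
    """A single physical GPU with a fixed memory budget (in GB).

    Hypothetical model: real GPU slicing also partitions compute,
    but memory alone is enough to show the multi-tenant idea.
    """
    name: str
    total_mem_gb: int
    slices: dict = field(default_factory=dict)  # tenant -> GB reserved

    def free_mem_gb(self) -> int:
        return self.total_mem_gb - sum(self.slices.values())

    def allocate(self, tenant: str, mem_gb: int) -> bool:
        """Reserve a slice for a tenant; refuse if it doesn't fit."""
        if mem_gb > self.free_mem_gb():
            return False
        self.slices[tenant] = self.slices.get(tenant, 0) + mem_gb
        return True

class Orchestrator:
    """First-fit placement of tenant workloads across a GPU pool."""
    def __init__(self, gpus):
        self.gpus = gpus

    def place(self, tenant: str, mem_gb: int):
        for gpu in self.gpus:
            if gpu.allocate(tenant, mem_gb):
                return gpu.name
        return None  # no single GPU in the pool has enough headroom

# Two 16 GB edge GPUs shared by a vision service and a gaming service.
pool = Orchestrator([GPU("edge-gpu-0", 16), GPU("edge-gpu-1", 16)])
print(pool.place("vision-svc", 10))  # lands on edge-gpu-0
print(pool.place("game-svc", 10))    # spills to edge-gpu-1
print(pool.place("extra-svc", 10))   # None: pool is exhausted
```

The point of the sketch is the economics Verizon is gesturing at: without slicing, each tenant would monopolize a whole GPU; with it, the same pool serves more concurrent workloads.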

“Creating a scalable, low cost, edge-based GPU compute [capability] is the gateway to the production of inexpensive, highly powerful mobile devices,” said Nicki Palmer, Verizon’s Chief Product Development Officer. “This new innovation will lead to more cost effective and user friendly mobile mixed reality devices and open up a new world of possibilities for developers, consumers and enterprises.”

Earlier this year, Verizon tested the combination of 5G and edge computing, also in Houston, using its 5G network and edge equipment to conduct AI-based facial recognition.

To fully take advantage of the latency reductions 5G enables, cloud-type infrastructure usually associated with centralized data centers has to be brought closer to the end user. This idea of building edge clouds is something Qualcomm executives discussed at a recent event in the context of enabling real-time artificial intelligence.

In a blog post, Qualcomm VP of Engineering John Smee wrote, “Today, we’re already enabling a wide range of power-efficient on-device AI inference use cases such as computer vision and voice recognition. While AI is often considered to be cloud centric, we envision AI to become increasingly distributed in the future with lifelong on-device learning, bringing benefits like enhanced personalization and improved privacy protection. The advanced capabilities of 5G make it ideal for playing the role of connecting distributed on-device AI engines and allowing them to be further augmented by the edge cloud — a concept we call the wireless edge.”

Trade group 5G Americas also took up the subject in a recent report titled “5G at the Edge,” which discusses AR and video analytics in the context of 5G and edge computing. Regarding video analytics, the report authors wrote, “Video analytics has a significant role to play in a variety of industries and use cases. For example, face recognition from traffic and security cameras is already playing an essential role in law and order. Several other types of analytics can be performed on video content such as object tracking, motion detection, event detection, flame and smoke detection, AI learning of patterns in live stream or archive of videos, and etcetera. Presently, video analytics is done on the cloud or on dedicated private servers depending upon the need and the functions to be established. Performing video analytics at the edge poses as both a requirement as well as an opportunity for several fields.”

The post Verizon eyes enterprise applications of 5G, edge and XR appeared first on RCR Wireless News.