We live in an era where everything is moving extremely fast. The pace of technology innovation is wiring our brains to expect instant gratification – be it from the devices and applications we use or, sadly, even from the people we interact with. The instant download of a movie, instant streaming of live events, instant payment at a retail outlet, instant online purchases, instant upload of pictures and videos to social networks – from anywhere – are some of the common demands today. However, the definition of “instant gratification” – or, I should say, “the tolerance factor” for receiving that gratification – varies, but in general it is shrinking with every generation of people and every new technology delivered.
The networks we use to connect our devices and applications play an important role in ensuring this “instant gratification”. Latency, a technical term used in network design, is the time required for information to travel between two points, so lower latency means faster transfer of information. The 5G hype promises that ultra-low latency (a.k.a. super-fast network responsiveness) is going to enable new experiences that were not previously possible on a wireless network. While the pitch sounds intriguing, let’s unpack why low latency is critical to the success of enterprise digital initiatives, and explore how systems can achieve these goals today.
What does low-latency mean to the enterprise?
Let’s start by understanding that, while latency can mean different things, in the grand scheme it points to “delay.” That delay comes from a number of factors, from the physical distance between two points to the work of building and maintaining a reliable connection. But what matters most to the enterprise CIO and to users on a network is the time taken to receive the response to a request, also called round-trip time or round-trip delay. This interval is the amount of time it takes for a request to travel to the destination, get processed, and have the response returned and presented to the sender. When I am processing stock transactions, time is of the essence. I want to maximize my successful trades (and minimize failures!), and shaving off fractions of a second, or even milliseconds in high-frequency automated trading, can make a huge difference in the outcome of each transaction. So, I need an end-to-end connection that has very low latency. On the other hand, when I am tracking sports scores, my desire to get that information is not as urgent. I can afford to have the score arrive a few seconds after my request, so low latency is not as crucial as it is for my stock trade orders. So, in essence, the notion of “instant gratification” varies from application to application.
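To make that interval concrete, here is a minimal sketch in Python of how a client could estimate round-trip time by timing a TCP connection. The host name is just a placeholder, and timing the connection handshake is a simplification rather than a substitute for proper network measurement tools.

```python
# Minimal sketch: estimate round-trip time by timing a TCP handshake.
# "example.com" is a placeholder host; any reachable service will do.
import socket
import time

def measure_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average time, in milliseconds, to open a TCP connection to host:port."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established; close it immediately
        total += time.perf_counter() - start
    return (total / samples) * 1000.0

if __name__ == "__main__":
    print(f"average round-trip estimate: {measure_rtt_ms('example.com'):.1f} ms")
```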
Let’s take this concept to the business setting where wireless is the primary mode of connectivity for running back-end operations, using business applications, and providing services to customers. In 5G, we hear talk of ultra-low-latency connections, which means single-digit-millisecond latency.
The types of applications that demand ultra-low-latency communications are automated processes that make time-sensitive decisions. Think along the lines of machine controls in manufacturing and logistics processing, financial transactions, communications between first responders, and autonomous vehicles. These services need to minimize transit time to reach the decision-maker, which in most cases is another machine and not a human being.
Isn’t it intriguing that networks can receive information from a device or machine without any wires; send it to a processor sitting somewhere far away in the cloud to determine the action; and then return the response with the processed data or task – all within a fraction of a second?
Is 5G delivering lower latency today?
The answer depends on the frame of reference and the expected application experience. When people ask “Is 5G delivering lower latency today?”, they are mostly trying to gauge whether 5G can provide ultra-fast connectivity today for mission-critical applications. Often, we are distracted by a shiny object, in this case “ultra-low latency,” when weighing the merits of a technology. But the reality is that 5G can considerably lower network latency and make wireless a viable option for many existing applications that were not practical before or that suffer from an inconsistent user experience; HD video conferencing is one of them. Today’s 5G radio network is already optimized to minimize the latency between the device and the radio, and more optimization is being incorporated as the 5G standards continue to evolve. These 5G-based connections can transfer user packets in roughly five milliseconds today, which is a significant improvement compared to previous generations of wireless networks.
While this interval may seem incredibly fast, consider that the 5G device and the 5G radio are at most a few miles away from each other, so the information may be traveling up to two miles in 5 milliseconds to get from the user to the cell site. As a comparison, fixed transport network performance metrics from AT&T indicate that the average latency between Philadelphia and Washington DC – which are 152 miles apart – is only four milliseconds. The considerable difference between the 5G user-to-radio scenario (2 miles in 5 milliseconds) and the Philly-to-DC network path (152 miles in 4 milliseconds) illustrates how much catching up 5G technology still has to do.
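To put rough numbers on that gap, here is a back-of-the-envelope comparison in Python. The figures are simply the ones quoted above, not precise measurements, and real latency depends on far more than distance.

```python
# Back-of-the-envelope comparison using the figures quoted above.
radio_miles, radio_ms = 2, 5      # 5G device to cell site
fiber_miles, fiber_ms = 152, 4    # Philadelphia to Washington DC over fixed transport

radio_speed = radio_miles / radio_ms   # ~0.4 miles per millisecond of latency
fiber_speed = fiber_miles / fiber_ms   # ~38 miles per millisecond of latency

print(f"5G air interface: {radio_speed:.1f} miles per millisecond")
print(f"fixed transport:  {fiber_speed:.1f} miles per millisecond")
print(f"the fixed path covers ~{fiber_speed / radio_speed:.0f}x more distance per millisecond")
```

The point is not the exact ratio: over two miles, propagation itself takes only a tiny fraction of a millisecond, so most of the air-interface latency comes from radio processing and scheduling, which is exactly where the evolving 5G standards aim to improve.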
So, how is 5G improving network latency?
Early 5G deployments still use the 4G core network, where there are multiple links that the user data traverses to reach the endpoint. We all know that non-stop flights help us reach our destinations faster when we are traveling the friendly skies. The same concept of limiting connections in the network can help reduce network latency. One of the techniques 5G uses is Control and User Plane Separation (CUPS). This function helps reduce latency by cutting down the number of “legs” that user data traverses on its way to the destination, as the simple model below illustrates. CUPS can establish a more direct path between the processing application and the user to reduce network latency. This improvement is already evident in multimedia communications and gaming, but further reduction in latency is necessary for applications that provide machine control and automation.
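As a purely illustrative sketch (the leg counts and per-leg delays below are assumptions for the example, not measurements of any real network), treating end-to-end latency as a sum of per-leg delays shows why fewer legs matter:

```python
# Illustrative only: model one-way latency as a sum of per-leg delays.
# The numbers are assumptions for the sake of the example, not measured values.
def path_latency_ms(per_leg_ms: float, legs: int) -> float:
    """One-way latency for a path made up of `legs` segments."""
    return per_leg_ms * legs

# User data hauled through several hops of a centralized core, versus the
# shorter, more direct user-plane path of the kind CUPS makes possible.
long_path  = path_latency_ms(per_leg_ms=3.0, legs=5)
short_path = path_latency_ms(per_leg_ms=3.0, legs=2)

print(f"multi-leg path:   {long_path:.0f} ms one way")
print(f"more direct path: {short_path:.0f} ms one way")
```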
5G needs to be 5G from end to end
Enter the end-to-end 5G Standalone network that supports network slicing and edge computing – services that 3GPP is close to finalizing in the standards. This 5G architecture uses a cloud-native 5G core network to manage the 5G RAN, eliminating the legacy 4G core network. When using the 5G core, the 5G RAN is able to deliver optimized low-latency connections and further reduce end-to-end latency by routing user traffic to nearby networks for processing. Edge computing in the 5G network creates user paths to applications hosted very near the users, without sending the data through the managed core network routers as prior mobile networks required. Likewise, network slicing establishes resource pools that reserve the capacity needed to deliver a connection with specific performance metrics. Together, these two techniques significantly decrease the distance – and the latency – that data travels, radically reducing response times for mission-critical applications, like control systems in smart manufacturing facilities and the vehicle-to-everything connections for autonomous vehicles.
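To illustrate why hosting the application at the edge matters, here is a rough sketch that estimates round-trip time from distance plus processing delay. The distances, the processing delay, and the assumption that data travels roughly 125 miles per millisecond in fiber are all simplifying assumptions for the example.

```python
# Rough sketch: round-trip time to an application hosted in a distant cloud
# versus at a nearby edge site. All figures are illustrative assumptions.
SPEED_IN_FIBER_MILES_PER_MS = 125  # light in fiber covers roughly 125 miles per millisecond

def round_trip_ms(distance_miles: float, processing_ms: float) -> float:
    """Two-way propagation delay plus processing time at the application."""
    return 2 * distance_miles / SPEED_IN_FIBER_MILES_PER_MS + processing_ms

central_cloud = round_trip_ms(distance_miles=500, processing_ms=2.0)
edge_site     = round_trip_ms(distance_miles=10, processing_ms=2.0)

print(f"distant cloud: {central_cloud:.1f} ms round trip")
print(f"nearby edge:   {edge_site:.1f} ms round trip")
```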
Today’s networks are evolving to the new 5G technology much more quickly than in prior transitions. The key here is gradual change, not complete replacement. Business leaders can count on 5G access with a 4G core to deliver higher bandwidth and lower latency, but to achieve ultra-low-latency connections, the network needs to be 5G end to end.