Verizon and AT&T want to virtualize the 5G Network Core and use Mobile Edge Computing


As we reported earlier this week, Verizon announced the first deployment of its mobile 5G network, with Chicago and Minneapolis going live on April 11. The nation’s largest mobile network operator says the service will be available in “select areas” in those markets, and it plans to bring an additional 30 markets online later this year.

Verizon engineers have been preparing for 5G by migrating network core and edge processing functions from the physical world to the virtual world for about three years now, said Adam Koeppe, SVP of network planning at Verizon.  “Today, in the (proprietary) 5G network that we’ve already launched in our four 5G Home markets (FWA), those software functions that are used for the core of the 5G network are 100 percent virtual. Unlike LTE where you had to start physical and move to virtual, they’re native 5G network functions, those all start as virtual,” said Koeppe.

Similar to other carriers’ 5G roadmaps, Verizon’s initial pre-standard 5G deployments are based on the 3GPP Release 15 NR NSA (non-standalone) architecture, which uses parts of the 4G network core (EPC) for signaling while the 5G radio access network carries the data plane.  “All those functions in that path for 5G are virtual regardless of whether they’re 4G core that you’re using to support 5G or native to 5G functions,” Koeppe said.

“We’re trying to get the processing capabilities required on a network session as close to the consumer as possible, and the reason for that is one of the promises and realities of 5G is that you have the ability to have much lower network latency,” he said. Multi-access edge compute equipment (MEC) and network slicing are key components of that effort. “You have to make fundamental architectural changes to how your core works if you want to provide very low-latency services.”

Verizon currently manages different network use cases manually, by identifying the class of service for each device running on its wireless network. Network slicing and virtualization would change that significantly, and software plays a critical role, Koeppe said. “All the network functions that are providing that service need to be virtualized, because I can’t autonomously spin up physical capacity. That has to be done by a person. But if it’s virtual capacity I can spin that up from a machine through orchestration and machine learning.”
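Koeppe’s point about autonomous scale-up can be sketched in a few lines. This is an illustrative toy only: the `Orchestrator` and `VnfInstance` names, the capacity numbers, and the reconcile loop are assumptions for the sake of the example, not Verizon’s actual orchestration software.

```python
# Illustrative sketch: why virtual network functions can be scaled by a
# machine while physical capacity needs a person. All names and numbers
# here are hypothetical, not any operator's real orchestration API.

from dataclasses import dataclass, field

@dataclass
class VnfInstance:
    name: str
    capacity: int  # sessions this virtual instance can serve

@dataclass
class Orchestrator:
    instances: list = field(default_factory=list)
    capacity_per_instance: int = 1000

    def reconcile(self, active_sessions: int) -> int:
        """Spin virtual capacity up or down to match demand -- the step
        that would require a truck roll if the capacity were physical."""
        needed = -(-active_sessions // self.capacity_per_instance)  # ceiling division
        while len(self.instances) < needed:
            self.instances.append(
                VnfInstance(f"vnf-{len(self.instances)}", self.capacity_per_instance))
        while len(self.instances) > max(needed, 1):
            self.instances.pop()  # scale back down, keep one warm instance
        return len(self.instances)

orch = Orchestrator()
print(orch.reconcile(2500))  # 3 instances for 2,500 sessions
print(orch.reconcile(800))   # scales back down to 1
```

In a real deployment the reconcile decision would come from telemetry and machine-learning-driven orchestration rather than a fixed per-instance capacity, but the control loop has the same shape.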

When you have 15 to 20 different use cases, “you have a very sophisticated network that is all virtualized and all programmable. Some of that you physically just can’t do with LTE today. Much of those 5G use cases will rely on that type of programmability of your network and you can’t do that without having a virtualized network function,” Koeppe added.
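One way to picture the programmability Koeppe describes is a lookup that maps each class of use case to a slice profile which orchestration then applies end to end. The slice names and every parameter value below are assumptions for illustration, not any operator’s real slice configuration.

```python
# Illustrative only: a toy mapping from use case to network slice profile.
# Slice names and parameter values are hypothetical.

SLICE_PROFILES = {
    "massive-iot":      {"max_latency_ms": 100, "bandwidth_mbps": 1,   "priority": 3},
    "mobile-broadband": {"max_latency_ms": 50,  "bandwidth_mbps": 300, "priority": 2},
    "urllc":            {"max_latency_ms": 5,   "bandwidth_mbps": 50,  "priority": 1},
}

def select_slice(use_case: str) -> dict:
    """Return the slice profile a programmable network would apply for
    this use case; unknown classes fall back to best-effort broadband."""
    return SLICE_PROFILES.get(use_case, SLICE_PROFILES["mobile-broadband"])

print(select_slice("urllc")["max_latency_ms"])   # 5
print(select_slice("unknown")["priority"])       # 2 (best-effort fallback)
```

With 15 to 20 such classes, each slice’s virtual functions can be instantiated and resized independently, which is exactly what fixed-function LTE hardware cannot do.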

Verizon wants to put the capabilities of its 5G network and the MEC network into the hands of innovators who can drive use cases beyond what’s possible with 4G today, according to Koeppe. “These are radically different network capabilities, a lot goes in to ensuring that the hardware and the software works well together. And that’s the phase we’re in right now with our deployment.”


ITU-T Standards status of network softwarization:

Question 21 of ITU-T SG13 is studying network softwarization, including network slicing, SDN, and orchestration, which are expected to be key contributors to IMT-2020.  Question 21 met during the SG13 meeting, held from 4 to 14 March 2019 at Victoria Falls, Zimbabwe, under the chairmanship of co-Rapporteurs Ms. Yushuang Hu (China Mobile, China) and Mr. Kazunori Tanikawa (NEC, Japan).

On March 14, 2019, ITU-T SG13 consented to two new Recommendations:

  1. ITU-T Y.IMT2020-ML-Arch “Architectural framework for machine learning in future networks including IMT-2020” (Ref. SG13-TD355/WP1)
  2. ITU-T Y.3115 (formerly Y.NetSoft-SSSDN). It describes SDN control interfaces for network slicing, with a particular focus on the control of fronthaul networks such as PON.



AT&T is on a similar path with virtualized network functions and MEC.  According to Light Reading, AT&T has virtualized 65% of its core network during the past five years, and is on track to meet its goal of virtualizing 75% of its network functions by the end of 2020.

“We see the cloud fragmenting again and certain workloads being pushed out to the edge — at customer [premises] and in the network — with more heavy-duty storage, and the back end being in the centralized cloud,” Roman Pacewicz, AT&T Business’s chief product officer, told Light Reading during an interview conducted at MWC 2019 in Barcelona.

“Nowhere is [virtualization] more important than in our rollout of 5G,” Pacewicz says. “If we didn’t have a network edge cloud environment that takes the mobile core out to the edge of the network, those deployments would be complicated and longer. The whole strategy of virtualization and cloudification of the network (see IEEE Techblog posts on ITU-T SG13 recommendations related to IMT 2020) becomes more important in upgrading the infrastructure to 5G, because everything is virtualized and software-enabled.”

A new generation of services enabled by 5G will require low latency, and therefore require compute and storage resources close to the edge of the network, Pacewicz says.  That’s where MEC comes in to play a huge role in 5G (as well as real time critical IoT applications).  We previously reported that AT&T has a joint project with Microsoft to deliver Microsoft Azure cloud services from the AT&T network edge. The goal is reduced latency and increased network resiliency.  For applications such as AI, mixed reality and augmented reality, latency needs to be no greater than 20 milliseconds and that requires data to be processed closer to the edge of the network and closer to the end user, Pacewicz says.

A retailer with 8,000–10,000 stores can’t have dedicated compute at every site, yet needs low latency to create new types of customer experience; likewise, networks need 2-millisecond latency for safe interactions between robots and human beings, Pacewicz claims.

–>Of course, end-to-end latency includes the mobile access network, mobile packet core, and edge network.  We are a very long way from achieving 20 milliseconds one-way latency, let alone round trip!
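A back-of-the-envelope budget shows why the editorial note above holds and why compute must sit near the edge. Every number in this sketch (radio, core, and processing delays, distances) is an illustrative assumption, not a measured value.

```python
# Illustrative one-way latency budget. All component values are assumed
# round numbers for the sake of the arithmetic, not measurements.

def one_way_latency_ms(radio_ms: float, core_ms: float,
                       transport_km: float, processing_ms: float) -> float:
    """Sum the main one-way latency components.
    Fiber propagation is roughly 5 microseconds per km
    (speed of light in glass is about 200,000 km/s)."""
    propagation_ms = transport_km * 0.005
    return radio_ms + core_ms + propagation_ms + processing_ms

# Centralized cloud ~3,000 km of fiber away: blows a 20 ms budget.
central = one_way_latency_ms(radio_ms=4, core_ms=1, transport_km=3000, processing_ms=5)

# MEC site ~30 km away: propagation delay becomes negligible.
edge = one_way_latency_ms(radio_ms=4, core_ms=1, transport_km=30, processing_ms=5)

print(f"central cloud: {central:.1f} ms, MEC: {edge:.2f} ms")  # 25.0 ms vs 10.15 ms
```

Even with optimistic radio and core figures, transport distance alone can consume the entire budget, which is the whole case for MEC.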

AT&T is teaming with Israeli startup Vorpal on projects to monitor the location of drones around sensitive locations such as aircraft and airports, alert authorities if they’re flying in restricted areas, and identify the location of a drone’s controller. Those types of applications require the low latency enabled by mobile edge computing, Pacewicz says.  He concluded the Light Reading interview by highlighting that SD-WAN is a key part of making the network intelligent and flexible enough to accommodate 5G applications by optimizing traffic routing, particularly as edge devices don’t just consume data but also generate lots of it.

–>While the SD-WAN market is growing, there are no standard definitions, interfaces, or specs for UNI or NNI interoperability.

AT&T’s CFO John Stephens said that several trends are conspiring to potentially lower AT&T’s CAPEX. He cited the company’s move to network functions virtualization (NFV) and software-defined networking (SDN), technologies intended to replace expensive, proprietary vendor hardware with less expensive, software-powered equivalents that run on commodity compute servers, white boxes and bare metal switches. Stephens said that more than half of AT&T’s network functions have been virtualized, and that the company remains on track to reach its goal of virtualizing fully 75% of its network functions by 2020.  “All of this leads to an efficiency opportunity on a going forward basis,” he said.


ITU-T SG13 Non Radio Hot Topics and Recommendations related to IMT 2020/5G



7 thoughts on “Verizon and AT&T want to virtualize the 5G Network Core and use Mobile Edge Computing”

  1. SK Telecom builds 5G mobile edge computing open platform, tests 4G-5G dual connectivity

    SK Telecom unveiled its mobile edge computing (MEC) open platform, which it says can enhance response times in 5G data communications. The operator plans to open up its MEC platform to enterprise customers to enable them to offer new services.

    MEC, which will be used in 5G networks to deliver ultra-low latency data, enables operators to cut down on latency by installing tiny data centers at 5G base stations. SK Telecom says MEC can cut latency by 60%. Applications such as AR/VR services, cloud gaming services, autonomous driving and fleet management, and real-time live broadcasting will all make use of MEC in 5G networks.

    “By opening up the ‘5G Mobile Edge Computing Platform’, SK Telecom will secure the basis for expanding the MEC-related ecosystem and accelerating the release of 5G services,” said Park Jin-hyo, CTO of SK Telecom, in a statement. “SK Telecom will join hands with diverse companies throughout the globe to boost the adoption of MEC-based services.”

  2. ADLINK, Charles Industries demo mobile edge AI/ML solution

    Test and measurement company ADLINK Technology and telecoms, marine and industrial manufacturer Charles Industries have developed the industry’s first pole-mounted multi-access Edge AI and machine learning solution.

    The solution, a complete micro-edge, low-latency AI, machine learning and deep learning platform that can be co-located on LTE small cell poles or 5G radios, is specifically designed for outdoor telecoms use cases.

    The solution, which can be either pole or wall mounted, integrates ADLINK’s latest AI Edge Server with a Charles Industries Mico Edge Enclosure.

    According to the companies, the solution has the potential to enable a range of new and advanced services, including autonomous vehicles/pods, virtual and augmented reality applications, and vision analytics.

    ADLINK’s mobile edge computing platform has been designed to fully comply with the Open Data Center Committee’s Open Telecom IT Infrastructure standard to meet the 5G requirements of ultra-low latency, high bandwidth, and real-time access to the radio network.

  3. April 1 2019 BARRON’S: AT&T and Verizon Are Going In Opposite Directions
    After years spent running their businesses in virtual lockstep, America’s largest phone companies are furiously heading in opposite directions.

    Verizon Communications (ticker: VZ) is doubling down on its network, while AT&T (T) is rapidly diversifying beyond the phone business. The Time Warner acquisition came just three years after a $66 billion deal for satellite-TV provider DirecTV.

    Both companies are reacting to the same external forces. Nine out of 10 Americans now have wireless phone service, and they are increasingly paying up for unlimited data plans. In that respect, the wireless market has never been better—but it may also be as good as it gets.

    “The industry right now is as healthy as it has been since 2012,” says J.P. Morgan analyst Philip Cusick, noting that years of bruising price wars have finally come to an end.

  4. HP: 7 Reasons Why We Need to Compute at the Edge

    1. Latency. What good is an Internet-connected car if there’s a lag between when a child appears in front of the car and when the system actually tells the car to stop? Ideally, there should be no latency at all, but there usually is. Even worse, there’s a chance that the connection can be lost entirely. (Ever experience a dropped call on your cell phone?) For some mission-critical functions, latency is intolerable and you must compute on the edge. This is true even when speeds increase. When 5G rolls out commercially, for instance, it will improve on current latency but still not be as fail-safe as edge computing is today.

    2. Bandwidth. Sending data from edge devices to the cloud or a data center (Bradicich points out that the difference is academic because “A cloud is just a data center that no one is supposed to know where it is”) can use a tremendous amount of bandwidth. Fearing that such devices will be a drag on the system, some have proposed creating a separate network for the IoT. You can greatly curtail that drag by eliminating the need to send data back and forth. Many companies simply can’t handle the bandwidth needs of IoT right now.

    3. Compliance. There are laws or policies in certain countries governing the regional transfer of data. Companies that embrace IoT often run up against such compliance issues.

    4. Security. If you are going to send data all over the place, it will be vulnerable to attacks and breaches. Already, hackers have found ways to breach everything from cars to baby monitors that are connected to the Internet.

    5. Cost. Extra bandwidth and extra security will inevitably cost extra as well. Since companies are often motivated to save money by realizing efficiencies via IoT, keeping costs down is of prime concern.

    6. Duplication. If you are going to collect data at the edge and also send it to the cloud, there will inevitably be duplication. If you collect 10 TB of data on the edge and then send all of it to the cloud, that’s 10 TB of duplicated data.

    7. Data corruption. Even without any nefarious activity from hackers, data will be corrupted on its own. Retries, drops and missed connections will plague edge-to-data-center communications. Obviously, that’s a bigger deal for mission-critical applications.

