AT&T has introduced a high speed “4G” service in the form of LTE-Licensed Assisted Access (LAA) in Indianapolis, IN. LTE-LAA uses unlicensed spectrum. According to AT&T it will provide theoretical gigabit speeds to some areas of the city. LTE-LAA has reached a peak of 979 Mbps in a San Francisco, CA trial.
“Demand continues to grow at a rapid pace on our network,” said Bill Soards, President of AT&T Indiana, in a press release. “That’s why offering customers the latest technologies and increased wireless capacity by combining licensed and unlicensed spectrum is an important milestone.”
The U.S. mega telco recently announced plans to roll out its 5G Evolution program in Minneapolis. That initiative – which aims to ready networks to support 5G when it arrives – is already in use in parts of Indianapolis and in Austin, TX. It features LTE Advanced capabilities such as 256 QAM, 4×4 MIMO and 3-way carrier aggregation.
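As a rough sanity check on the gigabit claims, the back-of-envelope estimate below shows how 20 MHz carriers, 4×4 MIMO, 256 QAM and 3-way carrier aggregation multiply toward gigabit-class peak rates. This is my own illustrative arithmetic, not an AT&T figure; the overhead factor is a coarse assumption.

```python
# Back-of-envelope LTE-Advanced downlink peak-rate estimate (illustrative only;
# real rates depend on control overhead, coding rate, and per-carrier config).

def lte_peak_rate_mbps(carriers=3, bandwidth_mhz=20, mimo_layers=4,
                       bits_per_symbol=8, overhead=0.25):
    """Rough peak rate for LTE carrier aggregation.

    bits_per_symbol=8 corresponds to 256-QAM; overhead approximates
    reference signals, control channels, and channel coding (assumed).
    """
    resource_blocks = bandwidth_mhz * 5      # 100 resource blocks per 20 MHz
    subcarriers = resource_blocks * 12       # 12 subcarriers per resource block
    symbols_per_sec = 14 * 1000              # 14 OFDM symbols per 1 ms subframe
    raw_bps = subcarriers * symbols_per_sec * bits_per_symbol * mimo_layers
    return carriers * raw_bps * (1 - overhead) / 1e6

# Roughly 1200 Mbps with these assumptions -- in the same ballpark as the
# 979 Mbps trial peak, once real-world losses are accounted for.
print(round(lte_peak_rate_mbps()))
```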
AT&T says that it invested $350 million in its wired and wireless network infrastructure in Indianapolis between 2014 and 2016.
In her October 31st keynote at the Fog World Congress, Alicia Abella, PhD and Vice President – Advanced Technology Realization at AT&T, discussed the implications of edge computing (EC) for network service providers, emphasizing that it will make the business case for 5G realizable when low latency is essential for real time applications (see illustration below).
The important trends and key drivers for edge computing were described along with AT&T’s perspective of its “open network” edge computing architecture emphasizing open source software modules.
Author’s Note: Ms. Abella did not distinguish between edge and fog computing nor did she even mention the latter term during her talk. We tried to address definitions and fog network architecture in this post. An earlier blog post quoted AT&T as being “all in” for edge computing to address low latency next generation applications.
AT&T Presentation Highlights:
- Ms. Abella defined EC as the placement of processing and storage resources at the perimeter of a service provider’s network in order to deliver low latency applications to customers. That’s consistent with the accepted definition.
“Edge compute is the next step in getting more out of our network, and we are busy putting together an edge computing (network) architecture,” she said.
- “5G-like” applications will be the anchor tenant for a network provider’s EC strategy. Augmented reality/virtual reality, multi-person real-time video conferencing, and autonomous vehicles were a few applications cited in the illustration below:
Above illustration courtesy of AT&T.
“Size, location, configuration of EC resources will vary, depending on capacity demand and use cases,” said Ms. Abella.
- Benefits of EC to network service providers include:
- Reduce backhaul traffic
- Maintain quality of experience for customers
- Reduce cost by decomposing and disaggregating access functions
- Optimize current central office infrastructure
- Improve reliability of the network by distributing content between the edge and centralized data centers
- Deliver innovative services not possible without edge compute, e.g., industrial IoT, autonomous vehicles, smart cities, etc.
“In order to achieve some of the latency requirements of these [5G applications?] services, a service provider needs to place IT resources at the edge of the network. Especially when looking at autonomous vehicles, where you have mission critical safety requirements. When we think about the edge, we’re looking at being able to serve these low latency requirements for those [real time] applications.”
- AT&T has “opened our network” to enable new services and reduce operational costs. The key attributes are the following:
- Modular architecture
- Robust network APIs
- Policy management
- Shared infrastructure for simplification and scaling
- Network Automation platform achieved using INDIGO on top of ONAP
- AT&T will offer increased network value and adaptability as traffic volumes change:
- Cost/performance leadership
- Improved speed to innovation
- Industry leading security, performance, reliability
“We are busy thinking about and putting together what that edge compute architecture would look like. It’s being driven by the need for low latency.”
In terms of where, physically, edge computing and storage resources are located:
“It depends on the use case. We have to be flexible when defining this edge compute architecture. There’s a lot of variables and a lot of constraints. We’re actually looking at optimization methods. We want to deploy edge compute nodes in mobile data centers, in buildings, at customers’ locations and in our central offices. Where it will be depends on where there is demand, where we have spectrum, we are developing methods for optimizing the locations. We want to be able to place those nodes in a place that will minimize cost to us (AT&T), while maintaining quality of experience. Size, location and configuration is going to depend on capacity demand and the use cases,” Alicia said.
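The placement problem Abella describes – choosing edge-node sites to minimize cost while maintaining quality of experience – is essentially a facility-location problem. The sketch below is a hypothetical greedy formulation; the site names, costs, and latency figures are invented for illustration and do not represent AT&T’s actual optimization methods.

```python
# Hypothetical sketch: pick candidate edge sites (central offices, buildings,
# customer premises) so every demand point is served within a latency budget,
# preferring sites that cover the most demand per dollar. Illustrative only.

def place_edge_nodes(candidates, demands, latency_ms, budget_ms=10):
    """Greedy set-cover over candidate sites.

    candidates: {site: deployment cost}
    demands: set of demand-point ids
    latency_ms: {(site, demand): round-trip latency in ms}
    """
    uncovered, opened = set(demands), []
    while uncovered:
        def covered_by(site):
            return {d for d in uncovered
                    if latency_ms.get((site, d), float("inf")) <= budget_ms}
        # Open the site covering the most still-uncovered demand per unit cost.
        site = max(candidates, key=lambda s: len(covered_by(s)) / candidates[s])
        covered = covered_by(site)
        if not covered:
            raise ValueError("some demand points cannot meet the latency budget")
        opened.append(site)
        uncovered -= covered
    return opened

# Invented example: a cheap tower site covers d2; the CO is needed for d1.
sites = {"CO-1": 5.0, "tower-7": 2.0}
lat = {("CO-1", "d1"): 8, ("CO-1", "d2"): 12, ("tower-7", "d2"): 4}
print(place_edge_nodes(sites, {"d1", "d2"}, lat))  # ['tower-7', 'CO-1']
```

Real deployments would add capacity limits, spectrum availability, and demand forecasts as constraints, which is presumably why Abella described this as an ongoing optimization effort.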
- Optimization of EC processing to meet latency constraints may require GPUs and FPGAs in addition to conventional microprocessors. One such application cited was running video analytics for surveillance cameras.
- Real time control of autonomous vehicles would require a significant investment in roadside IT infrastructure but would have an uncertain return on investment. AT&T now has 12 million smart cars on its network, a number growing by a million per quarter.
- We need to support different connectivity to the core network and use “SDN” within the site.
- Device empowerment at the edge must consider that while mobile devices (e.g. smart phones and tablets) are capable of executing complex tasks, they have been held back by battery life and low power requirements.
- Device complexity means higher cost to manufacturers and consumers.
- Future of EC may include “crowd sourcing computing power in your pocket.” The concept here is to distribute the computation needed over many people’s mobile devices and compensate them via Bitcoin, another cryptocurrency, or some other asset class. Blockchain may play a role here.
Timon Sloane of the Open Networking Foundation (ONF) provided an update on project CORD on November 1st at the Telecom Council’s Carrier Connections (TC3) summit in Mountain View, CA. The session was titled:
Spotlight on CORD: Transforming Operator Networks and Business Models
After the presentation, Sandhya Narayan of Verizon and Tom Tofigh of AT&T came up to the stage to answer a few audience member questions (there was no real panel session).
The basic premise of CORD is to re-architect a telco/MSO central office to have the same or similar architecture as a cloud-resident data center. Not only the central office, but also remote networking equipment in the field (like an Optical Line Termination unit, or OLT) is decomposed and disaggregated such that all but the most primitive functions are executed by open source software running on a compute server. The only dedicated hardware is the physical layer transmission system, which could be optical fiber, copper, or cellular/mobile.
Author’s Note: Mr. Sloane didn’t mention that ONF became involved in project CORD when it merged with ON.Labs earlier this year. At that time, the ONOS and CORD open source projects became ONF priorities. The Linux Foundation still lists CORD as one of their open source projects, but it appears the heavy lifting is being done by the new ONF as per this press release.
A reference implementation of CORD combines commodity servers, white-box switches, and disaggregated access technologies with open source software to provide an extensible service delivery platform. This gives network operators (telcos and MSOs) the means to configure, control, and extend CORD to meet their operational and business objectives. The reference implementation is sufficiently complete to support field trials.
Illustration above is from the OpenCord website
Highlights of Timon Sloane’s CORD Presentation at TC3:
- ONF has transformed over the last year to be a network operator led consortium.
- SDN, Open Flow, ONOS, and CORD are all important ONF projects.
- “70% of worldwide network operators are planning to deploy CORD,” according to IHS Markit senior analyst Michael Howard (who was in the audience – see his question to Verizon below).
- 80% of carrier spending is in the network edge (which includes the line terminating equipment and the central offices accessed).
- The central office (CO) is the most important network infrastructure for service providers (AKA telcos, carriers and network operators, MSO or cablecos, etc).
- The CO is the service provider’s gateway to customers.
- End to end user experience is controlled by the ingress and egress COs (local and remote) accessed.
- Transforming the outdated CO is a great opportunity for service providers. The challenge is to turn the CO into a cloud like data center.
- CORD’s mission is to enable the “edge cloud.” –> Note that this mission statement differs from the OpenCord website, which states:
“Our mission is to bring datacenter economies and cloud agility to service providers for their residential, enterprise, and mobile customers using an open reference implementation of CORD with an active participation of the community. The reference implementation of CORD will be built from commodity servers, white-box switches, disaggregated access technologies (e.g., vOLT, vBBU, vDOCSIS), and open source software (e.g., OpenStack, ONOS, XOS).”
- A CORD like CO infrastructure is built using commodity hardware, open source software, and white boxes (e.g. switch/routers and compute servers).
- The agility of a cloud service provider depends on software platforms that enable rapid creation of new services- in a “cloud-like” way. Network service providers need to adopt this same model.
- White boxes provide subscriber connections with control functions virtualized in cloud resident compute servers.
- A PON Optical Line Termination Unit (OLT) was the first candidate chosen for CORD. It’s at the “leaf of the cloud,” according to Timon.
- 3 markets for CORD are: Mobile (M-), Enterprise (E-), and Residential (R-). There is also the Multi-Service edge which is a new concept.
- CORD is projected to be a $300B market (source not stated).
- CORD provides opportunities for: application vendors (VNFs, network services, edge services, mobile edge computing, etc), white box suppliers (compute servers, switches, and storage), systems integrators (educate, design, deploy, support customers, etc).
- CORD Build Event was held November 7-9, 2017 in San Jose, CA. It explored CORD’s mission, market traction, use cases, and technical overview as per this schedule.
Service Providers active in CORD project:
- AT&T: R-Cord (PON and g.fast), Multi-service edge-CORD, vOLTHA (Virtual OLT Hardware Abstraction)
- Verizon: M-Cord
- Sprint: M-Cord
- Comcast: R-Cord
- Century Link: R-Cord
- Google: Multi-access CORD
Author’s Note: NTT (Japan) and Telefonica (Spain) have deployed CORD and presented their use cases at the CORD Build event. Deutsche Telekom, China Unicom, and Turk Telecom are active in the ONF and may also have plans to deploy CORD.
- This author questioned the partitioning of CORD tasks and responsibility between ONF and Linux Foundation. No clear answer was given. Perhaps in a follow up comment?
- AT&T is bringing use cases into ONF for reference platform deployments.
- CORD is a reference architecture with systems integrators needed to put the pieces together (commodity hardware, white boxes, open source software modules).
- Michael Howard asked Verizon to provide commercial deployment status- number, location, use cases, etc. Verizon said they can’t talk about commercial deployments at this time.
- Biggest challenge for CORD: disaggregating the purpose built, vendor specific hardware that exists in COs today. Many COs are router/switch centric, but they will have to be opened up if CORD is to gain market traction.
- Future tasks for project CORD include: virtualized Radio Access Network (RAN), open radio (perhaps “new radio” from 3GPP release 15?), systems integration, and inclusion of micro-services (which were discussed at the very next TC3 session).
Addendum from Marc Cohn, formerly with the Linux Foundation: Here’s an attempt to clarify the CORD project responsibilities:
- CORD is an open reference architecture. In that sense, CORD is similar to the ETSI NFV Architectural Framework, ONF SDN Architecture, and MEF LifeCycle Services Orchestration (LSO) reference architectures.
- As it is a reference architecture, it is not an implementation, and is maintained by the Open Networking Foundation (ONF), which merged with ON.LAB towards the end of 2016.
- OpenCORD is a Linux Foundation project announced in the summer of 2016. It is focused on an open source implementation of the CORD architecture. OpenCord was derived from the work undertaken by ON.LAB, prior to the merger with ONF in 2016.
- For technical details, visit the OpenCORD Wiki
- Part of the confusion is that if one visits the Linux Foundation projects page, CORD is listed, but the link is to the OpenCord website.
Telit, an Israel-based semiconductor company specializing in Internet of Things (IoT) silicon, today announced that its LE910B1-NA and LE910B1-SA LTE Category 1 (Cat 1) modules and its LE910B4-NA LTE Category 4 (Cat 4) module received certification for operation on AT&T’s nationwide LTE network. The aforementioned modules also support Voice over LTE (VoLTE).
Telit also received certification for its 600 Mbps, LTE Category 11 (Cat 11) LM940 global (single SKU) PCI Express Mini (mPCIe) data card targeted at segments including network routers and gateways, and the mobile computing industry.
Certification enables IoT integrators and providers to immediately integrate and test their devices with the certified modules and data card and start leveraging the reliability and coverage of AT&T’s LTE Cat 1, Cat 4-VoLTE and Cat 11 services for the IoT.
For more information on the LE910B1/4-xA:
For more information on the LM940 Cat 11 data card:
“Voice over LTE is an absolute necessity for the IoT, particularly for the American market, where operators need to turn off spectrum-inefficient circuit-switched voice technology. Our existing customers using 2G, 3G, and non-VoLTE LTE modules from the xE910 family can now simply drop in the VoLTE variants, go through required testing with our help, and start deploying voice capable products endowed with a very long life,” said Yosi Fait, Interim CEO, Telit.
“The LM940, now certified for immediate activation, remains the only global product for the router and gateway segment to allow OEMs to leverage 3x carrier aggregation capabilities currently available from AT&T,” he added.
The LE910B1/4-xA module is a member of Telit’s best-selling xE910 family and can easily be applied as a pin-to-pin replacement for existing devices based on the family’s modules for 2G, 3G, LTE Categories 1, 3 and 4. With the company’s design-once-use-anywhere philosophy, developers can cut costs and development time by simply designing to the xE910 LGA common form factor, giving them the freedom to deploy technologies best suited for the application’s environment.
The LM940 boasts an exceptionally power efficient platform and is the ideal solution for commercial and enterprise applications in the network appliance and router industry, such as branch office connectivity, LTE failover, digital signage, kiosks, pop-up stores, vehicle routers, construction sites and more. The data card includes Linux and Windows driver support.
Telit also features the broadest portfolio of certified LTE IoT Category modules in the industry.
For more information about the Telit portfolio of LTE modules: https://www.telit.com/products/cellular-modules/
Last week, GCT Semiconductor announced an LTE device which will also support the (proprietary) Sigfox wireless IoT interface. The GDM7243I chip features low power consumption, which will allow tracking devices to operate on the Sigfox wireless IoT network for several years without frequent battery re-charging.
Note 1. GCT Semiconductor’s engineering development team is in South Korea. Marketing and sales are in San Jose, CA.
“We’re pleased to be working closely with Sigfox to bring this capability to market and support ultra-long battery life and global coverage for our IoT customers,” said John Schlaefer, CEO of GCT Semiconductor, speaking at Sigfox World IoT Expo 2017 in Prague, Czech Republic.
GDM7243I based tracking devices operate on the Sigfox network for location tracking but will switch to the cellular network as required.
Hybrid IoT devices can connect to the Sigfox wireless IoT network and operate in low-power mode to send and receive notifications only. The Sigfox network can also provide backup connectivity to IoT hybrid devices in case of cellular network coverage limitations, congestion, breakdown, or jamming of security/alarm systems.
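The switching behavior described above can be sketched as a simple selection rule. The function and thresholds below are my own hypothetical illustration, not GCT’s firmware logic; the 12-byte figure is Sigfox’s standard uplink payload limit.

```python
# Hypothetical connectivity selection for a hybrid Sigfox/LTE tracker:
# stay on low-power Sigfox for small notifications, use cellular for larger
# payloads, and fall back to Sigfox when cellular coverage is unavailable.

def select_network(cellular_ok, payload_bytes, sigfox_ok=True):
    """Sigfox uplink frames carry at most 12 bytes, so larger payloads
    need cellular; Sigfox also serves as a backup path."""
    if payload_bytes <= 12 and sigfox_ok:
        return "sigfox"      # low-power mode for notifications
    if cellular_ok:
        return "cellular"    # larger transfers, e.g. firmware or bulk data
    if sigfox_ok:
        return "sigfox"      # backup during cellular outage or jamming
    return "none"

print(select_network(cellular_ok=True, payload_bytes=8))     # sigfox
print(select_network(cellular_ok=True, payload_bytes=200))   # cellular
print(select_network(cellular_ok=False, payload_bytes=200))  # sigfox (backup)
```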
The Calliope LTE Platform for IoT is a member of Sequans’ StreamliteLTE™ family of LTE chipset products. Calliope is designed specifically for wearables and other Category 1 M2M and IoT devices. Calliope comprises baseband and RF chips, an integrated IoT applications processor running Sequans’ carrier-proven LTE protocol stack, an IMS client, and a comprehensive software package for over-the-air device management and packet routing. It includes Sequans’ powerful interference rejection technology, Sequans AIR™.
Calliope can add Cat 1 LTE connectivity to M2M and IoT modules and is also suitable for wearables and M2M devices for metering, home automation, and automotive applications.
- Certified by Verizon Wireless, AT&T Wireless, NTT Docomo and T-Mobile
- Throughput: up to Category 1 – 10 Mbps DL/ 5 Mbps UL
- Ultra low power consumption
- 3GPP Release 10; software-upgradable to Release 11
- FDD and TDD, up to 20 MHz LTE channels
- Embedded application CPU
- Wafer-level packaging
- Supports VoLTE and location based services
- Host environments: Android, Android Wear, Linux, Windows, Real Time OS
- Versatile interfaces to host system: UART, USB, HSIC
- Includes Sequans AIR™ interference cancelation technology
- Certified for VoLTE by Verizon Wireless
For more info:
The Next Generation Mobile Networks Alliance (NGMN), an industry association of mobile carriers, has defined requirements for 5G including data rates, transmission speeds, spectral efficiency and latency.
So has ITU-R WP 5D – the only real standards body for 5G (AKA IMT 2020). However, the wireless networking industry has yet to agree on the Radio Access Network (RAN) and related 5G standards, despite 3GPP Release 15 on “New Radio.” 5G standards won’t be completed until very late in 2020.
As we’ve reported in several IEEE techblog posts, AT&T and Verizon are conducting 5G trials in the US while other trials are proceeding in Europe and Asia.
Bullish Opinions on 5G:
Broad deployment of 5G networks is not expected until the 2020 timeframe, according to Sam Lucero, a senior principal analyst for M2M and IoT at IHS Markit. Yet despite the lack of standards, a number of speakers at last month’s Mobile World Congress (MWC) Americas in San Francisco were more bullish on 5G and expectations for its rollout.
“We expect 5G to come faster and be broader than originally thought,” said Rajeev Suri, president and CEO of Nokia. Suri said Nokia expects 5G networks to be deployed in 2019, with widespread trials next year.
“4G is like a really good rock band,” said Andre Fuetsch, CTO at AT&T. “5G is like a finely tuned orchestra.” He added that he sees in 5G a tremendous opportunity for advancing and “frankly making the network more relevant.”
“From a network perspective, [5G] is an evolution,” said Gordon Mansfield, vice president of RAN and device design at AT&T. “However, from a capability perspective it will be a revolution as it unfolds.”
“The 4G network is foundational to 5G,” said Nicki Palmer, chief network officer at Verizon. She added, “It’s hard to really peel 4G and 5G apart in some ways. The good news is that the investments we make today [in 4G] lead us down the 5G path.”
“We’ve been trying to define what 5G is for the past five years,” said Ron Marquardt, vice president of technology at Sprint. “We are getting close to being able to define that. We need to educate industries on how 5G can and will disrupt them.”
Fuetsch said 5G technology will enable carriers to provide solutions to a greater number of use cases. He said a lot of the work that has been done to date with pre-standards trials of 5G “were really to gain a lot of insights that helped us feed right back into the standards work.”
He added that standardization and openness would be critical to creating the healthy ecosystem that is required to enable 5G to flourish.
“We’ve got to standardize on this and avoid proprietariness as much as possible” to build a healthy 5G ecosystem, Fuetsch said. He said a lot of innovation for 5G would come from smaller companies — “disruptors” that need to rely on standards to make the technology they are developing fit into the 5G landscape.
Derek Peterson, chief technology officer at Boingo Wireless, a provider of mobile Internet access, also emphasized the importance of standards and urged audience members to participate in standards efforts. “Participating in standards is very important because it is going to take a collaborative effort to make all of these things work together,” he said.
The densification required for 5G transmission speeds will rely on a far greater number of smaller cell sites than previous generations of wireless technology. The process of getting those cell sites approved can vary widely from place to place, and can often be one of the biggest roadblocks to 5G.
“It can take a year to get a permit for something that it takes an hour to hang on a pole,” Mansfield said.
“The biggest barrier is going to be that the density you need for 5G is something that we have never seen before,” said John Saw, Sprint’s CTO. “It’s going to be more than putting 5G on the towers that we know and love today. We need to change how we get permits for this,” Saw added.
With the wireless industry prepared to spend an estimated $275 billion to deploy 5G, governments need to streamline permitting processes.
“I think public policy makers get to have a say in how fast we spend it and where we spend it. They need to get used to the fact that there may be hundreds and perhaps thousands of permits being requested to get this density that is required,” Saw concluded.
Panelists in an IoT session said that the primary barriers to enterprise IoT adoption include limited battery capacities and insufficient interoperability between connected devices, including VPN support, cloud service compatibility and other technologies. No mention was made of 5G for low latency IoT applications.
AT&T has brought its fixed wireless broadband service to nine more states, bringing the total coverage to more than 160,000 rural locations in 18 states. The service, partly funded by the U.S. federal Connect America Fund (CAF) program, provides homes and businesses with download speeds of at least 10 Mbps with a minimum of 1 Mbps upstream. The service uses licensed WCS (Band 30) 2.3 GHz spectrum.
This fixed wireless service has broadband usage caps of 160 GB per month, with additional 50 GB increments of data charged at $10 per month. It’s priced at $60 per month when bundled with other AT&T services.
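The cap-and-overage pricing works out as in the small helper below; it is purely illustrative of the stated terms ($60/month bundled, 160 GB included, $10 per additional 50 GB increment), not an AT&T billing tool.

```python
# Illustrative monthly-bill calculation for the fixed wireless plan described
# above. Assumes each partial 50 GB overage increment is billed in full.
import math

def monthly_bill(usage_gb, base=60, cap_gb=160, increment_gb=50, increment_fee=10):
    over = max(0, usage_gb - cap_gb)
    return base + math.ceil(over / increment_gb) * increment_fee

print(monthly_bill(150))  # 60: under the 160 GB cap
print(monthly_bill(200))  # 70: 40 GB over -> one $10 increment
print(monthly_bill(211))  # 80: 51 GB over -> two $10 increments
```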
The additional 9 states include:
They join Alabama, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina and Tennessee, where this AT&T rural broadband service is already available in certain markets. AT&T plans to reach 400,000 locations by the end of this year, and over 1.1 million locations by 2020. This rural broadband expansion is partially funded by the Connect America Fund (CAF), the FCC’s program to expand rural broadband access.
“Closing the connectivity gap is a top priority for us,” said Cheryl Choy, vice president, wired voice and internet products at AT&T in a press release announcing the expansion. “Access to fast and reliable internet is a game changer in today’s world.”
AT&T may gain some competition for this fixed wireless service, at least in Mississippi. C Spire just announced their intention to aggressively expand fixed wireless service in Mississippi this week. They cited the advantage their 25 Mbps fixed wireless service has over certain CAF funded 10 Mbps fixed wireless options, a specific reference to AT&T.
“For many rural families and communities, the introduction of this service from AT&T will mark a new era of increased broadband speeds and access to cheaper and more diverse content.” said Bret Swanson, president, Entropy Economics. “AT&T’s move into these new communities will also yield additional economic benefits and can help create new jobs.”
To learn more about Fixed Wireless Internet from AT&T, go to att.com/internet/fixed-wireless.html.
AT&T Expands G.fast & FTTH Deployments:
In sharp contrast to Verizon’s decision NOT to deploy G.fast, AT&T has announced expansion of its G.fast service for multi-dwelling units (MDUs) and its fiber-to-the-home network (AT&T Fiber).
The mega telco will extend its all-fiber network in two markets — Biloxi-Gulfport, MS and Savannah, GA. AT&T will also be offering its hybrid fiber-coax service for MDUs in 22 metropolitan markets.
The AT&T G.fast deployments will use “fiber runs to the telecom closet on the property, and individual coax runs to each apartment unit,” an anonymous AT&T spokesperson said to Telecompetitor.
Residents of properties served will also be able to obtain DIRECTV service without installing a dish at their individual units. Instead, the video service will be delivered over D2 Advantage, which the AT&T spokesperson described as “a centrally wired satellite dish that is shared among residents in the property.”
AT&T announced eight metro areas where G.fast can be deployed immediately, including Boston, Denver, Minneapolis, New York City, Philadelphia, Phoenix, Seattle and Tampa. In 14 other markets, consumers in target MDUs can order service now for deployment in “the near future,” the company said.
AT&T is one of multiple carriers that are looking at G.fast as part of their broadband strategy. The technology can support considerably higher speeds than DSL or fiber-to-the-neighborhood (FTTN) services – and although bandwidth is lower than it might be for a fiber-to-the-home deployment, the cost is considerably less.
The news that AT&T is deploying G.fast is not surprising, as the company has already conducted a trial of the service in Minneapolis and executives have indicated deployment plans. At this year’s Open Networking Summit (ONS), AT&T’s Tom Anschutz told an audience that G.fast would improve the speed and signal quality of data transmission on older, low-grade twisted pair, which is used in many MDUs and condominium complexes (where this author lives). He hinted that market segment would be a focus area for AT&T.
AT&T is extending the reach of its fiber network:
AT&T claims to have the largest fiber network in its 21-state home broadband footprint, reaching more than 5.5 million residential and commercial locations across 57 markets after adding over 1.5 million sites since January 1st. Plans call for extending service availability to another 1.5 million locations by year’s end, boosting the total to 7 million.
Of those 5.5 million homes and businesses now reached by AT&T Fiber, the mega telco said it has signed up more than 2 million broadband subscribers. The company did not, however, break out how many of those subs are new ones, as opposed to DSL customers who have been upgraded to the new FTTH network.
However, the mega telco ranks #1 on the Vertical Systems U.S. Fiber Lit Buildings (fiber to commercial buildings) leaderboard.
“Without some of those advantages [from the new Xeon Scalable processors] and capabilities that have been created in the software space, we wouldn’t be able to do it,” said AT&T’s Chris W. Rice, SVP of AT&T Labs and Domain 2.0 architecture. “It is a key underpinning in our SDN-network virtualization journey. Intel pushed the technology into the ecosystem, the capabilities and the chips, and then we can pull it through the ecosystem.” Rice added.
1. AT&T buys Compute Servers which contain Intel Xeon processors:
It’s important to recognize that AT&T does NOT buy processor chips from Intel or any other semiconductor company. It buys compute servers which contain Intel Xeon processors. While the compute server vendor(s) have not been disclosed, it’s likely one or more Chinese or Taiwanese ODMs.
According to IDC, x86 machines dominated the compute server market in 2016. Servers using mostly Xeon processors accounted for $11.2 billion in sales, down 3.1 percent. Server machines using other processor architectures, including Itanium, Power, Sparc, ARM, and a smattering of others, drove $1.3 billion in revenues, but fell 30 percent year on year. Intel x86 compute hardware had a 99.2 percent shipment share and an 89.6 percent revenue share, said IDC in a research report.
In January 2016, AT&T joined the Open Compute Project which is specifying open source hardware (e.g. compute servers and Ethernet switches) for use in data centers. AT&T has repeatedly stated it wants to make its Central Offices look like cloud resident data centers.
2. AT&T’s Cloud & Virtualization Platforms:
The AT&T Integrated Cloud (AIC) is a data center design that includes top-of-rack switches, storage, servers, and software at the hypervisor. When complete, AIC will encompass more than 1,000 zones distributed around the globe. AIC is based on the open source OpenStack cloud management framework.
AT&T’s Universal CPE (uCPE) is the hardware foundation of its Network Functions on Demand service. It’s an AT&T-branded Intel x86 server (presumably made by a Chinese ODM) that sits at the enterprise premises and can mix and match software-based VNFs, depending on what functions are needed at each location. The uCPE was designed and manufactured to AT&T’s specifications to enable customers to run multiple VNFs on one device.
According to SDx Central, AT&T has deployed two workloads on Intel’s Xeon Scalable processors and says others are in the queue. The two workloads are AT&T’s virtual Content Distribution Network (vCDN) and its virtual VPN Internet Gateway (vVIG).
vVIG is a virtual machine that acts as an IPSec gateway between unsecure and secure networks, providing data security at the IP packet level. It uses Intel’s Data Plane Development Kit (DPDK) to speed up the cryptographic processing of IPSec data packets.
Using the new Intel processors allows the vVIG to support higher data throughput at lower cost and with a smaller footprint. This includes up to a 30 percent improvement in packets-per-second (PPS) handling compared to the earlier Intel processors.
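To see what a PPS gain means in bandwidth terms, here is some illustrative arithmetic; the baseline PPS and average packet size below are assumed values for the sake of the example, not AT&T measurements.

```python
# Illustrative only: how a packets-per-second (PPS) improvement translates
# to IPSec forwarding throughput at a given average packet size.

def throughput_gbps(pps, avg_packet_bytes):
    return pps * avg_packet_bytes * 8 / 1e9

base_pps = 5_000_000            # hypothetical baseline, not an AT&T figure
improved_pps = base_pps * 1.30  # the ~30% PPS improvement cited above

for label, pps in [("before", base_pps), ("after", improved_pps)]:
    # At an assumed 512-byte average packet, a 30% PPS gain is a 30% Gbps gain.
    print(label, round(throughput_gbps(pps, 512), 2), "Gbps")
```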
AT&T’s vCDN (virtual Content Distribution Network) is a service that allows customers to manage and distribute video and multi-media web content across networks.
“We saw 25 to 30 percent performance improvements from moving it (vCDN) to Purley,” Rice said, referring to the Intel processors’ code-name. “It was a pretty seamless transition, moving it from the older Intel CPUs onto the new one. We are able to do more with fewer processors, and we’re able to get more capabilities out of our CDN and grow it horizontally as well.”
“And all of the improvements, whether on the process side or the architecture side, they all have some networking improvement piece as well,” Rice added.
These performance improvements are helping AT&T move closer toward its goal of virtualizing 75 percent of its network by 2020. During its second-quarter earnings call last month, AT&T CFO John Stephens told investors that the company has virtualized more than 40 percent of its network functions. It’s making progress toward its network functions virtualization goal of 55 percent by year-end.
“We want to make sure the whole ecosystem moved with us toward network virtualization,” Rice said. “We didn’t want to have something special just for AT&T. We wanted it to be for the whole industry.”
Additionally, achieving network performance improvements requires automation, Rice said. “You’ll never get to those percentages without automation being a key part,” he added.
In an earlier interview with UBB2020, Rice said:
“As we move down an automation path, as we move down a machine-learning path to drive more automation, [having open interfaces on network elements] is really a necessary first step — these open interfaces that cannot be skipped over or overlooked. I don’t know that people understand the significance of that.”
Below is TBR’s commentary on AT&T’s 2Q17 earnings. Contact Steve Vachon at +1 (603) 929-1166 or firstname.lastname@example.org for additional commentary.
AT&T is improving its value proposition as competition within the mobile and video markets intensifies
AT&T’s consolidated revenue fell 1.7% year-to-year to $39.8 billion in 2Q17 due to declines across all of the company’s core businesses, with the exception of its International division. AT&T’s profitability improved in the quarter, however, as operating margins rose 220 basis points year-to-year to 18.4%, aided by the company’s emphasis on non-subsidized wireless device plans.
Pricing pressures, smartphone saturation and stronger competition from OTT providers are creating obstacles for AT&T to grow its mobility and video businesses, which is spurring the carrier to become more reliant on bundles combining both services to improve its value proposition. Though TBR believes AT&T trailed all of its Tier 1 competitors in postpaid phone net additions in 2Q17, the launch of its unlimited data plans helped to mitigate declines as the carrier’s postpaid phone losses improved in the quarter to -89,000, compared to -180,000 in 2Q16.
In June AT&T Unlimited Choice customers gained the option to add DirecTV Now to their accounts for $10 per month, a benefit previously offered only to Unlimited Plus customers. TBR believes the move will boost wireless and DirecTV subscriber additions, but will come at the expense of limiting postpaid phone ARPU, as customers now have less incentive to select AT&T Unlimited Plus plans, which start at $30 more per month than Unlimited Choice plans.
AT&T is relying on the low price point and flexibility of DirecTV Now, which gained 152,000 customers in 2Q17, to help offset declines within its U-verse TV and DirecTV satellite businesses, which lost a combined 351,000 subscribers in the quarter. Though AT&T increased Video Entertainment revenue by 2.1% year-to-year in 2Q17, TBR believes sustaining revenue growth in the segment will be increasingly challenging as total video subscribers decrease and the company trades linear TV subscribers for lower ARPU DirecTV Now connections.
New features such as the inclusion of additional live local channels and upcoming 4K HDR and cloud DVR support provide added incentives to attract DirecTV Now customers. Addressing the platform’s streaming capacity is critical, however, as recent service interruptions could drive subscribers to switch to rivals such as SlingTV and Hulu Live.
AT&T deepens emphasis on the public sector and software-mediated network services to improve Business Solutions revenue
To improve Business Solutions revenue, which decreased 2.7% year-to-year in 2Q17 due primarily to lower legacy voice and data revenue, AT&T is targeting growth from government customers. In April AT&T announced it is consolidating its government and education operations, which generated about $15 billion in sales in 2016, into the new Global Public Sector division to improve cohesiveness and foster partnerships across agencies in different sectors. Additionally, AT&T will be able to provide first responders with more reliable connectivity through its collaboration with FirstNet, which had attracted contracts from five states as of July.
AT&T will improve the profitability of Business Solutions long-term by adopting NFV and SDN technologies. Integrating open-source technologies and white box hardware will provide cost savings by enabling the carrier to become less dependent on more costly, proprietary infrastructure. Additionally, TBR expects the acquisition of Brocade’s Vyatta network operating system will enable AT&T to meet its goal of virtualizing 75% of its network by 2020.
In addition to cost savings, AT&T is creating revenue streams by introducing new software-mediated network services to its portfolio, including an upcoming SD-WAN service in collaboration with VeloCloud. However, AT&T will be disadvantaged by its relatively late entry into the SD-WAN market as competitors including Verizon and CenturyLink have already begun to cement leading positions within the segment.
http://edge.media-server.com/m/p/gz5k2iq4/lan/en (Recording of earnings call)