Note: This is a condensed summary of two previously published articles on this excellent conference.
The telco data center (DC) is likely to be the first place network operators deploy Network Virtualization/Network Functions Virtualization (NFV). That was the opening statement at the Light Reading conference on NFV and the Data Center, held Sept 16, 2014 in Santa Clara, CA. A network virtualized data center was defined by the conference host as a “cloudified DC which integrates virtualized telecom network functions utilizing Virtual Network Functions (VNF) or Distributed VNFs.”
Note 1. Larger network operators (e.g. AT&T, Verizon) already operate “telco DCs” for web hosting, storage, cloud computing, managed services, and back end network management/OSS/BSS. It will be easier for them (compared to those operators with no DC) to implement a NFV based telco DC. See Heavy Reading Survey results below for more details on this topic.
Concepts and reference architectures from the ETSI NFV specifications group were predicted to alter the data center from a siloed IT-centric model (separate compute /storage/ networking equipment) to a harmonized network and IT domain model, in which virtualized telecom functions, e.g. policy control and application orchestration, are added to the growing list of computing demands on servers. According to Light Reading, NFV will drive an entirely new set of storage, automation, management, and performance requirements, which are only now starting to be defined.
[One must assume that these VNFs will be implemented as software in the DC compute servers, perhaps with some hardware-assist functionality. Realizing that vision would eliminate a lot of network equipment (hardware) in a telco’s DC and provide much more software control of network functions and services.]
Key industry trends discussed at this excellent 1 day conference included:
• The need for service providers to shorten their service delivery cycles and adopt agile approaches to delivering new services.
• The key role that automation of network processes will play in helping operators deliver more user control and network programmability.
• The challenge of taming network complexity, which remains significant.
• The need for services in the era of virtualization to maintain the security and reliability for which telecom has been known.
Key findings from Heavy Reading’s January 2014 multi-client study are presented first. Next, we summarize the network operator keynotes from CenturyLink, Orange, and NTT Communications, followed by our summary and conclusions.
NFV requires operators to find new ways of looking at basic network attributes like performance, reliability and security. For example, performance metrics may change in migrating to NFV – from raw/aggregate performance to performance per cubic meter, or performance per watt. Virtualization will transform many ways of configuring and managing network resources.
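The shift in metrics can be made concrete with a small calculation: instead of comparing boxes on raw throughput alone, an operator might normalize by power draw or rack volume. The sketch below is purely illustrative; all figures are invented, not vendor data.

```python
# Toy comparison of a dedicated appliance vs. a commodity server running a VNF.
# All throughput, power, and volume figures are hypothetical illustrations.

def perf_per_watt(throughput_gbps, power_watts):
    """Throughput delivered per watt consumed (Gbps/W)."""
    return throughput_gbps / power_watts

def perf_per_cubic_meter(throughput_gbps, volume_m3):
    """Throughput delivered per unit of rack volume (Gbps/m^3)."""
    return throughput_gbps / volume_m3

appliance = {"throughput_gbps": 40, "power_watts": 800, "volume_m3": 0.04}
vnf_server = {"throughput_gbps": 25, "power_watts": 400, "volume_m3": 0.02}

for name, box in [("appliance", appliance), ("vnf_server", vnf_server)]:
    print(name,
          round(perf_per_watt(box["throughput_gbps"], box["power_watts"]), 4),
          round(perf_per_cubic_meter(box["throughput_gbps"], box["volume_m3"]), 1))
```

On these invented numbers the VNF server loses on raw throughput but wins on both normalized metrics, which is exactly the kind of reframing the migration to NFV encourages.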
However, a business case must be established for an operator to move toward network virtualization/NFV/VNFs. The cost and ROI must be justified. Heavy Reading analyst Roz Roseboro opined that the projects which get funded are those that affect the top line, meaning increased revenues from new and existing services. In that sense, NFV is more likely to get funding than SDN, because it will greatly help an operator increase service velocity/time to market and thereby realize more revenue. SDN, she said, is more about OPEX reductions and efficiency.
CenturyLink Keynote: James Feger, VP of Network Strategy & Development
CenturyLink is counting on its Savvis acquisition to make it hugely successful in cloud computing and to build a “cloudified” telco DC for traditional network services. Acquired in 2011, Savvis is a separate vertical entity within CenturyLink (which also includes the former Embarq, U.S. West, Qwest, and other companies). CenturyLink has successfully integrated the cloud orchestration and software development of Tier 3 and the platform-as-a-service capabilities of AppFog into its cloud computing capabilities.
According to Feger, “Cloud is not ‘rust resistant.’ It must be: programmable, self-service, and offer on-demand services.”
The CenturyLink Cloud process and operations involve the following attributes:
- Agile methodology
- 21- to 30-day release cycles
- DevOps team (see Note 2)
- Minimum viable product (not explained)
- Building block architecture which is API based
- Constant feedback to improve operations and services
Note 2: While network operations is traditionally a stand-alone function with dedicated staff, the DevOps model eliminates the hand-off from development to operations, keeps the developers in the feedback loop, and incentivizes developers to resolve problems or complications on their own instead of passing them to the Operations department.
The realization of the above cloud attributes is via open applications programming interfaces to software that exists above the physical network. Open source software will allow developers to offer their apps or services regardless of the underlying infrastructure, Feger said. He was firm in his view that “agility combined with our network platform is CenturyLink’s differentiator.”
Feger was quite honest during his talk. He confessed that the service cycles on the network side are still measured in months, not weeks or days. By incorporating the agile technology approaches of the CenturyLink Cloud and the use of a DevOps model, CenturyLink hopes to improve on that. But not this year or next.
“It will be a multi-year project to migrate our network to a cloud like set of capabilities, while minimizing (existing) customer disruptions.”
The takeaway here is that CenturyLink is attempting to leverage its highly regarded cloud capabilities to offer “cloud-like” L1 to L3 network services, e.g. IP MPLS VPN, Ethernet services, private line (e.g. T1/T3/OC3), broadband Internet access, video, and other wire-line services. Service delivery times must become much shorter, and programmability, orchestration, and automation are necessary components to make this happen.
Orange Keynote: Christos Kolias, Sr. Research Scientist, Orange – Silicon Valley
Christos first described the NFV concept and vision from his perspective as a founding member of the ETSI NFV specifications group. It’s a quantum shift from dedicated network equipment to “virtual appliances.”
In the NFV model, various types of dedicated network appliance boxes (e.g. message router, CDN equipment, Session Border Controller, WAN acceleration, Deep Packet Inspection (DPI), Firewall, Carrier grade IP Network Address Translation (NAT), Radio/Fixed Access Network Nodes, QoS monitor/tester, etc.) become “virtual appliances,” which are software entities that run on a high performance compute server.
In other words, higher layer network functions become software based virtual appliances, with multiple roles over the same commodity hardware and with remote operation possible. “It’s a very dynamic environment, where (software based) network functions can move around a lot. It’s extremely easy to scale,” according to Christos.
[One assumes that each such virtual appliance would have an open or proprietary API for orchestration, automation, and management of the particular function(s) performed.]
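As a thought experiment (not any vendor’s actual API), a virtual appliance exposing a management interface for orchestration might be modeled like this; all class and method names are illustrative assumptions:

```python
# Toy model of a "virtual appliance": a network function packaged as software
# with a management API for orchestration. Names are illustrative only and do
# not correspond to any real VNF manager's interface.

class VirtualAppliance:
    def __init__(self, function_type):
        self.function_type = function_type  # e.g. "firewall", "dpi", "nat"
        self.instances = 0                  # running copies of the function
        self.config = {}

    def instantiate(self, count=1):
        """Spin up one or more instances on commodity servers."""
        self.instances += count
        return self.instances

    def configure(self, **settings):
        """Push configuration through the (open or proprietary) API."""
        self.config.update(settings)

    def terminate(self, count=1):
        """Scale down by releasing instances."""
        self.instances = max(0, self.instances - count)
        return self.instances

# A firewall VNF instantiated and configured remotely:
fw = VirtualAppliance("firewall")
fw.instantiate(2)
fw.configure(default_policy="deny", log_level="info")
```

The point of the sketch is the lifecycle surface itself: instantiate, configure, and terminate operations callable remotely are what make the “functions can move around a lot” dynamism Christos described possible.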
A few examples were cited for a network virtualized telco DC:
• Security functions: Firewalls, virus scanners, intrusion detection systems, spam protection
• Tunnelling gateway elements: IP-SEC/SSL VPN gateways
• Application-level optimization: Content Delivery Networks (CDNs), Cache Servers, Load Balancers, Application Accelerators, Application Delivery Controllers (ADCs)
• Traffic analysis/forensics: DPI, QoE measurement
• Traffic Monitoring: Service Assurance, SLA monitoring, Test and Diagnostics
Note: This author DESPISES TLAs (three-letter acronyms). In many cases, the TLA used in a presentation/talk is much more recognizable in another industry, e.g. ADC = Analog to Digital Converter, rather than Application Delivery Controller. Hence, I’ve tried to spell out most acronyms in this and the preceding article on the NFV conference. It takes a lot of effort, as I’m not familiar with most of the TLAs used glibly by speakers.
Kolias said that the migration from network hardware to software based virtual appliances won’t be easy. Decoupling VNFs from the underlying hardware presents management challenges: mapping services to VNFs, instantiating VNFs, allocating and scaling resources for VNFs, monitoring VNFs, and supporting both physical and software resources.
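The service-to-VNF mapping problem can be sketched in miniature: a service decomposes into VNFs, each of which must be placed on a server with spare capacity. The catalog entries and server names below are invented for illustration; real orchestrators solve a far richer placement problem.

```python
# Toy sketch of service-to-VNF mapping: decompose a service into its VNFs,
# then place each VNF on the first server with free capacity.
# Catalog contents and server names are purely illustrative.

SERVICE_CATALOG = {
    "secure_vpn": ["firewall", "ipsec_gateway"],
    "video_delivery": ["load_balancer", "cache_server"],
}

def place_service(service, servers):
    """Map each VNF in a service onto a server with free capacity.

    `servers` maps server name -> free VNF slots (mutated as slots are used).
    Returns {vnf: server} or raises if capacity is exhausted.
    """
    placement = {}
    for vnf in SERVICE_CATALOG[service]:
        for name, free in servers.items():
            if free > 0:
                placement[vnf] = name
                servers[name] -= 1
                break
        else:
            raise RuntimeError(f"no capacity for {vnf}")
    return placement

servers = {"server-a": 1, "server-b": 2}
print(place_service("secure_vpn", servers))
# firewall lands on server-a, ipsec_gateway on server-b
```

Even this toy version shows why monitoring matters: once placement is dynamic, knowing which server currently hosts which function becomes part of the management burden.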
NFV components in a virtualized telco DC might include: server virtualization, management and orchestration of functions & services, service composition, automation, and scaling (up and/or down according to network load). There are lots of servers, storage elements, and L2/L3 switches in such a DC. There’s also: security hardware (firewalls, IDS/IPS), load balancers, IP NAT, ADC, monitoring, etc.
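Scaling up and down according to network load can be reduced to a simple rule of thumb, sketched below. The per-instance capacity and headroom figures are invented assumptions, not measured values:

```python
import math

# Toy autoscaling rule for a VNF: scale the instance count with offered load.
# Capacity and headroom figures are invented for illustration only.

def target_instances(load_gbps, capacity_per_instance_gbps=10, headroom=1.2):
    """Instances needed to carry the load with some headroom, minimum one."""
    return max(1, math.ceil(load_gbps * headroom / capacity_per_instance_gbps))

# At 45 Gbps of offered load: 45 * 1.2 / 10 = 5.4, so six instances.
print(target_instances(45))
```

A real DC orchestrator would evaluate a rule like this continuously and instantiate or terminate VNF instances to track demand; the sketch only captures the arithmetic.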
NFV in the Data Center will be more energy efficient, according to Kolias. “It’s the greenest choice for an operator,” Christos said. With many fewer hardware boxes, NFV can bring the most energy efficiency to a data center (less energy consumed and lower cooling requirements). That’s a top consideration for those massively power-hungry DC facilities. “You have to dispose of telecom hardware, but when we move things into software, it becomes more eco-friendly,” Kolias said. “So yes, there is absolutely a fit for NFV in the Data Center,” he concluded.
Christos thinks it’s probably easier and faster to implement NFV in a telco DC, because there’s less compliance/ regulation and it’s a less complex environment – both technically and operationally.
Service chaining was referred to as “service composition and insertion,” with policies determining the chain order. Customized service chains are possible with NFV, Kolias added. Ad-hoc, on-demand, secure virtual tenant networks are also possible, for example via tunnels/overlays using the VXLAN protocol (a spec from Arista, VMware, and Juniper).
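Policy-determined chain order can be sketched as data: a policy maps a customer class to an ordered list of VNFs, and the chain is composed from whatever functions the DC has deployed. The policy names are invented; real chaining steers traffic with SDN forwarding rules or NSH headers, not Python lists.

```python
# Toy service-chain composition: policy determines the order in which traffic
# traverses VNFs. Policy names and VNF names are invented for illustration.

POLICIES = {
    "enterprise": ["firewall", "dpi", "wan_accelerator"],
    "consumer": ["nat", "firewall"],
}

def build_chain(customer_class, available_vnfs):
    """Return the ordered VNF chain for a customer class,
    skipping functions not deployed in this data center."""
    return [v for v in POLICIES[customer_class] if v in available_vnfs]

deployed = {"firewall", "nat", "dpi"}
print(build_chain("enterprise", deployed))  # ['firewall', 'dpi']
```

Because the chain is just ordered data selected by policy, per-customer customization amounts to editing a policy entry rather than recabling appliances, which is the agility argument in a nutshell.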
Kolias also cited other benefits of “cloudification” — a term he admittedly hates. “For example, consolidating multiple physical network infrastructures in a cloud-based EPC (LTE Evolved Packet Core) can lead to less complexity in the network and produce better scalability and flexibility for service providers in support of new business models,” he noted.
Several other important points Christos made about NFV in the telco DC:
1. Virtual switches can be key functional blocks, both for managing traffic among virtual machines and for building programmable service chains.
2. The Control plane could become part of management and orchestration in a unified, policy-based management platform, e.g. OpenStack.
[That’s radically different from the pure SDN model (Open Networking Foundation), where the Control plane resides in a separate entity, which communicates with the Management/Orchestration platform (e.g. OpenStack) via a “Northbound” API.]
3. Hardware acceleration can play a role in Network Interface Cards (NICs) and specialized servers. However, they should be programmable.
4. Challenges include: performance (e.g. increased VM-to-VM traffic requirements), security in a hybrid environment, and scaling.
APIs will be important for plug-n-play, especially for open platforms like Google, Facebook, and Microsoft, e.g. WebRTC. They can enable a plethora of innovative (e.g. ad-hoc/customized) services and lead to new business models for the telcos. That would translate into monetization opportunities for service providers (e.g. from new residential and business/enterprise customers, virtual network operators (VNOs), and others).
Christos predicts that many service providers will move from function/service based networks to app-based models. They will deploy resources, including Virtual Network Functions (VNFs) on-demand, as an application when the user needs them. He predicted that smart mobile devices and the Internet of Things (IoT) will precipitate the adoption of APIs for telco apps.
Kolias summed up: “NFV can propel the move to the telco cloud. When this happens we will have succeeded as an NFV community! NFV removes the boundaries and constraints in your infrastructure. It breaks the barriers and opens up unlimited opportunities.”
NTT Com Keynote: Chris Eldredge, Executive VP of Data Center Services for NTT America (NA subsidiary of NTT Com)
Background: NTT Com is one of the largest global network providers in the world, in third place behind Verizon and AT&T. It provides global cloud services, managed services, and connectivity to the world’s biggest enterprises. NTT Com has a physical presence in 79 countries, $112B in revenues, and 242K employees. Its network covers 196 countries and regions. The company spent $2.5B on R&D last year, with a North American R&D center in Palo Alto, CA. Finally, it claims to be the #1 global data center and IP backbone network provider in the world. [Chris said Equinix has more total square footage in its data centers than NTT, but it doesn’t have the IP backbone network.]
NTT Com’s enterprise customers mostly use cloud for development and test applications. “It’s bursty in nature. They turn it up and turn it down,” Eldredge said. It’s also used for OTT broadcasts of sporting events and concerts. On January 1, 2014, NTT spun up 200,000 virtual machines (VMs) to meet demand for Europeans watching soccer matches on their mobile devices. After the soccer match was over, those VMs were de-activated.
With the Virtela acquisition, NTT Com has recently deployed their version of NFV capabilities in both their DCs and global network along with SDN based provisioning.
“SDN/NFV is a more scalable network technology that NTT Com is now using to provide cloud and managed services to a broad range of clients,” Eldredge said. “It allows us to specialize and provide custom solutions for our customers,” he added.
The NFV (higher layer) services NTT Com now offers include: virtual firewall, network hosted application accelerator, Secure Sockets Layer (SSL) VPN, IP-SEC gateway, an automated customer portal (for full control of services, self-deployment, self-management, and full visibility), and on-premises hardware based managed services that provide a fully integrated managed solution for NTT Com customers.
The above NFV enabled services can be easily applied, monitored, and rapidly changed. NTT Com can customize application performance and service levels for specific users and profiles. In conclusion, Chris said that “NFV has become the next phase of the virtualized DC, extending the enterprise DC into the cloud.” [Such an extension, by definition, would be a hybrid cloud.]
In answer to this author’s question on when and if NTT Com would use NFV to deliver pure (L1-L3) network connectivity services, Chris confessed that it wasn’t on their roadmap at this time.
Summary and Conclusions:
Operators are planning for NFV and some – like NTT Com – have already implemented several NFV enabled services. Examples of NFV capabilities were clearly stated by Kolias of Orange Silicon Valley and Eldredge of NTT Com. It starts with higher layer (L5-L7) network functions/capabilities, cloud, and managed services. However, it will take considerable time before the entire network is virtualized. “NFV everywhere by 2020” is too aggressive, according to some. And don’t expect mainstream connectivity functions (including Carrier Ethernet services, private lines, circuit switching, etc.) to be virtualized anytime soon.
Early NFV adopters will be challenged as they work through internal issues like breaking down their organizational silos and adapting their business models to a quicker, more agile manner of provisioning and controlling network resources and services.
What happens to the network IT guy when the majority of network equipment disappears and is transformed into virtual appliances? Who maintains a compute server that’s also implementing many higher layer networking functions? What troubleshooting tools will be available for NFV entities?
Automation and self service are crucial for the network operator to deploy services quicker and hence realize more revenues. CenturyLink’s Feger said it best: “If you’re on a nine-month release strategy, your network isn’t really programmable.”
“Agility is an asset. You can only tame complexity,” noted Heavy Reading analyst and event host Jim Hodges, who quoted Brocade’s Kelly Herrell from an earlier presentation. “As an industry, we realize complexity is an inherent part of what we’re doing, but it’s something we have to address.”