This new Gartner report covers the key impacts of digital business, cloud and orchestration strategies. In particular, IT leaders must continue to meet enterprise needs for expanded WAN connectivity, improved application performance and greater network agility.
- As enterprises increasingly rely on the internet for WAN connectivity, they are challenged by the unpredictable nature of internet services.
- Enterprises seeking more agile WAN services continue to be blocked by network service providers’ terms and conditions.
- Enterprises seeking more agile network solutions continue to be hampered by manual processes and cultural resistance.
- Enterprises moving applications to public cloud services frequently struggle with application performance issues.
IT leaders responsible for infrastructure agility should:
- Reduce the business impact of internet downtime by deploying redundant WAN connectivity such as hybrid WAN for business-critical activities.
- Improve WAN service agility by negotiating total contractual spend instead of monthly or annual spend.
- Improve agility of internal network solutions by introducing automation of all operations using a step-wise approach.
- Ensure the performance of cloud-based applications by using carriers’ cloud connect services instead of unpredictable internet services.
- Improve alignment between business objectives and network solutions by selectively deploying intent-based network solutions.
Strategic Planning Assumptions:
- Within the next five years, there will be a major internet outage that impacts more than 100 million users for longer than 24 hours.
- By 2021, 25% of enterprise telecom contracts will evolve to allow for greater flexibility such as canceling services or introducing new services within the contract period, up from less than 5% today.
- By 2021, productized network automation (NA) tools will be utilized by 55% of organizations, up from less than 15% today.
- By YE20, more than 30% of organizations will connect to cloud providers using alternatives to the public internet, which is a major increase from 5% in 3Q17.
- By 2020, more than 1,000 large enterprises will use intent-based networking systems in production, up from less than 15 today.
Gartner has five predictions that represent fundamental changes emerging in key network domains, from internal networking to cloud services and WAN services. The predictions address two key aspects that the majority of Gartner clients struggle with:
- The increased interest in utilizing the internet for WAN connectivity continues to raise concerns about the performance of public internet services and performance of applications deployed in public cloud services. We discuss the risk that enterprises encounter due to the unpredictable nature of the internet, and we discuss how an enterprise can use MPLS to connect directly to public cloud services instead of using the internet.
- Enterprises continue to need new business solutions deployed faster, but remain hampered by the inability of network solutions and network services to respond fast enough and rectify performance issues fast enough. We discuss three options to improve network operations as well as network services.
Source: Gartner (December 2017)
Strategic Planning Assumption: Within the next five years, there will be a major internet outage that impacts more than 100 million users for longer than 24 hours.
Analysis by: Andrew Lerner, Greg Young
- We are increasingly seeing organizations use the internet as a WAN, and estimate that approximately 20% of Gartner clients in many geographic regions have at least some critical branch locations entirely connected via the internet.
- Most IT teams don’t have a detailed understanding of the multitude of applications and services that are being used on the public internet and/or their criticality. This is because of years of line of business (LOB)-centric buying and the proliferation of SaaS.
- While the internet is highly resilient, there are specific infrastructure and technology hot spots that, if compromised, could threaten the internet as a whole or large portions of it. This could be the result of natural disasters, man-made accidents or intentional acts.
- Natural disasters and man-made acts that could impact large portions of the internet include earthquakes, solar flares, electronic pulses, meteors, tsunamis, hurricanes, major cable cuts and network operator errors.
- Intentional acts include hacktivism, terrorism toward critical infrastructure, coordinated distributed denial of service (DDoS) attacks, and attacks against carrier- and ISP-specific components and protocols (e.g., SS7).
While the probability of each of these events individually is small, the likelihood that at least some of them will occur over an extended period of time is surprisingly high. For example, even if there is only a 1% chance per year that any given one of the 11 examples identified above results in an outage, there is a statistical likelihood of over 45% that at least one of them will occur over a five-year period. Further, there have already been indications that the internet is vulnerable to sizable outages:
- In 2008, millions of users and large portions of the Middle East and India were impacted by a cable cut. 1
- In 2016, a large DDoS attack on DNS provider Dyn resulted in many major websites going down, including Twitter, Netflix, Reddit and CNN. 2
- In 2015, Telekom Malaysia created a routing problem that rendered much of the Level 3 network unavailable. 3
- It has been widely reported that 70% of all internet traffic goes through Northern Virginia 4 and, while this might be overstated, there’s no doubt that there are several major chokepoints in the internet infrastructure.
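The compounding argument above (11 scenarios, each with a 1% annual probability, over five years) can be sanity-checked in a few lines of Python. This is a deliberately simple model that assumes the events are independent; Gartner's exact model is not stated and may differ:

```python
# Probability that at least one of several rare, independent events
# occurs over a multi-year window: 1 - P(no event ever occurs).
p_per_event_per_year = 0.01   # 1% annual chance per event (assumption)
n_events = 11                 # the 11 example scenarios above
years = 5

p_none = (1 - p_per_event_per_year) ** (n_events * years)
p_at_least_one = 1 - p_none

print(f"{p_at_least_one:.1%}")  # ≈ 42.5% under this simple model
```

This simple independence model lands a bit below the "over 45%" figure quoted above, but it illustrates the key point: the cumulative risk is vastly larger than any single event's 1% annual probability.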
At a minimum, an extended and widespread internet outage would cause dramatic revenue loss for enterprises, and could even create life-threatening situations depending on what business the organization is in. Many organizations brush this off by saying, “Well, there’s not much we can do about it anyway” or “If there is a large internet outage due to a natural disaster, then personal safety is the priority and enterprise connectivity is the least of our concerns.” However, there are very specific and actionable steps that infrastructure and operations (I&O) leaders should take to mitigate the impact of a large outage.
Strategic Planning Assumption: By 2021, 25% of enterprise telecom contracts will evolve to allow for greater flexibility such as canceling services or introducing new services within the contract period, up from less than 5% today.
Analysis by: Danellie Young
- Enterprise telecom contracts are typically fixed in both term duration and for the services required for procurement.
- Most larger revenue contracts ($1 million or more annually) require the enterprise to agree to minimum revenue commitments on an annual basis.
- Major WAN decisions are made by 31% to 47% of enterprises each year, including equipment refresh or carrier renegotiations (assuming the refresh cycle on routers is six years, and the average enterprise WAN service contract is three years).
- A large majority of enterprises are struggling with the cost, performance and flexibility of their traditional WAN contracts, further exacerbated by the proliferation of public cloud applications.
Enterprise telecom contracts remain rigid and fixed, with specified services required to ensure compliance. Typically, such contracts penalize customers when services are disconnected midterm. Enterprise telecom contracts are usually negotiated on 36-month cycles, based on either full-term or revenue commitments. Revenue commitments are set based on monthly spend, annual spend or total contract spend. Upon meeting the contract’s revenue commitment, the enterprise can renegotiate or consider alternative services or providers, since its financial obligation has been met. Terminating a contract early for convenience typically incurs penalties. These range from 100% of the remaining monthly recurring charges (MRCs) to a declining percentage of the MRCs through the remainder of the term (e.g., 100% in the first 12 months, 75% in months 13 to 24 and 50% through the end of the term).
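To make the declining penalty schedule concrete, here is a hypothetical calculator. The function name, schedule breakpoints and dollar figures are illustrative only; actual terms vary by carrier and contract:

```python
def early_termination_penalty(mrc, months_elapsed, term_months=36):
    """Penalty owed for cancelling in a given month, using the
    declining schedule described above: 100% of remaining MRCs in
    months 1-12, 75% in months 13-24, 50% thereafter.
    (Hypothetical example; real contract schedules vary.)"""
    if months_elapsed <= 12:
        rate = 1.00
    elif months_elapsed <= 24:
        rate = 0.75
    else:
        rate = 0.50
    months_remaining = max(term_months - months_elapsed, 0)
    return rate * mrc * months_remaining

# Cancelling a $10,000/month contract at month 18 of a 36-month term:
# 75% x $10,000 x 18 remaining months = $135,000 in penalties.
print(early_termination_penalty(10_000, 18))
```

Even on the declining schedule, a midterm exit remains expensive, which is why total-contract revenue commitments (discussed next) are attractive to enterprises.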
Currently, contracts are split between term and revenue-commit contracts, with most revenue commitments made on an annualized basis. Alternatively, a small number (5%) are offered or negotiated with total contract values tied to them. Total-contract revenue commitments enable the enterprise to meet the obligation earlier in the contract, providing the opportunity to negotiate new lower rates and a new contract, and to solicit competitive proposals before the full 36-month cycle terminates.
In addition to traditional voice and data services, many networking vendors now offer SD-WAN products, while carriers and managed service providers (MSPs) are beginning to launch and roll out managed SD-WAN services as an alternative to managed routers. Contract flexibility will be needed to allow the enterprise to migrate to new solutions without financial risk or early termination fees. Thus, while we anticipate rapid adoption of SD-WAN and virtualized customer premises equipment (vCPE) solutions in the enterprise, SD-WAN by itself will not improve contractual conditions.
A Gartner-conducted software-defined WAN (SD-WAN) survey identified the key drivers for SD-WAN adoption and a preference for managed services from non-carrier providers. Despite the technology’s relative immaturity, the perceived benefits create incentives for IT leaders responsible for networking to leap into SD-WAN pilots now.
- Please refer to our report on IHS Markit’s analysis of the SD-WAN market. Cisco and VMware are the top two vendors due to their recent acquisitions of Viptela and VeloCloud, respectively. Cisco also owns Meraki, which provides an SD-WAN solution as well as business Wi-Fi networks.
- According to survey data from Nemertes Research, enterprises are not discarding their MPLS networks as they deploy SD-WANs. “Fully 78% of organizations deploying SD-WAN have no plan to completely drop MPLS from their WAN,” Nemertes’ John Burke reports. “However, most intend to reduce and restrict their use of it (MPLS), if not immediately then over the next few years.”
- “Although it brings a lot of benefits to the table, SD-WAN still uses the public Internet to connect your sites,” points out Network World contributor Mike C. Smith. “And once your packets hit the public Internet, you will not be able to guarantee low levels of packet loss, latency and jitter: the killers of real-time applications.”
Key Findings of Gartner Survey:
- Enterprise clients cite increased network availability, reliability and reduced WAN costs resulting from less-expensive transport as the top benefits of software-defined WAN.
- Enterprise clients are concerned about the large number of SD-WAN vendors and anticipate market consolidation, making some early choices risky.
- A lack of familiarity with the technology, the instability of the vendors, and skepticism about performance and reliability are the most common concerns when deploying SD-WAN.
- Nearly two-thirds of the organizations we surveyed prefer buying managed SD-WAN, demonstrating a preference for presales and postsales support. A preference for type of managed service provider does not align with legacy carrier MSP adoption rates.
To maximize new SD-WAN opportunities, infrastructure and operations leaders planning new networking architectures should:
- Include SD-WAN solutions on their shortlists if they’re aggressively migrating apps to the public cloud, building hybrid WANs, refreshing branch WAN equipment and/or renegotiating a managed network service contract.
- Include a diverse range of management solutions related to SD-WAN considerations; don’t just look at carrier offers to determine the best option available to meet enterprise requirements.
- Compare each vendor’s current features and roadmaps with enterprise requirements to develop a shortlist, and use pilots and customer references to confirm providers’ ability to deliver on the most desirable features and functionality.
- Focus pilots on specific, critical success factors and negotiate contract terms and conditions to support service configuration changes, fast site roll-out and granular application reporting.
- Negotiate flexible WAN or managed WAN services contract clauses to support evolution to SD-WAN when appropriate.
Gartner has forecast SD-WAN to grow at a 59% compound annual growth rate (CAGR) through 2021 to become a $1.3 billion market (see Figure 1 and “Forecast: SD-WAN and Its Impact on Traditional Router and MPLS Services Revenue, Worldwide, 2016-2020”). Simultaneously, the overall branch office router market is forecast to decline at a −6.3% CAGR and the legacy router segment will suffer a −28.1% CAGR through 2020.
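The CAGR arithmetic in the forecast above can be checked with a couple of lines of Python. Note that the base-year starting value is not stated in this excerpt, so the implied base below is illustrative back-of-the-envelope arithmetic, not a Gartner figure:

```python
def compound(base, cagr, years):
    """Value of a market growing at a constant annual rate (CAGR)."""
    return base * (1 + cagr) ** years

# Working backward: a ~$1.3B market in 2021 at a 59% CAGR implies a
# base of roughly 1.3e9 / 1.59**5, i.e. about $128M five years earlier.
implied_base = 1.3e9 / (1 + 0.59) ** 5
print(f"${implied_base / 1e6:.0f}M")  # ≈ $128M
```

In other words, a 59% CAGR compounds to roughly a 10x increase over five years, which is what turns a small emerging market into a $1.3 billion one.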
SD-WAN equipment and services dramatically simplify the complexity associated with the management and configuration of WANs. They provide branch-office connectivity in a simplified and cost-effective manner, compared with traditional routers. These solutions enable traffic to be distributed across multiple WAN connections in an efficient and dynamic fashion, based on performance and/or application-based policies.
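The performance- and application-based path selection just described can be sketched in a few lines of Python. Everything here is hypothetical (the link names, health metrics, thresholds and preference orders are invented for illustration); real SD-WAN products implement this logic in proprietary ways:

```python
# Measured health of each available WAN transport (assumed values).
links = {
    "mpls":     {"loss_pct": 0.1, "latency_ms": 20},
    "internet": {"loss_pct": 1.5, "latency_ms": 45},
    "lte":      {"loss_pct": 3.0, "latency_ms": 80},
}

# Per-application policy: tolerances plus a preference order
# (e.g., cheapest transport first for bulk traffic).
policies = {
    "voip":   {"max_loss": 0.5, "max_latency": 30,
               "prefer": ["mpls", "internet", "lte"]},
    "backup": {"max_loss": 5.0, "max_latency": 500,
               "prefer": ["lte", "internet", "mpls"]},
}

def pick_path(app):
    """Return the first preferred link whose measured health
    satisfies the application's policy."""
    policy = policies[app]
    for name in policy["prefer"]:
        health = links[name]
        if (health["loss_pct"] <= policy["max_loss"]
                and health["latency_ms"] <= policy["max_latency"]):
            return name
    return "internet"  # fall back to best-effort transport

print(pick_path("voip"))    # latency-sensitive traffic lands on MPLS
print(pick_path("backup"))  # bulk traffic is steered to cheaper LTE
```

The point of the sketch: path selection is re-evaluated against live link measurements, so traffic shifts automatically as conditions change, which is exactly what a static router configuration cannot do.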
The survey data highlights that most of the respondent organizations are in the early stages of their SD-WAN projects. To qualify, respondents must be involved in choosing, implementing and/or managing network services and equipment for their company’s sites, while their primary role in the organization is IT-focused or IT-business-focused. We intentionally searched for companies that plan to use or are using SD-WAN. Of those surveyed, 93% plan to use SD-WAN within two years or are piloting and deploying now, with approximately 73% in pilot or deployment mode. These results do not reflect actual market adoption rates, because Gartner estimates that between 1% and 5% of enterprises have deployed SD-WAN. Although the results differ numerically, the qualitative feedback is compelling.
Responses broken down by the number of sites are shown in the figure below:
Respondents using SD-WAN; n = 21 (small sample size; results are indicative). Totals may not add up to 100%, due to rounding.
Source: Gartner (November 2017)
Enterprises cite their lack of deep technology familiarity as a key barrier to using SD-WAN. In fact, of those who plan for SD-WAN, nearly 50% have concerns about their lack of technical familiarity, followed by concerns over the stability of vendors and concerns about performance and reliability.
Editor’s Note: Surprisingly, enterprises don’t seem to be concerned about the lack of SD-WAN standards, which dictates single-vendor solutions and lock-in.
With more than 30 SD-WAN vendors in the market and consolidation accelerating, this doesn’t come as a surprise.
Other key findings include:
- Vendor stability is a major concern. Among the 51% of respondents who selected performance and reliability as key drivers (n = 44), nearly half (45%) had concerns about the stability of the vendors.
- Many among the 50% who see agility as a key driver (n = 36) expressed concern about their lack of familiarity with the technology.
- Among organizations with fewer than 1,000 employees (n = 53), the most common concern is lack of familiarity with the technology (51%). Organizations with 1,000 to 9,999 employees (n = 38) find the ROI of the investment to be most common challenge (50%).
- Among the EMEA respondents (n = 48), half were most concerned about the stability of the vendors, followed closely by concerns about proven performance and reliability.
To purchase the complete Gartner SD-WAN report go to:
Timon Sloane of the Open Networking Foundation (ONF) provided an update on project CORD on November 1st at the Telecom Council’s Carrier Connections (TC3) summit in Mountain View, CA. The session was titled:
Spotlight on CORD: Transforming Operator Networks and Business Models
After the presentation, Sandhya Narayan of Verizon and Tom Tofigh of AT&T came up to the stage to answer a few audience member questions (there was no real panel session).
The basic premise of CORD is to re-architect a telco/MSO central office to have the same or a similar architecture as a cloud-resident data center. Not only the central office, but also remote networking equipment in the field (like an Optical Line Termination unit, or OLT) is decomposed and disaggregated such that all but the most primitive functions are executed by open source software running on a compute server. The only hardware is the physical-layer transmission system, which could be optical fiber, copper, or cellular/mobile.
Author’s Note: Mr. Sloane didn’t mention that ONF became involved in project CORD when it merged with ON.Lab earlier this year. At that time, the ONOS and CORD open source projects became ONF priorities. The Linux Foundation still lists CORD as one of its open source projects, but it appears the heavy lifting is being done by the new ONF, as per this press release.
A reference implementation of CORD combines commodity servers, white-box switches, and disaggregated access technologies with open source software to provide an extensible service delivery platform. This gives network operators (telcos and MSOs) the means to configure, control, and extend CORD to meet their operational and business objectives. The reference implementation is sufficiently complete to support field trials.
Illustration above is from the OpenCord website
Highlights of Timon Sloane’s CORD Presentation at TC3:
- ONF has transformed over the last year to be a network operator led consortium.
- SDN, OpenFlow, ONOS, and CORD are all important ONF projects.
- “70% of world wide network operators are planning to deploy CORD,” according to IHS-Markit senior analyst Michael Howard (who was in the audience- see his question to Verizon below).
- 80% of carrier spending is in the network edge (which includes line-terminating equipment and the central offices accessed).
- The central office (CO) is the most important network infrastructure for service providers (AKA telcos, carriers and network operators, MSO or cablecos, etc).
- The CO is the service provider’s gateway to customers.
- End-to-end user experience is controlled by the ingress and egress COs (local and remote) accessed.
- Transforming the outdated CO is a great opportunity for service providers. The challenge is to turn the CO into a cloud-like data center.
- CORD’s mission is to enable the “edge cloud.” Note that this differs from the mission statement on the OpenCord website, which states:
“Our mission is to bring datacenter economies and cloud agility to service providers for their residential, enterprise, and mobile customers using an open reference implementation of CORD with an active participation of the community. The reference implementation of CORD will be built from commodity servers, white-box switches, disaggregated access technologies (e.g., vOLT, vBBU, vDOCSIS), and open source software (e.g., OpenStack, ONOS, XOS).”
- A CORD-like CO infrastructure is built using commodity hardware, open source software, and white boxes (e.g., switch/routers and compute servers).
- The agility of a cloud service provider depends on software platforms that enable rapid creation of new services in a “cloud-like” way. Network service providers need to adopt this same model.
- White boxes provide subscriber connections with control functions virtualized in cloud resident compute servers.
- A PON Optical Line Termination Unit (OLT) was the first candidate chosen for CORD. It’s at the “leaf of the cloud,” according to Timon.
- Three markets for CORD are Mobile (M-), Enterprise (E-), and Residential (R-). There is also the multi-service edge, which is a new concept.
- CORD is projected to be a $300B market (source not stated).
- CORD provides opportunities for: application vendors (VNFs, network services, edge services, mobile edge computing, etc), white box suppliers (compute servers, switches, and storage), systems integrators (educate, design, deploy, support customers, etc).
- CORD Build Event was held November 7-9, 2017 in San Jose, CA. It explored CORD’s mission, market traction, use cases, and technical overview as per this schedule.
Service Providers active in CORD project:
- AT&T: R-Cord (PON and G.fast), multi-service edge CORD, vOLTHA (Virtual OLT Hardware Abstraction)
- Verizon: M-Cord
- Sprint: M-Cord
- Comcast: R-Cord
- CenturyLink: R-Cord
- Google: Multi-access CORD
Author’s Note: NTT (Japan) and Telefonica (Spain) have deployed CORD and presented their use cases at the CORD Build event. Deutsche Telekom, China Unicom, and Turk Telekom are active in the ONF and may also have plans to deploy CORD.
- This author questioned the partitioning of CORD tasks and responsibilities between the ONF and the Linux Foundation. No clear answer was given; perhaps one will come in a follow-up comment.
- AT&T is bringing use cases into ONF for reference platform deployments.
- CORD is a reference architecture with systems integrators needed to put the pieces together (commodity hardware, white boxes, open source software modules).
- Michael Howard asked Verizon to provide commercial deployment status (number, location, use cases, etc.). Verizon said it can’t talk about commercial deployments at this time.
- Biggest challenge for CORD: disaggregating the purpose-built, vendor-specific hardware that exists in COs today. Many COs are router/switch-centric, but they have to be opened up if CORD is to gain market traction.
- Future tasks for project CORD include: virtualized Radio Access Network (RAN), open radio (perhaps “new radio” from 3GPP release 15?), systems integration, and inclusion of micro-services (which were discussed at the very next TC3 session).
Addendum from Marc Cohn, formerly with the Linux Foundation: Here’s an attempt to clarify the CORD project responsibilities:
- CORD is an open reference architecture. In that sense, CORD is similar to the ETSI NFV Architectural Framework, ONF SDN Architecture, and MEF LifeCycle Services Orchestration (LSO) reference architectures.
- As a reference architecture, it is not an implementation. It is maintained by the Open Networking Foundation (ONF), which merged with ON.Lab toward the end of 2016.
- OpenCORD is a Linux Foundation project announced in the summer of 2016. It is focused on an open source implementation of the CORD architecture. OpenCORD was derived from work undertaken by ON.Lab prior to the merger with ONF in 2016.
- For technical details, visit the OpenCORD Wiki
- Part of the confusion is that if one visits the Linux Foundation projects page, CORD is listed, but the link is to the OpenCord website.
2017 SPIFFY Awards:
Seven pioneering start-up companies were recognized by the Service Provider Innovation Forum (SPIF) at the 10th Annual SPIFFY Awards held Wednesday evening November 1st at TC3 Summit.
Since 2001, the Telecom Council has worked to identify and recognize companies who represent a broad range of cutting-edge telecom products and services. From there, dozens of young companies are presented each month to the Service Provider Innovation Forum (SPIF), ComTech Forum, IoT Forum, and Investor Forum.
SPIF members, who represent cutting-edge telcos from over 50 countries and who serve over 3B subscribers, selected seven companies from hundreds of presenting communication startup companies and 30 SPIFFY nominees as best-in-class in their respective categories. Each winner, who is set apart for their dedication, technical vision, and interest from the global service provider community, is a company to watch in the telecommunication industry.
The winners below represent the best and brightest in their respective categories:
- The Graham Bell Award for Best Communication Solutions – Sightcall : a cloud API that enables any business to add rich communications (e.g. video), accessible with a single touch, in the context of their application.
- Edison Award for Most Innovative Startup – DataRPM: cognitive preventive maintenance platform.
- San Andreas Award for Most Disruptive Technology – Veniam: networking solution for future autonomous vehicles; mobile WiFi done right.
- Core Award for Best Fixed Telecom Opportunity – Datera: storage and data management for service providers, private cloud, digital business via “Datera elastic data fabric software.”
- Zephyr Award for Best Mobile Opportunity – AtheerAir: augmented reality solutions for industrial enterprises.
- Ground Breaker Award for Engineering Excellence – Cinova: virtual reality streaming at practical bit rates using Cinova’s cloud server technology.
- Prodigy Award for the Most Successful SPIF Alumni – Plex: streaming media server and apps to stream video, audio and photo collections on any device.
This year’s entrepreneurs had a chance to vote on the operators as well, to give a shout-out to those telcos who were supportive, approachable, and helpful to young and growing telecom companies. The entrepreneurs chose Verizon.
- Fred & Ginger Award for the Most Supportive Carrier – Verizon.
The SPIFFY nominees attended the awards ceremony along with 50 global fixed and wireless communications companies and over 300 industry professionals. Photos of the event can be found on Telecom Council’s blog and Instagram pages. Note that none of this year’s SPIFFY award winners, with the possible exception of Veniam, actually provide a connectivity (PHY, MAC/Data Link layer) solution.
Author’s Notes on three impressive start-ups that presented at TC3 on November 1st (only day I attended 2017 TC3):
1. In a session titled “Closing the Rural Broadband Gap,” Skyler Ditchfield, CEO of GeoLinks, provided an overview of his company’s success in providing high-speed broadband to schools and libraries using fixed wireless technologies, specifically microwave radio operating in several frequency bands. The company’s flagship service is ClearFiber™, which offers customers fixed wireless broadband service on what GeoLinks calls the most resilient and scalable network. Skyler described the advantages of their 100% in-house approach to engineering, design, land procurement, construction and data connectivity. GeoLinks’ approach offers gigabit-plus speeds at a fraction of the cost of fiber, with lower latency and rapid deployment across the country.
A broadband fixed wireless installation on Santa Catalina island was particularly impressive. Speeds on the island (which GeoLinks says is 41 miles offshore) are typically 300 Mbps, and the ultra-fast broadband connection provides support for essential communications services, tourism services, and commerce. GeoLinks successfully deployed Mimosa Network´s fiber-fast broadband solutions to bring high-speed Internet access to the island community for the first time in its history. Connecting the island to the mainland at high speeds was very challenging. GeoLinks ultimately selected Mimosa for the last mile of the installation, deploying Mimosa A5 access and C5 client devices throughout the harbor town of Avalon.
Another ClearFiber™ successful deployment was at Robbins Elementary school in California. It involved 19 miles of fixed broadband wireless transport to provide the school with broadband Internet access.
Skyler said that next year GeoLinks plans to deliver fixed wireless transport at 10 Gb/sec over 6 to 8 miles in the 5 GHz unlicensed band, either point-to-point or point-to-multipoint. The company is also considering the 6 GHz, 11 GHz, 18 GHz and 20 GHz FCC-licensed bands. He said it would be important for GeoLinks to obtain licensed spectrum for point-to-multipoint transmission.
More on the GeoLinks value proposition here and here. A recent blog post profiles Skyler Ditchfield, who told the TC3 audience he grew up fascinated by communications technologies. This author was very impressed with Skyler and GeoLinks!
2. In a panel on “Startup Success Stories,” Nitin Motgi, founder and CEO of Cask (a “big data” software company) talked about how long it took to seal a deal with telcos. It’s longer than you might think! In one case, Nitin said it was 18 months from the time an unnamed telco agreed to purchase Cask’s solution (based on a proof of concept demo) till the contract was actually signed and sealed. Nitin referred to the process of selling to telcos as “whale hunting.” However, he said that if you succeed it’s worth it because of the telco’s scale of business.
3. TrackNet co-founder and CEO Hardy Schmidbauer presented a five-minute “fast pitch” to the Telecom Council Service Provider Forum. He talked about his company’s highly scalable LPWAN/IoT network solutions: “TrackNet provides LoRaWAN IoT solutions for consumers and industry, focusing on ease of use and scalability to enable a ‘new era’ of exponentially growing LPWAN deployments.” The company is a contributing member of the LoRa Alliance, and the TrackNet team has been instrumental in specifying, building, and establishing LoRaWAN and the LoRa Alliance for more than five years. The founding TrackNet team includes veterans from IBM and Semtech who were instrumental in the development of LoRa and LoRaWAN.
With “Tabs,” TrackNet combines a WiFi-connected IoT home and tracker system with LoRaWAN network coverage built from indoor Tabs hubs.
About the Telecom Council: The Telecom Council of Silicon Valley connects the companies who are building communication networks, with the people and ideas that are creating them – by putting those companies, research, ideas, capital and human expertise from across the globe together in the same room. Last year, The Telecom Council connected over 2,000 executives from 750 telecom companies and 60 fixed and wireless carriers across 40 meeting topics. By joining, speaking, sponsoring, or simply participating in a meeting, there are many ways telecom companies of any size can leverage the Telecom Council network. For more information visit: https://www.telecomcouncil.com.
A follow up TC3 blog post will provide an update on project CORD (Central Office Re-architected as a Data Center) from the perspective of the Open Network Foundation (ONF) with panelists from AT&T and Verizon.
Editor’s Note: Why Single Vendor Solutions Dominate New Networking Technologies
There are no accredited standards for exposed interfaces or APIs* in SD-WANs, NFV “virtual appliances” and Virtual Network Functions (VNFs), or access to the various cloud networking platforms (each cloud service provider has its own connectivity options and APIs). These so-called “open networking” technologies are in reality closed, single-vendor solutions. How could it be otherwise when there are no standards for multi-vendor interoperability within a given network?
In other words, “open” is the new paradigm for “closed,” with vendor lock-in a given.
* The exception is the OpenFlow API between the control and data planes, from the ONF.
Yet Gartner argues in a new white paper (available free to clients, or to non-clients for $195) that IT end users should always adopt multi-vendor network architectures. This author strongly agrees, but that’s not the trend in today’s networking industry, especially for the red-hot SD-WAN segment, where over two dozen vendors are proposing their own unique solutions in the absence of standards for interoperability, or really anything else, within a single SD-WAN.
Yes, we know the Metro Ethernet Forum (MEF) has started working on SD-WAN policy and security orchestration across multiple providers’ SD-WAN implementations. It has also written a white paper, “Understanding SD-WAN Managed Services,” which defines SD-WAN’s fundamental characteristics and service components. However, neither MEF nor any other forum or standards body we know of is specifying functionality or interfaces for interoperability within a single SD-WAN.
Here are a few excerpts from the Gartner white paper:
“IT leaders should never rely on a single vendor for the architecture and products of their network, as it can lead to vendor lock-in, higher acquisition costs and technical constraints that limit agility. They should segment their network into logical blocks and evaluate multiple vendors for each.”
Vendors tend to promote end-to-end network architectures that lock clients with their solutions because they are focused on their business goals, rather than enterprise requirements.
Enterprises that make strategic network investments by embracing vendors’ architectures without first mapping their requirements often end up with solutions that are overhyped, over-engineered and more expensive.
Enterprises that do not create and actively maintain a competitive environment can overpay by as much as 50% for the same equipment from the same vendor. Savings can be even greater when comparing to other vendors with a functionally equivalent solution.
IT and Operations leaders focused on network planning should:
- Divide the network into foundational building blocks, defining how they interwork with each other, to enable multiple vendor options for each block.
- Remove proprietary components from the network, replacing them with industry standard elements as they are available, to facilitate new vendors to make competitive proposals.
- Get a technical solution that meets needs at the lowest market purchase price by competitively bidding on each building block.
- Ensure that operations can deal with multiple vendors by planning for network management solutions and processes that can cope with a multivendor environment.
Network technologies have matured in the last 20 years and are a routine component of every IT infrastructure. No vendor can claim a unique “core competency” nor “best-of-breed” capabilities in every area of the network, so there is no reason to treat the network as a monolithic infrastructure entrusted to a single supplier. However, we regularly speak to clients that still give credit to the myth of the single-vendor network. They believe that having only one networking vendor provides the following advantages:
- There is no need to spend time designing a solution, as you simply get what leading vendors recommend.
- Products from the same vendor are designed to work seamlessly together, with limited or no integration challenges.
- The procurement process is simplified with only one vendor, and there’s no need to deal with time-consuming, vendor-neutral RFPs.
- A higher volume of purchases with one vendor would result in a better discount.
- You only have a single vendor to hold accountable in case you run into problems, and one that will respond quickly given the loyalty and volume of purchases.
However, these perceived advantages are largely a myth, as much as open networking and complete vendor freedom is a myth. The harsh reality that we frequently hear from clients that followed this single-vendor strategy includes:
- Holistic designs recommended by vendors are not necessarily the best. They are often over-engineered, include products that are not aligned with enterprise needs and are ultimately more expensive to buy and maintain.
- Diverse product lines from the same vendor share the brand, but they are rarely designed to work together from the start, since they often come from independent BUs or acquisitions, making them difficult to integrate.
- A higher volume of purchases does not automatically translate into better discounts. For most vendors, the best discounts are reserved for competitive situations, which generally yield savings of 15% to 50% compared with the best-negotiated sole-source deals.
- Having to deal with just one vendor for technical issues is simpler, but does not necessarily translate into shorter time to repair or better overall network availability, which is the real goal.
Clients that pursue a multivendor strategy report that time spent on RFPs and evaluation of different vendors is not a waste, because it increases teams’ skills, motivates them to stay abreast of market innovations, prevents suboptimal decisions and pays off — technically and financially.
Divide the Network Into Foundational Building Blocks to Enable Multiple Vendor Options for Each Block
Network planners and architects must break the network infrastructure into smaller, manageable blocks to plan, design and deploy a “fit-for-purpose” infrastructure that addresses the defined usage scenarios and control costs (Figure 1 shows typical building blocks).
*Security is not addressed in this document. Note: There is no hierarchy associated with block positioning in this picture.
Source: Gartner (October 2017)
The key objectives of this activity are to:
- Identify network blocks that have logical and well-defined boundaries.
- Document and standardize as much as possible the interfaces between the various building blocks, to allow choice and enable use of multiple vendors.
This building block approach is useful because not all network segments have the same properties. In some segments little differentiation exists among suppliers, and there is a high degree of substitution within a building block, so enterprises can seek operational and cost advantages. For example, wired LAN switching solutions for branch offices are largely commoditized, and the difference between vendors is hard to discern in the most common use cases.
In other cases, such as in the data center networking market, there is more differentiation among vendors, and the segmentation approach ensures that enterprise architectural decisions align with IT infrastructure strategies and business requirements.
There are no hard-and-fast industry rules on where the boundaries between blocks must be drawn. Each enterprise has to split network infrastructure in a way that makes sense for them. The most common approach is segmentation around functional areas, such as data center leaf and spine switches, WAN edge, WAN connectivity, LAN core and LAN access. Each segment could further be split. For example, LAN access includes wired and wireless, while WAN edge might include WAN optimization and network security services. Another complementary segmentation boundary can be the geographical place, as a large organization with subsidiaries in multiple locations could select different vendors on a regional or country basis for some blocks. Disaggregation is creating another possible segmentation, since hardware and software can be awarded to different vendors for some solutions like white-box Ethernet switching.
Defining building blocks also protects organizations from the “vendor creep” trap. As vendors acquire small companies and startups in adjacent markets, they often encourage enterprises to add these new products or capabilities to the “standardized” solution. If the enterprise defines its foundational requirements, it can easily determine whether the new functionality truly solves a business need, and whether any additional cost is warranted.
Remove Proprietary Components From the Network to Facilitate New Vendors to Make Competitive Proposals
Employment of proprietary protocols and features inside the network limits the ability to segment the network into discrete blocks and makes this activity more difficult.
Within building blocks, it is acceptable to use proprietary technologies, as long as enterprises compare vendors against their business requirements (to avoid over-engineering) and the solution provides a real and indispensable functional advantage. It is important to express the business functionality as a requirement and not to tie requirements to specific proprietary technologies.
Between building blocks, it is critical to avoid proprietary features and use standards, since proprietary protocols favor using certain vendors and disfavor others, leading to loss of purchasing power. Sometimes it’s necessary to employ a proprietary protocol, for example:
- To obtain functionality that uniquely meets a critical business need. If so, then it’s critical that these protocols be reviewed regularly and are not automatically propagated into new buying criteria over the long term.
- In the early stages of market development, before standards have caught up to innovation. However, once standards exist, or the technology has started to move down the commodity curve, it is imperative that network architects and planners migrate to standards-based solutions (as long as business requirements aren’t compromised). Examples of industry standards that replace previous proprietary solutions are Power over Ethernet Plus (PoE+) and Virtual Router Redundancy Protocol (VRRP) (see Note 1).
In these cases it is essential to document and justify the exception, so that it can be periodically reviewed. Proprietary technologies should always be avoided in the interface between the network and other components of IT infrastructure (for example, proprietary trunking to connect servers to the data center network).
Get a Technical Solution That Meets Needs at the Lowest Market Purchase Price by Competitively Bidding on Each Building Block
Dividing the network provides a clear definition of what is really needed within each building block, which in turn enables a fit-for-purpose approach and a competitive bidding process.
The goal is not to bid on the best technical solution for each block, but on one that is good enough to meet requirements.
This enables real competition across vendors and provides maximum price leverage, since all value-adds to the common denominator can be evaluated separately and matched with the cost difference.
By introducing competition in this thoughtful manner, Gartner has seen clients typically achieve sustained savings of between 10% and 30%, with price differences of as much as 300% on specific components like optical transceivers.
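The per-block bidding approach described above can be sketched in code. The following is an illustrative model (not from the Gartner paper, and all vendor names, prices, and feature labels are made up): each building block carries a set of required capabilities, and the award goes to the cheapest bid whose feature set covers those requirements, rather than to the technically richest bid.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    vendor: str
    price: float
    features: set

def award(block_requirements, bids):
    """Return the cheapest bid that covers all of the block's requirements,
    or None if no bid qualifies. 'Good enough' wins, not 'best'."""
    qualifying = [b for b in bids if block_requirements <= b.features]
    return min(qualifying, key=lambda b: b.price, default=None)

# Hypothetical bids for a branch LAN access block.
bids = [
    Bid("VendorA", 120_000, {"PoE+", "VRRP", "mgmt-API"}),  # over-engineered
    Bid("VendorB", 95_000, {"PoE+", "VRRP"}),               # fit for purpose
    Bid("VendorC", 80_000, {"PoE+"}),                       # misses a requirement
]

winner = award({"PoE+", "VRRP"}, bids)
print(winner.vendor, winner.price)  # VendorB 95000
```

Note that VendorA's extra management API does not win it the block; under this model any value-add beyond the common denominator would be evaluated separately against its cost difference, as the text suggests.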
Discern the Relationships Between Networking Vendors and Network Management Vendors
You may also find that networking vendors have some level of leverage with certain other vendors specialized in network management. Therefore, it is valuable to understand the arrangement of any partner agreement and whether this can be leveraged to your organization’s benefit.
Editor’s Closing Comment:
The advice provided above by Gartner Group seems very reasonable and mitigates risk of using only a single vendor for a network or sub-network. If so, how can any network operator or enterprise networking customer justify the single vendor SD-WAN solutions that are proliferating today?
Readers are invited to comment in the box below the article (can be anonymous) or contact the author directly (email@example.com).
Two years ago, we reported that “Verizon has completed a field trial of NG-PON2 fiber-to-the-premises technology that could provide the infrastructure for download speeds up to 10 Gbps for residential and business customers.”
This past January, Verizon completed its first interoperability trial of NG-PON2 technology at its Verizon Labs location in Waltham, MA. During the trial, Verizon demonstrated that equipment from different vendors on each end of a single fiber—one at the service provider’s endpoint and the other at the customer premises—can deliver service without any end-user impact.
In an October 16th press release in advance of the Broadband Forum’s Access Summit, Verizon said NG-PON2 represents a paradigm shift in the access space and a more certain path towards long-term success.
“Technologies such as NG-PON2 present exciting new opportunities for vendors, such as delivering residential and business services on multiple wavelengths over the same fiber,” said Vincent O’Byrne, Director of Technology at Verizon.
“Not only does NG-PON2 parse business and residential customer traffic to isolate and resolve potential problems in the network, it can also scale to achieve speeds of 40 Gbps and above,” O’Byrne added.
At the Broadband Forum’s Access Summit, held during Broadband World Forum at the Messe Berlin on Tuesday, Oct. 24th, the Verizon executive will address how the fiber access space is constantly evolving, with emerging PON technology providing solutions to some of the issues around cost and reliability.
Verizon has been an active participant in driving awareness about how NG-PON2 can work in a real-world carrier environment. The company completed NG-PON2 interoperability with five vendors for its OpenOMCI (ONT Management and Control Interface) spec, bringing it one step closer toward achieving interoperable NG PON systems.
The mega telco plans to offer its own OpenOMCI specification, which defines the optical line terminal (OLT)-to-optical network terminal (ONT) interface, to the larger telecom industry.
Note 1. The OpenOMCI specification was developed and is owned by Verizon, rather than a formal standards/spec-writing body like the ITU-T or the Optical Internetworking Forum (OIF). Is this the new way of producing specs (like the “5G” specs used in trials)?
Bernd Hesse, Chair of the Broadband Access Summit and Senior Director Technology Development at Calix, said:
“We will be exploring NG-PON2 in depth and the use cases that underpin the decisions to deploy them. I look forward to the debate, hearing from the experts in the industry and welcoming the community to these new Forum events.”
NOTE: This article complements others we’ve recently posted on U.S. carriers’ move to broadband fixed wireless access for rural and under-served geographical areas.
In many rural communities, where available broadband speed and capacity barely surpass old-fashioned dial-up connections, residents sacrifice not only their online pastimes but also chances at a better living. Counties without modern internet connections can’t attract new firms, and their isolation discourages the enterprises they have: ranchers who want to buy and sell cattle in online auctions or farmers who could use the internet to monitor crops. Reliance on broadband extends to any business that uses high-speed data transmission, from banks to insurance firms to factories.
Rural counties with more households connected to broadband had higher incomes and lower unemployment than those with fewer, according to a 2015 study by university researchers in Oklahoma, Mississippi and Texas who compared rural counties before and after getting high-speed internet service.
“Having access to broadband is simply keeping up,” said Sharon Strover, a University of Texas professor who studies rural communication. “Not having it means sinking.”
Ensuring access to an open, thriving online ecosystem through modern and even-handed internet rules is critical for every American, but much more so for the 60 million rural Americans who rely on the internet to connect them to a rapidly evolving global economy. Studies show that as rural communities adopt and use broadband services, incomes go up and unemployment falls. Broadband providers support protections that ensure consumers and innovators alike don’t have to worry about blocked websites or throttled service. Rural areas need more investment, not less. And modern Open Internet rules will encourage this needed progress.
Full Story: ustelecom.org
Sidebar – Fast Internet Service:
About 39% of the U.S. rural population, or 23 million people, lack access to broadband internet service—defined as “fast” by the Federal Communications Commission—compared with 4% of urban residents.
Fast Internet service, according to the FCC, means a minimum download speed of 25 megabits per second, a measure of bandwidth known as Mbps. That speed can support email, web surfing, video streaming and graphics for more than one device at once. It is faster than old dial-up connections—typically, less than 1 Mbps—but slower than the 100 Mbps service common in cities.
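The gap between those speed tiers is easiest to see as transfer time. The sketch below compares the dial-up, FCC-minimum, and typical urban speeds cited above; the 1 GB file size is our own illustrative assumption, and real-world throughput would be lower due to protocol overhead.

```python
def download_seconds(file_gigabytes, mbps):
    """Idealized time to transfer a file at a given link speed,
    ignoring protocol overhead (1 GB = 8 x 10^9 bits)."""
    bits = file_gigabytes * 8 * 1000**3
    return bits / (mbps * 1_000_000)

# Speed tiers from the FCC definitions discussed above.
for label, mbps in [("dial-up", 1), ("FCC broadband floor", 25), ("typical urban", 100)]:
    print(f"{label:>20}: {download_seconds(1.0, mbps):7.0f} s for a 1 GB file")
```

At 1 Mbps a 1 GB file takes over two hours; at the FCC's 25 Mbps floor it takes just over five minutes, which is why that threshold supports simultaneous streaming and web use for multiple devices.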
A recent Forbes article titled “Don’t Forget Rural America…..” by Richard Boucher stated:
In announcing the “Restoring Internet Freedom” rulemaking, the FCC stated that “[o]ur actions today continue our critical work to promote broadband deployment to rural consumers and infrastructure investment throughout our nation, to brighten the future of innovation both within networks and at their edge, and to close the digital divide.” This past July, the Commission declared August to be “Rural Broadband Month” at the FCC.
Two years following the 2015 reclassification of broadband as a common carrier telecommunications service, it’s clear that broadband investment has declined in rural America. Representatives of internet service providers (ISPs) from states like Arkansas, Washington, Kentucky, and Nebraska have all offered evidence detailing how regulatory uncertainty arising from the “Title II” decision has retarded and, in many situations, stopped investment in their regions.
The Wireless Internet Service Providers Association (WISPA), which has a large rural membership, said that the switch to Title II has led to “vast uncertainty and significant negative economic impacts for WISPA members who have built their networks from scratch using their own at-risk capital without federal subsidies[.]”
The formula for bringing high-speed internet connectivity to everyone in rural America is multi-faceted. It requires a combination of wired and wireless deployments, and government, through the FCC’s Universal Service programs and loans and grants from the U.S. Departments of Commerce and Agriculture, has a role to play. But indispensable to success is the creation of a regulatory framework that incentivizes private capital to deploy broadband everywhere, including rural America. As long as the regulatory uncertainty of Title II remains, rural America to a large extent will be cut off from essential private broadband deployment funding and, as a result, fall even further behind.
The discussion, as well as a fair amount of heated rhetoric, are sure to continue over the next few weeks regarding the proper classification for broadband. Meanwhile, don’t forget rural America. The best way to ensure that all corners of the country get the connectivity they need is for the FCC to restore the classification of broadband as an information service. Thereafter, Congress should enact legislation that codifies open internet rules and at long last puts to rest a debate that has raged for more than a decade.
Another approach to delivering rural broadband is co-ops like this one:
AT&T is not the only U.S. carrier attempting to provide broadband fixed wireless access to rural areas. CenturyLink has requested an experimental license from the Federal Communications Commission for a test to reach isolated rural areas via a fixed wireless service over the 3.4 GHz to 3.7 GHz spectrum band.
The trial is aimed to evaluate the use of wireless spectrum to provide broadband services to those rural areas where it’s difficult to make wire-line infrastructure/facilities available.
“The testing seeks to understand the viability of new technologies in this band,” CenturyLink wrote in an FCC filing.
“CenturyLink seeks confidential treatment for the Exhibit on the basis that it contains confidential commercial information, technical data and trade secrets concerning CenturyLink services under development and related testing processes, all of which CenturyLink customarily guards from public disclosure,” CenturyLink said.
Besides the 3.4-3.7 GHz bands, CenturyLink is looking at how it might work with other network service providers rolling out future 5G wireless networks.
Glen Post, CEO of CenturyLink, told investors during the Goldman Sachs Communacopia Conference in September that it would be open to such partnerships to accelerate the speed at which it is rolling out service to rural areas under the CAF-II program.
“On the wireless side, we want to partner with 5G providers and other wireless providers where we can bring higher speeds to customers at less costs,” Post said. “If some of the proposed wireless build-outs occur in the CAF-II areas we cover, we think it will be a lower-cost opportunity to reach those customers and cover higher speeds for a lot more customers with that type of technology.”
CenturyLink joins several other rural-centric providers, like Frontier, Consolidated and Windstream, that see similar potential. As we’ve previously noted, AT&T’s rural wireless broadband recently added 9 more states.
Frontier confirmed it was conducting tests of how it can use fixed wireless to address the broadband availability problem in very rural areas via the FCC’s CAF-II funds.
Frontier joined Consolidated and Windstream in a joint FCC filing (PDF) related to a request to create flexible use of spectrum bands between 3.7 and 24 GHz.
Consolidated and Windstream also expressed interest in being able to use 3.7-4.2 GHz band spectrum for rural fixed point-to-multipoint deployments, such as through the rules proposed by the Broadband Access Coalition.
The service providers said that these spectrum bands would “provide another key tool in the toolbox to reach the hardest to serve rural Americans.”
AT&T has brought its fixed wireless broadband service to nine more states, bringing the total coverage to more than 160,000 rural locations in 18 states. The service, partly funded by the U.S. federal Connect America Fund (CAF) program, provides homes and businesses with download speeds of at least 10 Mbps with a minimum of 1 Mbps upstream. The service uses licensed WCS (Band 30) 2.3 GHz spectrum.
This fixed wireless service has broadband usage caps of 160 GB per month, with additional 50 GB increments of data charged at $10 per month. It’s priced at $60 per month when bundled with other AT&T services.
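The pricing scheme above is simple to compute. This sketch models the plan as described ($60/month bundled price, a 160 GB allowance, and overage billed at $10 per 50 GB increment); whether partial increments are rounded up to a full $10 charge is our assumption, not stated in the article.

```python
import math

def monthly_bill(usage_gb, base=60, included_gb=160,
                 increment_gb=50, increment_price=10):
    """Monthly charge for the fixed wireless plan: base price plus
    $10 for each 50 GB increment (or part thereof) over the 160 GB cap."""
    overage = max(0.0, usage_gb - included_gb)
    increments = math.ceil(overage / increment_gb)
    return base + increments * increment_price

print(monthly_bill(150))  # within the allowance -> 60
print(monthly_bill(200))  # 40 GB over, one increment -> 70
print(monthly_bill(260))  # 100 GB over, two increments -> 80
```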
The additional 9 states include:
They join Alabama, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina and Tennessee, where this AT&T rural broadband service is already available in certain markets. AT&T plans to reach 400,000 locations by the end of this year, and over 1.1 million locations by 2020. This AT&T rural broadband expansion is partially funded by the Connect America Fund (CAF), the FCC’s program to expand rural broadband access.
“Closing the connectivity gap is a top priority for us,” said Cheryl Choy, vice president, wired voice and internet products at AT&T in a press release announcing the expansion. “Access to fast and reliable internet is a game changer in today’s world.”
AT&T may gain some competition for this fixed wireless service, at least in Mississippi. C Spire just announced its intention to aggressively expand fixed wireless service in Mississippi this week, citing the advantage its 25 Mbps fixed wireless service has over certain CAF-funded 10 Mbps fixed wireless options, a specific reference to AT&T.
“For many rural families and communities, the introduction of this service from AT&T will mark a new era of increased broadband speeds and access to cheaper and more diverse content,” said Bret Swanson, president, Entropy Economics. “AT&T’s move into these new communities will also yield additional economic benefits and can help create new jobs.”
To learn more about Fixed Wireless Internet from AT&T, go to att.com/internet/fixed-wireless.html.
Research conducted by IHS Markit and Point Topic was published today by the European Commission (EC). The Broadband Coverage in Europe 2016 study found that at the end of June 2016, more than three-quarters of EU homes have access to high-speed broadband services and 4G LTE coverage was nearly ubiquitous with 96 percent of EU households covered by 4G LTE networks.
This is the fourth edition of the study delivered by IHS Markit and Point Topic to the EC which provides data and analysis on availability of broadband services by various technologies in 31 countries across Europe (EU-28, Iceland, Norway, and Switzerland).
The final report and accompanying data tables are available at the EC website.
- In the 12 months to the end of June 2016, 12.8 million new EU households gained access to high-speed broadband delivered via Next-Generation Access (NGA) networks
- By mid-2016, high-speed broadband services (at least 30 Mbps download speeds) were available to 75.9 percent of EU households
- Very-high-speed DSL (VDSL) continues to be the key driver of NGA coverage growth across the EU, increasing by 7.1 percentage points and reaching nearly half (48.2 percent) of EU homes
- 4G-LTE networks expanded at a fast pace and covered 96 percent of EU households by the end of June 2016
- The gap between rural and national NGA coverage is closing, but remains significant with only 39.2 percent rural households across the EU having access to high-speed broadband services
“Availability of 4G-LTE services has become near-universal in many study countries,” said Alzbeta Fellenbaum, principal analyst at IHS Markit and manager of the project. “In 11 countries, LTE coverage reached 99 percent of households and overall, LTE coverage now reaches similar levels to those of 3G HSPA networks. This is a major improvement compared to just four years ago, when 4G LTE services were available to only 59.1 percent of EU homes.”
Copper upgrades continue to be key for high-speed broadband growth in Europe
Broadband network operators across Europe continue to focus their deployment strategies on upgrading existing copper DSL networks instead of investing in the typically more expensive deployments of fibre optic networks all the way to customers’ property.
“Since 2013, VDSL has been the fastest growing fixed broadband technology tracked by the study, and some countries have seen dramatic year-on-year growth in VDSL,” Fellenbaum said. “For instance, VDSL coverage in Italy more than doubled during the twelve-month period to mid-2016, as coverage increased by 33.6 percentage points. Iceland, Germany, Hungary and Slovakia also witnessed double-digit growth in VDSL coverage during the twelve-month period to mid-2016.”
Portugal breaks Baltic leadership in super-fast FTTP broadband availability for the first time
Availability of fiber-to-the-premise (FTTP) services in Portugal improved by 10.7 percentage points during the twelve-month period to mid-2016. As a consequence of this growth, Portugal, with 86.1 percent of homes passed by FTTP networks, has now surpassed Latvia (85.2 percent) and Lithuania (81.4 percent) to rank first in FTTP coverage among all study countries.
However, big differences remain among European countries in FTTP availability: while FTTP access is on offer in all study countries, in some it is available only on a very limited basis.
As in previous years, Greece and Belgium reported the lowest levels of FTTP coverage, at 0.6 percent and 0.4 percent. In the UK, FTTP coverage was only slightly higher at 1.8 percent. “This reflects the preference of operators in these countries to prioritise their deployment strategies on upgrading existing VDSL networks, rather than investing in the typically more expensive FTTP technology,” Fellenbaum reiterated.
Gap in rural broadband coverage shrinking
Access to broadband services in rural areas remains a key priority for the EU. At the end of June 2016, 92.6 percent of rural households across the EU28 had access to at least one fixed broadband technology. However, only 39.2 percent (12.0 million rural households) could benefit from NGA broadband.
Nevertheless, rural NGA coverage increased by 9.5 percentage points by mid-2016; in total, 2.9 million additional rural households gained access to next-generation broadband services between the end of June 2015 and the end of June 2016.
“Moreover, we have seen that the gap between rural and national coverage, for both overall fixed and NGA technologies, is declining compared to previous editions of the study suggesting increasing investment in rural broadband,” Fellenbaum said.
For information about purchasing IHS Markit information, contact the sales department at IHS in the Americas at (844) 301-7334 or AmericasLeads@ihs.com; in Europe, Middle East and Africa (EMEA) at +44 1344 328 300 or firstname.lastname@example.org; or Asia-Pacific (APAC) at +604 291 3600 or technology_APAC@ihs.com.
The State of Broadband 2017: Broadband Catalyzing Sustainable Development report has been released by the UN Broadband Commission for Sustainable Development.
According to the report, while 48% of the global population is now online, some 3.9 billion people still do not have Internet access, with the digital gap growing between developed and developing countries.
In addition, only 76% of the world’s population lives within reach of a 3G signal, and only 43% within reach of a 4G connection. The disparities in gender access are also becoming wider in developing countries.
“Broadband is crucial to connecting people to the resources needed to improve their livelihoods, and to the world achieving the Sustainable Development Goals,” said ITU Secretary-General Houlin Zhao, who serves as co-Vice Chair of the Commission with UNESCO Director-General Irina Bokova.
“The goals for education, gender equality and infrastructure include bold targets for information and communication technology. The State of Broadband 2017 report outlines how broadband is already contributing to this and makes valuable recommendations for how it can increase this contribution into the future.”
Sheikh Saud Bin Nasser Al Thani, Group CEO, Ooredoo, said:
“The report shines a crucial light on the ongoing global challenge to help people across the world access the life-changing benefits of internet access. At Ooredoo, we continue to invest in mobile technology, people and resources that enable our communities – in particular underserved women and youth – to enjoy the internet and use it as a means to improve their lives and achieve their full potential. As we deploy the power of digital technology to give people access to the services and support they need, we urge governments, operators and regulators to continue working closely together to address the deepening digital inequality in global connectivity.”
Issued annually, The State of Broadband report is a unique global snapshot of broadband network access and affordability, with country-by-country data measuring broadband access against key advocacy targets set by the Commission in 2011.
The report also examines global trends in broadband connectivity and technologies, reflects on policy and regulatory developments, as well as the applications of broadband for sustainable development. It also presents several policy recommendations.
Promoting investment in broadband connectivity from a broad range of sectors, the report notes, can help achieve the full potential of these technologies and bring the world closer to the goal of an inclusive digital society accessible by all.