On Thursday the FCC gave formal approval to SpaceX’s plan to build a global broadband network using low-Earth orbit satellites. The order approving SpaceX’s application came with conditions, including requirements to avoid collisions with orbital debris and limits on signal power levels to prevent interference with other communications systems in various frequency bands.
SpaceX intends to start launching operational satellites as early as 2019, with the goal of reaching the full capacity of 4,425 satellites in 2024. The FCC approval requires only that SpaceX launch 50 percent of the satellites by March 2024 and all of them by March 2027. SpaceX has been granted authority to use frequencies in the Ka (20/30 GHz) and Ku (11/14 GHz) bands.
“This is the first approval of a U.S.-licensed satellite constellation to provide broadband services using a new generation of low-Earth orbit satellite technologies,” the Federal Communications Commission said in a statement.
The Federal Aviation Administration said on Wednesday that SpaceX plans to launch a Falcon 9 rocket on April 2 at Cape Canaveral, Florida. “The rocket will carry a communications satellite,” the FAA said.
FCC Chairman Ajit Pai in February had endorsed the SpaceX effort, saying: “Satellite technology can help reach Americans who live in rural or hard-to-serve places where fiber optic cables and cell towers do not reach.”
About 14 million rural Americans and 1.2 million Americans on tribal lands lack mobile broadband even at relatively slow speeds.
FCC Commissioner Jessica Rosenworcel, a Democrat, said on Thursday that the agency needs “to prepare for the proliferation of satellites in our higher altitudes.” She highlighted the issue of orbital debris and said the FCC “must coordinate more closely with other federal actors to figure out what our national policies are for this jumble of new space activity.”
SpaceX’s network (known as “Starlink”) will need separate approval from the International Telecommunication Union (ITU). The FCC said its approval is conditioned on “SpaceX receiving a favorable or ‘qualified favorable’ rating of its EPFD [equivalent power flux-density limits] demonstration by the ITU prior to initiation of service.” SpaceX will also have to follow other ITU rules.
Like other operators, SpaceX will have to comply with FCC spectrum-sharing requirements. Outside the US, SpaceX’s operations and other companies’ systems “are governed only by the ITU Radio Regulations as well as the regulations of the country where the earth station is located,” the FCC said.
SpaceX and several other companies are planning satellite broadband networks with much higher speeds and much lower latencies than existing satellite Internet services. SpaceX satellites are planned to orbit at altitudes of 1,110km to 1,325km, whereas the existing HughesNet satellite network has an altitude of about 35,400km.
SpaceX has said it will offer speeds of up to a gigabit per second, with latencies between 25ms and 35ms. Those latencies would make SpaceX’s service comparable to cable and fiber, while existing satellite broadband services have latencies of 600ms or more, according to FCC measurements.
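The latency gap follows directly from the altitudes quoted above: radio signals travel at the speed of light, so the best-case round trip (user to satellite to gateway and back) scales with orbital height. A rough sketch of the propagation floor, ignoring processing, queuing, and slant-path geometry:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def min_rtt_ms(altitude_km):
    """Propagation-only round-trip time in milliseconds for
    user -> satellite -> gateway -> satellite -> user, assuming the
    satellite is directly overhead (best case)."""
    return 4 * altitude_km * 1_000 / C * 1_000

print(round(min_rtt_ms(1_110), 1))   # ~14.8 ms at SpaceX's lowest planned altitude
print(round(min_rtt_ms(35_400), 1))  # ~472.3 ms at geostationary altitude
```

With real-world processing and routing overhead added, these floors are consistent with the 25–35 ms SpaceX projects and the 600 ms-plus the FCC has measured for geostationary services.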
“SpaceX states that once fully deployed, the SpaceX system… will provide full-time coverage to virtually the entire planet,” the FCC order said.
The FCC previously approved requests from OneWeb, Space Norway, and Telesat to offer broadband in the US from low-Earth orbit satellites. SpaceX is the first US-based operator to get FCC approval for such a system, the FCC said in an announcement.
“These approvals are the first of their kind for a new generation of large, non-geostationary satellite orbit [NGSO], fixed-satellite service [FSS] systems, and the Commission continues to process other, similar requests,” the FCC said.
SpaceX launched the first demonstration satellites for its broadband project last month. In addition to the 4,425 satellites approved by the FCC, SpaceX has also proposed an additional 7,500 satellites operating even closer to the ground, saying that this will boost capacity and reduce latency in heavily populated areas. It’s not clear when those satellites will launch.
FCC approval of SpaceX’s application was unanimous. But the commission still has work to do in preventing all the new satellites from crashing into each other, FCC Commissioner Jessica Rosenworcel said.
“The FCC has to tackle the growing challenge posed by orbital debris. Today, the risk of debris-generating collisions is reasonably low,” Rosenworcel said. “But they’ve already happened—and as more actors participate in the space industry and as more satellites of smaller size that are harder to track are launched, the frequency of these accidents is bound to increase. Unchecked, growing debris in orbit could make some regions of space unusable for decades to come. That is why we need to develop a comprehensive policy to mitigate collision risks and ensure space sustainability.”
FCC rules on satellite operations were originally “designed for a time when going to space was astronomically expensive and limited to the prowess of our political superpowers,” Rosenworcel said. “No one imagined commercial tourism taking hold, no one believed crowd-funded satellites were possible, and no one could have conceived of the sheer popularity of space entrepreneurship.”
SpaceX still needs to provide an updated debris prevention plan as part of a condition the FCC imposed on its approval.
The commission order said:
Although we appreciate the level of detail and analysis that SpaceX has provided for its orbital debris mitigation and end-of-life disposal plans, we agree with NASA that the unprecedented number of satellites proposed by SpaceX and the other NGSO FSS systems in this processing round will necessitate a further assessment of the appropriate reliability standards of these spacecraft, as well as the reliability of these systems’ methods for de-orbiting the spacecraft. Pending further study, it would be premature to grant SpaceX’s application based on its current orbital debris mitigation plan. Accordingly, we believe it is appropriate to condition grant of SpaceX’s application on the Commission’s approval of an updated description of the orbital debris mitigation plans for its system.
The approval of SpaceX’s application is conditioned on the outcome of future FCC rulemaking proceedings, so SpaceX would have to follow any new orbital debris rules passed by the FCC. We detailed the potential space debris problem in a previous article. Today, there are more than 1,700 operational satellites orbiting the Earth, among more than 4,600 overall, including those that are no longer operating.
SpaceX’s plan alone would nearly double the total number of orbiting satellites. SpaceX told the FCC that it has plans “for the orderly de-orbit of satellites nearing the end of their useful lives (roughly five to seven years) at a rate far faster than is required under international standards.”
Opposition from competitors
SpaceX’s application drew opposition from other satellite operators, who raised concerns about interference with other systems and debris. The FCC dismissed some of the complaints. For example, OneWeb wanted an unreasonably large buffer zone between its own satellites and SpaceX’s, the FCC said:
[T]he scope of OneWeb’s request is unclear and could be interpreted to request a buffer zone that spans altitudes between 1,015 and 1,385 kilometers. Imposition of such a zone could effectively preclude the proposed operation of SpaceX’s system, and OneWeb has not provided legal or technical justification for a buffer zone of this size. While we are concerned about the risk of collisions between the space stations of NGSO systems operating at similar orbital altitudes, we think that these concerns are best addressed in the first instance through inter-operator coordination.
If operators fail to agree on a coordination plan in the future, “the Commission may intervene as appropriate,” the FCC said.
The Federal Communications Commission (FCC) has received a petition from Windstream, CenturyLink, Frontier and Consolidated urging the agency to continue to support census tract license sizes in rural areas. The providers said this would help them deploy 3.5 GHz broadband wireless access technology under the agency’s Connect America Fund (CAF) II.
In a joint FCC filing, these telecom service providers, which are leveraging CAF-II funding to provide 10/1 Mbps rural broadband services, said they are testing and deploying 3.5 GHz-compatible broadband wireless technology in areas where deploying fiber and related facilities is cost prohibitive. By offering neutral access to the 3.5 GHz band in rural areas, these providers say they would be able to accelerate rural broadband rollouts. A key issue for these providers is that Partial Economic Areas (PEAs) aren’t aligned with how rural CAF areas are structured. A recent WISPA study illustrated that rural CAF areas tend to cluster around the edges of PEAs.
“To make these types of rural broadband deployments possible, the FCC must preserve census tract license sizes in rural areas—partial economic areas and even counties would preclude meaningful participation in the band by companies focused on providing broadband in the most rural areas,” the service providers said in the joint filing. “By licensing the 3.5 GHz band in rural areas on a census tract basis, the FCC can help enable faster broadband to more rural Americans.”
The other issue raised in rural areas is the amount of available spectrum. Although there’s a shortage of spectrum in urban areas like Boston and New York City, the service providers say “there is a relative spectrum abundance in rural areas.”
Proving out the economics for broadband deployments is also a challenge. In an urban area, there could be several providers vying for consumers’ and businesses’ attention, but in a rural area service providers rely on subsidies like the CAF-II program to make investments. Finally, the service provider group said that secondary markets are too costly and slow to allow for rural deployments.
“Rural players have not been able to realistically obtain spectrum in other bands,” the providers said. “At the same time, package bidding coupled with census tract license sizes reduces exposure risk for larger companies while promoting competition.”
The providers added that “there should be no concern that carriers are going to ‘cherry-pick’ licenses in rural areas.”
To make more spectrum available to service providers expanding rural broadband access, these service providers proposed that the FCC allocate additional spectrum for rural areas. Specifically, the group said that the FCC should consider allocating 80 MHz of spectrum as part of the CAF program.
“80 MHz in CAF CBs [census blocks] would enable carriers to deploy sustained speeds of 25/3 Mbps or more to over 200 customers per site,” the service provider group said. “Technology advances will allow for faster speeds in the future.”
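The arithmetic behind that claim can be sketched with a standard capacity-planning model. The spectral efficiency and oversubscription figures below are illustrative assumptions, not numbers from the filing:

```python
def customers_per_site(bandwidth_mhz, bits_per_hz, service_mbps, oversubscription):
    """Back-of-envelope subscriber count for one fixed-wireless site.

    Raw site capacity is bandwidth times spectral efficiency; operators
    then oversubscribe, because subscribers rarely peak simultaneously.
    """
    capacity_mbps = bandwidth_mhz * bits_per_hz
    return int(capacity_mbps * oversubscription / service_mbps)

# Hypothetical inputs: 80 MHz at 4 bits/s/Hz with 16:1 oversubscription
# yields roughly 200 subscribers at a 25 Mbps service tier.
print(customers_per_site(80, 4, 25, 16))  # 204
```

Without oversubscription the same spectrum supports only a dozen guaranteed 25 Mbps streams, which is why statistical sharing is central to fixed-wireless business cases.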
Frontier is currently rolling out 25 Mbps speeds in some of its rural markets using the 3.5 GHz spectrum and will consider higher speeds as it procures new equipment and spectrum over time.
While it’s going to take time to see how these providers apply broadband wireless to their rural builds, it’s clear that they are showing some commitment to finding new ways to serve markets that have been traditionally ignored.
Ongoing tests, deployments
Seeing the 3.5 GHz band as another tool to reach rural Americans, these providers are in some stage of either testing or deploying broadband wireless.
Frontier, for one, confirmed in September it was testing broadband wireless with plans to deploy it in more areas if it meets its requirements. The service provider is also exploring 3.5 GHz deployments, including as a member of the CBRS Alliance, which is exploring CBRS specifications and spectrum use rules.
Already, the service provider has been making progress with its CAF-II commitments, providing broadband to over 331,000 homes and small businesses in its CAF-eligible areas, and the company has improved speeds to nearly 875,000 additional homes and businesses. The deployments reflect a combination of Frontier capital investment and resources that the FCC has made available through the CAF program.
Perley McBride, CFO of Frontier, told investors during a conference in September 2017 that broadband wireless could be a “good solution” to the deployment challenge “in very rural America[,] and if it works the way [Frontier is] expecting it to work, . . . [Frontier] will deploy more of that next year.”
Windstream is trialing fixed wireless and modeling 3.5 GHz deployments and is also a member of the CBRS Alliance.
CenturyLink is also advancing its CAF-II commitments, reaching over 600,000 rural homes and businesses with broadband over the past two years. While it has not revealed any specific plans yet, CenturyLink has obtained an experimental license for 3.5 GHz spectrum.
“The testing seeks to understand the viability of new technologies in this band that may be useful in providing fixed broadband services,” CenturyLink said in the filing.
At the Deutsche Bank Media, Telecom and Business Services Conference, John Stephens, senior executive vice president and chief financial officer of AT&T, discussed the company’s plans for 2018 and beyond. Stephens said AT&T remains confident that it is on the right track to get its wireline business services back to positive growth as more customers transition to next-generation strategic services like SD-WAN and Carrier Ethernet. However, the drag from legacy services will continue to be an issue for the near term. He then outlined the company’s priorities for 2018, which include closing its pending acquisition of Time Warner and investing $23 billion in capital to build the best gigabit network in the United States.
On the entertainment side of the business, AT&T plans to launch the next generation of its DIRECTV Now video streaming service in the first half of 2018. The new platform will include features like cloud DVR and a third video stream. Additional features expected to launch later in 2018 include pay-per-view functionality and more video on demand. Note that DIRECTV Now can operate over a wireline or wireless network with sufficient bandwidth to support video streaming. Stephens said during the interview:
“…Giving us this opportunity to come up with a new platform later in the first half of this year, the second-generation platform for giving customers cloud DVR, additional ability to pay per view for most sporting events and movies, and all kinds of other capabilities is what we’re seeing here. That’s what we want to do with regard to that entertainment business and transitioning, and we’re confident that we’re on the right track and it’s going quite well.”
The company’s 2018 plans also include improved profitability in its wireless operations in Mexico and, after the Time Warner acquisition closes, deployment of a new advertising and analytics platform that will use the company’s customer data to bring new, data-driven advertising capabilities within premium video. And, as always, AT&T remains laser-focused on maintaining an industry-leading cost structure.
AT&T’s investment plans include deployment of the FirstNet network, America’s first nationwide public safety broadband network specifically designed for our nation’s police, firefighters, EMS and other first responders.
“We were 56 out of 56: 50 states, 5 territories and the District of Columbia all chose to put their public safety network, their FirstNet, their first responder network with AT&T. That’s thrilling for us. It gives us the full funding of the program, and it gives us the full authority to be the public safety provider for the country. We’re really proud of that, not only because of the business aspects but because it’s serving our fellow citizens and being able to participate in the honorable job of saving lives and protecting people. So we’re really jazzed up about that.
Secondly, our plans were made last year for how to build out, and we’ve now been given the authority and the official build plans, approved build plans, from the FirstNet authority. We spent last year investing in the core network; if people followed us in the fourth quarter, they saw we actually got a $300 million reimbursement from the FirstNet authority for the expenditures we incurred last year. So the ruthless preemption, the prioritized service for police and fire and emergency medical personnel, all of that’s been done, and now we’re out deploying the network, not only the 700 but also our AWS and WCS, our inventoried spectrum that we now get to put into service on a very economic basis, because we can do one tower climb: we have the crane out there once, we have the people out there once, and they put all three pieces of spectrum up at once.”
The company will also enhance wireless network quality and capacity and plans to be the first to launch mobile “5G” service in 12 cities by the end of the year. AT&T announced in February that Atlanta, GA, and Dallas and Waco, TX, will be among its first “5G” markets.
“We think about 5G as 5G Evolution, and I say that because it’s really important to put it all in perspective. So we take FirstNet, put WCS and AWS with 700 band 14 [ph], and use carrier aggregation and [indiscernible]; we’ve done that kind of test without the 700. We did that in San Francisco, and we got 750 meg speeds in the City of San Francisco on this new network, this new 5G Evolution. It’s using the LTE technology, it’s using the existing network, but all this new technology. So if you think about that evolution now, when you load that network up, those 750 theoretical speeds might go down to 150 or 100 or somewhere down, but tremendous speed even on a loaded network. So that’s the first step; we’re doing that now extensively, and we’re going to do more of that as we build the FirstNet network out and put the 700 band in. So that’s the first step for us in this evolution.
Second, people might not think about it this way, but for us absolutely critical is the fiber build. We’re taking a lot of fiber out to the prem [ph], we’re taking a lot of fiber out to business locations. Currently we have about 15 million locations with fiber between business and consumer, and by July next year we’ll have about 22 million: about 8 million business, about 14 million to the prem, if you will, for consumers. So fiber is the key, and it’s a key not only for delivering to the home or to the business but for the backhaul support. So if you’re an integrated carrier like we are and you’re building this fiber to go to the home, you’re going to pass the tower, you’re going to get fiber to that tower, you’re going to pass the business location, shopping mall, strip center, and you’re going to build out to those.
So 5G is the second stage. We’ve got to think about AirGig; this is the ability to deliver broadband over electrical power lines. We’re testing that, we’ll see how that goes; that’s another step. If you think about using millimeter wave to do backhaul for small cells in really congested areas, where we have high traffic volumes and you want to take a lot of traffic off, we have tested that, we have used millimeter wave to do that, we can do that. If you think about millimeter wave to do fixed wireless, so from the alley to my home, we have tested that and we have the capability to do that. The challenge on that is where do you take it from the alley, where do you offload it, get it on to the network, and what those costs are, but we can do that.
Lastly, you will see us put 5G into the core network. All of those things are going to have to be measured by when the chipsets are ready for the handsets; we expect the chipsets might be next year, and handsets will come after that, but we’re looking at the historically slow upgrade timeframe for phones. We had a couple of quarters last year where the upgrade rates were about 4%; that would equate to 25 quarters before your phone base turned over, in an extreme example. So, suggesting that things are going to be in the core network, it’s going to take a while. We’ll have pucks [ph] out by the end of the year; that will help, but you have to have balance with regard to this.
When you think about those business cases, you think about augmented reality and virtual reality and robotics and autonomous cars and things on the edge; those are going to be really important, and that’s where the business cases will take us, but we’ve got a long way to go before we get there. As we build FirstNet, we have been fortunate in being able to, so to speak, build the network house and leave the room for our 5G capability, so that when it’s ready, we can just plug it in. With software-defined network design, we have a great advantage for that, but we’re going to have to make sure we have all of the equipment, not only the switching equipment, the radio, the antenna, but also the handset equipment, before we start, if you will, over-indexing on the revenue opportunities. They will be there, and we will lead in the gigabit network.
We’ll have the best one because of what FirstNet provides us and what the technology developments have allowed us, and we will use 5G in that network, but I want to be careful about how we think about when it’s going to be that you’re going to have a device in your hand, walking around on a normal kind of usage basis, using 5G.”
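The “25 quarters” figure Stephens cites is simple arithmetic: if a fixed fraction of the installed handset base upgrades each quarter, the whole base turns over in one divided by that rate. A minimal sketch of the model he is assuming:

```python
def quarters_to_turn_over(quarterly_upgrade_rate):
    """Quarters needed to replace the entire installed base, assuming a
    fixed fraction upgrades each quarter and nobody upgrades twice
    (the 'extreme example' linear model from the quote)."""
    return 1 / quarterly_upgrade_rate

print(quarters_to_turn_over(0.04))  # 25.0 quarters, i.e. over six years
```

The real turnover is faster, since upgrade rates vary and early adopters skew toward new devices, but the point stands: at recent upgrade rates, 5G handsets will take years to saturate the base.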
Stephens said that AT&T reaches about 15 million customer locations with fiber. This includes more than 7 million consumer customer locations and more than 8 million business customer locations within 1,000 feet of AT&T’s fiber footprint. He expects this to increase to about 22 million locations by mid-2019.
For 2018, AT&T expects organic adjusted earnings per share growth in the low single digits, driven by improvements in wireless service revenue trends, improving profitability from its international operations, cost structure improvements from its software defined network/network function virtualization efforts and lower depreciation versus 2017.
Like earlier quarters, the challenges in the fourth quarter for AT&T came from declines in legacy services like Frame Relay and ATM. The company noted that fourth-quarter declines in legacy products were partially offset by continued growth in strategic business services. Total business wireline revenues were $7.4 billion, down 3.5% year-over-year but up sequentially.
Stephens said that more AT&T customers are adopting next-gen services, creating a new foundation for wireline business revenue growth.
“What’s happened is our customers have embraced the strategic services,” Stephens said. “Strategic services are over a $12 billion annual business and are over 42% or so of our revenue and are still growing quickly.”
Indeed, AT&T’s fourth-quarter strategic business services revenues grew by nearly 6%, or $176 million, versus the year-earlier quarter. These services represent 42% of total business wireline revenues and more than 70% of wireline data revenues and have an annualized revenue stream of more than $12 billion. This growth helped offset a decline of more than $400 million in legacy service revenues in the quarter.
Stopping short of forecasting overall wireline business service revenue growth, Stephens said that AT&T will eventually see a point where strategic services will surpass legacy declines.
“As we get past this inflection point where strategic services are growing faster than the degradation of legacy, we can get to a point where we are growing revenues,” Stephens said. “We’re not predicting that, but we see the opportunity to do that.”
To achieve these business services revenue goals, AT&T’s business sales team is taking a two-pronged approach: retaining legacy services or converting them to strategic services.
While wireline business services continue to be a key focus for AT&T, the service provider is not surprisingly looking at ways to leverage its wireless network to help customers solve issues in their business. The wireless network can be used to support a business customer’s employee base while enabling IoT applications like monitoring of a manufacturing plant or a trucking fleet. Stephens expanded on the role of IoT to close out the interview:
“…you’ve got to realize that if you build this FirstNet network out, things like IoT, things like coverage for business customers, things like the ability to connect factories that are automated, the robotics that have to have wireless connectivity to a control center for business customers, all improve dramatically, and with that comes this opportunity to sell these wireless services. When you’re in with the CIO and you can solve his security business, you can solve his big pipe of strategic services, but you can also solve some wireless issues that his HR guy has for connectivity for his employees. You can solve some issues that his engineering department has because they want real-time information about how their products are working, whether it’s a car or a jet engine or a tractor, how it’s working in the field in real time, or you can give them new products and services for their internal resources like their pipelines or their shipping fleet.
This IoT capability can solve a lot of issues; you can make that CIO the success factor for all his related peers. That’s a great thing, a great solutions approach to business, and that’s what we’re trying to do. Our team is trying to provide solutions for the business customers, and we think having those two things together is really important.”
U.S. service provider C Spire today announced a partnership with electric utility Entergy Mississippi which aims to bring more than 300 miles of fiber to remote areas of Mississippi. C Spire will build and own the network, with Entergy contributing construction costs, according to C Spire Vice President of Government Relations Ben Moncrief in an interview with Telecompetitor.
Entergy will lease capacity on the network from C Spire to support its smart grid initiatives, he said. C Spire eventually expects to extend the middle-mile network to end user locations to support retail services, he added, although he emphasized that any such plans are not part of today’s news.
Details about the C Spire – Entergy partnership can be found in this press release. There were clear synergies for these companies to work together.
“This opens the door to offering service to residences and industrial parks,” Moncrief said. “But today is just about getting the (fiber optic) backbone in place.”
When Entergy Mississippi sought the Mississippi Public Service Commission’s approval to build a network to support its smart grid plans, one of the commissioners asked whether that network could also be “at least a foundation for broadband services,” Moncrief explained.
That idea led Entergy to a meeting with C Spire at which representatives of both companies had an “aha moment,” Moncrief recalled.
C Spire initially was a wireless carrier, as well as a provider of wireline business services, but in recent years has been quite aggressive in deploying fiber-to-the-home (FTTH) and other broadband network infrastructure in numerous rural markets in Mississippi. Meanwhile, Moncrief said, “Here’s an electric utility that for security reasons is keeping infrastructure away from population centers.”
The network will be installed with a minimum of 144-count fiber, “in some places more,” Moncrief noted. Each company will have its own fiber. The areas that the network will run through are “very rural” and might have been too costly for C Spire to build out to without the Entergy investment, Moncrief added.
C Spire also will gain connectivity from the rural areas to population centers, Moncrief said.
The construction project will involve placing fiber optic cable along five separate routes as follows:
- Delta: a 92-mile route through Sunflower, Humphreys, Madison and Hinds counties and near the cities of Indianola, Inverness, Isola, Belzoni, Silver City, Yazoo City, Bentonia, Flora and Jackson.
- North: a 51-mile stretch in Attala, Leake and Madison counties, including near the towns of McAdams, Kosciusko and Canton.
- Central: a 33-mile route through Madison, Rankin and Scott counties and near the towns of Canton, Sand Hill and Morton.
- South: a 77-mile route passing through Simpson, Jefferson Davis, Lawrence and Walthall counties and near the towns of Magee, Prentiss, Silver Creek, Monticello and Tylertown.
- Southwest: a 49-mile stretch in Franklin and Adams counties that’s near the communities of Bude, Meadville, Roxie, Natchez and Eddiceton.
“We’re excited about partnering with C Spire to modernize our electrical grid and expand rural broadband access in some hard-to-reach areas across the state,” said Haley Fisackerly, president and CEO of Entergy Mississippi. “We have about 30,000 customers within five miles of the proposed routes who could potentially have access to broadband service when the project is complete. In addition, all of our customers will benefit from the enhancements to our communication systems that connect our facilities, substations, offices and radio sites.” The company provides electric service to an estimated 445,000 customers in 45 counties across the state.
“A robust broadband infrastructure is critical to the success of our efforts to move Mississippi forward by growing the economy, fostering innovation, creating job opportunities and improving the quality of life for all our residents,” said Hu Meena, CEO of C Spire, a Mississippi-based diversified telecommunications and technology services company.
This new Gartner Group report covers the key impacts of digital business, cloud and orchestration strategies. In particular, IT leaders must continue to focus on meeting enterprise needs for expanded WAN connectivity and improved network agility without compromising application performance.
- As enterprises increasingly rely on the internet for WAN connectivity, they are challenged by the unpredictable nature of internet services.
- Enterprises seeking more agile WAN services continue to be blocked by network service providers’ terms and conditions.
- Enterprises seeking more agile network solutions continue to be hampered by manual processes and cultural resistance.
- Enterprises moving applications to public cloud services frequently struggle with application performance issues.
IT leaders responsible for infrastructure agility should:
- Reduce the business impact of internet downtime by deploying redundant WAN connectivity such as hybrid WAN for business-critical activities.
- Improve WAN service agility by negotiating total contractual spend instead of monthly or annual spend.
- Improve agility of internal network solutions by introducing automation of all operations using a step-wise approach.
- Ensure the performance of cloud-based applications by using carriers’ cloud connect services instead of unpredictable internet services.
- Improve alignment between business objectives and network solutions by selectively deploying intent-based network solutions.
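The case for the first recommendation, redundant hybrid WAN, can be made with simple availability math: a site with failover across independent links is down only when every link is down at once. The 99.5% per-link availability below is illustrative, not a figure from the report, and real links are rarely fully independent:

```python
def annual_downtime_hours(link_availabilities, hours_per_year=8760):
    """Expected hours per year with *all* links down simultaneously,
    i.e. the outage exposure of a site that fails over between links.
    Assumes link failures are statistically independent."""
    p_all_down = 1.0
    for availability in link_availabilities:
        p_all_down *= (1.0 - availability)
    return p_all_down * hours_per_year

print(annual_downtime_hours([0.995]))         # ~43.8 hours/year on a single link
print(annual_downtime_hours([0.995, 0.995]))  # ~0.22 hours/year on a hybrid pair
```

Even a modest second link cuts expected downtime by two orders of magnitude, which is why hybrid WAN (e.g., MPLS plus broadband internet) is the standard hedge for business-critical sites.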
Strategic Planning Assumptions:
- Within the next five years, there will be a major internet outage that impacts more than 100 million users for longer than 24 hours.
- By 2021, 25% of enterprise telecom contracts will evolve to allow for greater flexibility such as canceling services or introducing new services within the contract period, up from less than 5% today.
- By 2021, productized network automation (NA) tools will be utilized by 55% of organizations, up from less than 15% today.
- By YE20, more than 30% of organizations will connect to cloud providers using alternatives to the public internet, which is a major increase from 5% in 3Q17.
- By 2020, more than 1,000 large enterprises will use intent-based networking systems in production, up from less than 15 today.
Gartner Group has five predictions that represent fundamental changes that are emerging in key network domains, from internal networking to cloud services and WAN services.
The report addresses two key aspects that the majority of Gartner clients struggle with:
- The increased interest in utilizing the internet for WAN connectivity continues to raise concerns about the performance of public internet services and of applications deployed in public cloud services. We discuss the risks that enterprises encounter due to the unpredictable nature of the internet, and how an enterprise can use MPLS to connect directly to public cloud services instead of using the internet.
- Enterprises continue to need new business solutions deployed faster, but remain hampered by the inability of network solutions and network services to respond and to rectify performance issues fast enough. We discuss three options to improve network operations as well as network services.
Source: Gartner (December 2017)
Strategic Planning Assumption: Within the next five years, there will be a major internet outage that impacts more than 100 million users for longer than 24 hours.
Analysis by: Andrew Lerner, Greg Young
- We are increasingly seeing organizations use the internet as a WAN, and estimate that approximately 20% of Gartner clients in many geographic regions have at least some critical branch locations entirely connected via the internet.
- Most IT teams don’t have a detailed understanding of the multitude of applications and services that are being used on the public internet and/or their criticality. This is because of years of line of business (LOB)-centric buying and the proliferation of SaaS.
- While the internet is highly resilient, there are specific infrastructure and technology hot spots that, if compromised, could threaten the internet as a whole or large portions of it. This could be the result of natural disasters, man-made accidents or intentional acts.
- Natural disasters and man-made acts that could impact large portions of the internet include earthquakes, solar flares, electronic pulses, meteors, tsunamis, hurricanes, major cable cuts and network operator errors.
- Intentional acts include hacktivism, terrorism toward critical infrastructure, and/or coordinated distributed denial of service (DDoS) attacks, attacks against carrier- and ISP-specific components, and protocols (e.g., SS7).
While the probability of each of these events individually is small, the likelihood that at least one of them will occur over an extended period of time is surprisingly high. For example, even if there is only a 1% chance per year that any of the 11 examples identified above results in an outage, there is a statistical likelihood of over 40% that at least one of them will occur over a five-year period. Further, there have already been indications that the internet is vulnerable to sizable outages:
- In 2008, millions of users and large portions of the Middle East and India were impacted by a cable cut. 1
- In 2016, a large DDoS attack resulted in many large websites going down, including Twitter, Netflix, Reddit and CNN. 2
- In 2015, Telekom Malaysia created a routing problem that rendered much of the Level 3 network unavailable. 3
- It has been widely reported that 70% of all internet traffic goes through Northern Virginia 4 and, while this might be overstated, there’s no doubt that there are several major chokepoints in the internet infrastructure.
At a minimum, an extended and widespread internet outage would cause dramatic revenue loss for enterprises, and could even create life-threatening situations depending on what business the organization is in. Many organizations initially brush this off by saying, “Well, there’s not much we can do about it anyway” or “If there is a large internet outage due to a natural disaster, then personal safety is the priority and enterprise connectivity is the least of our concerns.” However, there are very specific and actionable steps that infrastructure and operations (I&O) leaders should take to mitigate the impact of a large outage.
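The cumulative-probability reasoning above can be checked with a quick calculation. This sketch assumes 11 independent event types, each with a 1% chance per year, over five years, treated as independent event-year trials (the simplifying assumptions implicit in the example):

```python
# Probability that at least one of several rare, independent events
# occurs over a multi-year horizon.
def prob_at_least_one(p_annual: float, n_events: int, years: int) -> float:
    trials = n_events * years              # independent event-year trials
    return 1.0 - (1.0 - p_annual) ** trials

p = prob_at_least_one(0.01, 11, 5)
print(f"{p:.1%}")   # → 42.5%
```

Under these assumptions the chance of at least one major event over five years comes out to roughly 42%, which is far from negligible.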
Strategic Planning Assumption: By 2021, 25% of enterprise telecom contracts will evolve to allow for greater flexibility such as canceling services or introducing new services within the contract period, up from less than 5% today.
Analysis by: Danellie Young
- Enterprise telecom contracts are typically fixed in both term duration and for the services required for procurement.
- Most larger revenue contracts ($1 million annually) require the enterprise to agree to minimum revenue commitments on an annual basis.
- Major WAN decisions are made by 31% to 47% of enterprises each year, including equipment refresh or carrier renegotiations (assuming the refresh cycle on routers is six years, and the average enterprise WAN service contract is three years).
- A large majority of enterprises are struggling with the cost, performance and flexibility of their traditional WAN contracts, further exacerbated by the proliferation of public cloud applications.
Enterprise telecom contracts remain rigid and fixed, with specified services required to ensure compliance. Typically such contracts penalize customers when services are disconnected midterm. Enterprise telecom contracts are typically negotiated on 36-month cycles, based on either full-term or revenue commitments. Revenue commitments are set based on monthly spend, annual spend or total contract spending. Upon meeting the contract’s revenue commitment, the enterprise can then renegotiate or consider alternative services or providers since their financial obligation has been met. Terminating contracts early for convenience will typically levy penalties on the enterprise. These penalties range from 100% of the monthly recurring charges (MRCs) to a percentage of the MRCs to a declining portion through the remainder of the term (i.e., 100% in the first 12 months, 75% in months 13 to 24 and 50% through the end of the term).
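The declining penalty schedule described above can be sketched as a quick calculation. The interpretation used here (the schedule rate is keyed to the month in which the customer cancels and is applied to all remaining monthly recurring charges) and the dollar figures are illustrative assumptions, not contract language:

```python
# Hypothetical early-termination fee under the declining schedule cited
# above: 100% of remaining MRCs if canceling in months 1-12, 75% in
# months 13-24, and 50% thereafter.
def termination_fee(mrc: float, term_months: int, cancel_month: int) -> float:
    """Penalty = schedule rate at cancellation time x remaining MRCs."""
    if cancel_month <= 12:
        rate = 1.00
    elif cancel_month <= 24:
        rate = 0.75
    else:
        rate = 0.50
    return rate * mrc * (term_months - cancel_month)

# Canceling a 36-month, $10,000/month contract at month 18:
print(termination_fee(10_000, 36, 18))   # → 135000.0
```

Even late in a term the exposure is substantial, which is why enterprises push for total-contract-value commitments instead.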
Currently, contracts are split between term and revenue commit contracts, whereby most of the revenue commitments are made on an annualized basis. Alternatively, a small number (5%) are offered or negotiated with total contract values tied to them. Total contract revenue commitments enable the enterprise to meet the obligation earlier in their contract and provide the opportunity to negotiate new lower rates and a new contract, and to solicit competitive proposals before the full 36-month cycle terminates.
In addition to traditional voice and data services, many networking vendors now offer SD-WAN products, while carriers and managed service providers (MSPs) are beginning to launch managed SD-WAN services as an alternative to managed routers. Contract flexibility will be needed to allow the enterprise to migrate to new solutions without financial risk or early termination fees. Thus, while we anticipate rapid adoption of SD-WAN and virtualized customer premises equipment (vCPE) solutions in the enterprise, SD-WAN by itself will not improve contractual conditions.
A Gartner-conducted software-defined (SD)-WAN survey has identified the key drivers for SD-WAN adoption and preferences for managed services from non-carrier providers. Despite its relative immaturity, the perceived benefits create incentives for IT leaders responsible for networking to leap into SD-WAN pilots now.
- Please refer to our report on IHS-Markit’s analysis of the SD-WAN market. Cisco and VMware are the top two vendors due to their recent acquisitions of Viptela and VeloCloud, respectively. Cisco also bought Meraki, which provides an SD-WAN solution as well as business WiFi networks.
- According to survey data from Nemertes Research, enterprises are not discarding their MPLS networks as they deploy SD-WANs. “Fully 78% of organizations deploying SD-WAN have no plan to completely drop MPLS from their WAN,” Nemertes’ John Burke reports. “However, most intend to reduce and restrict their use of it (MPLS), if not immediately then over the next few years.”
- “Although it brings a lot of benefits to the table, SD-WAN still uses the public Internet to connect your sites,” points out Network World contributor Mike C. Smith. “And once your packets hit the public Internet, you will not be able to guarantee low levels of packet loss, latency and jitter: the killers of real-time applications.”
Key Findings of Gartner Survey:
- Enterprise clients cite increased network availability, reliability and reduced WAN costs resulting from less-expensive transport as the top benefits of software-defined WAN.
- Enterprise clients are concerned about the large number of SD-WAN vendors and anticipate market consolidation, making some early choices risky.
- A lack of familiarity with the technology, the instability of the vendors, and skepticism about performance and reliability are the most common concerns when deploying SD-WAN.
- Nearly two-thirds of the organizations we surveyed prefer buying managed SD-WAN, demonstrating a preference for presales and postsales support. A preference for type of managed service provider does not align with legacy carrier MSP adoption rates.
To maximize new SD-WAN opportunities, infrastructure and operations leaders planning new networking architectures should:
- Include SD-WAN solutions on their shortlists if they’re aggressively migrating apps to the public cloud, building hybrid WANs, refreshing branch WAN equipment and/or renegotiating a managed network service contract.
- Consider a diverse range of management solutions related to SD-WAN; don’t just look at carrier offers to determine the best option available to meet enterprise requirements.
- Compare each vendor’s current features and roadmaps with enterprise requirements to develop a shortlist, and use pilots and customer references to confirm providers’ ability to deliver on the most desirable features and functionality.
- Focus pilots on specific, critical success factors and negotiate contract terms and conditions to support service configuration changes, fast site roll-out and granular application reporting.
- Negotiate flexible WAN or managed WAN services contract clauses to support evolution to SD-WAN when appropriate.
Gartner has forecast SD-WAN to grow at a 59% compound annual growth rate (CAGR) through 2021 to become a $1.3 billion market (see Figure 1 and “Forecast: SD-WAN and Its Impact on Traditional Router and MPLS Services Revenue, Worldwide, 2016-2020”). Simultaneously, the overall branch office router market is forecast to decline at a −6.3% CAGR and the legacy router segment will suffer a −28.1% CAGR through 2020.
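The forecast above can be sanity-checked with back-of-the-envelope arithmetic. Assuming a 2016 base year and five compounding periods to 2021 (assumptions for illustration; the cited forecast covers 2016-2020), a 59% CAGR ending at $1.3 billion implies a small starting market:

```python
# Implied base-year market size given an ending value, a CAGR, and a
# number of compounding periods.
def base_from_cagr(end_value: float, cagr: float, periods: int) -> float:
    return end_value / (1 + cagr) ** periods

base_2016 = base_from_cagr(1.3e9, 0.59, 5)   # 2016 -> 2021, 59% CAGR
print(f"${base_2016 / 1e6:.0f}M")            # → $128M
```

In other words, the forecast is consistent with SD-WAN being a small but very fast-growing market today.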
SD-WAN equipment and services dramatically simplify the complexity associated with the management and configuration of WANs. They provide branch-office connectivity in a simplified and cost-effective manner, compared with traditional routers. These solutions enable traffic to be distributed across multiple WAN connections in an efficient and dynamic fashion, based on performance and/or application-based policies.
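The policy-based traffic distribution described above can be illustrated with a minimal sketch: given measured per-link metrics, pick a link that satisfies an application's policy. All class names, thresholds, and link figures here are hypothetical; real SD-WAN products use richer, continuously updated telemetry and vendor-specific policy models:

```python
# Minimal sketch of application-aware WAN path selection.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Link:
    name: str
    latency_ms: float
    loss_pct: float

@dataclass
class AppPolicy:
    app: str
    max_latency_ms: float
    max_loss_pct: float

def select_link(policy: AppPolicy, links: List[Link]) -> Optional[Link]:
    """Return the lowest-latency link that meets the app's policy, if any."""
    eligible = [l for l in links
                if l.latency_ms <= policy.max_latency_ms
                and l.loss_pct <= policy.max_loss_pct]
    return min(eligible, key=lambda l: l.latency_ms, default=None)

links = [Link("mpls", 30, 0.1), Link("broadband", 55, 0.8), Link("lte", 80, 1.5)]
voice = AppPolicy("voip", max_latency_ms=50, max_loss_pct=0.5)
print(select_link(voice, links).name)   # → mpls
```

The design point is that path choice is dynamic: if the MPLS link's measured latency degraded past the voice threshold, the same policy evaluation would steer voice traffic to the next eligible link.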
The survey data highlights that most of the respondent organizations are in the early stages of their SD-WAN projects. To qualify, respondents must be involved in choosing, implementing and/or managing network services and equipment for their company’s sites, while their primary role in the organization is IT-focused or IT-business-focused. We intentionally searched for companies that plan to use or are using SD-WAN. Of those surveyed, 93% plan to use SD-WAN within two years or are piloting and deploying now, with approximately 73% in pilot or deployment mode. These results do not reflect actual market adoption rates, because Gartner estimates that between 1% and 5% of enterprises have deployed SD-WAN. Although the results differ numerically, the qualitative feedback is compelling.
The responses, broken down by specific number of sites, are shown in the figure below:
Respondents using SD-WAN; n = 21 (small sample size; results are indicative). Totals may not add up to 100%, due to rounding.
Source: Gartner Group (November 2017)
Enterprises cite their lack of deep technology familiarity as a key barrier to using SD-WAN. In fact, of those who plan for SD-WAN, nearly 50% have concerns about their lack of technical familiarity, followed by concerns over the stability of vendors and concerns about performance and reliability.
Editor’s Note: Surprisingly, enterprises don’t seem to be concerned with the lack of SD-WAN standards, which effectively dictates single-vendor solutions and lock-in.
With more than 30 SD-WAN vendors in the market and consolidation accelerating, the vendor-stability concerns don’t come as a surprise.
Other key findings include:
- Vendor stability is a major concern. Among the 51% of respondents who selected performance and reliability as key drivers (n = 44), nearly half (45%) had concerns about the stability of the vendors.
- Many among the 50% who see agility as a key driver (n = 36) expressed concern about their lack of familiarity with the technology.
- Among organizations with fewer than 1,000 employees (n = 53), the most common concern is lack of familiarity with the technology (51%). Organizations with 1,000 to 9,999 employees (n = 38) find the ROI of the investment to be the most common challenge (50%).
- Among the EMEA respondents (n = 48), half were most concerned about the stability of the vendors, followed closely by concerns about proven performance and reliability.
To purchase the complete Gartner SD-WAN report go to:
Timon Sloane of the Open Networking Foundation (ONF) provided an update on project CORD on November 1st at the Telecom Council’s Carrier Connections (TC3) summit in Mountain View, CA. The session was titled:
Spotlight on CORD: Transforming Operator Networks and Business Models
After the presentation, Sandhya Narayan of Verizon and Tom Tofigh of AT&T came up to the stage to answer a few audience member questions (there was no real panel session).
The basic premise of CORD is to re-architect a telco/MSO central office to have the same or a similar architecture as a cloud-resident data center. Not only the central office, but also remote networking equipment in the field (like an Optical Line Termination unit, or OLT) is decomposed and disaggregated such that all but the most primitive functions are executed by open source software running on a compute server. The only remaining hardware is the physical-layer transmission system, which could be optical fiber, copper, or cellular/mobile.
Author’s Note: Mr. Sloane didn’t mention that ONF became involved in project CORD when it merged with ON.Labs earlier this year. At that time, the ONOS and CORD open source projects became ONF priorities. The Linux Foundation still lists CORD as one of their open source projects, but it appears the heavy lifting is being done by the new ONF as per this press release.
A reference implementation of CORD combines commodity servers, white-box switches, and disaggregated access technologies with open source software to provide an extensible service delivery platform. This gives network operators (telcos and MSOs) the means to configure, control, and extend CORD to meet their operational and business objectives. The reference implementation is sufficiently complete to support field trials.
Illustration above is from the OpenCord website
Highlights of Timon Sloane’s CORD Presentation at TC3:
- ONF has transformed over the last year to be a network operator led consortium.
- SDN, Open Flow, ONOS, and CORD are all important ONF projects.
- “70% of world wide network operators are planning to deploy CORD,” according to IHS-Markit senior analyst Michael Howard (who was in the audience- see his question to Verizon below).
- 80% of carrier spending is in the network edge (which includes the line terminating equipment and central office accessed).
- The central office (CO) is the most important network infrastructure for service providers (AKA telcos, carriers and network operators, MSO or cablecos, etc).
- The CO is the service provider’s gateway to customers.
- End to end user experience is controlled by the ingress and egress COs (local and remote) accessed.
- Transforming the outdated CO is a great opportunity for service providers. The challenge is to turn the CO into a cloud like data center.
- CORD’s mission is to enable the “edge cloud.” Note that this differs from the mission on the OpenCord website, which states:
“Our mission is to bring datacenter economies and cloud agility to service providers for their residential, enterprise, and mobile customers using an open reference implementation of CORD with an active participation of the community. The reference implementation of CORD will be built from commodity servers, white-box switches, disaggregated access technologies (e.g., vOLT, vBBU, vDOCSIS), and open source software (e.g., OpenStack, ONOS, XOS).”
- A CORD-like CO infrastructure is built using commodity hardware, open source software, and white boxes (e.g., switch/routers and compute servers).
- The agility of a cloud service provider depends on software platforms that enable rapid creation of new services- in a “cloud-like” way. Network service providers need to adopt this same model.
- White boxes provide subscriber connections with control functions virtualized in cloud resident compute servers.
- A PON Optical Line Termination Unit (OLT) was the first candidate chosen for CORD. It’s at the “leaf of the cloud,” according to Timon.
- Three markets for CORD are: Mobile (M-), Enterprise (E-), and Residential (R-). There is also the Multi-Service edge, which is a new concept.
- CORD is projected to be a $300B market (source not stated).
- CORD provides opportunities for: application vendors (VNFs, network services, edge services, mobile edge computing, etc), white box suppliers (compute servers, switches, and storage), systems integrators (educate, design, deploy, support customers, etc).
- CORD Build Event was held November 7-9, 2017 in San Jose, CA. It explored CORD’s mission, market traction, use cases, and technical overview as per this schedule.
Service Providers active in CORD project:
- AT&T: R-Cord (PON and g.fast), Multi-service edge-CORD, vOLTHA (Virtual OLT Hardware Abstraction)
- Verizon: M-Cord
- Sprint: M-Cord
- Comcast: R-Cord
- Century Link: R-Cord
- Google: Multi-access CORD
Author’s Note: NTT (Japan) and Telefonica (Spain) have deployed CORD and presented their use cases at the CORD Build event. Deutsche Telekom, China Unicom, and Turk Telecom are active in the ONF and may have plans to deploy CORD as well.
- This author questioned the partitioning of CORD tasks and responsibility between ONF and Linux Foundation. No clear answer was given. Perhaps in a follow up comment?
- AT&T is bringing use cases into ONF for reference platform deployments.
- CORD is a reference architecture with systems integrators needed to put the pieces together (commodity hardware, white boxes, open source software modules).
- Michael Howard asked Verizon to provide commercial deployment status- number, location, use cases, etc. Verizon said they can’t talk about commercial deployments at this time.
- Biggest challenge for CORD: disaggregating the purpose-built, vendor-specific hardware that exists in COs today. Many COs are router/switch centric, but they have to be opened up if CORD is to gain market traction.
- Future tasks for project CORD include: virtualized Radio Access Network (RAN), open radio (perhaps “new radio” from 3GPP release 15?), systems integration, and inclusion of micro-services (which were discussed at the very next TC3 session).
Addendum from Marc Cohn, formerly with the Linux Foundation: Here’s an attempt to clarify the CORD project responsibilities:
- CORD is an open reference architecture. In that sense, CORD is similar to the ETSI NFV Architectural Framework, ONF SDN Architecture, and MEF LifeCycle Services Orchestration (LSO) reference architectures.
- As it is a reference architecture, it is not an implementation, and is maintained by the Open Networking Foundation (ONF), which merged with ON.LAB towards the end of 2016.
- OpenCORD is a Linux Foundation project announced in the summer of 2016. It is focused on an open source implementation of the CORD architecture. OpenCord was derived from the work undertaken by ON.LAB, prior to the merger with ONF in 2016.
- For technical details, visit the OpenCORD Wiki
- Part of the confusion is that if one visits the Linux Foundation projects page, CORD is listed, but the link is to the OpenCord website.
2017 SPIFFY Awards:
Seven pioneering start-up companies were recognized by the Service Provider Innovation Forum (SPIF) at the 10th Annual SPIFFY Awards held Wednesday evening November 1st at TC3 Summit.
Since 2001, the Telecom Council has worked to identify and recognize companies who represent a broad range of cutting-edge telecom products and services. From there, dozens of young companies are presented each month to the Service Provider Innovation Forum (SPIF), ComTech Forum, IoT Forum, and Investor Forum.
SPIF members, who represent cutting-edge telcos from over 50 countries and who serve over 3B subscribers, selected seven companies from hundreds of presenting communication startup companies and 30 SPIFFY nominees as best-in-class in their respective categories. Each winner, who is set apart for their dedication, technical vision, and interest from the global service provider community, is a company to watch in the telecommunication industry.
The winners below represent the best and brightest in their respective categories:
- The Graham Bell Award for Best Communication Solutions – Sightcall : a cloud API that enables any business to add rich communications (e.g. video), accessible with a single touch, in the context of their application.
- Edison Award for Most Innovative Startup – DataRPM: cognitive preventive maintenance platform.
- San Andreas Award for Most Disruptive Technology – Veniam: networking solution for future autonomous vehicles; mobile WiFi done right.
- Core Award for Best Fixed Telecom Opportunity – Datera: storage and data management for service providers, private cloud, digital business via “Datera elastic data fabric software.”
- Zephyr Award for Best Mobile Opportunity – AtheerAir: augmented reality solutions for industrial enterprises.
- Ground Breaker Award for Engineering Excellence – Cinova: virtual reality streaming at practical bit rates using Cinova’s cloud server technology.
- Prodigy Award for the Most Successful SPIF Alumni – Plex: streaming media server and apps to stream video, audio and photo collections on any device.
This year’s entrepreneurs had a chance to vote on the operators as well, to give a shout out to those telcos who were supportive, approachable, and helpful to young and growing telecom companies. The entrepreneurs chose Verizon.
- Fred & Ginger Award for the Most Supportive Carrier – Verizon.
The SPIFFY nominees attended the awards ceremony along with 50 global fixed and wireless communications companies and over 300 industry professionals. Photos of the event can be found on Telecom Council’s blog and Instagram pages. Note that none of this year’s SPIFFY award winners, with the possible exception of Veniam, actually provide a connectivity (PHY, MAC/Data Link layer) solution.
Author’s Notes on three impressive start-ups that presented at TC3 on November 1st (only day I attended 2017 TC3):
1. In a session titled “Closing the Rural Broadband Gap,” Skyler Ditchfield, CEO of GeoLinks, provided an overview of his company’s success in providing high-speed broadband to schools and libraries using fixed wireless technologies, specifically microwave radio operating in several frequency bands. The company’s flagship service is ClearFiber™, which offers customers fixed wireless broadband service on the most resilient and scalable network. Skyler described the advantages of their 100% in-house approach to engineering, design, land procurement, construction and data connectivity. GeoLinks’ approach offers gigabit-plus speeds at a fraction of the cost of fiber, with lower latency and rapid deployment across the country.
A broadband fixed wireless installation on Santa Catalina Island was particularly impressive. Speeds on the island (which GeoLinks says is 41 miles offshore) are typically 300 Mbps, and the ultra-fast broadband connection supports essential communications services, tourism services, and commerce. GeoLinks successfully deployed Mimosa Networks’ fiber-fast broadband solutions to bring high-speed Internet access to the island community for the first time in its history. Connecting the island to the mainland at high speeds was very challenging. GeoLinks ultimately selected Mimosa for the last mile of the installation, deploying Mimosa A5 access and C5 client devices throughout the harbor town of Avalon.
Another ClearFiber™ successful deployment was at Robbins Elementary school in California. It involved 19 miles of fixed broadband wireless transport to provide the school with broadband Internet access.
Skyler said that next year, GeoLinks planned to deliver fixed wireless transport at 10 Gb/sec over 6 to 8 miles in the 5 GHz unlicensed band, either point-to-point or point-to-multipoint. The company is also considering the 6 GHz, 11 GHz, 18 GHz and 20 GHz FCC-licensed bands. He said it would be important for GeoLinks to get licensed spectrum for point-to-multipoint transmission.
More on GeoLinks value proposition here and here. And a recent blog post about Skyler Ditchfield who told the TC3 audience he grew up fascinated by communications technologies. This author was very impressed with Skyler and GeoLinks!
2. In a panel on “Startup Success Stories,” Nitin Motgi, founder and CEO of Cask (a “big data” software company) talked about how long it took to seal a deal with telcos. It’s longer than you might think! In one case, Nitin said it was 18 months from the time an unnamed telco agreed to purchase Cask’s solution (based on a proof of concept demo) till the contract was actually signed and sealed. Nitin referred to the process of selling to telcos as “whale hunting.” However, he said that if you succeed it’s worth it because of the telco’s scale of business.
3. TrackNet Co-Founder and CEO Hardy Schmidbauer presented a 5-minute “fast pitch” to the Telecom Council Service Provider Forum. He talked about his company’s highly scalable LPWAN/IoT network solutions: “TrackNet provides LoRaWAN IoT solutions for consumers and industry, focusing on ease of use and scalability to enable a ‘new era’ of exponentially growing LPWAN deployments.” The company is a contributing member of the LoRa Alliance, and the TrackNet team has been instrumental in specifying, building, and establishing LoRaWAN and the LoRa Alliance for more than five years. The founding TrackNet team includes veterans from IBM and Semtech who were instrumental in the development of LoRa and LoRaWAN.
With “Tabs,” Tracknet combines a WiFi connected IoT home and tracker system with LoRaWAN network coverage built from indoor Tabs hubs.
About the Telecom Council: The Telecom Council of Silicon Valley connects the companies who are building communication networks, with the people and ideas that are creating them – by putting those companies, research, ideas, capital and human expertise from across the globe together in the same room. Last year, The Telecom Council connected over 2,000 executives from 750 telecom companies and 60 fixed and wireless carriers across 40 meeting topics. By joining, speaking, sponsoring, or simply participating in a meeting, there are many ways telecom companies of any size can leverage the Telecom Council network. For more information visit: https://www.telecomcouncil.com.
A follow up TC3 blog post will provide an update on project CORD (Central Office Re-architected as a Data Center) from the perspective of the Open Network Foundation (ONF) with panelists from AT&T and Verizon.
Editor’s Note: Why Single Vendor Solutions Dominate New Networking Technologies
There are no accredited standards for exposed interfaces or APIs* in SD-WANs, NFV “virtual appliances,” Virtual Network Functions (VNFs), and access to various cloud networking platforms (each cloud service provider has its own connectivity options and APIs). These so-called “open networking” technologies are in reality closed, single-vendor solutions. How could it be otherwise if there are no standards for multi-vendor interoperability within a given network?
In other words, “open” is the new paradigm for “closed” with vendor lock-in a given.
* The exception is Open Flow API between Control and Data planes-from ONF.
Yet Gartner Group argues in a new white paper (available free to clients, or to non-clients for $195) that IT end users should always adopt multi-vendor network architectures. This author strongly agrees, but that’s not the trend in today’s networking industry, especially for red-hot SD-WANs, where over two dozen vendors are each proposing a unique solution in the absence of standards for interoperability, or really anything else, within a single SD-WAN.
Yes, we know the Metro Ethernet Forum (MEF) has started working on SD-WAN policy and security orchestration across multiple provider SD-WAN implementations. They’ve also written a white paper, “Understanding SD-WAN Managed Services,” which defines SD-WAN fundamental characteristics and service components. However, neither MEF nor any other forum/standards body we know of is specifying functionality or interfaces for interoperability within a single SD-WAN.
Here are a few excerpts from the Gartner white paper:
“IT leaders should never rely on a single vendor for the architecture and products of their network, as it can lead to vendor lock-in, higher acquisition costs and technical constraints that limit agility. They should segment their network into logical blocks and evaluate multiple vendors for each.”
Vendors tend to promote end-to-end network architectures that lock clients with their solutions because they are focused on their business goals, rather than enterprise requirements.
Enterprises that make strategic network investments by embracing vendors’ architectures without first mapping their requirements often end up with solutions that are overhyped, over-engineered and more expensive.
Enterprises that do not create and actively maintain a competitive environment can overpay by as much as 50% for the same equipment from the same vendor. Savings can be even greater when compared with other vendors offering a functionally equivalent solution.
IT and Operations leaders focused on network planning should:
- Divide the network into foundational building blocks, defining how they interwork with each other, to enable multiple vendor options for each block.
- Remove proprietary components from the network, replacing them with industry standard elements as they are available, to facilitate new vendors to make competitive proposals.
- Get a technical solution that meets needs at the lowest market purchase price by competitively bidding on each building block.
- Ensure that operations can deal with multiple vendors by planning for network management solutions and processes that can cope with a multivendor environment.
Network technologies have matured in the last 20 years and are a routine component of every IT infrastructure. No vendor can claim a unique “core competency” or “best-of-breed” capabilities in every area of the network, so there is no reason to treat the network as a monolithic infrastructure entrusted to a single supplier. However, we regularly speak to clients that still give credit to the myth of the single-vendor network. They believe that having only one networking vendor provides the following advantages:
- There is no need to spend time designing a solution, as you simply get what leading vendors recommend.
- Products from the same vendor are designed to work seamlessly together, with limited or no integration challenges.
- The procurement process is simplified with only one vendor, and there's no need to deal with time-consuming, vendor-neutral RFPs.
- A higher volume of purchases with one vendor would result in a better discount.
- You only have a single vendor to hold accountable in case you run into problems, and one that will respond quickly given the loyalty and volume of purchases.
However, these perceived advantages are largely a myth, just as open networking and complete vendor freedom are myths. The harsh reality that we frequently hear from clients that followed this single-vendor strategy includes:
- Holistic designs recommended by vendors are not necessarily the best. They are often over-engineered, include products that are not aligned with enterprise needs and are ultimately more expensive to buy and maintain.
- Diverse product lines from the same vendor share the brand, but they are rarely designed to work together from the start, since they often come from independent business units or acquisitions, making them difficult to integrate.
- A higher volume of purchases does not automatically translate into better discounts. Most vendors reserve their best discounts for competitive situations, which generally yield savings of 15% to 50% compared with the best-negotiated sole-source deals.
- Having to deal with just one vendor for technical issues is simpler, but does not necessarily translate into shorter time to repair and better overall network availability, which is the real goal.
Clients that pursue a multivendor strategy report that time spent on RFPs and evaluation of different vendors is not a waste, because it increases teams’ skills, motivates them to stay abreast of market innovations, prevents suboptimal decisions and pays off — technically and financially.
Divide the Network Into Foundational Building Blocks to Enable Multiple Vendor Options for Each Block
Network planners and architects must break the network infrastructure into smaller, manageable blocks to plan, design and deploy a "fit-for-purpose" infrastructure that addresses the defined usage scenarios and controls costs (Figure 1 shows typical building blocks).
Note: Security is not addressed in this document. There is no hierarchy associated with block positioning in this figure.
Source: Gartner (October 2017)
The key objectives of this activity are to:
- Identify network blocks that have logical and well-defined boundaries.
- Document and standardize as much as possible the interfaces between the various building blocks, to allow choice and enable use of multiple vendors.
This building block approach is useful because not all network segments have the same properties. In some segments little differentiation exists among suppliers, and there is a high degree of substitution within a building block, so enterprises can seek operational and cost advantages. For example, wired LAN switching solutions for branch offices are largely commoditized, and the difference between vendors is hard to discern in the most common use cases.
In other cases, such as in the data center networking market, there is more differentiation among vendors, and the segmentation approach ensures that enterprise architectural decisions align with IT infrastructure strategies and business requirements.
There are no hard-and-fast industry rules on where the boundaries between blocks must be drawn. Each enterprise has to split the network infrastructure in a way that makes sense for it. The most common approach is segmentation around functional areas, such as data center leaf and spine switches, WAN edge, WAN connectivity, LAN core and LAN access. Each segment could be split further: for example, LAN access includes wired and wireless, while WAN edge might include WAN optimization and network security services. A complementary segmentation boundary is geography, as a large organization with subsidiaries in multiple locations could select different vendors on a regional or country basis for some blocks. Disaggregation is creating yet another possible segmentation, since hardware and software can be awarded to different vendors for some solutions, such as white-box Ethernet switching.
Defining building blocks also protects organizations from the “vendor creep” trap. As vendors acquire small companies and startups in adjacent markets, they often encourage enterprises to add these new products or capabilities to the “standardized” solution. If the enterprise defines its foundational requirements, it can easily determine whether the new functionality truly solves a business need, and whether any additional cost is warranted.
Remove Proprietary Components From the Network to Enable New Vendors to Make Competitive Proposals
Use of proprietary protocols and features inside the network limits the ability to segment the network into discrete blocks and makes this activity more difficult.
Within building blocks, it is acceptable to use proprietary technologies, as long as enterprises compare vendors against their business requirements (to avoid over-engineering) and the solution provides a real and indispensable functional advantage. It is important to express the business functionality as a requirement and not to tie requirements to specific proprietary technologies.
Between building blocks, it is critical to avoid proprietary features and use standards, since proprietary protocols favor using certain vendors and disfavor others, leading to loss of purchasing power. Sometimes it’s necessary to employ a proprietary protocol, for example:
- To obtain functionality that uniquely meets a critical business need. If so, these protocols must be reviewed regularly and not automatically propagated into new buying criteria over the long term.
- In the early stages of market development, before standards have caught up to innovation. However, once standards exist, or the technology has started to move down the commodity curve, network architects and planners should migrate to standards-based solutions (as long as business requirements aren't compromised). Examples of industry standards that replaced earlier proprietary solutions are Power over Ethernet Plus (PoE+) and Virtual Router Redundancy Protocol (VRRP) (see Note 1).
In these cases it is essential to document and motivate the exception, so that it can be periodically reviewed. Proprietary technologies should always be avoided in the interface between the network and other components of IT infrastructure (for example, proprietary trunking to connect servers to the data center network).
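The exception-documentation practice described above is easiest to enforce when each approved proprietary-protocol exception is recorded as structured data with a mandatory review date. A minimal sketch in Python; the register entries and field names are hypothetical, not from the source:

```python
from datetime import date

# Hypothetical register of approved proprietary-protocol exceptions,
# each carrying a business justification and a review deadline.
exceptions = [
    {
        "block": "lan_core",
        "protocol": "vendor-proprietary switch stacking",
        "justification": "no standards-based equivalent meets the resiliency requirement",
        "approved": date(2017, 10, 1),
        "review_by": date(2018, 10, 1),
    },
]

def due_for_review(register, today):
    """Return exceptions whose review deadline has passed, so they are re-justified or retired."""
    return [e for e in register if e["review_by"] <= today]

overdue = due_for_review(exceptions, date(2019, 1, 1))
print([e["protocol"] for e in overdue])
```

Whatever form the register takes, the essential properties are the ones shown: every exception names its business justification, and every exception expires unless it is deliberately renewed.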
Get a Technical Solution That Meets Needs at the Lowest Market Purchase Price by Competitively Bidding on Each Building Block
Dividing the network provides a clear definition of what is really needed within each building block, which in turn enables a fit-for-purpose approach and a competitive bidding process.
The goal is not to bid on the best technical solution for each block, but on one that is good enough to meet requirements.
This enables real competition across vendors and provides maximum price leverage, since all value-adds to the common denominator can be evaluated separately and matched with the cost difference.
By introducing competition in this thoughtful manner, Gartner has seen clients typically achieve sustained savings of between 10% and 30%, with price differences of as much as 300% on specific components such as optical transceivers.
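When reading figures like these, it helps to keep price differences and savings percentages distinct: a component quoted at 4x a competitive price (a 300% price difference) corresponds to 75% savings, not 300%. The arithmetic, with hypothetical quote values for illustration:

```python
def savings_pct(incumbent_price, competitive_price):
    """Savings relative to the incumbent quote, as a percentage."""
    return round((incumbent_price - competitive_price) / incumbent_price * 100, 1)

# Hypothetical optical transceiver quotes: incumbent OEM vs. competitive bid
print(savings_pct(400, 100))  # 4x price difference -> 75.0% savings
print(savings_pct(100, 70))   # -> 30.0% savings, the typical upper bound cited above
```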
Discern the Relationships Between Networking Vendors and Network Management Vendors
You may also find that networking vendors have partnerships with certain vendors that specialize in network management. It is therefore valuable to understand the terms of any partner agreement and whether they can be leveraged to your organization's benefit.
Editor’s Closing Comment:
The advice provided above by Gartner seems very reasonable and mitigates the risk of relying on a single vendor for a network or sub-network. Given that, how can any network operator or enterprise networking customer justify the single-vendor SD-WAN solutions that are proliferating today?
Readers are invited to comment in the box below the article (can be anonymous) or contact the author directly (email@example.com).
Two years ago, we reported that “Verizon has completed a field trial of NG-PON2 fiber-to-the-premises technology that could provide the infrastructure for download speeds up to 10 Gbps for residential and business customers.”
This past January, Verizon completed its first interoperability trial of NG-PON2 technology at its Verizon Labs location in Waltham, MA. During the trial, Verizon demonstrated that equipment from different vendors on each end of a single fiber, one at the service provider's end and one at the customer premises, can deliver service without any end-user impact.
In an October 16th press release in advance of the Broadband Forum's Access Summit, Verizon said NG-PON2 represents a paradigm shift in the access space and a more certain path toward long-term success.
“Technologies such as NG-PON2 present exciting new opportunities for vendors, such as delivering residential and business services on multiple wavelengths over the same fiber,” said Vincent O’Byrne, Director of Technology at Verizon.
“Not only does NG-PON2 parse business and residential customer traffic to isolate and resolve potential problems in the network, it can also scale to achieve speeds of 40 Gbps and above,” O’Byrne added.
At the Broadband Forum's Access Summit, held during the Broadband World Forum at the Messe Berlin on Tuesday, Oct. 24th, the Verizon executive will address how the fiber access space is constantly evolving, with emerging PON technology providing solutions to some of the issues around cost and reliability.
Verizon has been an active participant in driving awareness about how NG-PON2 can work in a real-world carrier environment. The company completed NG-PON2 interoperability testing with five vendors for its OpenOMCI (ONT Management and Control Interface) spec, bringing it one step closer toward achieving interoperable NG-PON2 systems.
The mega telco plans to offer its own OpenOMCI specification, which defines the optical line terminal (OLT)-to-optical network terminal (ONT) interface, to the larger telecom industry.
Note 1. The OpenOMCI specification was developed and is owned by Verizon, rather than by a formal standards/spec-writing body like the ITU-T or the Optical Internetworking Forum (OIF). Is this the new way of producing specs (like "5G" used in trials)?
Bernd Hesse, Chair of the Broadband Access Summit and Senior Director Technology Development at Calix, said:
“We will be exploring NG-PON2 in depth and the use cases that underpin the decisions to deploy them. I look forward to the debate, hearing from the experts in the industry and welcoming the community to these new Forum events.”