MTN Consulting: Network operator capex forecast at $520B in 2025

Executive Summary:

Telco, webscale and carrier-neutral capex will total $520 billion by 2025, according to a report from MTN Consulting. That compares with $420 billion in 2019.

  • Telecom operators (telcos) will account for 53% of industry capex by 2025, down from 69% in 2019;
  • Webscale operators will grow from 25% to 39%;
  • Carrier-neutral providers [1] will account for 8% of total capex in 2025, up from 6% in 2019.

Note 1. A carrier-neutral data center is a data center (or carrier hotel) that allows interconnection between multiple telecommunication carriers and/or colocation providers. It is not owned and operated by a single ISP; instead, it offers a wide variety of connection options to its colocation customers.

Adequate power density, efficient use of server space, physical and digital security, and cooling systems are some of the key attributes organizations look for in a colocation center. Some facilities distinguish themselves from others by offering additional benefits like smart monitoring, scalability, and additional on-site security.

……………………………………………………………………………………………………………………………………………………….

The number of telco employees will decrease from 5.1 million in 2019 to 4.5 million in 2025 as telcos deploy automation more widely and spin off parts of their network to the carrier-neutral sector.

By 2025, the webscale sector will dominate with revenues of approximately $2.51 trillion, followed by $1.88 trillion for the telco sector and $108 billion for carrier-neutral operators (CNNOs).

KEY FINDINGS from the report:

Revenue growth for the telco, webscale and carrier-neutral sectors will average 1%, 10% and 7%, respectively, through 2025

Telecom network operator (TNO, or telco) revenues are on track for a significant decline in 2020, with the industry hit by COVID-19 even as webscale operators (WNOs) experienced yet another growth surge as much of the world was forced to work and study from home. For 2020, telco, webscale, and carrier-neutral revenues are likely to reach $1.75 trillion (T), $1.63T, and $71 billion (B), amounting to YoY growth of -3.7%, +12.2%, and 5.0%, respectively. Telcos will recover and webscale will slow down, but this range of growth rates will persist for several years. By 2025, the webscale sector will dominate with revenues of approximately $2.51 trillion, followed by $1.88 trillion for the telco sector and $108 billion for carrier-neutral operators (CNNOs).
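As a quick sanity check (our arithmetic, not MTN Consulting's), the compound annual growth rates implied by the 2020 and 2025 revenue figures above can be computed directly, and they land broadly in line with the 1/10/7% averages cited in the heading:

    # Sanity check (our arithmetic, not MTN Consulting's): implied
    # 2020-2025 CAGRs from the revenue endpoints quoted above.
    def cagr(start, end, years):
        """Compound annual growth rate between two values."""
        return (end / start) ** (1 / years) - 1

    sectors = {
        "telco":           (1.75e12, 1.88e12),  # $1.75T -> $1.88T
        "webscale":        (1.63e12, 2.51e12),  # $1.63T -> $2.51T
        "carrier-neutral": (71e9,    108e9),    # $71B  -> $108B
    }
    for name, (rev_2020, rev_2025) in sectors.items():
        print(f"{name:>16}: {cagr(rev_2020, rev_2025, 5):+.1%}")
    # telco ~+1.4%, webscale ~+9.0%, carrier-neutral ~+8.8%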

Network operator capex will grow to $520B by 2025

In 2019, telco, webscale and carrier-neutral capex totaled $420 billion, a total which is set to grow to $520 billion by 2025. The composition will change starkly, though: telcos will account for 53% of industry capex by 2025, down from 69% in 2019; webscale operators will grow from 25% to 39% in the same timeframe; and carrier-neutral providers will account for 8% of total capex in 2025, up from their 2019 level of 6%.

By 2025, the webscale sector will employ more people than the telecom industry

As telcos deploy automation more widely and cast off parts of their network to the carrier-neutral sector, their employee base should decline from 5.1 million in 2019 to 4.5 million in 2025. The cost of the average telco employee will rise significantly in the same timeframe, as telco staff will require many of the same software and IT skills currently prevalent in the webscale workforce. For their part, webscale operators have already grown from 1.3 million staff in 2011 to 2.8 million in 2019, and continued rapid growth in the sector (especially its ecommerce arms) will spur further growth in employment, to roughly 4.8 million by 2025. The carrier-neutral sector’s headcount will grow far more modestly, rising from roughly 90,000 in 2019 to about 119,000 in 2025. Managing physical assets like towers tends to involve a far lighter human touch than managing network equipment and software.

(Image: example of a carrier-neutral colocation data center)

RECOMMENDATIONS:

Telcos: embrace collaboration with the webscale sector

Telcos remain constrained at the top line and will stay in the “running to stand still” mode that has characterized their last decade. They will continue to shift towards more software-centric operations and automation of networks and customer touch points. What will become far more important is for telcos to actively collaborate with webscale operators and the carrier-neutral sector in order to operate profitable businesses. The webscale sector is now actively targeting telecom as a vertical market. Successful telcos will embrace the new webscale offerings to lower their network costs, digitally transform their internal operations, and develop new services more rapidly. Using the carrier-neutral sector to minimize the money and time spent on building and operating physical assets not viewed as strategic will be another key to success through 2025.

Vendors: to survive you must improve your partnership and integration capabilities

Collaboration across the telco/webscale/carrier-neutral segments has implications for how vendors serve their customers. Some of the biggest telcos will source much of their physical infrastructure from carrier-neutral providers and lean heavily on webscale partners to manage their clouds and support new enterprise and 5G services. Yet telcos spend next to nothing on R&D, especially when compared to the 10% or more of revenues spent on R&D by their vendors and the webscale sector. Vendors who develop customized offerings for telcos in partnership with either their internal cloud divisions (e.g. Oracle, HPE, IBM) or AWS/GCP/Azure/Alibaba will have a leg up. This is not just good for growing telco business, but also for helping webscale operators pursue 5G-based opportunities. One of the earliest examples of a traditional telco vendor aligning with a cloud player for the telco market is NEC’s 2019 development of a mobile core solution for the cloud that can be operated on the AWS network; there will be many more such partnerships going forward.

All sectors: M&A is often not the answer, despite what the bankers urge

M&A will be an important part of the network infrastructure sector’s evolution over the next 5 years. However, the difficulty of successfully executing and integrating a large transaction is almost always underappreciated. There is incredible pressure from bankers to choose M&A, and the best ones are persuasive in arguing that M&A is the best way to improve your competitiveness, enter a new market, or lower your cost base. Many chief executives love to make the big announcements and take credit for bringing the parties together. But making the deal actually work in practice falls to staff way down the chain of command, and to customers’ willingness to cope with the inevitable hiccups and delays brought about by the transaction. And the bankers are long gone by then, busy spending their bonuses and working on their next deal pitch. Be extremely skeptical about M&A. Few big tech companies have a history of doing it well.

Webscale: stop abusing privacy rights and trampling on rules and norms of fair competition

The big tech companies that make up the webscale sector tracked by MTN Consulting have rightly been pilloried in the press recently for their disregard for consumer privacy rights and their overly aggressive, anti-competitive practices. After years of avoiding increased regulatory oversight through aggressive lobbying and careful brand management, the chickens are coming home to roost in 2021. Public concerns about abuses of privacy, facilitation of fake news, and monopolistic or (at the least) oligopolistic behavior will make it nearly impossible for these companies to stem the increased oversight likely to come soon from policymakers.

Australia’s pending law, the “News Media and Digital Platforms Bargaining Code,” could foreshadow things to come for the webscale sector, as do recent antitrust lawsuits against Facebook and Alphabet. Given that webscale companies are supposed to be fast moving and innovative, they should get out ahead of these problems. They need to implement wholesale, transparent changes to how they treat consumer privacy and commit to (and actually follow) a code of conduct that is conducive to innovation and competition. The billionaires leading the companies may even consider encouraging fairer tax codes so that some of their excessive wealth can be spread across the countries that actually fostered their growth.

ABOUT THIS REPORT:

This report presents MTN Consulting’s first annual forecast of network operator capex. The scope includes telecommunications, webscale and carrier-neutral network operators. The forecast presents revenue, capex and employee figures for each market, both historical and projected, and discusses the likely evolution of the three sectors through 2025. In the discussion of the individual sectors, some additional data series are projected and analyzed; for example, network operations opex in the telco sector. The forecast report presents a baseline, most likely case of industry growth, taking into account the significant upheaval in communication markets experienced during 2020. Based on our analysis, we project that total network operator capex will grow from $420 billion in 2019 to $520 billion in 2025, driven by substantial gains in the webscale and (much less so) carrier-neutral segments. The primary audience for the report is technology vendors, with telcos and webscale/cloud operators a secondary audience.

References:

Network operator capex to hit $520B in 2025

Carrier Neutral Colocation Data Centers

………………………………………………………………………………………………………………………………………………….

January 8, 2021 Update:

Analysys Mason: Cloud technology will pervade the 5G mobile network, from the core to the RAN and edge

“Communications Service Providers (CSPs) spending on multi-cloud network infrastructure software, hardware and professional services will grow from USD4.3 billion in 2019 to USD32 billion by 2025, at a CAGR of 40%.”

5G and edge computing are spurring CSPs to build multi-cloud, cloud-native mobile network infrastructure
Many CSPs acknowledge the need to use cloud-native technology to transform their networks into multi-cloud platforms in order to maximise the benefits of rolling out 5G. Traditional network function virtualisation (NFV) has only partly enabled the software-isation and disaggregation of the network, and as such, limited progress has been made on cloudifying the network to date. Indeed, Analysys Mason estimates that network virtualisation reached only 6% of its total addressable market for mobile networks in 2019.

The telecoms industry is now entering a new phase of network cloudification because 5G calls for ‘true’ clouds that are defined by cloud-native technologies. This will require radical changes to the way in which networks are designed, deployed and operated, and we expect that investments will shift to support this new paradigm. The digital infrastructure used for 5G will be increasingly built as horizontal, open network platforms comprising multiple cloud domains such as mobile core cloud, vRAN cloud and network and enterprise edge clouds. As a result, we have split the spending on network cloud into spending on multiple cloud domains (Figure 1) for the first time in our new network cloud infrastructure report. We forecast that CSP spending on multi-cloud network infrastructure software, hardware and professional services will grow from USD4.3 billion in 2019 to USD32 billion by 2025, at a CAGR of 40%.
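A quick back-of-the-envelope check (ours, not Analysys Mason's) confirms the quoted endpoints and CAGR are mutually consistent: USD 4.3 billion compounding at roughly 40% per year over the six years from 2019 to 2025 yields about USD 32 billion.

    # Back-of-the-envelope check of the quoted forecast (our arithmetic):
    # USD 4.3B (2019) -> USD 32B (2025) implies a ~40% CAGR over 6 years.
    implied_cagr = (32 / 4.3) ** (1 / 6) - 1
    print(f"implied CAGR: {implied_cagr:.1%}")      # ~39.7%
    print(f"2025 spend: {4.3 * 1.40 ** 6:.1f}B")    # ~32.4B at exactly 40%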

https://www.analysysmason.com/research/content/comments/network-cloud-forecast-comment-rma16/

Synergy Research: Hyperscale Operator Capex at New Record in Q3-2020

Hyperscale cloud operator capex topped $37 billion in Q3-2020, which easily set a new quarterly record for spending, according to Synergy Research Group (SRG). Total spending for the first three quarters of 2020 reached $99 billion, which was a 16% increase over the same period last year.

Synergy Research Group’s latest data found that cloud service provider capex that was specifically targeted at data centers in the first three quarters increased by 18% compared to 2019.

The top four hyperscale spenders in the first three quarters of this year were Amazon, Google, Microsoft and Facebook. Their combined spending easily exceeded that of all the other hyperscale operators. The next biggest cloud spenders were Apple, Alibaba, Tencent, IBM, JD.com, Baidu, Oracle, and NTT.

SRG’s data found that capex growth was particularly strong across Amazon, Microsoft, Tencent and Alibaba while Apple’s spend dropped off sharply and Google’s also declined.

(Chart: hyperscale operator capex, Q3 2020)

Much of the hyperscale capex goes towards building, expanding and equipping huge data centers, which grew in number to 573 at the end of Q3. The hyperscale data is based on analysis of the capex and data center footprint of 20 of the world’s major cloud and internet service firms, including the largest operators in IaaS, PaaS, SaaS, search, social networking and e-commerce. In aggregate these twenty companies generated revenues of over $1.1 trillion in the first three quarters of the year, up 15% from 2019.

“As expected the hyperscale operators are having little difficulty weathering the pandemic storm. Their revenues and capex have both grown by strong double-digit amounts this year and this has flowed down to strong growth in spending on data centers, up 18% from 2019,” said John Dinsdale, a Chief Analyst at Synergy Research Group. “They generate well over 80% of their revenues from cloud, digital services and online activities, all of which have seen COVID-19 related boosts. As these companies go from strength to strength they need an ever-larger footprint of data centers to support their rapidly expanding digital activities. This is good news for companies in the data center ecosystem who can ride along in the slipstream of the hyperscale operators.”

Separately, Google Cloud announced it is set to add three new ‘regions,’ which provide faster and more reliable services in targeted locations, to its global footprint. The new regions in Chile, Germany and Saudi Arabia will take the total to 27 for Google Cloud.

About Synergy Research Group:

Synergy provides quarterly market tracking and segmentation data on IT and Cloud related markets, including vendor revenues by segment and by region. Market shares and forecasts are provided via Synergy’s uniquely designed online database tool, which enables easy access to complex data sets. Synergy’s CustomView ™ takes this research capability one step further, enabling our clients to receive on-going quantitative market research that matches their internal, executive view of the market segments they compete in.

Synergy Research Group helps marketing and strategic decision makers around the world via its syndicated market research programs and custom consulting projects. For nearly two decades, Synergy has been a trusted source for quantitative research and market intelligence. Synergy is a strategic partner of TeleGeography.

To speak to an analyst or to find out how to receive a copy of a Synergy report, please contact [email protected] or 775-852-3330 extension 101.

References:

https://www.srgresearch.com/articles/hyperscale-operator-capex-sets-new-record-reaches-100-billion-in-first-three-quarters

Internet traffic spikes under “stay at home”; Move to cloud accelerates

With worldwide coronavirus-induced “stay at home/shelter in place” orders, almost everyone who has high-speed internet at home is using a lot more bandwidth for video conferences and streaming. How is the Internet holding up against the huge increase in data/video traffic? We focus this article on U.S. Internet traffic since the stay-at-home orders went into effect in late March.

………………………………………………………………………………………..

Sidebar:  North America has only 7.6% of world’s Internet users:

Percentage of Internet users by continent/region.

………………………………………………………………………………………..

According to Eric Savitz of Barron’s, the U.S. networks are handling the traffic spikes without any major hiccups. In a call this past week with reporters, Comcast, the largest U.S. internet service provider, said that its network is working well, with tests done 700,000 times a day through customer modems showing average speeds running 110% to 115% of contracted rates. Overall peak traffic is up 32% on the network, with some areas up 60%, in particular around Seattle and the San Francisco Bay area, where lockdowns were put in place before they were in most of the rest of the country. In both Seattle and San Francisco, peak traffic volumes are plateauing, suggesting a new normal.

While Comcast said its peak internet traffic has increased 32 percent since the start of March, total traffic remains within the overall capacity of its network. The increase in people working at home has shifted the downstream peak to earlier in the evening, while upload traffic is growing during the day in most cities. Tony Werner, head of technology at Comcast Cable, says it has a long-term strategy of adding network capacity 12 to 18 months ahead of expected peaks. He says that approach has given Comcast the ability to smoothly absorb the added traffic. The company hasn’t requested that video providers or anyone else limit their traffic.

AT&T, the second largest U.S. internet service provider, likewise asserts that its network is performing “very well” during the pandemic. This past Wednesday, it said, core traffic, including business, home broadband, and wireless, was up 18% from the same day last month. Wireless voice minutes were up 41%, versus the average Wednesday; consumer home voice minutes rose 57%, and WiFi calling was up 105%.

Over the past three weeks, the company has seen new usage patterns on its mobile network, with voice calls up 33% and instant messaging up 63%, while web browsing is down 5% and email is off 18%.

Verizon also says its network is handling the traffic well. One telling stat: the carrier says that mobile handoffs, the shifting of sessions from one cell site to another as users move around, are down 53% in the New York metro area and 29% nationally; no one is going anywhere. More on Verizon’s COVID-19 initiatives here.

In the United States prior to the coronavirus, total home internet traffic averaged about 15% on weekdays. It started growing in mid-March, and by late March it had reached about 35%, clearly connected to all the working and learning from home under stay-at-home orders.

“The data suggests remote working will remain elevated in the U.S. for a prolonged period of time,” wrote Cowen analysts.

Craig Moffett of MoffettNathanson said, “The cable companies are simply digital infrastructure providers. They are agnostic about how you can get your video content. And the broadband business is going to be just fine.”

“Our broadband connections are becoming our lifelines – figuratively and literally: we are using them to get news, connect to our work environments (now all virtual), and for entertainment too,” wrote Craig Labovitz, CTO for Nokia’s Deepfield portfolio, in a blog post.

………………………………………………………………………………………..

Enterprise IT Accelerates Move to Cloud:

One takeaway from this extended, forced stay at home period is that, more than ever, corporate IT (think enterprise computing and storage) is moving to the cloud.  We’ve previously reported on this mega-trend in an IEEE techblog post noting the delay in 5G roll-outs.  In particular:

Now the new (5G) technology faces an unprecedented slowdown in launching and expanding pilot deployments. Why? It’s because of the stay at home/shelter in place orders all over the world. Non-essential businesses are closed and manufacturing plants have been idled. Also, why do you need a mobile network if you’re at home 95% of the time?

One reason to deploy 5G is to offload data (especially video) traffic from congested 4G-LTE networks. But just like physical roads and highways, those 4G networks have experienced less traffic since the virus took hold. People confined to their homes need wired broadband and Wi-Fi, NOT 4G and 5G mobile access.

David Readerman of Endurance Capital Partners, a San Francisco-based tech hedge fund, told Barron’s: “What’s certainly being reinforced right now is that cloud-based information-technology architecture is providing agility and resiliency for companies to operate dispersed workforces.”

Readerman says the jury is out on whether there’s a lasting impact on how we work, but he adds that contingency planning now requires the ability to work remotely for extended periods.

On March 27th, the Wall Street Journal reported:

Cloud-computing providers are emerging as among the few corporate winners in the coronavirus pandemic as office and store closures across the U.S. have pushed more activity online.

The remote data storage and processing services provided by Amazon.com Inc., Microsoft Corp., Google and others have become the essential link for many people to remain connected with work and families, or just to unwind.

The hardware and software infrastructure those tech giants and others provide, commonly referred to as the cloud, underpins the operation of businesses that have become particularly popular during the virus outbreak, such as workplace collaboration software provider Slack, streaming video service company Netflix Inc. and online video game maker Epic Games Inc.

Demand has been so strong that Microsoft has told some customers its Azure cloud is running up against limits in parts of Australia.

“Due to increased usage of Azure, some regions have limited capacity,” the software giant said, adding it had, in some instances, placed restrictions on new cloud-based resources, according to a customer notice seen by The Wall Street Journal.

A Microsoft spokesman said the company was “actively monitoring performance and usage trends” to support customers and growth demands. “At the same time,” he said, “these are unprecedented times and we’re also taking proactive steps to plan for these high-usage periods.”

“If we think of the cloud as utility, it’s hard to imagine any other public utility that could sustain a 50% increase in utilization—whether that’s electric or water or sewage system—and not fall over,” Matthew Prince, chief executive of cloud-services provider Cloudflare Inc. said in an interview. “The fact that the cloud is holding up as well as it has is one of the real bright spots of this crisis.”

The migration to the cloud has been happening for about a decade as companies have opted to forgo costly investments into in-house IT infrastructure and instead rent processing hardware and software from the likes of Amazon or Microsoft, paying as they go for storage and data processing features. The trends have made cloud-computing one of the most contested battlefields among business IT providers.

“If you look at Amazon or Azure and how much infrastructure usage increased over the past two weeks, it would probably blow your mind how much capacity they’ve had to spin up to keep the world operating,” said Dave McJannet of HashiCorp Inc., which provides tools for both cloud and traditional servers. “Moments like this accelerate the move to the cloud.”

In a message to rally employees, Andy Jassy, head of Amazon’s Amazon Web Services (AWS) cloud division, urged them to “think about all of the AWS customers carrying extra load right now because of all of the people at home.”

Brad Schick, chief executive of Seattle-based Skytap Inc., which works with companies to move existing IT systems to the cloud, has seen a 20% jump in use of its services in the past month. “A lot of the growth is driven by increased usage of the cloud to deal with the coronavirus.”

For many companies, one of the attractions of cloud services is they can quickly rent more processing horsepower and storage when it is needed, but can scale back during less busy periods. That flexibility also is helping drive cloud-uptake during the coronavirus outbreak, said Nikesh Parekh, CEO and cofounder of Seattle-based Suplari Inc., which helps companies manage their spending with outside vendors such as cloud services.

“We are starting to see CFOs worry about their cash positions and looking for ways to reduce spending in a world where revenue is going to decline dramatically over the next quarter or two,” he said. “That will accelerate the move from traditional suppliers to the cloud.”

Dan Ives of Wedbush opines that the coronavirus pandemic is a “key turning point” around deploying cloud-driven and remote-learning environments.  As a majority of Americans are working or learning from home amid federal social distancing measures, Ives’ projections of moving 55% of workloads to the cloud by 2022 from 33% “now look conservative as these targets could be reached a full year ahead of expectations given this pace,” he said. He also expects that $1 trillion will be spent on cloud services over the next decade, benefiting companies such as Microsoft and Amazon.


…………………………………………………………………………………………………………..

 

Synergy Research Group: Hyperscale Data Center Count > 500 as of 3Q-2019

New data from Synergy Research Group shows that the total number of large data centers operated by hyperscale providers increased to 504 at the end of the third quarter, having tripled since the beginning of 2013. The EMEA and Asia-Pacific regions continue to have the highest growth rates, though the US still accounts for almost 40% of the major cloud and internet data center sites.

The next most popular locations are China, Japan, the UK, Germany and Australia, which collectively account for another 32% of the total. Over the last four quarters new data centers were opened in 15 different countries with the U.S., Hong Kong, Switzerland and China having the largest number of additions. Among the hyperscale operators, Amazon and Microsoft opened the most new data centers in the last twelve months, accounting for over half of the total, with Google and Alibaba being the next most active companies. Synergy research indicates that over 70% of all hyperscale data centers are located in facilities that are leased from data center operators or are owned by partners of the hyperscale operators.

(Chart: hyperscale data center count, Q3 2019)

……………………………………………………………………………………………………………………………………………………………………………………………………

Backgrounder:

One vendor in the data center equipment space recently called hyperscale “too big for most minds to envision.” Scalability has always been about creating opportunities to do small things using resources that happen to encompass a very large scale.

IDC, which provides research and advisory services to the tech industry, classifies any data center with at least 5,000 servers and 10,000 square feet of available space as hyperscale, but Synergy Research Group focuses less on physical characteristics and more on “scale-of-business criteria” that assess a company’s cloud, e-commerce, and social media operations.

A hyperscale data center is to be distinguished from a multi-tenant data center: the former is owned and operated by a mega cloud provider (Amazon, Microsoft, Google, Alibaba, etc.), while the latter is owned and operated by a real estate company that leases cages to tenants who supply their own IT equipment.
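To make the size-based IDC test concrete, here is a trivial sketch; the two thresholds are the IDC criteria quoted above, while the function and sample values are ours for illustration (Synergy's qualitative scale-of-business criteria are not modeled):

    # Illustrative only: IDC's size-based hyperscale test as quoted above
    # (>= 5,000 servers and >= 10,000 sq ft of available space).
    def is_hyperscale_idc(servers, floor_space_sqft):
        return servers >= 5_000 and floor_space_sqft >= 10_000

    print(is_hyperscale_idc(60_000, 250_000))  # True  -- clearly hyperscale
    print(is_hyperscale_idc(3_000, 40_000))    # False -- big, but not hyperscale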

A hyperscale data center accomplishes the following functions:

  • Maximizes cooling efficiency. The largest operational expense in most data centers worldwide — more so than powering the servers — is powering the climate control systems. A hyperscale structure may be partitioned to compartmentalize high-intensity computing workloads, and concentrate cooling power on the servers hosting those workloads. For general-purpose workloads, a hyperscale architecture optimizes airflow throughout the structure, ensuring that hot air flows in one direction (even if it’s a serpentine one) and often reclaiming the heat from that exhaust flow for recycling purposes.
  • Allocates electrical power in discrete packages. In facilities designed to be occupied by multiple tenants, “blocks” are allocated like lots in a housing development. Here, the racks that occupy those blocks are allocated a set number of kilowatts — or, more recently, fractions of megawatts — from the main power supply. When a tenant leases space from a colocation provider, that space is often phrased not in terms of numbers of racks or square footage, but kilowatts. A design that’s more influenced by hyperscale helps ensure that kilowatts are available when a customer needs them.
  • Ensures electricity availability. Many enterprise data centers are equipped with redundant power sources (engineers call this configuration 2N), often backed up by a secondary source or generator (2N + 1). A hyperscale facility may utilize one of these configurations as well, although in recent years, workload management systems have made it feasible to replicate workloads across servers, making the workloads redundant rather than the power, reducing electrical costs. As a result, newer data centers don’t require all that power redundancy. They can get away with just N + 1, saving not just equipment costs but building costs as well.
  • Balances workloads across servers. Because heat tends to spread, one overheated server can easily become a nuisance for the other servers and network gear in its vicinity. When workloads and processor utilization are properly monitored, the virtual machines and/or containers housing high-intensity workloads may be relocated to, or distributed among, processors that are better suited to their functions, or that are simply less utilized at the moment. Even distribution of workloads directly correlates with temperature reduction, so how a data center manages its software is just as important as how it maintains its support systems (a toy illustration follows this list).
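Here is the toy illustration promised above: a deliberately simplified sketch of the heat-aware balancing idea, not any vendor's actual scheduler. All names and numbers are invented.

    # Toy illustration of heat-aware workload balancing: move the most
    # CPU-intensive workload off the hottest server onto the coolest one,
    # since evenly distributed load correlates with lower spot temperatures.
    servers = {
        "rack1-s1": {"temp_c": 41.0, "vms": {"db": 85, "web": 30}},
        "rack1-s2": {"temp_c": 27.0, "vms": {"batch": 20}},
        "rack2-s1": {"temp_c": 33.0, "vms": {"cache": 55}},
    }

    hottest = max(servers, key=lambda s: servers[s]["temp_c"])
    coolest = min(servers, key=lambda s: servers[s]["temp_c"])

    # Pick the busiest VM on the hottest server and migrate it.
    vm = max(servers[hottest]["vms"], key=servers[hottest]["vms"].get)
    cpu = servers[hottest]["vms"].pop(vm)
    servers[coolest]["vms"][vm] = cpu
    print(f"migrated {vm} ({cpu}% CPU) from {hottest} to {coolest}")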

References:

https://www.zdnet.com/article/how-hyperscale-data-centers-are-reshaping-all-of-it/

https://www.vxchnge.com/blog/rise-of-hyperscale-data-centers

………………………………………………………………………………………………………………………………………………………………………………………………………..

Synergy’s research is based on an analysis of the data center footprint of 20 of the world’s major cloud and internet service firms, including the largest operators in SaaS, IaaS, PaaS, search, social networking, e-commerce and gaming. The companies with the broadest data center footprint are the leading cloud providers – Amazon, Microsoft, Google and IBM. Each has 60 or more data center locations with at least three in each of the four regions – North America, APAC, EMEA and Latin America. Oracle also has a notably broad data center presence. The remaining firms tend to have their data centers focused primarily in either the US (Apple, Facebook, Twitter, eBay, Yahoo) or China (Alibaba, Baidu, Tencent).

“There were more new hyperscale data centers opened in the last four quarters than in the preceding four quarters, with activity being driven in particular by continued strong growth in cloud services and social networking,” said John Dinsdale, a Chief Analyst and Research Director at Synergy Research Group.

“This is good news for wholesale data center operators and for vendors supplying the hardware that goes into those data centers. In addition to the 504 current hyperscale data centers we have visibility of a further 151 that are at various stages of planning or building, showing that there is no end in sight to the data center building boom.”

Reference:

https://www.srgresearch.com/articles/hyperscale-data-center-count-passed-500-milestone-q3

…………………………………………………………………………………………………………………………………………………………………………………………………………


 

 

Verizon Software-Defined Interconnect: Private IP network connectivity to Equinix global DCs

Verizon today announced the launch of Software-Defined Interconnect (SDI), a solution that works with Equinix Cloud Exchange Fabric™ (ECX Fabric™), offering organizations with a Private IP network direct connectivity to 115 Equinix International Business Exchange™ (IBX®) data centers (DCs) around the globe within minutes.

Verizon claims its new Private IP service [1] provides a faster, more flexible alternative to traditional interconnectivity, which requires costly buildouts, long lead times, complex provisioning and often truck rolls: APIs are used to automate connections and, often, reduce costs, Verizon boasts. The telco said in a press release:

SDI addresses the longstanding challenges associated with connecting premises networks to colocation data centers. To do this over traditional infrastructure requires costly build-outs, long lead times and complex provisioning. The SDI solution leverages an automated Application Program Interface (API) to quickly and simply integrate pre-provisioned Verizon Private IP bandwidth via ECX Fabric, while eliminating the need for dedicated physical connectivity. The result is to make secure colocation and interconnection faster and easier for customers to implement, often at a significantly lower cost.
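Verizon has not published the SDI API itself, so the snippet below is a purely hypothetical sketch of what API-driven provisioning of a pre-provisioned interconnect could look like; the endpoint, field names and token are invented for illustration:

    # HYPOTHETICAL sketch only -- Verizon has not published SDI's API, so
    # the endpoint, fields and token below are invented. The point: a
    # connection to a colocation facility becomes one authenticated API
    # call instead of a physical build-out with long lead times.
    import requests

    API = "https://api.example-telco.com/sdi/v1"  # invented base URL

    payload = {
        "service": "private-ip",
        "data_center": "EQUINIX-LA4",   # an ECX Fabric location
        "bandwidth_mbps": 500,          # drawn from pre-provisioned capacity
        "vlan_id": 1201,                # IEEE 802.1Q tag for the virtual circuit
    }
    resp = requests.post(f"{API}/connections", json=payload,
                         headers={"Authorization": "Bearer <token>"})
    resp.raise_for_status()
    print(resp.json()["connection_id"])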

Note 1.  Private IP is an MPLS-based VPN service that provides a simple network designed to grow with your business and help you consolidate your applications into a single network infrastructure. It gives you dedicated, secure connectivity that helps you adapt to changing demands, so you can deliver a better experience for customers, employees and partners.

Private IP uses Layer 3 networking to connect locations virtually rather than physically. That means you can exchange data among many different sites using Permanent Virtual Connections through a single physical port. Our MPLS-based VPN solution combines the flexibility of IP with the security and reliability of proven network technologies.

……………………………………………………………………………………………………………

“SDI is an addition to our best-in-class software-defined suite of services that can deliver performance ‘at the edge’ and support real-time interactions for our customers,” said Vickie Lonker, vice president of product management and development for Verizon. “Think about how many devices are connected to data centers, the amount of data generated, and then multiply that when 5G becomes ubiquitous. Enabling enterprises to virtually connect to Verizon’s private IP services by coupling our technology with the proven ECX Fabric makes it easy to provision and manage data-intensive network traffic in real time, lifting a key barrier to digital transformation.”

Verizon’s Private IP MPLS network is seeing high double-digit traffic growth year-over-year, and the adoption of colocation services continues to proliferate as more businesses grapple with complex cloud deployments to achieve greater efficiency, flexibility and additional functionality in data management.

“Verizon’s new Software Defined Interconnect addresses one of the leading issues for organizations by improving colocation access. This offer facilitates a reduction in network and connectivity costs for accessing colocation data centers, while promoting agility and innovation for enterprises. This represents a competitive advantage for Verizon as it applies SDN technology to improve interconnecting its Private IP MPLS network globally,” said Courtney Munroe, group vice president at IDC.

“With Software-Defined Interconnect, a key barrier to digital transformation has been lifted. By allowing enterprises to virtually connect to Verizon’s private IP services using the proven ECX Fabric, SDI makes secure colocation and interconnection easier – and more financially viable – to implement than ever before,” said Bill Long, vice president, interconnection services at Equinix [2].

Note 2. Equinix Internet Exchange™ enables networks, content providers and large enterprises to exchange internet traffic through the largest global peering solution across 52 markets.

………………………………………………………………………………………………………

Expert Opinion:

SDI is an incremental addition to Verizon’s overall strategy of interconnecting with other service providers to meet customer needs, as well as virtualizing its network, says Brian Washburn, an analyst at Ovum (owned by Informa, as are Light Reading and many other market research firms).

“Everything can be dynamic, everything can be made pay-as-you-go, everything can be controlled as a series of virtual resources to push them around the network as you need it, when you need it,” Washburn says.

For Equinix, the Verizon deal builds its gravitational pull. “It pulls in assets and just connects as many things to other things as possible. It is a virtuous circle. The more things they get into their data centers, the more resources they have there, that pulls in more companies to connect to the resources,” Washburn says. Equinix is standardizing its APIs to make interconnection easy.

SDI is similar to CenturyLink Dynamic Connections, which connects enterprises directly to public cloud services. And telcos are building interconnects with each other; for example, AT&T with Colt. “I expect we’ll see more of this sort of automation taking advantage of Equinix APIs,” Washburn says.

Microsoft also provides a virtual WAN service to connect enterprises to Azure. “It’s a different story, but it falls into the broader category of automation between network operators and cloud services,” Washburn said.

…………………………………………………………………………………………………………..

Verizon manages 500,000+ network, hosting, and security devices and 4,000+ networks in 150+ countries. To find out more about how Verizon’s global IP network, managed network services and Software-Defined Interconnect work please visit:

https://enterprise.verizon.com/products/network/connectivity/private-ip/

IHS Markit: Microsoft #1 for total cloud services revenue; AWS remains leader for IaaS; Multi-clouds continue to form

Following is information and insight from the IHS Markit Cloud & Colocation Services for IT Infrastructure and Applications Market Tracker.

Highlights:

·       The global off-premises cloud service market is forecast to grow at a five-year compound annual growth rate (CAGR) of 16 percent, reaching $410 billion in 2023.

·       We expect cloud as a service (CaaS) and platform as a service (PaaS) to be tied for the largest 2018 to 2023 CAGR of 22 percent. Infrastructure as a service (IaaS) and software as a service (SaaS) will have the second and third largest CAGRs of 14 percent and 13 percent, respectively.

IHS Markit analysis:

Microsoft in 2018 became the market share leader for total off-premises cloud service revenue with 13.8 percent share, bumping Amazon to the #2 spot with 13.2 percent; IBM was #3 with 8.8 percent revenue share. Microsoft’s success can be attributed to its comprehensive portfolio and the growth it is experiencing from its more advanced PaaS and CaaS offerings.

Although Amazon relinquished its lead in total off-premises cloud service revenue, it remains the top IaaS provider. In this very segmented market with a small number of large, well-established providers competing for market share:

•        Amazon was #1 in IaaS in 2018 with 45 percent of IaaS revenue.

•        Microsoft was #1 for CaaS with 22 percent of CaaS revenue and #1 in PaaS with 27 percent of PaaS revenue.

•        IBM was #1 for SaaS with 17 percent of SaaS revenue.

…………………………………………………………………………………………………………………………………

“Multi-clouds [1] remain a very popular trend in the market; many enterprises are already using various services from different providers and this is continuing as more cloud service providers (CSPs) offer services that interoperate with services from their partners and their competitors,” said Devan Adams, principal analyst, IHS Markit. Expectations of increased multi-cloud adoption were displayed in our recent Cloud Service Strategies & Leadership North American Enterprise Survey – 2018, where respondents stated that in 2018 they were using 10 different CSPs for SaaS (growing to 14 by 2020) and 10 for IT infrastructure (growing to 13 by 2020).

Note 1. Multi-cloud (also multicloud or multi cloud) is the use of multiple cloud computing and storage services in a single network architecture. This refers to the distribution of cloud assets, software, applications, and more across several cloud environments.
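As a minimal illustration of that definition, the sketch below writes the same object to two providers' storage services from one workflow. It assumes configured AWS credentials, an Azure connection string, and placeholder bucket/container names, using the boto3 and azure-storage-blob SDKs:

    # Minimal multi-cloud sketch: replicate one object to two providers.
    # Assumes configured AWS credentials and an Azure connection string;
    # bucket and container names are placeholders.
    import boto3
    from azure.storage.blob import BlobServiceClient

    data = b"example object contents"

    # Copy 1: AWS S3
    boto3.client("s3").put_object(Bucket="my-bucket", Key="report.pdf", Body=data)

    # Copy 2: Azure Blob Storage
    blob = (BlobServiceClient.from_connection_string("<connection-string>")
            .get_blob_client(container="my-container", blob="report.pdf"))
    blob.upload_blob(data, overwrite=True)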

There have recently been numerous multi-cloud related announcements highlighting its increased availability, including:

·       Microsoft: Entered into a partnership with Adobe and SAP to create the Open Data Initiative, designed to provide customers with a complete view of their data across different platforms. The initiative allows customers to use several applications and platforms from the three companies including Adobe Experience Cloud and Experience Platform, Microsoft Dynamics 365 and Azure, and SAP C/4HANA and S/4HANA.

·       IBM: Launched Multicloud Manager, designed to help companies manage, move, and integrate apps across several cloud environments. Multicloud Manager is run from IBM’s Cloud Private and enables customers to extend workloads from public to private clouds.

·       Cisco: Introduced CloudCenter Suite, a set of software modules created to help businesses design and deploy applications on different cloud provider infrastructures. It is a Kubernetes-based multi-cloud management tool that provides workflow automation, application lifecycle management, cost optimization, governance and policy management across cloud provider data centers.

IHS Markit Cloud & Colocation Intelligence Service:

The bi-annual IHS Markit Cloud & Colocation Services Market Tracker covers worldwide and regional market size, share, five-year forecast analysis, and trends for IaaS, CaaS, PaaS, SaaS, and colocation. This tracker is a component of the IHS Markit Cloud & Colocation Intelligence Service, which also includes the Cloud & Colocation Data Center Building Tracker and the Cloud and Colocation Data Center CapEx Market Tracker. Cloud service providers tracked within this service include Amazon, Alibaba, Baidu, IBM, Microsoft, Salesforce, Google, Oracle, SAP, China Telecom, Deutsche Telekom, Tencent, China Unicom and others. Colocation providers tracked include Equinix, Digital Realty, China Telecom, CyrusOne, NTT, Interxion, China Unicom, Coresite, QTS, Switch, 21Vianet, Internap and others.

DriveNets Network Cloud: Fully disaggregated software solution that runs on white boxes

by Ofer Weill, Director of Product Marketing at DriveNets; edited and augmented by Alan J Weissberger

Introduction:

Networking software startup DriveNets announced in February that it had raised $110 million in its first round (Series A) of venture capital funding. Headquartered in Ra’anana, Israel, DriveNets sells a networking software solution, called Network Cloud, that simplifies the deployment of new services for carriers at a time when many telcos are facing declining profit margins. Bessemer Venture Partners and Pitango Growth are the lead VC investors in the round, which also includes money from an undisclosed number of private angel investors.

DriveNets was founded in 2015 by telco experts Ido Susan and Hillel Kobrinsky, who are committed to creating the best-performing CSP networks and improving their economics. Network Cloud was designed and built for CSPs (Communications Service Providers), addressing their strict resilience, security and QoS requirements with zero compromise.

“We believe Network Cloud will become the networking model of the future,” said DriveNets co-founder and CEO Ido Susan, in a statement. “We’ve challenged many of the assumptions behind traditional routing infrastructures and created a technology that will allow service providers to address their biggest challenges like the exponential capacity growth, 5G deployments and low-latency AI applications.”

The Solution:

Network Cloud does not use open-source code. It’s an “unbundled” networking software solution that runs over a cluster of low-cost white box routers and white box x86-based compute servers. DriveNets has developed its own Network Operating System (NOS) rather than use open source or Cumulus’ NOS, as several other open networking software companies have done.

Fully disaggregated, its shared data plane scales out linearly with capacity demand. A single Network Cloud can encompass up to 7,680 100G ports in its largest configuration. Its control plane scales up separately, consolidating any service and routing protocol.

The Network Cloud data plane is created from just two white-box building blocks – the NCP for packet forwarding and the NCF for fabric – shrinking operational expenses by reducing the number of hardware devices, software versions and change procedures associated with building and managing the network. The two white boxes (NCP and NCF) are based on Broadcom’s Jericho2 chipset, which offers high-speed, high-density 100G and 400G port interfaces. A single virtual chassis at maximum capacity might be configured as 30,720 x 10G/25G, 7,680 x 100G, or 1,920 x 400G ports.
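A quick consistency check of those port counts (our arithmetic, not DriveNets'): at their top speeds, all three port mixes describe the same 768 Tbps of aggregate capacity.

    # Consistency check (our arithmetic, not DriveNets'): each port mix in
    # the maximum virtual-chassis configuration works out to 768 Tbps.
    configs = {
        "25G ports":  (30_720, 25),
        "100G ports": (7_680, 100),
        "400G ports": (1_920, 400),
    }
    for name, (count, gbps) in configs.items():
        print(f"{name:>10}: {count * gbps / 1000:.0f} Tbps")  # 768 Tbps each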

Last month, DriveNets’ disaggregated router added 400G-port routing support (via white box routers using the aforementioned Broadcom chipset). The latest Network Cloud hardware and software is now being tested and certified by an undisclosed tier-1 telco customer.

“Just like hyper-scale cloud providers have disaggregated hardware and software for maximum agility, DriveNets is bringing a similar approach to the service provider router market. It is impressive to see it coming to life, taking full advantage of the strength and scale of our Jericho2 device,” said Ram Velaga, Senior Vice President and General Manager of the Switch Products Division at Broadcom.

Network Cloud control-plane runs on a separate compute server and is based on containerized microservices that run different routing services for different network functions (Core, Edge, Aggregation, etc.). Where they are co-located, service-chaining allows sharing of the same infrastructure for all router services. 

Multi-layer resiliency, with automatic failure recovery, is a key feature of Network Cloud. There is inter-router redundancy and geo-redundancy of control to select a new end-to-end path by routing around points of failure.

Network Cloud’s orchestration capabilities include Zero Touch Provisioning, full life cycle management and automation, as well as superior diagnostics with unmatched transparency.  These are illustrated in the figures below:

Image Courtesy of DriveNets

 

Future New Services:

Network Cloud is a platform for new revenue generation. For example, third-party offerings such as DDoS protection, managed LAN to WAN, and network analytics can be added as separate microservices, in both the core network and the edge network.

“Unlike existing offerings, Network Cloud has built a disaggregated router from scratch. We adapted the data-center switching model behind the world’s largest clouds to routing, at a carrier-grade level, to build the world’s largest Service Providers’ networks. We are proud to show how DriveNets can rapidly and reliably deploy technological innovations at that scale,” said Ido Susan CEO and Co-Founder of DriveNets in a press release.

………………………………………………………………………………………………

References:

https://www.reuters.com/article/us-tech-drivenets-fundraising/israeli-software-firm-drivenets-raises-110-million-in-first-funding-round-idUSKCN1Q32S0

https://www.drivenets.com/about-us

https://www.drivenets.com/uploads/Press/201904_dn_400g.pdf

https://www.prnewswire.com/il/news-releases/drivenets-delivers-worlds-first-400g-white-box-based-distributed-router-to-service-provider-testing-300833647.html

 

Will Hyperscale Cloud Companies (e.g. Google) Control the Internet’s Backbone?

Rob Powell reports that Google’s submarine cable empire now hooks up another corner of the world. The company’s 10,000km Curie submarine cable has officially come ashore in Valparaiso, Chile.

The Curie cable system now connects Chile with southern California. It’s a four-fiber-pair system that will add big bandwidth along the western coast of the Americas to Google’s inventory. Also part of the plans is a branching unit at about the halfway point, with potential connectivity to Panama, where they can potentially hook up to systems in the Caribbean.

SubCom’s CS Durable brought the cable ashore on the beach of Las Torpederas, about 100 km from Santiago. In Los Angeles the cable terminates at Equinix’s LA4 facility, while in Chile the company is using its own recently built data center in Quilicura, just outside of Santiago.

Google has a variety of other projects going on around the world as well, as the company continues to invest in its infrastructure.  Google’s projects tend to happen quickly, as they don’t need to spend time finding investors to back their plans.

Curie is one of three submarine cable network projects Google unveiled in January 2018. (Source: Google)

……………………………………………………………………………………………………………………………………………………………………………………..

Powell also wrote that SoftBank’s HAPSMobile is investing $125M in Google’s Loon as the two partner for a common platform, and Loon gains an option to invest a similar sum in HAPSMobile later on.

Both companies envision automatic, unmanned, solar-powered devices in the sky above the range of commercial aircraft but not way up in orbit. From there they can reach places that fiber and towers don’t or can’t. HAPSMobile uses drones, and Loon uses balloons. The idea is to develop a ‘common gateway or ground station’ and the necessary automation to support both technologies.

It’s a natural partnership in some ways, and the two are putting real money behind it. But despite the high profile, we haven’t really seen mobile operators champing at the bit; after all, it’s more fun to cherry-pick those tower-covered urban centers for 5G first, and there’s plenty of work to do there. And when they do get around to it, there are multiple near-earth-orbit satellite projects to compete with.

But the benefit both HAPSMobile and Loon have to their model is that they can, you know, reach it without rockets.

…………………………………………………………………………………………………………

AWS’s Backbone (explained by Sapphire):

An AWS Region is a geographic area where Amazon has deployed several data centers. Regions are sited to be close to users and to avoid restrictions. Every Region is also connected to other Regions through private links, meaning dedicated links for inter-Region communications; this is cheaper for Amazon, allows full capacity planning, and delivers lower latency.

What is inside a Region?

  • Minimum 2 Availability Zones
  • Separate transit centers (peering the connections out of the World)

How do transit centers work?

AWS has private links to other AWS Regions, and also private links supporting AWS Direct Connect, a dedicated, private, encrypted (IPsec tunnel) connection from a company’s data centers to its infrastructure in the cloud. Direct Connect uses VLANs (IEEE 802.1Q) to reach public and private resources, such as Glacier or S3 buckets and the company’s VPC, at latencies typically under 2 ms and usually under 1 ms. Between Availability Zones (inter-AZ), data transit averages 25 TB/sec.
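To see this Region/AZ structure from the outside, here is a small sketch assuming configured AWS credentials and the boto3 SDK:

    # Small sketch (assumes AWS credentials and the boto3 SDK): list the
    # Regions, then the Availability Zones inside one Region -- per the
    # list above, every Region exposes at least two AZs.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
    print(f"{len(regions)} regions, e.g. {sorted(regions)[:3]}")

    for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(zone["ZoneName"], zone["State"])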

From AWS Multiple Region Multi-VPC Connectivity:

AWS Regions are connected to multiple Internet Service Providers (ISPs) as well as to Amazon’s private global network backbone, which provides lower cost and more consistent cross-region network latency when compared with the public internet.

…………………………………………………………………………………………………………

From Facebook Building backbone network infrastructure:

We have strengthened the long-haul fiber networks that connect our data centers to one another and to the rest of the world.

As we bring more data centers online, we will continue to partner and invest in core backbone network infrastructure. We take a pragmatic approach to investing in network infrastructure and utilize whatever method is most efficient for the task at hand. Those options include leveraging long-established partnerships to access existing fiber-optic cable infrastructure; partnering on mutually beneficial investments in new infrastructure; or, in situations where we have a specific need, leading the investment in new fiber-optic cable routes.

In particular, we invest in new fiber routes that provide much-needed resiliency and scale. As a continuation of our previous investments, we are building two new routes that exemplify this approach. We will be investing in new long-haul fiber to allow direct connectivity between our data centers in Ohio, Virginia, and North Carolina.

As with our previous builds, these new long-haul fiber routes will help us continue to provide fast, efficient access to the people using our products and services. We intend to allow third parties — including local and regional providers — to purchase excess capacity on our fiber. This capacity could provide additional network infrastructure to existing and emerging providers, helping them extend service to many parts of the country, and particularly in underserved rural areas near our long-haul fiber builds.

………………………………………………………………………………………………….

VentureBeat’s assessment of what it all means:

Google’s increasing investment in submarine cables fits into a broader trend of major technology companies investing in the infrastructure their services rely on.

Besides all the data centers Amazon, Microsoft, and Google are investing in as part of their respective cloud services, we’ve seen Google plow cash into countless side projects, such as broadband infrastructure in Africa and public Wi-Fi hotspots across Asia.

Elsewhere, Facebook, while not in the cloud services business itself, requires omnipresent internet connectivity to ensure access for its billions of users. The social network behemoth is also investing in numerous satellite internet projects and had worked on an autonomous solar-powered drone project that was later canned. Earlier this year, Facebook revealed it was working with Viasat to deploy high-speed satellite-powered internet in rural areas of Mexico.

While satellites will likely play a pivotal role in powering the internet of the future, particularly in hard-to-reach places, physical cables laid across ocean floors offer far more capacity and lower latency. This is vital for Facebook as it continues to embrace live video and virtual reality. In addition to its subsea investments with Google, Facebook has also partnered with Microsoft on a 4,000-mile transatlantic internet cable, with Amazon and SoftBank on a 14,000 km transpacific cable connecting Asia with North America, and on myriad other cable investments around the world.

Needless to say, Google’s services — ranging from cloud computing and video-streaming to email and countless enterprise offerings — also depend on reliable infrastructure, for which subsea cables are key.

Curie’s completion this week represents not only a landmark moment for Google, but for the internet as a whole. There are currently more than 400 undersea cables in service around the world, constituting 1.1 million kilometers (700,000 miles). Google is now directly invested in around 100,000 kilometers of these cables (62,000 miles), which equates to nearly 10% of all subsea cables globally.

The full implications of “big tech” owning the internet’s backbone have yet to be realized, but as evidenced by their investments over the past few years, these companies’ grasp will only tighten going forward.

Huawei to build Public Cloud Data Centers using OCP Open Rack and its own IT Equipment; Google Cloud and OCP?

Huawei:

On March 14th at the OCP 2019 Summit in San Jose, CA, Huawei Technologies (the world’s number one telecom/network equipment supplier) announced plans to adopt OCP Open Rack in its new public cloud data centers worldwide. The move is designed to enhance the environmental sustainability of those data centers by using less energy for servers, while driving operational efficiency by reducing the time it takes to install and maintain racks of IT equipment.  In addition to adopting Open Rack in its cloud data centers, Huawei is also expanding its work with the OCP Community to extend the design of the standard, improve time-to-market and serviceability, and reduce TCO.  In answer to this author’s question, Jinshui Liu, CTO of Huawei’s IT Hardware Domain, said the company would make its own OCP-compliant compute servers and storage equipment (in addition to network switches) for use in its public cloud data centers.  All of that IT equipment will ALSO be sold to Huawei customers building cloud-resident data centers.

The Open Rack initiative, introduced by the Open Compute Project (OCP) in 2013, seeks to redefine the data center rack and is one of the most promising developments in the scale computing environment. It is the first rack standard designed specifically for data centers, integrating the rack into the data center infrastructure as part of OCP’s “grid to gates” philosophy: a holistic design process that considers the interdependence of everything from the power grid to the gates in the chips on each motherboard.

“Huawei’s engineering and business leaders recognized the efficiency and flexibility that Open Rack offers, and the support that is available from a global supplier base. Providing cloud services to a global customer base creates certain challenges. The flexibility of the Open Rack specification and the ability to adapt for liquid cooling allows Huawei to service new geographies. Huawei’s decision to choose Open Rack is a great endorsement!” stated Bill Carter, Chief Technology Officer for the Open Compute Project Foundation.

 

[Image: OCP-specified Open Rack v2]

Last year Huawei became an OCP Platinum Member. This year, Huawei continues its investment in and commitment to OCP and the open source community. Huawei’s active involvement within the OCP Community includes ongoing participation and contributions to various OCP projects, such as the Rack and Power, System Management, and Server projects, with underlying contributions to the upcoming specs for the OCP Accelerator Module, Advanced Cooling Solutions, and OpenRMC.

“Huawei’s strategic investment and commitment to OCP is a win-win,” said Mr. Kenneth Zhang, General Manager of FusionServer, Huawei Intelligent Computing Business Department. “Combining Huawei’s extensive experience in Telco and Cloud deployments together with the knowledge of the vast OCP community will help Huawei to provide cutting edge, flexible and open solutions to its global customers. In turn, Huawei can leverage its market leadership and global data center infrastructure to help introduce OCP to new geographies and new market segments worldwide.”

During a keynote address at the OCP Global Summit, Huawei shared more information about its Open Rack adoption plans as well as its overall OCP strategy. Huawei also showcased some of the building blocks of these solutions in its booth, including an OCP-based compute module, the Huawei Kunpeng 920 ARM CPU, the Huawei Ascend 310 AI processor, and other Huawei Intelligent Computing products.

Huawei’s Booth at the OCP 2019 Summit

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

In summary, Huawei has developed an optimized rack-scale design that will become the foundation of its cloud and IT infrastructure rollout.  This extends the company’s product portfolio from telecom/networking to cloud computing and storage, and makes Huawei an ODM for compute and storage equipment.  Hence, Huawei will now compete with Microsoft Azure as well as China CSPs Alibaba, Baidu, and Tencent in using OCP-compliant IT equipment in cloud-resident data centers.  Unlike those other OCP Platinum members, Huawei will design and build its own IT equipment (the other CSPs buy OCP equipment from ODMs).

There are now 124 OCP-certified products available, with over 60 more in the pipeline.  Most of the OCP ODMs are in Taiwan.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Google:

While Google has been an OCP Platinum member since 2015, it maintained a very low profile at this year’s OCP Summit, so it’s not clear how much OCP-compliant equipment Google uses in Google Cloud or in any of its cloud-resident data centers.  Google did present two tech sessions at the conference:

In “Google’s 48V Rack Adaptation and Onboard Power Technology Update,” presented early Friday morning, March 15th, Google said that significant progress has been made in three specific applications:

1. Multi-phase 48V-to-12V voltage regulators adopting the latest hybrid switched-capacitor-buck topologies for traditional 12V workloads such as PCIe devices and off-the-shelf (OTS) servers;

2. Very high efficiency, high density fixed-ratio bus converters for two-stage 48V-to-point-of-load (PoL) power conversions;

3. High frequency, high density voltage regulators for extremely power-hungry AI accelerators.
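To make those efficiency concerns concrete, here is a minimal Python sketch of how per-stage conversion efficiency compounds in a 48V rack power chain. The per-stage efficiencies and rack load below are illustrative assumptions, not figures from Google’s talk:

# Minimal sketch: end-to-end efficiency of a rack power-delivery chain.
# All efficiency and load numbers are illustrative assumptions only,
# not figures from Google's OCP 2019 presentation.

def end_to_end_efficiency(stage_efficiencies):
    """Overall efficiency is the product of the per-stage efficiencies."""
    eta = 1.0
    for stage_eta in stage_efficiencies:
        eta *= stage_eta
    return eta

# Hypothetical comparison: two-stage 48V -> 12V -> point-of-load vs. a
# direct 48V -> point-of-load regulator.
two_stage = end_to_end_efficiency([0.98, 0.93])  # bus converter, then PoL stage
direct = end_to_end_efficiency([0.94])           # single 48V-to-PoL stage

rack_load_kw = 12.0  # assumed IT load delivered per rack
for name, eta in [("two-stage", two_stage), ("direct", direct)]:
    loss_w = rack_load_kw * 1000 * (1 / eta - 1)
    print(f"{name}: efficiency {eta:.1%}, conversion loss ~{loss_w:.0f} W per rack")

Even with these made-up numbers, a percentage point of efficiency per stage translates into on the order of a hundred watts of loss per rack, which is presumably why the talk dwelled on regulator topologies.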

Google and ONF provided an update on Stratum, a next-generation thin switch OS that provides silicon and hardware independence; it was first introduced at the 2018 OCP Summit.  Stratum was said to enable the next generation of SDN.  It adds new SDN-ready interfaces from the P4 and OpenConfig communities to ONL (Open Network Linux), enabling programmable switching chips (ASICs, FPGAs, etc.) and traditional switching ASICs alike. The talk described how the open source community has generalized Google’s seed OVP contribution for additional whitebox targets, and demonstrated Stratum on a fabric of OCP devices controlled by an open source control plane.
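For readers unfamiliar with the match-action abstraction behind these P4-style SDN interfaces, the following Python sketch shows roughly what a controller-side forwarding rule might look like. The data layout and helper function are invented for illustration and are not Stratum’s or P4Runtime’s actual API:

# Hypothetical illustration of the match-action abstraction used by
# P4-style SDN interfaces. The structure below is invented for this
# sketch; it is not Stratum's or P4Runtime's real API.

def make_table_entry(table, matches, action, params):
    """Build a controller-side representation of one match-action rule."""
    return {
        "table": table,    # which pipeline table the rule targets
        "match": matches,  # header fields the switching chip matches on
        "action": action,  # action to execute on matching packets
        "params": params,  # arguments to that action
    }

# Example rule: forward traffic destined for 10.0.1.0/24 out of port 7.
entry = make_table_entry(
    table="ipv4_lpm",                             # longest-prefix-match table
    matches={"ipv4.dst_addr": ("10.0.1.0", 24)},  # prefix and length
    action="set_egress_port",
    params={"port": 7},
)
print(entry)

The controller pushes entries like this down to whatever silicon sits below, which is the hardware independence Stratum is after.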

I believe Google still designs all of its own IT hardware (compute servers, storage equipment, switch/routers, and data center interconnect gear other than the PHY-layer transponders). Google has announced the design of many AI processor chips that presumably go into IT equipment used internally but not sold to anyone else (just like Amazon AWS).

The Google Cloud Next 2019 conference will be held April 9-11, 2019 at the Moscone Center in San Francisco, CA.

References:

https://www.huawei.com/en/press-events/news/2019/3/huawei-ocp-open-rack-public-cloud-datacenters

https://www.globenewswire.com/news-release/2019/03/14/1754946/0/en/Huawei-to-Adopt-OCP-s-Open-Rack-across-New-Public-Cloud-Datacenters-Globally.html

 

Synergy Research: Cloud Service Provider Rankings (See Comments for Details)

Cloud services market remains top heavy, with the large providers dominant. (Source: Synergy Research)

………………………………………………………………………………………………………………………………………………………………………

According to Larry Dignan of ZDNET, “the cloud computing market in 2019 will have a decidedly multi-cloud spin, as the hybrid shift by players such as IBM, which is acquiring Red Hat, could change the landscape. This year’s edition of the top cloud computing providers also features software-as-a-service giants that will increasingly run more of your enterprise’s operations via expansion.

One thing to note about the cloud in 2019 is that the market isn’t zero sum. Cloud computing is driving IT spending overall. For instance, Gartner predicts that 2019 global IT spending will increase 3.2 percent to $3.76 trillion with as-a-service models fueling everything from data center spending to enterprise software. In fact, it’s quite possible that a large enterprise will consume cloud computing services from every vendor in this guide. The real cloud innovation may be from customers that mix and match the following public cloud vendors in unique ways.”

Key 2019 themes to watch among the top cloud providers include:

  • Pricing power. Google recently raised prices for G Suite, and across the cloud, paid add-ons exist for most new technologies. While compute and storage services are often a race to the bottom, tools for machine learning, artificial intelligence, and serverless functions can add up (see the toy cost sketch after this list). There’s a good reason that cost management is such a big theme for cloud computing customers: it’s arguably their biggest challenge. Look for cost management and concerns about lock-in to be big themes.
  • Multi-cloud. A recent survey from Kentik highlights how public cloud customers are increasingly using more than one vendor. AWS and Microsoft Azure are most often paired up, and Google Cloud Platform is also in the mix. Naturally, these public cloud service providers are often tied into existing data center and private cloud assets. Add it up, and there’s a healthy hybrid and private cloud race underway that has reordered the pecking order. The multi-cloud approach is being enabled by virtual machines and containers.
  • Artificial intelligence, Internet of Things, and analytics are the upsell technologies for cloud vendors. Microsoft Azure, Amazon Web Services, and Google Cloud Platform all have similar strategies: land customers with compute, cloud storage, and serverless functions, then upsell them on the AI that will differentiate each platform. Companies like IBM are looking to manage AI and cloud services across multiple clouds.
  • The cloud computing landscape is maturing rapidly, yet financial transparency is backsliding. It’s telling that Gartner’s Magic Quadrant for cloud infrastructure went from more than a dozen players to six. In addition, transparency has become worse among cloud computing providers. For instance, Oracle used to break out infrastructure-, platform- and software-as-a-service in its financial reports; today, Oracle’s cloud business is lumped together. Microsoft has a “commercial cloud” that is very successful but hard to parse. IBM has cloud revenue and “as-a-service” revenue. Google doesn’t break out cloud revenue at all. Aside from AWS, parsing cloud sales has become more difficult.
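As a toy illustration of the “add-ons can add up” point above, here is a minimal Python sketch of a monthly bill in which base compute and storage are cheap but ML services dominate. All unit prices and usage figures are made-up placeholders, not any provider’s actual rates:

# Toy monthly cloud bill: base compute/storage vs. higher-margin add-ons.
# Every price and usage figure below is a made-up placeholder.

monthly_usage = {
    # service: (assumed unit price in USD, units consumed per month)
    "compute (vCPU-hours)":        (0.04, 20_000),
    "object storage (GB-months)":  (0.02, 50_000),
    "serverless (1M invocations)": (0.20, 300),
    "ML inference (1K requests)":  (1.50, 2_000),
}

total = 0.0
for service, (unit_price, units) in monthly_usage.items():
    cost = unit_price * units
    total += cost
    print(f"{service:32s} ${cost:>10,.2f}")
print(f"{'total':32s} ${total:>10,.2f}")

With these placeholder numbers, the ML line item alone exceeds compute and storage combined, which is exactly the cost-management dynamic described above.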

IBM is more of a private cloud and hybrid player, with hooks into IBM Cloud as well as other cloud environments. Oracle Cloud is primarily a software- and database-as-a-service provider. Salesforce has become much more than CRM.

………………………………………………………………………
