As more and more applications move to the cloud, cloud network security teams have to keep them secure against an ever-evolving threat landscape. Shielding applications against network threats is also one of the most important criteria for regulatory compliance. To address these challenges, many cloud network security teams build their own complex network threat detection solutions based on open source or third-party IDS components. These customized solutions can be difficult and costly to operate, and they often lack the scalability that is required to protect dynamic cloud applications.
In response, Google Cloud announced the general availability of Google Cloud Intrusion Detection System (IDS) – a cloud-native managed network security solution, with key security capabilities continuously engineered into Google’s cloud platform. This core network security offering helps detect network-based threats and helps organizations meet compliance standards that call for the use of an intrusion detection system.
Cloud IDS is built with Palo Alto Networks’ industry-leading threat detection technologies, providing high levels of security efficacy that enable you to detect malicious activity with few false positives. The general availability release includes these enhancements:
- Service availability in all regions
- Auto-scaling available in all regions
- Detection signatures automatically updated daily
- Support for customers’ HIPAA compliance requirements (under the Google Cloud HIPAA Business Associate Agreement)
- ISO 27001 certification (with an audit in process to support customers’ PCI-DSS compliance requirements by year end)
- Integration with Chronicle, Google’s security analytics platform, to help organizations investigate threats surfaced by Cloud IDS.
Managed network threat detection with full traffic visibility:
Cloud IDS delivers cloud-native, managed, network-based threat detection. It features simple setup and deployment, and gives customers visibility into traffic entering their cloud environment (north-south traffic) and into traffic between workloads (east-west traffic). Cloud IDS empowers security teams to focus their resources on high priority issues instead of designing and operating complex network threat detection solutions.
“Google Cloud customers will be able to deploy on-demand application visibility and threat detection between workloads or containers in any Google Cloud virtual private cloud (VPC) to support their compliance goals and protect applications,” said Palo Alto Networks Senior Vice President Muninder Singh Sambi in a separate post.
Google Cloud VPC threat detection preceding Google Cloud IDS was limited in its scope, he said. It was also complex to design and implement and, most crucially for cloud-native businesses, couldn’t scale dynamically to handle the cloud-bursting events that absorb peaks in IT demand.
“Until now, detecting threats in traffic between workloads within the trust boundary of a VPC has been a significant hurdle for cloud network security teams, leading to compliance challenges and blind spots for the Security Operations Center (SOC),” he said.
“The Palo Alto Networks ML-powered threat analysis engine processes over 15 trillion transactions per day, automatically collected from across our global network of firewalls and endpoint agents. The result is 4.3 million unique security updates made per day to ensure you’re covered against the latest threats,” Sambi added.
Google Cloud IDS comes at a time when hyper-scalers, including Google, Amazon, and Microsoft, are rapidly increasing their global Wide Area Network (WAN) reach. Businesses are increasingly turning to the public cloud and multi-cloud as more companies pivot to being cloud-native or at least cloud-adjacent.
In December Google announced plans to move into Germany, Israel, and Saudi Arabia with new cloud regions planned for 2022. Those join 29 cloud regions and 88 zones already in use.
Cloud IDS is now available in all regions. It provides protection against malware, viruses, and spyware; command-and-control (C2) attacks; and vulnerabilities such as buffer overflow and illegal code execution attacks. Auto-scaling dynamically adjusts Cloud IDS capacity as your traffic throughput changes, so it keeps up with your scale needs automatically. Threat signature updates are applied daily to help you stay ahead of new threat variants. You can now use Chronicle to investigate the threats surfaced by Cloud IDS; with this integration, you can store and analyze Cloud IDS threat logs alongside the rest of your security telemetry in one place, so you can investigate and respond to threats at scale.
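In practice, setup follows a two-step pattern: create an IDS endpoint in the VPC, then mirror traffic to it with a packet mirroring policy. A minimal sketch with the gcloud CLI is below; all resource names are placeholders, and flag and field names should be verified against the current Cloud IDS documentation:

```shell
# Sketch of a minimal Cloud IDS setup (resource names are placeholders).
# 1. Create an IDS endpoint in the VPC and zone to be monitored.
gcloud ids endpoints create my-ids-endpoint \
    --network=my-vpc \
    --zone=us-central1-a \
    --severity=INFORMATIONAL   # alert on this severity and above

# 2. Look up the endpoint's forwarding rule, then mirror subnet traffic
#    to it with a packet mirroring policy.
RULE=$(gcloud ids endpoints describe my-ids-endpoint \
    --zone=us-central1-a --format="value(endpointForwardingRule)")

gcloud compute packet-mirrorings create my-ids-policy \
    --region=us-central1 \
    --network=my-vpc \
    --mirrored-subnets=my-subnet \
    --collector-ilb="$RULE"
```

Once mirroring is in place, threats detected in the mirrored traffic appear as Cloud IDS threat logs, which can then be forwarded to Chronicle for investigation.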
Google has patented its IDS, which is defined in the patent claim as follows:
An intrusion detection system for detection of intrusion or attempted intrusion by an unauthorized party or entity to a computer system or network, the intrusion detection system comprising means for monitoring the activity relative to the computer system or network, means for receiving and storing one or more general rules, each of the general rules being representative of characteristics associated with a plurality of specific instances of intrusion or attempted intrusion, and matching means for receiving data relating to activity relative to said computer system or network from the monitoring means and for comparing, in a semantic manner, sets of actions forming the activity against the one or more general rules to identify an intrusion or attempted intrusion. Inductive logic techniques are proposed for suggesting new intrusion detection rules for inclusion into the system, based on examples of sinister traffic.
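The claim describes general rules, each covering many specific attack instances, plus a matcher that compares sets of observed actions against those rules. The structure can be illustrated with a toy sketch; all rule logic, event fields, and thresholds here are invented for illustration and are not taken from the patent:

```python
# Toy sketch of rule-based intrusion detection in the spirit of the claim:
# general rules capture characteristics shared by many specific attack
# instances, and a matcher compares observed activity against them.
# All field names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    predicate: callable  # maps a sequence of events to a list of offenders

def port_scan_rule(events):
    # General rule: many distinct destination ports from a single source
    # characterizes a whole class of port scans, not one specific attack.
    ports_by_src = {}
    for e in events:
        ports_by_src.setdefault(e["src"], set()).add(e["dst_port"])
    return [src for src, ports in ports_by_src.items() if len(ports) > 20]

def match(events, rules):
    # Compare the set of actions against every general rule.
    alerts = []
    for rule in rules:
        for offender in rule.predicate(events):
            alerts.append((rule.name, offender))
    return alerts

# One source probing 30 distinct ports trips the general rule.
events = [{"src": "10.0.0.5", "dst_port": p} for p in range(30)]
alerts = match(events, [Rule("port-scan", port_scan_rule)])
print(alerts)  # [('port-scan', '10.0.0.5')]
```

The patent’s further suggestion, using inductive logic to propose new rules from examples of malicious traffic, would amount to learning new predicates like the one above from labeled event data.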
At Google Cloud Next ’21 the cloud giant announced Google Distributed Cloud, a portfolio of solutions consisting of hardware and software that extend Google cloud infrastructure to the edge and into the customer premises data center. This new offering permits wireless network operators to run their 5G core and radio access network (RAN) functions on Google Distributed Cloud in a variety of locations. These could include a telco’s own facilities, premises owned by customers or Google’s network of about 140 centers. One unifying theme is that hosting functions in a multitude of places – and not just a couple of big data centers – would shorten the distance that data signals must travel and cut service-interfering latency, a measure of the journey time. Functions can also be co-hosted with enterprise applications, according to Google.
In particular, Google Distributed Cloud can run across multiple locations, including:
- Google’s network edge – Allowing customers to leverage more than 140 Google network edge locations around the world.
- Operator edge – Enabling customers to take advantage of an operator’s edge network and benefit from 5G/LTE services offered by Google’s leading communication service provider (CSP) partners. The operator edge is optimized to support low-latency use cases, running edge applications with stringent latency and bandwidth requirements.
- Customer edge – Supporting customer-owned edge or remote locations such as retail stores, factory floors, or branch offices, which require localized compute and processing directly in the edge locations.
- Customer data centers – Supporting customer-owned data centers and colocation facilities to address strict data security and privacy requirements, and to modernize on-premises deployments while meeting regulatory compliance.
The first products in this portfolio are Google Distributed Cloud Edge and Google Distributed Cloud Hosted. Google Distributed Cloud Edge is now available for preview, while Google Distributed Cloud Hosted is set to become available in preview in the first half of 2022.
Google Distributed Cloud Edge is primarily aimed at wireless network operators. It is designed to exist in the operator edge, customer edge and Google edge locations, of which there are over 140 around the world.
Google Distributed Cloud Hosted, meanwhile, is meant for public-sector and commercial customers that need to meet strict data residency, security, or privacy requirements. Both are fully managed and comprise hardware and software solutions, including artificial intelligence and analytics capabilities.
During a media briefing, Google Cloud’s VP and GM of Open Infrastructure Sachin Gupta said “This portfolio allows customers to focus on applications and business initiatives rather than management of their underlying infrastructure. In other words, they can just leave the complexity to us.”
Gupta added that the Distributed Cloud Edge product enables network operators to run 5G core and radio access network functions closer to users, allowing them to slash latency and “offer their enterprise customers high-speed bandwidth, with private 5G and localized compute.” In a blog, the executive added the product advances previously announced work with Ericsson and Nokia to deliver cloud-native network applications. He said that Distributed Cloud Hosted is a “safe and secure way to modernize on premises deployments without requiring any connectivity to Google Cloud.”
Google Distributed Cloud is built on Anthos, an open-source-based platform that unifies the management of infrastructure and applications across on-premises, edge, and multiple public clouds, all while offering consistent operation at scale. Google Distributed Cloud taps into Google’s planet-scale infrastructure, which delivers high levels of performance, availability, and security, while Anthos running on Google-managed hardware at the customer or edge location provides a services platform on which to run applications securely and remotely.
Using Google Distributed Cloud, customers can migrate or modernize applications and process data locally with Google Cloud services, including databases, machine learning, data analytics and container management. Customers can also leverage third-party services from leading vendors in their own dedicated environment. At launch, a diverse portfolio of partners, including Cisco, Dell, HPE, and NetApp, will support the service.
As Google’s global network increases in reach, the company is building out service-centric networking capabilities to simplify everything from connectivity to observability. For organizations with interconnects, VPNs, and SD-WANs, Network Connectivity Center provides a centralized management model, with monitoring and visualization through Network Intelligence Center. And with Private Service Connect, partners and customers such as Bloomberg, MongoDB, and Elastic can now connect services without having to configure the underlying network.
Enterprises with workloads both on-premises and in the cloud can leverage hybrid load balancing to securely optimize application delivery. To help detect and prevent malicious bot attacks, Google recently integrated reCAPTCHA Enterprise with Cloud Armor. Together with Cloud IDS, the Google network edge is fortified with best-in-class security.
Virtualization is a prerequisite for network operators, so that network software can run on common, off-the-shelf compute servers. After virtualizing, network operators could theoretically integrate their networks with Google’s distributed cloud.
Operators might do this if they believe a deal with Google costs less than operating a private cloud, or if it promises other benefits. But it means giving the hyper-scaler a big say over technology strategy and would have been inconceivable just a few years ago, when the valuation gap between telecom players and Internet firms was not so extreme and telcos were much warier of tie-ups.
For a start, it would obviously hand prominent roles to Anthos, Google’s application management platform, and Kubernetes, a container orchestration platform that Google originally designed. Even when Google’s facilities are not being used, it will effectively manage the hardware and software.
Notably, neither Ericsson nor Nokia was listed as a partner or systems integrator, as their purpose-built wireless network equipment and 5G SA core network software compete directly with 5G deployments built on hyper-scale cloud providers’ (AWS, Azure, Google Cloud) technology. Ericsson will launch virtual RAN software next year, while Nokia’s AirScale Cloud RAN solution is in trials with major wireless network operators, including AT&T (which has outsourced its 5G SA core network to Amazon AWS). Nonetheless, the two major network equipment vendors made supportive comments:
“The announcement of Google Distributed Cloud supports Ericsson’s vision of the network becoming a platform of innovation, enabling companies across the ecosystem to deliver the applications of the future the way they need to, unlocking the full potential of 5G and edge,” said Rishi Bhaskar, Head of Hyperscale Cloud Providers for Ericsson North America.
“This announcement builds on our on-going partnership with Google Cloud to develop Nokia cloud-native 5G core and Nokia radio solutions for Google’s edge computing platform,” said Nishant Batra, Nokia Chief Strategy and Technology Officer. “By extending this relationship into Google Distributed Cloud Edge, we will increase customer choice and flexibility, ultimately helping our global customer base with multiple cloud-based solutions to deliver 5G services on the network edge.”
Curiously, there was no mention of software partners in Google’s announcement, but any RAN software would have to work with the underlying base station hardware. Who takes responsibility for that is something 5G network operators must resolve before committing to Google Cloud.
Iain says that network operators teaming up with cloud service providers is a new form of lock-in, substituting cloud hyper-scalers for wireless network equipment vendors. He wrote:
What’s entirely unclear is why operators should worry less about dependency on Google than they currently do about their heavy reliance on Ericsson, Huawei and Nokia. Switching from one RAN vendor to another is costly but feasible, as swap-outs of Huawei in Europe are showing. Moving from one public cloud to another may be as tricky as quitting a crime syndicate. In 2019, Snapchat developer Snap warned in a regulatory filing that moving systems between public clouds would be “difficult to implement” and demand “significant time and expense.”
If this and other hyper-scaler offers take off, the real losers would probably not be Ericsson and Nokia – which can still sell radio units and provide RAN software – but the vendors of private cloud software, such as VMware and Red Hat (owned by IBM). More generally, the public cloud could also be a threat to some of Google’s own hardware partners. “The server vendors (Dell, HPE etc.) also lose out,” says James Crawshaw, a principal analyst with Omdia (a sister company to Light Reading), in an email. “Although they are going to be building and shipping the Google boxes, I suspect the margins on these will be lower than the regular servers they sell enterprises.”
Few telcos have been as brave/reckless (delete according to bias) as Dish and gone all-in with a public cloud. That is partly because brownfield operators would be writing off the servers they already own. Nevertheless, Crawshaw expects public cloud usage to keep rising. “Servers are depreciated over three to seven years depending on the business and how fresh they like their IT,” he says. “So while the telcos will continue to run their own clouds, they will increase their public cloud usage over time and only partially renew their private estate.”
AT&T, Bell Canada, Telus, Telenet, TIM, Reliance Jio, and Orange are all on the growing list of operators that have put some IT workloads on Google Cloud. “Some of these are running packet core and RAN applications as well,” says Gupta. Contrast that with Dish Network, which is wholly reliant on the AWS public cloud, and AT&T, which has its own physical 5G RAN but will use the public AWS cloud for its 5G SA core network.
“Some years ago, everyone was saying we would have vendor lock-in with Ericsson, Huawei and Nokia and no one mentioned Oracle and Cisco and now the light is on hyper-scalers,” said Yves Bellego, Orange’s director of network strategy, during a recent interview with Light Reading. “In fact, that risk is something we have always been very concerned about.” That would imply cloud hyper-scaler lock-in is something network operators must carefully evaluate.
Each cable system will contain 16 fiber-optic pairs and adhere to the innovative concepts of open cable (supporting multiple fiber tenants) and open landing station (enabling competitive access to the cable termination points). The two systems set a new reference for diversification, scalability, and latency throughout these geographies.
Blue will be deployed along a new northbound route in the Mediterranean, crossing the Strait of Messina rather than following the traditional route through the Sicilian Channel.
As a result, Internet Service Providers, Carriers, Telecom Operators, Content Providers, Enterprises and Institutions will benefit from high-speed Internet and state-of-the-art capacity services with unparalleled diversity and performances.
Within the Blue system, the BlueMed submarine cable is now Sparkle’s own private domain, sharing the wet plant while adding four dedicated fibre pairs with an initial design capacity of more than 25 Tbps per fibre pair. It is extended up to Jordan (Aqaba), with additional private branches into France (Corsica), Greece (Chania, Crete), Italy (Golfo Aranci, Sardinia, and Rome), Algeria, Tunisia, Libya, Turkey, Cyprus, and more in the future.
BlueMed’s flexible design allows both seamless express connections throughout the Mediterranean Basin, with unprecedented latency and spectral efficiency, and sophisticated regional subsystems based on specific customer requirements.
In addition, Sparkle’s Genoa Open Landing Platform is set to become the alternative priority access point for other upcoming submarine cables looking for a diversified entry to Europe, backhauled to Milan’s rich digital marketplace, and thus a new reference gateway between Africa, the Middle East, Asia, and Europe.
Blue and Raman are expected to be ready for service in 2024, with the Tyrrhenian part of BlueMed planned to be operational as early as 2022.
“We are extremely proud to bring our collaboration with Google to the next level with this cutting-edge intercontinental infrastructure,” commented Elisabetta Romano, CEO of Sparkle. “With Blue and Raman Submarine Cable Systems, Sparkle boosts its capabilities in the strategic routes between Asia, Middle East and Europe, and the enhanced BlueMed strengthens our presence in the greater Mediterranean area.”
Google Cloud revenues increased 54% year over year to $4.62 billion during the second quarter of 2021, parent company Alphabet reported today. Google Cloud’s operating loss shrunk 59%, from $1.42 billion a year ago to $591 million last quarter.
Google Cloud includes both Google Cloud Platform (GCP) and its Workspace (formerly G Suite) cloud computing services and collaboration tools.
As in previous quarters, “GCP’s revenue growth was, again, above cloud overall, reflecting significant growth in both infrastructure and platform services,” the company said in a statement.
“As for Google Cloud, we remain focused on revenue growth, and are pleased with the trends we’re seeing across cloud,” Google CFO Ruth Porat said on the company’s 2Q-2021 earnings call today. Porat added that growth in its Google Cloud Platform segment again surpassed overall cloud gains “reflecting significant growth in both infrastructure and platform services.”
“We will continue to invest aggressively, including expanding our go-to-market organization, our channel expansion, our product offerings, and our compute capacity,” she said.
Also on today’s earnings call, Google CEO Sundar Pichai cited security as a competitive differentiator and “our strongest product portfolio.” Google will continue to invest in security and continue its work to integrate its various security products, such as BeyondCorp and Chronicle, he added.
“Cyber threats increasingly are on the mind of not just CIOs but CEOs across our partners. So it’s definitely an area where we are seeing a lot of conversations, a lot of interest…so a definite source of strength and you’ll see us continue to invest here,” he said.
“We are cloud native, we pioneered … zero trust and built the architecture out from a security-first perspective. Particularly, over the course of the last couple of years, with the recent attacks, [companies] really started thinking deeply about vulnerabilities, supply chain security has been a major source of consensus, cyber threats are increasingly on the mind of, not just CIOs, but CEOs across our partners. So it’s definitely an area where we are seeing a lot of conversations, a lot of interest.”
Google Cloud, along with its other business units, boosted Alphabet’s revenue 62% year over year, to $61.9 billion. As usual, Google ad revenue represented the biggest piece of the pie. It grew 69% to $50.44 billion. Retail was the biggest contributor to advertising growth.
Google Cloud holds around 7% market share in the cloud services segment, according to a Canalys report released in April 2021. It trails Amazon Web Services (AWS) and Microsoft Azure, which hold 32% and 19% market share, respectively.
Microsoft posted financial results Tuesday; its Intelligent Cloud revenue increased 30% to $17.4 billion. The company said Azure revenue grew 51% but did not break out a dollar figure. Amazon is set to report earnings on Thursday.
Along with its hyper-scale cloud competitors, Google Cloud is partnering with telecom companies all over the world to help them drive transformation and accelerate 5G adoption and monetization. Here are a few of its telco partnerships:
Nokia and Vodafone have partnered to jointly develop a new machine learning (ML) system designed to detect and remediate network anomalies before they impact customers. Based on Nokia’s Bell Labs algorithm, the Anomaly Detection Service product runs on Google Cloud and is already being rolled out across Vodafone’s pan-European network.
In a joint statement, the partners said the ML system quickly detects and troubleshoots irregularities, such as mobile site congestion and interference, as well as unexpected latency, that may have an impact on customer service quality. Following an initial deployment in Italy on more than 60,000 LTE cells, Vodafone said it will be extending the service to all its European markets by early 2022, and there are plans to eventually apply it on the company’s 5G and core networks.
Vodafone added that it expects that around 80 percent of all its anomalous mobile network issues and capacity demands to be automatically detected and addressed using Anomaly Detection Service.
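The underlying idea, flagging a KPI sample that deviates sharply from a cell’s learned baseline, can be illustrated with a simple z-score detector. The production service uses Nokia Bell Labs algorithms; this toy version, its data, and its threshold are purely illustrative:

```python
# Illustrative per-cell KPI anomaly detection: flag samples that deviate
# strongly from the series baseline. The real Anomaly Detection Service
# uses Nokia Bell Labs algorithms; this z-score sketch is for intuition only.
import statistics

def detect_anomalies(kpi_series, threshold=2.5):
    """Return indices whose value deviates > threshold std-devs from the mean."""
    mean = statistics.fmean(kpi_series)
    stdev = statistics.pstdev(kpi_series)
    if stdev == 0:
        return []  # flat series: nothing can be anomalous
    return [i for i, v in enumerate(kpi_series)
            if abs(v - mean) / stdev > threshold]

# Hourly latency (ms) for one LTE cell; the spike at index 5 is anomalous.
latency = [12, 11, 13, 12, 11, 95, 12, 13, 11, 12]
print(detect_anomalies(latency))  # [5]
```

A production system would additionally learn seasonal baselines per cell and correlate anomalies across KPIs (congestion, interference, latency) before triggering remediation, which is where the 80 percent automation figure comes from.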
Vodafone’s deal with Nokia, signed last year, complements its recent six-year agreement with Google Cloud to jointly build integrated cloud-based capabilities backed by hubs of networking and software engineering expertise.
Johan Wibergh, Chief Technology Officer, Vodafone, said: “We are building an automated and programmable network that can respond quickly to our customers’ needs. As we extend 5G across Europe, it is important to match the speed and responsiveness of this new technology with a great service. With machine learning, we can ensure a consistently high-quality performance that is as smart as the technology behind it.”
Amol Phadke, Managing Director, Telecom Industry Solutions, Google Cloud, said:
“We are thrilled to partner with Nokia and Vodafone to deliver a data- and AI-driven solution that scales quickly and leverages automation to increase cost efficiency and ensures seamless customer experiences across Europe. As behaviors change and the data needed for analysis increases in velocity, volume, and complexity, automation and a cloud-based data platform are now key in making fast and informed decisions.”
Anil Rao, Research Director, Analysys Mason, said: “Vodafone’s anomaly detection use case, developed in partnership with Nokia and run on Google Cloud, automates root-cause analysis for efficient network planning, optimization, and operations. This type of partnership provides a new opportunity for operators to rethink data management and increase the focus on use cases and application development.”
Raghav Sahgal, President of Cloud and Network Services, Nokia, said: “This first commercial deployment of Anomaly Detection Service with Vodafone on Google Cloud provides a great boost to customer service. It not only addresses the critical need to quickly detect and remedy anomalies impacting network performance using machine learning-based algorithms, but it also highlights Nokia’s technology leadership and the deep technical expertise of Nokia Bell Labs.”
Vodafone said it will convert its entire SAP environment to Google Cloud, including the migration of its core SAP workloads and key corporate SAP modules such as SAP Central Finance.
During OFC 2021 last week, Ciena and Lumenisity Ltd. said that they had partnered to demonstrate transmission of 45 wavelengths, each at 400G, over 1,000 km of hollowcore fiber cable.
The demonstration paired Lumenisity’s CoreSmart hollowcore cable with Ciena’s WaveLogic 5 Extreme and Nano coherent optical engines, with the transmission occurring in a recirculating loop. The companies say their work indicates that hollowcore fiber cable can be used for high-bandwidth, long-reach applications such as data center interconnect (DCI) in addition to edge and 5G xHaul applications Lumenisity had previously cited (see “Lumenisity, BT drive 400ZR DWDM transmission over hollowcore fiber“ and “BT testing hollowcore fiber for 5G support”).
Lumenisity said that it has been working over the past six months with ecosystem partners to test the CoreSmart low-latency hollowcore cable in its System Lab in Romsey, UK (see “Startup Lumenisity unveils hollowcore fiber cables for DWDM applications, new funding” for more on Lumenisity’s fiber). Ciena participated in at least some of those exercises, including a second trial in which the two companies achieved a capacity of 38.4 Tbps with 48x800G channels over more than 20 km without inline amplification using the current generation of CoreSmart. Lumenisity says the next generation of CoreSmart will be able to extend reach in such an application to 50–100 km with no inline amplification when paired with the WaveLogic 5 Extreme.
“The results obtained both internally and with Ciena commercial WaveLogic 5 systems show further evidence that we are bringing our world-class hollowcore fiber cable technology to market at an accelerating rate for multiple high-capacity applications, that solve real world latency issues for our customers,” commented Tony Pearson, business development director at Lumenisity.
“System characterization results of WaveLogic 5 Extreme programmable 800G and WaveLogic 5 Nano 400ZR coherent pluggables running over CoreSmart show promising results with hollowcore fiber now proven to preserve high-capacity while materially reducing latency,” added Steve Alexander, senior vice president and CTO of Ciena. “We are proud to be at the forefront of this breakthrough technological achievement where we can enable a 50% increase in reach for latency-sensitive data center interconnects.”
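The latency claim is easy to sanity-check: light in an air core travels close to vacuum speed, versus roughly c/1.47 in solid silica. A back-of-the-envelope comparison over a 1,000 km span (the refractive indices here are approximate, for illustration only):

```python
# Back-of-the-envelope one-way propagation delay over a 1,000 km span:
# standard silica fiber (group index ~1.47) vs hollowcore fiber, where
# light travels through air at close to vacuum speed (index ~1.003).
# Indices are approximate, for illustration only.
C = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(distance_km, group_index):
    return distance_km * group_index / C * 1000  # milliseconds

silica = one_way_delay_ms(1000, 1.47)
hollow = one_way_delay_ms(1000, 1.003)
print(f"silica:     {silica:.2f} ms")     # ~4.90 ms
print(f"hollowcore: {hollow:.2f} ms")     # ~3.35 ms
print(f"reduction:  {(1 - hollow / silica) * 100:.0f}%")  # ~32%
```

For a fixed latency budget, a roughly 32% delay reduction translates into about 1.47x the distance, consistent with the ~50% reach increase for latency-sensitive data center interconnects that Alexander cites.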
Separately, CTO Alexander wrote a blog titled “Ciena has joined Google Cloud’s 5G/Edge ISV Program to help enterprises accelerate migration of their IT resources to the cloud.”
Here’s an excerpt:
To facilitate the migration of enterprise IT workloads to the cloud, enterprises need higher-speed connections from the enterprise edge to the cloud provider that are scalable, with enhanced security to protect critical business data. Shared IP network connections to the cloud are acceptable at lower speeds (10 Gb/s and below). When secure, higher-speed connections to the cloud are required, however, connectivity via the IP network can become overly complex, expensive, and inefficient compared with an optical network (an “Optical Fast Lane”), which can provide a more efficient, cost-effective, and secure option for enterprises needing to reduce their workload migration times in support of evolving business objectives.
For the multi-cloud market to succeed, it must reduce the friction for enterprises to migrate their workloads to a cloud provider, as well as between cloud providers – on demand. This is analogous to the days when you had a mobile plan with one carrier, and to switch to another carrier, you had to switch mobile numbers, which was too complex for most customers, so they stuck with their existing carrier. Only when consumers could keep their phone number when they switched carriers (through Local Number Portability), did it make the mobile market truly competitive leading to improved choice, pricing, and innovation. This is what we’re trying to achieve in the multi-cloud market.
Google Cloud is one of the leading cloud providers that embraces an architecture enabling its enterprise customers to gracefully migrate their workloads to Google Cloud via an Optical Fast Lane, which enterprises can leverage to develop new and innovative applications. Ciena is excited to be a key player in this program and in addressing this opportunity in the industry. This builds off Ciena’s long-standing relationship with Google and other cloud providers, serving both private and managed high-capacity optical transport networks – principally subsea, long-haul, metro, and DCI connectivity.
Ciena is also a major supplier to Communication Service Providers (CSPs) and MSOs – serving all segments of the network – including high-speed access connectivity for Enterprises as well as cell-site routing and backhaul. In partnership with CSPs, Google Cloud is helping customers leverage their edge real-estate assets to facilitate low latency connectivity to Google Cloud and reduce the friction required for enterprises to improve their mean time to the cloud for their data and workloads.
Vodafone and Google Cloud today announced a new, six-year strategic partnership to drive the use of reliable and secure data analytics, insights, and learnings to support the introduction of new digital products and services for Vodafone customers simultaneously worldwide.
In a significant expansion of their existing agreement, Vodafone and Google Cloud will jointly build a powerful new integrated data platform with the added capability of processing and moving huge volumes of data globally from multiple systems into the cloud.
The platform, called ‘Nucleus’, will house a new system – ‘Dynamo’ – which will drive data throughout Vodafone to enable it to more quickly offer its customers new, personalized products and services across multiple markets. Dynamo will allow Vodafone to tailor new connectivity services for homes and businesses through the release of smart network features, such as providing a sudden broadband speed boost.
Capable of processing around 50 terabytes of data per day, equivalent to 25,000 hours of HD film (and growing), both Nucleus and Dynamo, which are industry firsts, are being built in-house by Vodafone and Google Cloud specialist teams. Up to 1,000 employees of both companies located in Spain, the UK, and the United States are collaborating on the project.
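As a quick sanity check on the figure quoted above, 50 terabytes per day works out to about 2 GB per hour of HD film, which is in the right range for typical HD video bitrates. The per-hour HD size is an assumption used for illustration, not a figure from the announcement:

```python
# Sanity-check the quoted throughput: 50 TB/day vs. 25,000 hours of HD film.
TB = 10**12  # bytes in a (decimal) terabyte
daily_volume_bytes = 50 * TB
hd_hours = 25_000

# Implied size of one hour of HD film under the stated equivalence.
bytes_per_hd_hour = daily_volume_bytes / hd_hours
print(f"Implied HD rate: {bytes_per_hd_hour / 10**9:.1f} GB per hour of film")
```

This comes out to 2.0 GB per hour, consistent with common HD streaming bitrates of roughly 1–3 GB per hour.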
Vodafone has already identified more than 700 use-cases to deliver new products and services quickly across Vodafone’s markets, support fact-based decision-making, reduce costs, remove duplication of data sources, and simplify and centralize operations. The speed and ease with which Vodafone’s operating companies in multiple countries can access its data analytics, intelligence, and machine-learning capabilities will also be vastly improved.
By generating more detailed insight and data-driven analysis across the organization and with its partners, Vodafone customers around the world can have a better and more enriched experience. Some of the key benefits include:
- Enhancing Vodafone’s mobile, fixed, and TV content and connectivity services through the instantaneous availability of highly personalized rewards, content, and applications. For example, a consumer might receive a sudden broadband speed boost based on personalized individual needs.
- Increasing the number of smart network services in its Google Cloud footprint from eight markets to the entire Vodafone footprint. This allows Vodafone to precisely match network roll-out to consumer demand, increase capacity at critical times, and use machine learning to predict, detect, and fix issues before customers are aware of them.
- Empowering data scientists to collaborate on key environmental and health issues in 11 countries using automated machine learning tools. Vodafone is already assisting governments and aid organisations, upon their request, with secure, anonymised, and aggregated movement data to tackle COVID-19. This partnership will further improve Vodafone’s ability to provide deeper insights, in accordance with local laws and regulations, into the spread of disease through intelligent analytics across a wider geographical area.
- Providing a complete digital replica of many of Vodafone’s internal support functions using artificial intelligence and advanced analytics. Called a digital twin, it enables analytic models on Google Cloud to improve response times to enquiries and predict future demand. The system will also support a digital twin of Vodafone’s vast digital infrastructure worldwide.
- In addition, Vodafone will re-platform its entire SAP environment to Google Cloud, including the migration of its core SAP workloads and key corporate SAP modules such as SAP Central Finance.
Johan Wibergh, Chief Technology Officer for Vodafone, said: “Vodafone is building a powerful foundation for a digital future. We have vast amounts of data which, when securely processed and made available across our footprint using the collective power of Vodafone and Google Cloud’s engineering expertise, will transform our services, to our customers and governments, and the societies where they live and serve.”
Thomas Kurian, CEO at Google Cloud, commented: “Telecommunications firms are increasingly differentiating their customer experiences through the use of data and analytics, and this has never been more important than during the current pandemic. We are thrilled to be selected as Vodafone’s global strategic cloud partner for analytics and SAP, and to co-innovate on new products that will accelerate the industry’s digital transformation.”
Revenues at Google’s Cloud business grew 46% this past quarter. However, Google continues to be a distant third to Amazon and Microsoft in the cloud business.
All data generated by Vodafone in the markets in which it operates is stored and processed in the required Google Cloud facilities as per local jurisdiction requirements and in accordance with local laws and regulations. Customer permissions and Vodafone’s own rigorous security and privacy by design processes also apply.
On the back of their collaborative work, Vodafone and Google Cloud will also explore opportunities to provide consultancy services, offered either jointly or independently, to other multi-national organizations and businesses.
The platform is being built using the latest hybrid cloud technologies from Google Cloud to facilitate the rapid standardization and movement of data in both Vodafone’s physical data centers and onto Google Cloud. Dynamo will direct all of Vodafone’s worldwide data, extracting, encrypting, and anonymizing the data from source to cloud and back again, enabling intelligent data analysis and generating efficiencies and insight.
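The extract → anonymize → encrypt flow described above can be sketched as a minimal pipeline. Everything here is a hypothetical illustration of the pattern – the `Record` shape, the SHA-256 pseudonymization, and the toy XOR "cipher" standing in for a real one such as AES-GCM – and is not Vodafone's or Google Cloud's actual implementation:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Record:
    # Hypothetical record shape; real telco records are far richer.
    customer_id: str
    market: str
    payload: bytes

def anonymize(rec: Record) -> Record:
    # Replace the direct identifier with a one-way hash so the record
    # can no longer be tied back to an individual customer.
    hashed = hashlib.sha256(rec.customer_id.encode()).hexdigest()
    return Record(customer_id=hashed, market=rec.market, payload=rec.payload)

def encrypt(rec: Record, key: bytes) -> Record:
    # Toy XOR "encryption" as a stand-in for a real cipher (e.g. AES-GCM).
    body = bytes(b ^ key[i % len(key)] for i, b in enumerate(rec.payload))
    return Record(customer_id=rec.customer_id, market=rec.market, payload=body)

def pipeline(records, key: bytes):
    # Source -> anonymize -> encrypt; the output would then be shipped
    # from the on-premises data center to the cloud for analysis.
    return [encrypt(anonymize(r), key) for r in records]

out = pipeline([Record("cust-42", "DE", b"usage data")], key=b"secret")
print(out[0].customer_id[:12], len(out[0].payload))
```

The key design point the announcement hints at is ordering: anonymization happens before the data leaves the source, so the cloud side only ever sees pseudonymized, encrypted records.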
Google plans to build three new undersea cables in 2019 to support its Google Cloud customers, and to co-commission the Hong Kong-Guam (HK-G) cable system as part of a consortium. In a blog post, Ben Treynor Sloss, vice president of Google’s cloud platform, announced the three undersea cables and five new regions.
The HK-G will be an extension of the SEA-US cable system, and will have a design capacity of more than 48Tbps. It is being built by RTI-C and NEC. Google said that together with Indigo and other cable systems, HK-G will create multiple scalable, diverse paths to Australia. In addition, Google plans to commission Curie, a private cable connecting Chile to Los Angeles, and Havfrue, a consortium cable connecting the US to Denmark and Ireland, as shown in the figure below.
Late last year, Google also revealed plans to open a Google Cloud Platform region in Hong Kong in 2018 to join its recently launched Mumbai, Sydney, and Singapore regions, as well as Taiwan and Tokyo.
Of the five new Google Cloud regions, Netherlands and Montreal will be online in the first quarter of 2018. Three others in Los Angeles, Finland, and Hong Kong will come online later this year. The Hong Kong region will be designed for high availability, launching with three zones to protect against service disruptions. The HK-G cable will provide improved network capacity for the cloud region. Google promises they are not done yet and there will be additional announcements of other regions.
In an earlier announcement last week, Google revealed that it has implemented a compile-time patch for its Google Cloud Platform infrastructure to address the major CPU security flaw disclosed by Google’s Project Zero zero-day vulnerability unit at the beginning of this year.
Diane Greene, who heads up Google’s cloud unit, often marvels at how much her company invests in Google Cloud infrastructure. It’s with good reason. Over the past three years since Greene came on board, the company has spent a whopping $30 billion beefing up the infrastructure.
Google has direct investment in 11 cables, including those planned or under construction. The three cables highlighted in yellow are being announced in this blog post. (In addition to these 11 cables where Google has direct ownership, the company also leases capacity on numerous additional submarine cables.)
In the referenced Google blog post, Mr Treynor Sloss wrote:
At Google, we’ve spent $30 billion improving our infrastructure over three years, and we’re not done yet. From data centers to subsea cables, Google is committed to connecting the world and serving our Cloud customers, and today we’re excited to announce that we’re adding three new submarine cables, and five new regions.
We’ll open our Netherlands and Montreal regions in the first quarter of 2018, followed by Los Angeles, Finland, and Hong Kong – with more to come. Then, in 2019 we’ll commission three subsea cables: Curie, a private cable connecting Chile to Los Angeles; Havfrue, a consortium cable connecting the U.S. to Denmark and Ireland; and the Hong Kong-Guam Cable system (HK-G), a consortium cable interconnecting major subsea communication hubs in Asia.
Together, these investments further improve our network—the world’s largest—which by some accounts delivers 25% of worldwide internet traffic.
Simply put, it wouldn’t be possible to deliver products like Machine Learning Engine, Spanner, BigQuery and other Google Cloud Platform and G Suite services at the quality of service users expect without the Google network. Our cable systems provide the speed, capacity and reliability Google is known for worldwide, and at Google Cloud, our customers are able to make use of the same network infrastructure that powers Google’s own services.
While we haven’t hastened the speed of light, we have built a superior cloud network as a result of the well-provisioned direct paths between our cloud and end-users, as shown in the figure below.
According to Ben: “The Google network offers better reliability, speed and security performance as compared with the nondeterministic performance of the public internet, or other cloud networks. The Google network consists of fiber optic links and subsea cables between 100+ points of presence, 7500+ edge node locations, 90+ Cloud CDN locations, 47 dedicated interconnect locations and 15 GCP regions.”