Google’s Bosun subsea cable to link Darwin, Australia to Christmas Island in the Indian Ocean

Alphabet’s Google will build a subsea cable connecting Australia’s Indian Ocean territory of Christmas Island to the northern garrison city of Darwin, a project that Australia says will boost its digital resilience. Christmas Island lies 1,500 km (930 miles) west of the Australian mainland and has a population of just 1,250, but it is strategically located in the Indian Ocean, 350 km (215 miles) from Jakarta. Other partners in the cable project include Australian data center company NextDC; Macquarie-backed telecommunications infrastructure provider Vocus; and SUBCO, which provides geographically diverse, low-latency subsea cable connectivity from Australia to the Middle East and beyond.
Google’s vice president of global network infrastructure, Brian Quigley, said in a statement the Bosun cable will link Darwin to Christmas Island, while another subsea cable will connect Melbourne on Australia’s east coast to the west coast city of Perth, then on to Christmas Island and Singapore.
Image Credit: Google Cloud
…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………
Australia is seeking to reduce its exposure to digital disruption by building more subsea cable pathways to Asia to its west, and through the South Pacific to the United States. “These new cable systems will not only expand and strengthen the resilience of Australia’s own digital connectivity through new and diversified routes, but will also complement the Government’s active work with industry and government partners to support secure, resilient and reliable connectivity across the Pacific,” said Communications Minister Michelle Rowland in a statement.
SUBCO previously built an Indian Ocean cable from Perth to Oman with spurs to the U.S. military base of Diego Garcia, and Cocos Islands, where Australia is upgrading a runway for defence surveillance aircraft.
Although the two are 900 km (560 miles) apart, Christmas Island is seen as an Indian Ocean neighbor of the Cocos Islands, which the Australian Defence Force has said are key to its maritime surveillance operations in a region where China is increasing submarine activity.
The new subsea cables will also link to a Pacific Islands network being built by Google and jointly funded by the United States, connecting the U.S. and Australia through hubs in Fiji and French Polynesia.
Vocus said in a statement that the two networks will form the world’s largest submarine cable system, spanning 42,500 km of fiber-optic cable between the U.S. and Asia via Australia.
Image Credit: Vocus

“Vocus is thrilled to have the opportunity to deepen our strategic network partnership with Google, and to play a part in establishing critical digital infrastructure for our region. Australia Connect will bolster our nation’s strategic position as a vital gateway between Asia and the United States by connecting key nodes located in Australia’s East, West, and North to global digital markets,” said Jarrod Nink, Interim Chief Executive Officer, Vocus.

“The combination of the new Australia Connect subsea cables with Vocus’ existing terrestrial route between Darwin and Brisbane will create a low-latency, secure, and stable network architecture. It will also establish Australia’s largest and most diverse domestic inter-capital network, with unparalleled reach and protection across terrestrial and subsea paths.

“By partnering with Google, we are ensuring that Vocus customers have access to high-capacity, trusted and protected digital infrastructure linking Australia to the Asia Pacific and to the USA. The new subsea paths, combined with Vocus’ existing land-based infrastructure, will provide unprecedented levels of diversity, capacity and reliability for Google, our customers and partners,” Nink said.

“Australia Connect advances Google’s mission to make the world’s information universally accessible and useful. We’re excited to collaborate with Vocus to build out the reach, reliability, and resiliency of internet access in Australia and across the Indo-Pacific region,” said Brian Quigley, VP, Global Network Infrastructure, Google Cloud.

Perth, Darwin, and Brisbane are key beneficiaries of this investment and are emerging as key nodes on the global internet, utilizing the competitive and diverse subsea and terrestrial infrastructure of the Vocus network. Vocus will be in a position to supply an initial 20-30 Tbps of capacity per fiber pair on the announced systems, depending on segment length.

References:

Equinix and Vodafone to Build Digital Subsea Cable Hub in Genoa, Italy

Canalys & Gartner: AI investments drive growth in cloud infrastructure spending

According to market research firm Canalys, global spending on cloud infrastructure services [1.] increased by 21% year on year, reaching US$82.0 billion in the third quarter of 2024. Customer investment in the hyperscalers’ AI offerings fueled growth, prompting leading cloud vendors to escalate their investments in AI.

Note 1. Canalys defines cloud infrastructure services as services providing infrastructure (IaaS and bare metal) and platforms that are hosted by third-party providers and made available to users via the Internet. 

The rankings of the top three cloud service providers –  Amazon AWS, Microsoft Azure and Google Cloud – remained stable from the previous quarter, with these providers together accounting for 64% of total expenditure. Total combined spending with these three providers grew by 26% year on year, and all three reported sequential growth. Market leader AWS maintained a year-on-year growth rate of 19%, consistent with the previous quarter. That was outpaced by both Microsoft, with 33% growth, and Google Cloud, with 36% growth. In actual dollar terms, however, AWS outgrew both Microsoft and Google Cloud, increasing sales by almost US$4.4 billion on the previous year.
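As a sanity check, the Canalys figures above can be reproduced with a few lines of arithmetic (the Q3 2023 baselines below are implied by the reported shares and growth rates, not stated directly by Canalys):

```python
# Reproduce the Canalys Q3 2024 arithmetic from the reported figures.
total_q3_2024 = 82.0                  # global cloud infra spend, US$ billion
total_q3_2023 = total_q3_2024 / 1.21  # implied by 21% YoY growth

# AWS: 33% share of Q3 2024 spend, 19% YoY growth
aws_2024 = 0.33 * total_q3_2024
aws_2023 = aws_2024 / 1.19
aws_dollar_growth = aws_2024 - aws_2023

print(f"Implied Q3 2023 market: US${total_q3_2023:.1f}bn")
print(f"AWS Q3 2024 revenue:    US${aws_2024:.2f}bn")
print(f"AWS YoY dollar growth:  US${aws_dollar_growth:.2f}bn")  # close to the ~US$4.4bn cited
```

The derived AWS dollar growth of roughly US$4.3 billion is consistent with the “almost US$4.4 billion” figure cited above, given rounding in the reported share and growth percentages.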

In Q3 2024, the cloud services market saw strong, steady growth. All three cloud hyperscalers reported positive returns on their AI investments, which have begun to contribute to their overall cloud business performance. These returns reflect a growing reliance on AI as a key driver for innovation and competitive advantage in the cloud.

With the increasing adoption of AI technologies, demand for high-performance computing and storage continues to rise, putting pressure on cloud providers to expand their infrastructure. In response, leading cloud providers are prioritizing large-scale investments in next-generation AI infrastructure. To mitigate the risks associated with under-investment – such as being unprepared for future demand or missing key opportunities – they have adopted over-investment strategies, ensuring their ability to scale offerings in line with the growing needs of their AI customers. Enterprises are convinced that AI will deliver an unprecedented boost in efficiency and productivity, so they are pouring money into hyperscalers’ AI solutions. Accordingly, cloud service providers’ capital spending (CAPEX) will sustain its rapid growth trajectory and is expected to continue on this path into 2025.

“Continued substantial expenditure will present new challenges, requiring cloud vendors to carefully balance their investments in AI with the cost discipline needed to fund these initiatives,” said Rachel Brindley, Senior Director at Canalys. “While companies should invest sufficiently in AI to capitalize on technological growth, they must also exercise caution to avoid overspending or inefficient resource allocation. Ensuring the sustainability of these investments over time will be vital to maintaining long-term financial health and competitive advantage.”

“On the other hand, the three leading cloud providers are also expediting the update and iteration of their AI foundational models, continuously expanding their associated product portfolios,” said Yi Zhang, Analyst at Canalys. “As these AI foundational models mature, cloud providers are focused on leveraging their enhanced capabilities to empower a broader range of core products and services. By integrating these advanced models into their existing offerings, they aim to enhance functionality, improve performance and increase user engagement across their platforms, thereby unlocking new revenue streams.”

Amazon Web Services (AWS) maintained its lead in the global cloud market in Q3 2024, capturing a 33% market share and achieving 19% year-on-year revenue growth. It continued to enhance and broaden its AI offerings by launching new models through Amazon Bedrock and SageMaker, including Anthropic’s upgraded Claude 3.5 Sonnet and Meta’s Llama 3.2. It reported a triple-digit year-on-year increase in AI-related revenue, outpacing its overall growth by more than three times. Over the past 18 months, AWS has introduced nearly twice as many machine learning and generative AI features as the combined offerings of the other leading cloud providers. In terms of capital expenditure, AWS announced plans to further increase investment, with projected spending of approximately US$75 billion in 2024. This investment will primarily be allocated to expanding technology infrastructure to meet the rising demand for AI services, underscoring AWS’ commitment to staying at the forefront of technological innovation and service capability.

Microsoft Azure remains the second-largest cloud provider, with a 20% market share and impressive annual growth of 33%. This growth was partly driven by AI services, which contributed approximately 12% to the overall increase. Over the past six months, use of Azure OpenAI has more than doubled, driven by increased adoption by both digital-native companies and established enterprises transitioning their applications from testing phases to full-scale production environments. To further enhance its offerings, Microsoft is expanding Azure AI by introducing industry-specific models, including advanced multimodal medical imaging models, aimed at providing tailored solutions for a broader customer base. Additionally, the company announced new cloud and AI infrastructure investments in Brazil, Italy, Mexico and Sweden to expand capacity in alignment with long-term demand forecasts.

Google Cloud, the third-largest provider, maintained a 10% market share, achieving robust year-on-year growth of 36%. It showed the strongest AI-driven revenue growth among the leading providers, with a clear acceleration compared with the previous quarter. As of September 2024, its revenue backlog increased to US$86.8 billion, up from US$78.8 billion in Q2, signaling continued momentum in the near term. Its enterprise AI platform, Vertex, has garnered substantial user adoption, with Gemini API calls increasing nearly 14-fold over the past six months. Google Cloud is actively seeking and developing new ways to apply AI tools across different scenarios and use cases. It introduced the GenAI Partner Companion, an AI-driven advisory tool designed to offer service partners personalized access to training resources, enhancing learning and supporting successful project execution. In Q3 2024, Google announced over US$7 billion in planned data center investments, with nearly US$6 billion allocated to projects within the United States.

Separate statistics from Gartner corroborate hyperscale CAPEX optimism.  Gartner predicts that worldwide end-user spending on public cloud services is on course to reach $723.4 billion next year, up from a projected $595.7 billion in 2024.  All segments of the cloud market – platform-as-a-service (PaaS), software-as-a-service (SaaS), desktop-as-a-service (DaaS), and infrastructure-as-a-service (IaaS) – are expected to achieve double-digit growth.

While SaaS will be the biggest single segment, accounting for $299.1 billion, IaaS will grow the fastest, jumping 24.8 percent to $211.9 billion.
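A quick back-of-envelope check of the Gartner figures (the 2024 IaaS base below is implied by the forecast, not quoted by Gartner):

```python
# Sanity-check the Gartner forecast figures quoted above.
spend_2024 = 595.7   # projected end-user public cloud spend, US$ billion
spend_2025 = 723.4   # forecast for next year

overall_growth = spend_2025 / spend_2024 - 1
print(f"Overall growth: {overall_growth:.1%}")   # close to the 21.5% Gartner cites for 2025

# IaaS: $211.9bn in 2025 after 24.8% growth implies the 2024 base
iaas_2025 = 211.9
iaas_2024 = iaas_2025 / 1.248
print(f"Implied 2024 IaaS spend: US${iaas_2024:.1f}bn")
```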

Gartner cloud forecast chart. Image Credit: Gartner

Like Canalys, Gartner also singles out AI for special attention. “The use of AI technologies in IT and business operations is unabatedly accelerating the role of cloud computing in supporting business operations and outcomes,” said Sid Nag, vice president analyst at Gartner. “Cloud use cases continue to expand with increasing focus on distributed, hybrid, cloud-native, and multicloud environments supported by a cross-cloud framework, making the public cloud services market achieve a 21.5 percent growth in 2025.”

……………………………………………………………………………………………………………………………………………………………………………………………………..

References:

https://canalys.com/newsroom/global-cloud-services-q3-2024

https://www.telecoms.com/public-cloud/ai-hype-fuels-21-percent-jump-in-q3-cloud-spending

Cloud Service Providers struggle with Generative AI; Users face vendor lock-in; “The hype is here, the revenue is not”

MTN Consulting: Top Telco Network Infrastructure (equipment) vendors + revenue growth changes favor cloud service providers

IDC: Public Cloud software at 2/3 of all enterprise applications revenue in 2026; SaaS is essential!

IDC: Cloud Infrastructure Spending +13.5% YoY in 4Q-2021 to $21.1 billion; Forecast CAGR of 12.6% from 2021-2026

IDC: Worldwide Public Cloud Services Revenues Grew 29% to $408.6 Billion in 2021 with Microsoft #1?

Synergy Research: Microsoft and Amazon (AWS) Dominate IT Vendor Revenue & Growth; Popularity of Multi-cloud in 2021

Google Cloud revenues up 54% YoY; Cloud native security is a top priority

AI Echo Chamber: “Upstream AI” companies huge spending fuels profit growth for “Downstream AI” firms

According to the Wall Street Journal, the AI industry has become an “Echo Chamber,” in which huge capital spending by AI infrastructure and application providers has fueled revenue and profit growth for everyone else. Market research firm Bespoke Investment Group recently created baskets of “downstream” and “upstream” AI companies.

  • The Downstream group involves “AI implementation,” which consists of firms that sell AI development tools, such as the large language models (LLMs) popularized by OpenAI’s ChatGPT since the end of 2022, or that run products incorporating them. This includes Google/Alphabet, Microsoft, Amazon, Meta Platforms (FB), along with IBM, Adobe and Salesforce.
  • Higher up the supply chain (the Upstream group) are the “AI infrastructure” providers, which sell AI chips, applications, data centers and training software. The undisputed leader is Nvidia, whose sales have tripled in a year, but the group also includes other semiconductor companies, database developer Oracle, and data center owners Equinix and Digital Realty.

The Upstream group has posted profit margins far above what analysts expected a year ago. In the second quarter, pending Nvidia’s results on Aug. 28, Upstream AI members of the S&P 500 are set to have delivered a 50% annual increase in earnings. For the remainder of 2024, they will be increasingly responsible for the profit growth that Wall Street expects from the stock market, even accounting for Intel’s huge problems and restructuring.

It should be noted that the lines between the two groups can be blurry, particularly for giants such as Amazon, Microsoft and Alphabet, which provide both AI implementation (e.g., LLMs) and infrastructure: their cloud-computing businesses turned these companies into the early winners of last year’s AI craze and reported breakneck growth during the latest earnings season. A crucial point is that their role as ultimate developers of AI applications is what has led them to make such large capital expenditures, which are responsible for the profit surge in the rest of the ecosystem. There is thus a definite trickle-down effect, in which the big tech players’ AI-directed CAPEX boosts revenue and profits for companies down the supply chain.

As the path for monetizing this technology gets longer and harder, the benefits seem to be increasingly accruing to companies higher up in the supply chain. Meta Platforms Chief Executive Mark Zuckerberg recently said the company’s coming Llama 4 language model will require 10 times as much computing power to train as its predecessor. Were it not for AI, revenues for semiconductor firms would probably have fallen during the second quarter, rather than rise 18%, according to S&P Global.

………………………………………………………………………………………………………………………………………………………..

A paper written by researchers from universities including Cambridge and Oxford found that the large language models (LLMs) behind some of today’s most exciting AI apps may have been trained on “synthetic data,” i.e., data generated by other AI. This revelation raises ethical and quality concerns. If an AI model is trained primarily or even partially on synthetic data, it might produce outputs lacking the richness and reliability of human-generated content. It could be a case of the blind leading the blind, with AI models reinforcing the limitations or biases inherent in the synthetic data they were trained on.

In this paper, the team coined the phrase “model collapse,” claiming that models trained this way will answer user prompts with low-quality outputs. The idea of “model collapse” suggests a sort of unraveling of the machine’s learning capabilities, where it fails to produce outputs with the informative or nuanced characteristics we expect. This poses a serious question for the future of AI development. If AI is increasingly trained on synthetic data, we risk creating echo chambers of misinformation or low-quality responses, leading to less helpful and potentially even misleading systems.

……………………………………………………………………………………………………………………………………………

In a recent working paper, Massachusetts Institute of Technology (MIT) economist Daron Acemoglu argued that AI’s knack for easy tasks has led to exaggerated predictions of its power to enhance productivity in hard jobs. Also, some of the new tasks created by AI may have negative social value (such as design of algorithms for online manipulation).  Indeed, data from the Census Bureau show that only a small percentage of U.S. companies outside of the information and knowledge sectors are looking to make use of AI.

References:

https://www.wsj.com/tech/ai/the-big-risk-for-the-market-becoming-an-ai-echo-chamber-e8977de0?mod=tech_lead_pos4

https://deepgram.com/learn/the-ai-echo-chamber-model-collapse-synthetic-data-risks

https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf

AI wave stimulates big tech spending and strong profits, but for how long?

AI winner Nvidia faces competition with new super chip delayed

SK Telecom and Singtel partner to develop next-generation telco technologies using AI

Telecom and AI Status in the EU

Vodafone: GenAI overhyped, will spend $151M to enhance its chatbot with AI

Data infrastructure software: picks and shovels for AI; Hyperscaler CAPEX

AI wave stimulates big tech spending and strong profits, but for how long?

Big tech companies have made it clear over the last week that they have no intention of slowing down their stunning levels of spending on artificial intelligence (AI), even though investors are getting worried that a big payoff is further down the line than most believe.

In the last quarter, Apple, Amazon, Meta, Microsoft and Google’s parent company Alphabet spent a combined $59 billion on capital expenses, 63% more than a year earlier and 161% more than four years ago. A large part of that was funneled into building data centers and packing them with new computer systems to build artificial intelligence. Only Apple has not dramatically increased spending, because it does not build the most advanced AI systems and is not a cloud service provider like the others.
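The growth percentages quoted above imply the earlier spending levels, which can be backed out directly (illustrative arithmetic, not figures reported by the companies):

```python
# Back out the implied prior capex levels from the growth rates quoted above.
combined_capex = 59.0       # last-quarter combined capex, US$ billion
yoy_increase = 0.63         # 63% more than a year earlier
four_year_increase = 1.61   # 161% more than four years ago

capex_year_ago = combined_capex / (1 + yoy_increase)
capex_four_years_ago = combined_capex / (1 + four_year_increase)

print(f"Implied capex a year ago:     US${capex_year_ago:.1f}bn")
print(f"Implied capex four years ago: US${capex_four_years_ago:.1f}bn")
```

This puts the same companies’ combined quarterly capex at roughly $36 billion a year earlier and roughly $23 billion four years ago.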

At the beginning of this year, Meta said it would spend more than $30 billion in 2024 on new tech infrastructure. In April, the company raised that to $35 billion, and on Wednesday it increased the figure to at least $37 billion. CEO Mark Zuckerberg said Meta would spend even more next year. He said he would rather build too fast than “too late” and allow his competitors to get a big lead in the AI race. Meta gives away the advanced AI systems it develops, but Zuckerberg said it was worth it. “Part of what’s important about A.I. is that it can be used to improve all of our products in almost every way,” he said.

………………………………………………………………………………………………………………………………………………………..

This new wave of generative AI is incredibly expensive. The systems work with vast amounts of data and require sophisticated computer chips and new data centers to develop the technology and serve it to customers. The companies are seeing some sales from their AI work, but it is barely moving the needle financially.

In recent months, several high-profile tech industry watchers, including Goldman Sachs’s head of equity research and a partner at the venture firm Sequoia Capital, have questioned when, or if, AI will ever produce enough benefit to bring in the sales needed to cover its staggering costs. It is not clear that AI will come close to having the same impact as the internet or mobile phones, Goldman’s Jim Covello wrote in a June report.

“What $1 trillion problem will AI solve?” he wrote. “Replacing low wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I’ve witnessed in my 30 years of closely following the tech industry.” Amazon, for its part, sees demand outstripping capacity: “The reality right now is that while we’re investing a significant amount in the AI space and in infrastructure, we would like to have more capacity than we already have today,” said Andy Jassy, Amazon’s chief executive. “I mean, we have a lot of demand right now.”

That means buying land, building data centers and all the computers, chips and gear that go into them. Amazon executives put a positive spin on all that spending. “We use that to drive revenue and free cash flow for the next decade and beyond,” said Brian Olsavsky, the company’s finance chief.

There are plenty of signs the boom will persist. In mid-July, Taiwan Semiconductor Manufacturing Company, which makes most of the in-demand chips designed by Nvidia (the only tech company now making money from AI – much more below) that are used in AI systems, said those chips would be in scarce supply until the end of 2025.

Mr. Zuckerberg said AI’s potential is super exciting. “It’s why there are all the jokes about how all the tech C.E.O.s get on these earnings calls and just talk about A.I. the whole time.”

……………………………………………………………………………………………………………………

Big tech profits and revenue continue to grow, but will massive spending produce a good ROI?

Last week’s Q2-2024 results:

  • Google parent Alphabet reported $24 billion net profit on $85 billion revenue.
  • Microsoft reported $22 billion net profit on $65 billion revenue.
  • Meta reported $13.5 billion net profit on $39 billion revenue.
  • Apple reported $21 billion net profit on $86 billion revenue.
  • Amazon reported $13.5 billion net profit on $148 billion revenue.
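The implied net margins from the results listed above can be computed directly (a simple illustration; figures are rounded as reported):

```python
# Net margin implied by each company's Q2 2024 results listed above.
results = {           # (net profit, revenue) in US$ billion
    "Alphabet":  (24.0,  85.0),
    "Microsoft": (22.0,  65.0),
    "Meta":      (13.5,  39.0),
    "Apple":     (21.0,  86.0),
    "Amazon":    (13.5, 148.0),
}
for name, (profit, revenue) in results.items():
    print(f"{name:9s} net margin: {profit / revenue:.1%}")
```

The spread is striking: Microsoft and Meta earn roughly a third of revenue as profit, while Amazon’s retail-heavy business keeps its overall margin under 10%.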

This chart sums it all up:

………………………………………………………………………………………………………………………………………………………..

References:

https://www.nytimes.com/2024/08/02/technology/tech-companies-ai-spending.html

https://www.wsj.com/business/telecom/amazon-apple-earnings-63314b6c?st=40v8du7p5rxq72j&reflink=desktopwebshare_permalink

https://www.axios.com/2024/08/02/google-microsoft-meta-ai-earnings

https://www.nvidia.com/en-us/data-center/grace-hopper-superchip/

AI Frenzy Backgrounder; Review of AI Products and Services from Nvidia, Microsoft, Amazon, Google and Meta; Conclusions

 

Ericsson and Google Cloud expand partnership with Cloud RAN solution

Ericsson has announced an expansion of its successful and long-standing partnership with Google Cloud to develop an Ericsson Cloud RAN solution on Google Distributed Cloud (GDC) [1.] that offers integrated automation and orchestration and leverages AI/ML for additional communications service provider (CSP) benefits. The companies have demonstrated a full implementation of the Ericsson vDU and vCU on GDC Edge, and the solution is running live in the Ericsson Open Lab in Ottawa, Canada, with a joint ambition for market development.

Note 1. GDC is a portfolio of fully managed hardware and software solutions that extends Google Cloud’s infrastructure and services to the edge and into customers’ data centers.

Deploying Ericsson Cloud RAN on GDC Edge enables the delivery of a fully automated, very large-scale distributed cloud, resulting in an efficient, reliable, high-performance and secure software-centric radio access network infrastructure. In this setup, the on-premises GDC Edge is managed from the public cloud, using functions such as fleet management, through a dedicated secure connection between on-prem hardware and the cloud, while also addressing CSP customers’ sovereignty and privacy requirements. This architecture ensures a clear path for CSPs toward a fully hybrid cloud solution for RAN.

Cloud RAN comprises a mobile switching center, a BBU hotel and a remote radio head

C-RAN networks comprise three primary components:

  1. A BBU hotel. This is a centralized site that functions as a data or processing center. Individual units can stack together without direct linkage or interconnect to dynamically allocate resources based on network needs. Communication between units has high bandwidth and low latency.
  2. A remote radio unit (RRU) network. Also known as a remote radio head, an RRU is the radio transceiver that connects wireless devices to the network.
  3. A fronthaul or transport network. Also known as a mobile switching center, a fronthaul or transport network is the connection layer between a BBU and a set of RRUs that use optical fiber, cellular or mmWave communication.
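The three-component breakdown above can be sketched as a minimal data model; the class and field names here are purely illustrative and not part of any Ericsson or Google API:

```python
from dataclasses import dataclass, field

# Illustrative model of the three C-RAN components described above.
# All names and attributes are hypothetical, for clarity only.

@dataclass
class BBUHotel:
    """Centralized baseband processing site; units pool resources dynamically."""
    site: str
    stacked_units: int

@dataclass
class RRU:
    """Remote radio unit (radio head) connecting wireless devices to the network."""
    cell_id: str

@dataclass
class FronthaulLink:
    """Transport layer between a BBU hotel and a set of RRUs."""
    medium: str                        # e.g. "fiber", "cellular", "mmWave"
    bbu: BBUHotel
    rrus: list[RRU] = field(default_factory=list)

# A minimal C-RAN: one BBU hotel serving three remote radio units over fiber.
bbu = BBUHotel(site="metro-dc-1", stacked_units=8)
fronthaul = FronthaulLink(medium="fiber", bbu=bbu,
                          rrus=[RRU(f"cell-{i}") for i in range(3)])
print(len(fronthaul.rrus))  # 3
```

The key architectural point the model captures is that RRUs never talk to the BBU hotel directly; all traffic traverses the fronthaul/transport layer.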

………………………………………………………………………………………………………………………………………………………………………………………………………….

Running Ericsson Cloud RAN on GDC Edge will enable CSPs to utilize Google Cloud Vertex AI, BigQuery and other cloud services to improve the usability of the massive data sets generated by Cloud RAN applications. This, in turn, will open a number of opportunities for CSPs to control, inspect, configure, and optimize their RAN infrastructure.

Ericsson Cloud RAN provides CSPs additional choice for creating networks based on open standards and interfaces using multiple vendors. The Ericsson Cloud RAN solution is infrastructure agnostic, allowing RAN applications to be deployed on any infrastructure chosen by the CSP. Ericsson is continuously collaborating with ecosystem partners and adapting its Cloud RAN applications to work on different infrastructures and configurations.

To further a cloud-native automation approach to network workloads, Ericsson and Google Cloud are jointly enhancing the solution through the Linux Foundation open-source project Nephio (a Kubernetes-based automation platform for deploying and managing highly distributed, interconnected workloads such as 5G network functions), where we jointly drive standardization of critical functionality.

Mårten Lerner, Head of Product Line Cloud RAN, Ericsson, says: “This partnership enables us to deepen and expand our valuable collaboration with Google Cloud, and it opens new opportunities for operators to utilize the benefits of cloud-native solutions and automation. Ericsson remains committed to ensuring the adaptability of its Cloud RAN applications on diverse cloud infrastructures, offering operators enhanced flexibility and choice in deploying Cloud RAN as well as supporting the evolving hybrid cloud architectures together with Google Cloud.”

Gabriele Di Piazza, Senior Director, Telecom Products, Google Cloud, says:
“We’re proud to enable Ericsson Cloud RAN to run on Google Distributed Cloud Edge infrastructure, which includes access to our AI/ML capabilities as well as cloud-native automations. We’re delighted to recognize Ericsson as a distinguished Google Cloud Partner and look forward to a continued strong partnership in support of our mutual customers.”

References:

Google Cloud infrastructure enhancements: AI accelerator, cross-cloud network and distributed cloud

Google is selling broad access to its most powerful artificial-intelligence technology for the first time as it races to make up ground in the lucrative cloud-software market.  The cloud giant now has a global network of 38 cloud regions, with a goal to operate entirely on carbon-free energy 24/7 by 2030.

At the Google Cloud Next conference today, Google Cloud announced several key infrastructure enhancements for customers, including:

  • Cloud TPU v5e: Google’s most cost-efficient, versatile, and scalable purpose-built AI accelerator to date. Now, customers can use a single Cloud TPU platform to run both large-scale AI training and inference. Cloud TPU v5e scales to tens of thousands of chips and is optimized for efficiency. Compared to Cloud TPU v4, it provides up to a 2x improvement in training performance per dollar and up to a 2.5x improvement in inference performance per dollar.
  • A3 VMs with NVIDIA H100 GPU: A3 VMs powered by NVIDIA’s H100 GPU will be generally available next month. They are purpose-built with high-performance networking and other advances to enable today’s most demanding gen AI and large language model (LLM) innovations, allowing organizations to achieve three times better training performance than the prior-generation A2.
  • GKE Enterprise: This enables the multi-cluster horizontal scaling required for the most demanding, mission-critical AI/ML workloads. Customers are already seeing productivity gains of 45% while decreasing software deployment times by more than 70%. Starting today, the benefits that come with GKE, including autoscaling, workload orchestration, and automatic upgrades, are available with Cloud TPU v5e.
  • Cross-Cloud Network: A global networking platform that helps customers connect and secure applications across clouds. It is open, workload-optimized, and offers ML-powered security to deliver zero trust. Designed to enable customers to gain access to Google services more easily from any cloud, Cross-Cloud Network reduces network latency by up to 35%.
  • Google Distributed Cloud: Designed to meet the unique demands of organizations that want to run workloads at the edge or in their data center. In addition to next-generation hardware and new security capabilities, the company is also enhancing the GDC portfolio to bring AI to the edge, with Vertex AI integrations and a new managed offering of AlloyDB Omni on GDC Hosted.

“Google’s launch on Tuesday puts it ahead of Microsoft in making AI-powered office software easily available for all customers,” wrote WSJ’s Miles Kruppa. Google will also open up availability to its large PaLM 2 model, which supports generative AI features, plus AI technology by Meta Platforms and startup Anthropic, reported Kruppa.

The efforts are Google’s latest attempt to spark growth in the cloud business, an important part of CEO Sundar Pichai’s attempts to reduce dependence on its cash-cow search engine. Recent advances in AI, and the computing resources they require, have added extra urgency to turn the technology into profitable products.

Google’s infrastructure and software offerings produce $32 billion in annual sales, about 10% of total revenue at its parent company. Its cloud unit turned a quarterly operating profit for the first time this year. That still leaves Google firmly in third place in the cloud behind AWS and Microsoft Azure. However, Google Cloud revenue is growing faster – at 31% – than that of its two bigger cloud rivals.
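A back-of-envelope check of the scale implied by those figures (assuming the stated “about 10%” share holds exactly):

```python
# Rough scale implied by the figures above (illustrative arithmetic only).
cloud_annual_sales = 32.0   # US$ billion, Google infrastructure + software
share_of_total = 0.10       # "about 10% of total revenue" at the parent company

implied_alphabet_revenue = cloud_annual_sales / share_of_total
print(f"Implied Alphabet annual revenue: ~US${implied_alphabet_revenue:.0f}bn")
```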

Google will make widely available its current large PaLM 2 model, which powers many of the company’s generative-AI features. It was previously only available for handpicked customers. The company also will make available AI technology developed by Meta Platforms and the startup Anthropic, in which it is an investor.

Google Cloud CEO Thomas Kurian, who gave the keynote speech at the Google Cloud Next 2023 conference.   Image Credit: Alphabet (parent company of Google)

……………………………………………………………………………………………………………………………

Google Cloud’s comprehensive AI platform — Vertex AI — enables customers to build, deploy and scale machine learning (ML) models. The platform has seen tremendous usage, with the number of gen AI customer projects growing more than 150x from April to July this year. Customers have access to more than 100 foundation models, including third-party and popular open-source versions, in its Model Garden. They are all optimized for different tasks and sizes, including text, chat, images, speech, software code, and more.
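As a concrete (and hypothetical) illustration of how a Model Garden foundation model is invoked, the sketch below builds the REST request for Vertex AI's predict endpoint. The project, region, and model name are placeholders, and the instances/parameters body shape should be verified against current Vertex AI documentation; the snippet only constructs and prints the request rather than sending it (an actual call would require Google Cloud credentials).

```python
import json

# Hypothetical sketch of calling a Model Garden text model via Vertex AI's
# REST "predict" endpoint. Project and region are placeholders; the request
# body follows the instances/parameters convention used by PaLM text models,
# but should be checked against current Vertex AI docs before use.
PROJECT = "my-project"   # placeholder
REGION = "us-central1"   # placeholder
MODEL = "text-bison"     # one of the 100+ Model Garden foundation models

endpoint = (
    f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT}"
    f"/locations/{REGION}/publishers/google/models/{MODEL}:predict"
)

request_body = {
    "instances": [{"prompt": "Summarize the key terms of this contract: ..."}],
    "parameters": {"temperature": 0.2, "maxOutputTokens": 256},
}

# Print instead of POSTing; a real call needs an OAuth2 bearer token.
print(endpoint)
print(json.dumps(request_body, indent=2))
```

In a real deployment this request would be sent with an authenticated HTTP POST, or more simply via the Vertex AI client SDK.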

Google also offers industry-specific models like Sec-PaLM 2 for cybersecurity, which empowers global security providers like Broadcom and Tenable; and Med-PaLM 2, which assists leading healthcare and life sciences companies including Bayer Pharmaceuticals, HCA Healthcare, and Meditech.

Partners are also using Vertex AI to build their own features for customers – including Box, Canva, Salesforce, UKG, and many others. At Next ‘23, Google announced:

  • DocuSign is working with Google to pilot how Vertex AI could be used to help generate smart contract assistants that can summarize, explain and answer what’s in complex contracts and other documents.
  • SAP is working with us to build new solutions utilizing SAP data and Vertex AI that will help enterprises apply gen AI to important business use cases, like streamlining automotive manufacturing or improving sustainability.
  • Workday’s applications for Finance and HR are now live on Google Cloud and they are working with us to develop new gen AI capabilities within the flow of Workday, as part of their multi-cloud strategy. This includes the ability to generate high-quality job descriptions and to bring Google Cloud gen AI to app developers via the skills API in Workday Extend, while helping to ensure the highest levels of data security and governance for customers’ most sensitive information.

In addition, many of the world’s largest consulting firms, including Accenture, Capgemini, Deloitte, and Wipro, have collectively planned to train more than 150,000 experts to help customers implement Google Cloud gen AI.

………………………………………………………………………………………………………………………

“The computing capabilities are improving a lot, but the applications are improving even more,” said Character Technologies CEO Noam Shazeer, who pushed for Google to release a chatbot to the public before leaving the company in 2021. “There will be trillions of dollars worth of value and product chasing tens of billions of dollars worth of hardware.”

………………………………………………………………………………………………………………

References:

https://cloud.google.com/blog/topics/google-cloud-next/welcome-to-google-cloud-next-23

https://www.wsj.com/tech/ai/google-chases-microsoft-amazon-cloud-market-share-with-ai-tools-a7ffc449

https://cloud.withgoogle.com/next

Cloud infrastructure services market grows; AI will be a major driver of future cloud service provider investments

Cloud Service Providers struggle with Generative AI; Users face vendor lock-in; “The hype is here, the revenue is not”

Everyone agrees that Generative AI has great promise and potential.  Martin Casado of Andreessen Horowitz recently wrote in the Wall Street Journal that the technology has “finally become transformative:”

“Generative AI can bring real economic benefits to large industries with established and expensive workloads. Large language models could save costs by performing tasks such as summarizing discovery documents without replacing attorneys, to take one example. And there are plenty of similar jobs spread across fields like medicine, computer programming, design and entertainment… This all means opportunity for the new class of generative AI startups to evolve along with users, while incumbents focus on applying the technology to their existing cash-cow business lines.”

A new investment wave driven by generative AI is starting to build among cloud service providers, raising questions about whether Big Tech’s spending cutbacks and layoffs will prove to be short-lived. Pressed to say when they would see a revenue lift from AI, the big U.S. cloud companies (Microsoft, Alphabet/Google, Meta/FB and Amazon) all referred to existing services that rely heavily on investments made in the past. These range from AWS’s machine learning services for cloud customers to AI-enhanced tools that Google and Meta offer to their advertising customers.

Microsoft offered only a cautious prediction of when AI would result in higher revenue. Amy Hood, chief financial officer, told investors during an earnings call last week that the revenue impact would be “gradual,” as the features are launched and start to catch on with customers. The caution failed to match high expectations ahead of the company’s earnings, wiping 7% off its stock price (ticker: MSFT) over the following week.

When it comes to the newer generative AI wave, predictions were few and far between. Amazon CEO Andy Jassy said on Thursday that the technology was in its “very early stages” and that the industry was only “a few steps into a marathon.” While many customers of Amazon’s cloud arm, AWS, see the technology as transformative, Jassy noted that “most companies are still figuring out how they want to approach it, they are figuring out how to train models.” He insisted that every part of Amazon’s business was working on generative AI initiatives and that the technology was “going to be at the heart of what we do.”

There are a number of large language models that power generative AI, and many of the AI companies that make them have forged partnerships with big cloud service providers. As business technology leaders make their picks among them, they are weighing the risks and benefits of using one cloud provider’s AI ecosystem. They say it is an important decision that could have long-term consequences, including how much they spend and whether they are willing to sink deeper into one cloud provider’s set of software, tools, and services.

To date, AI large language model makers like OpenAI, Anthropic, and Cohere have led the charge in developing proprietary large language models that companies are using to boost efficiency in areas like accounting and writing code, or adding to their own products with tools like custom chatbots. Partnerships between model makers and major cloud companies include OpenAI and Microsoft Azure, Anthropic and Cohere with Google Cloud, and the machine-learning startup Hugging Face with Amazon Web Services.  Databricks, a data storage and management company, agreed to buy the generative AI startup MosaicML in June.

If a company chooses a single AI ecosystem, it could risk “vendor lock-in” within that provider’s platform and set of services, said Ram Chakravarti, chief technology officer of Houston-based BMC Software. This paradigm is a recurring one, where a business’s IT system, software and data all sit within one digital platform, and it could become more pronounced as companies look for help in using generative AI.  Companies say the problem with vendor lock-in, especially among cloud providers, is that they have difficulty moving their data to other platforms, lose negotiating power with other vendors, and must rely on one provider to keep its services online and secure.

Cloud providers, partly in response to complaints of lock-in, now offer tools to help customers move data between their own and competitors’ platforms. Businesses have increasingly signed up with more than one cloud provider to reduce their reliance on any single vendor. That is the strategy companies could end up taking with generative AI, where by using a “multiple generative AI approach,” they can avoid getting too entrenched in a particular platform. To be sure, many chief information officers have said they willingly accept such risks for the convenience, and potentially lower cost, of working with a single technology vendor or cloud provider.

A significant challenge in incorporating generative AI is that the technology is changing so quickly, analysts have said, forcing CIOs to not only keep up with the pace of innovation, but also sift through potential data privacy and cybersecurity risks.

A company using its cloud provider’s premade tools and services, plus guardrails for protecting company data and reducing inaccurate outputs, can more quickly implement generative AI off-the-shelf,  said Adnan Masood, chief AI architect at digital technology and IT services firm UST.  “It has privacy, it has security, it has all the compliance elements in there. At that point, people don’t really have to worry so much about the logistics of things, but rather are focused on utilizing the model.”

For other companies, it is a conservative approach to use generative AI with a large cloud platform they already trust to hold sensitive company data, said Jon Turow, a partner at Madrona Venture Group. “It’s a very natural start to a conversation to say, ‘Hey, would you also like to apply AI inside my four walls?’”

End Quotes:

“Right now, the evidence is a little bit scarce about what the effect on revenue will be across the tech industry,” said James Tierney of Alliance Bernstein.

Brent Thill, an analyst at Jefferies, summed up the mood among investors: “The hype is here, the revenue is not.  Behind the scenes, the whole industry is scrambling to figure out the business model [for generative AI]: how are we going to price it? How are we going to sell it?”

………………………………………………………………………………………………………………

References:

https://www.wsj.com/articles/ai-has-finally-become-transformative-humans-scale-language-model-6a67e641

https://www.ft.com/content/56706c31-e760-44e1-a507-2c8175a170e8

https://www.wsj.com/articles/companies-weigh-growing-power-of-cloud-providers-amid-ai-boom-478c454a

https://www.techtarget.com/searchenterpriseai/definition/generative-AI?Offer=abt_pubpro_AI-Insider

Global Telco AI Alliance to progress generative AI for telcos

Curmudgeon/Sperandeo:  Impact of Generative AI on Jobs and Workers 

Bain & Co, McKinsey & Co, AWS suggest how telcos can use and adapt Generative AI

Generative AI Unicorns Rule the Startup Roost; OpenAI in the Spotlight

Generative AI in telecom; ChatGPT as a manager? ChatGPT vs Google Search

Generative AI could put telecom jobs in jeopardy; compelling AI in telecom use cases

Qualcomm CEO: AI will become pervasive, at the edge, and run on Snapdragon SoC devices


T-Mobile and Google Cloud collaborate on 5G and edge compute

T-Mobile and Google Cloud announced today they are working together to combine the power of 5G and edge compute, giving enterprises more ways to embrace digital transformation. T-Mobile will connect the 5G Advanced Network Solutions (ANS) [1.] suite of public, private and hybrid 5G networks with Google Distributed Cloud Edge (GDC Edge) to help customers embrace next-generation 5G applications and use cases — like AR/VR experiences.

Note 1. 5G ANS is an end-to-end portfolio of deployable 5G solutions, comprising 5G Connectivity, Edge Computing, and Industry Solutions – along with a partnership approach that simplifies creating, deploying and managing unique solutions to unique problems.

More companies are turning to edge computing as they focus on digital transformation. In fact, the global edge compute market is expected to grow at 37.9% annually, reaching $155.9 billion by 2030. And the combination of edge computing with the low latency, high speeds, and reliability of 5G will be key to promising use cases in industries like retail, manufacturing, logistics, and smart cities. GDC Edge customers across industries will be able to easily leverage T-Mobile’s 5G ANS to get the low latency, high speeds, and reliability they need for any use case that requires data-intensive computing, such as AR or computer vision.
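The 2030 projection implies heavy compounding. As a back-of-envelope check (assuming the quoted 37.9% is a compound annual growth rate from a 2022 baseline, which the source does not state), the implied starting market size can be recovered:

```python
# Back-of-envelope check: if the edge compute market compounds at 37.9% per
# year and reaches $155.9B in 2030, what 2022 base size does that imply?
cagr = 0.379
target_2030 = 155.9        # $B, projected market size
years = 2030 - 2022        # 8 years of compounding (assumed baseline year)

implied_2022_base = target_2030 / (1 + cagr) ** years
print(f"Implied 2022 market size: ${implied_2022_base:.1f}B")  # ≈ $11.9B
```

An implied base of roughly $12 billion is in line with commonly cited estimates of today's edge compute market, which supports reading the 37.9% figure as an annual rate rather than total growth.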

For example, manufacturing companies could use computer vision technology to improve safety by monitoring equipment and automatically notifying support personnel if there are issues. And municipalities could leverage augmented reality to keep workers at a safe distance from dangerous situations by using machines to remotely perform hazardous tasks.

To demonstrate the promise of 5G ANS and GDC Edge in a retail setting, T-Mobile created a proof of concept at T-Mobile’s Tech Experience 5G Hub called the “magic mirror” with the support of Google Cloud.  This interactive display leverages cloud-based processing and image rendering at the edge to make retail products “magically” come to life. Users simply hold a product in front of the mirror to make interactive videos or product details — such as ingredients or instructions — appear onscreen in near real-time.

“We’ve built the largest and fastest 5G network in the country. This partnership brings together the powerful combination of 5G and edge computing to unlock the expansion of technologies such as AR and VR from limited applications to large-scale adoption,” said Mishka Dehghan, Senior Vice President, Strategy, Product, and Solutions Engineering, T-Mobile Business Group. “From providing a shopping experience in a virtual reality environment to improving safety through connected sensors or computer vision technologies, T-Mobile’s 5G ANS combined with Google Cloud’s innovative edge compute technology can bring the connected world to businesses across the country.”

“Google Cloud is committed to helping telecommunication companies accelerate their growth, competitiveness, and digital journeys,” said Amol Phadke, General Manager, Global Telecom Industry, Google Cloud. “Google Distributed Cloud Edge and T-Mobile’s 5G ANS will help businesses deliver more value to their customers by unlocking new capabilities through 5G and edge technologies.”

T-Mobile is also working with Microsoft Azure, Amazon Web Services and Ericsson on advanced 5G solutions.


References:

https://www.t-mobile.com/news/business/t-mobile-and-google-cloud-join-5g-advanced-network-solutions

https://www.t-mobile.com/business/solutions/networking/5G-advanced-solutions


Cloud RAN with Google Distributed Cloud Edge; Strategy: host network functions of other vendors on Google Cloud

At MWC 2023 Barcelona, Google Cloud announced that it can now run radio access network (RAN) functions as software on Google Distributed Cloud Edge, providing communications service providers (CSPs, aka telcos) with a common and agile operating model that extends from the core of the network to the edge, for a high degree of programmability, flexibility, and low operating expenses.

CSPs have already embraced open architecture, open-source software, disaggregation, automation, cloud, AI and machine learning, and new operational models, to name a few. The journey started in the last decade with Network Functions Virtualization, primarily with value-added services and then deeper with core network applications; in the past few years, that evolved into a push towards cloud-native. With significant progress in the core, the time for Cloud RAN is now, according to Google.

However, whether for industry- or region-specific compliance reasons, data sovereignty needs, or latency or local data-processing requirements, most of the network functions deployed in a mobile or wireline network may have to follow a hybrid deployment model, where functions are placed flexibly across a combination of on-premises locations and cloud regions. RAN, which is traditionally implemented with proprietary hardware, falls into that camp as well.

In 2021, the company launched Google Distributed Cloud Edge (GDC Edge), an on-premises offering that extends a consistent operating model from Google's public cloud regions to the customer’s premises. For CSPs, this hybrid approach makes it possible to modernize the network while enabling easy development, fast innovation, efficient scale and operational efficiency; all while simultaneously helping to reduce technology risk and operational costs. GDC Edge became generally available in 2022.

Google Cloud does not plan to develop its own private wireless networking services to sell to enterprise customers, nor does the company plan to develop its own networking software functions, according to Gabriele Di Piazza, an executive with Google Cloud who spoke at MWC 2023 in Barcelona. Instead, Google Cloud would like to host the networking software functions of other vendors, like Ericsson and Mavenir, in its cloud. It would also like to resell private networking services from operators and others.

Rather than develop its own cloud-native 5G SA core network or other cloud networking software (as Microsoft and AWS are doing), Google Cloud wants to “avoid partner conflict,” Di Piazza said. Google has been building its telecom cloud story around its Anthos platform, which competes directly against the likes of AWS and Microsoft for telecom customers. According to a number of analysts, AWS appears to enjoy an early lead in the telecom industry – but its rivals, like Google, are looking for ways to gain a competitive advantage. One of Google’s competitive arguments is that it doesn’t have aspirations to sell network functions; therefore, according to Di Piazza, the company can remain a trusted, unbiased partner.

Image Credit:  Google Cloud

Last year, the executive said that moving to a cloud-native architecture is mandatory, not optional for telcos, adding that telecom operators are facing lots of challenges right now due to declining revenue growth, exploding data consumption and increasing capital requirements for 5G.  Cloud-native networks have significant challenges. For example, there is a lack of standardization among the various open-source groups and there’s fragmentation among parts of the cloud-native ecosystem, particularly among OSS vendors, cloud providers and startups.

In recent years, Google, Microsoft, Amazon, Oracle and other cloud computing service providers have been working to develop products and services that are specifically designed to allow telecom network operators to run their network functions inside a third-party cloud environment. For example, AT&T and Dish Network are running their 5G SA core networks on Microsoft Azure and AWS, respectively.

Matt Beal, a senior VP of software development for Oracle Communications, said his company offers both a substantial cloud computing service and a lengthy list of network functions, and maintains that Oracle is a better partner for telecom network operators because of it. Beal said Oracle has long offered a wide range of networking functions, from policy control to network slice management, that can be run inside its cloud or inside the cloud of other companies. He said that, because Oracle developed those functions itself, the company has more experience running them in a cloud environment than a company that hasn’t done that kind of work. Beal’s implication is that network operators ought to partner with the best and most experienced companies in the market. That position runs directly counter to Google’s competitive stance on the topic. “When you know how these things work in real life … you can optimize your cloud to run these workloads,” he said.

While a number of other telecom network operators have put things like customer support or IT into the cloud, they have been reluctant to release critical network functions like policy control to a cloud service provider.

References:

https://www.lightreading.com/mobile-world-congress/google-cloud-takes-non-threatening-stance-in-pursuit-of-telecom/d/d-id/783559?

https://cloud.google.com/solutions/telecommunications

https://cloud.google.com/blog/topics/telecommunications

https://www.silverliningsinfo.com/cloud/google-cloud-exec-says-cloud-native-architecture-will-reduce-costs


Synergy: Q3 Cloud Spending Up Over $11 Billion YoY; Google Cloud gained market share in 3Q-2022

Synergy Research estimates the cloud infrastructure market at $57B in Q3-2022. That was up by well over $11 billion from the third quarter of last year despite two fierce headwinds – a historically strong U.S. dollar and a severely restricted Chinese market. The incremental spending represents year-on-year growth of 24%. If exchange rates had remained constant over the last year, the growth rate would have been over 30%.

Google is alone among the hyper-scaler giants in gaining market share sequentially: Google Cloud increased its market share in Q3 compared to the prior quarter, while Amazon’s and Microsoft’s market shares remained relatively unchanged. Compared to a year ago, all three have increased their market share by at least a percentage point. Amazon, Microsoft and Google combined had a 66% share of the worldwide market in the quarter, up from 61% a year ago. In aggregate, all other cloud providers have tripled their revenues since late 2017, though their collective market share has plunged from 50% to 34% as their growth rates remain far below the market leaders.

Synergy estimates that quarterly cloud infrastructure service revenues (including IaaS, PaaS and hosted private cloud services) were $57.5 billion, with trailing twelve-month revenues reaching $217 billion. Public IaaS and PaaS services account for the bulk of the market and those grew by 26% in Q3. The dominance of the major cloud providers is even more pronounced in public cloud, where the top three control 72% of the market. Geographically, the cloud market continues to grow strongly in all regions of the world.
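Synergy's headline figures can be cross-checked with simple arithmetic; this sketch uses only the numbers quoted above:

```python
# Sanity-check Synergy's Q3-2022 figures: $57.5B quarterly revenue at 24%
# year-on-year growth implies the year-ago quarter and the dollar increment.
q3_2022 = 57.5               # $B, quarterly cloud infrastructure revenue
yoy_growth = 0.24

q3_2021 = q3_2022 / (1 + yoy_growth)   # implied year-ago quarter
increment = q3_2022 - q3_2021          # year-on-year dollar increase

top3_share = 0.66                      # Amazon + Microsoft + Google
top3_revenue = top3_share * q3_2022    # top-three combined revenue

print(f"Q3-2021 (implied): ${q3_2021:.1f}B")    # ≈ $46.4B
print(f"YoY increment:     ${increment:.1f}B")  # ≈ $11.1B, 'well over $11B'
print(f"Top-3 revenue:     ${top3_revenue:.1f}B")
```

The implied year-on-year increment of about $11.1 billion matches the "well over $11 billion" claim in the opening paragraph.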

“It is a strong testament to the benefits of cloud computing that despite two major obstacles to growth the worldwide market still expanded by 24% from last year. Had exchange rates remained stable and had the Chinese market remained on a more normal path then the growth rate percentage would have been well into the thirties,” said John Dinsdale, a Chief Analyst at Synergy Research Group. “The three leading cloud providers all report their financials in US dollars so their growth rates are all beaten down by the historic strength of their home currency. Despite that all three have increased their share of a rapidly growing market over the last year, which is a strong testament to their strategies and performance. Beyond these three, all other cloud providers in aggregate have been losing around three percentage points of market share per year but are still seeing strong double-digit revenue growth. The key for these companies is to focus on specific portions of the market where they can outperform the big three.”

About Synergy Research Group:

Synergy provides quarterly market tracking and segmentation data on IT and Cloud related markets, including vendor revenues by segment and by region. Market shares and forecasts are provided via Synergy’s uniquely designed online database SIA™, which enables easy access to complex data sets. Synergy’s Competitive Matrix™ and CustomView™ take this research capability one step further, enabling our clients to receive on-going quantitative market research that matches their internal, executive view of the market segments they compete in.

References:

https://www.prnewswire.com/news-releases/q3-cloud-spending-up-over-11-billion-from-2021-despite-major-headwinds-google-increases-its-market-share-301661926.html

Synergy Research: public cloud service and infrastructure market hit $126B in 1Q-2022

Cloud Computing Giants Growth Slows; Recession Looms, Layoffs Begin

