Google Cloud
AI Echo Chamber: “Downstream AI” companies’ huge spending fuels profit growth for “Upstream AI” firms
According to the Wall Street Journal, the AI industry has become an “Echo Chamber,” where huge capital spending by AI application and infrastructure providers has fueled revenue and profit growth for everyone else. Market research firm Bespoke Investment Group recently created baskets of “downstream” and “upstream” AI companies.
- The Downstream group covers “AI implementation” and consists of firms that sell AI development tools, such as the large language models (LLMs) popularized by OpenAI’s ChatGPT since the end of 2022, or run products that can incorporate them. This includes Google/Alphabet, Microsoft, Amazon and Meta Platforms (FB), along with IBM, Adobe and Salesforce.
- Higher up the supply chain is the Upstream group: the “AI infrastructure” providers, which sell AI chips, applications, data centers and training software. The undisputed leader is Nvidia, whose sales have tripled in a year, but the group also includes other semiconductor companies, database developer Oracle, and data-center owners Equinix and Digital Realty.
The Upstream group of companies has posted profit margins far above what analysts expected a year ago. In the second quarter, and pending Nvidia’s results on Aug. 28, Upstream AI members of the S&P 500 are set to have delivered a 50% annual increase in earnings. For the remainder of 2024, they will be increasingly responsible for the profit growth that Wall Street expects from the stock market, even accounting for Intel’s huge problems and restructuring.
It should be noted that the lines between the two groups can be blurry, particularly for giants such as Amazon, Microsoft and Alphabet, which provide both AI implementation (e.g., LLMs) and AI infrastructure: their cloud-computing businesses turned these companies into the early winners of the AI craze last year and reported breakneck growth during this latest earnings season. A crucial point is that it is their role as the ultimate developers of AI applications that has led them to make very large capital expenditures, which are responsible for the profit surge in the rest of the ecosystem. So there is a definite trickle-down effect, where the big tech players’ AI-directed capex is boosting revenue and profits for the companies higher up the supply chain.
As the path for monetizing this technology gets longer and harder, the benefits seem to be increasingly accruing to companies higher up in the supply chain. Meta Platforms Chief Executive Mark Zuckerberg recently said the company’s coming Llama 4 language model will require 10 times as much computing power to train as its predecessor. Were it not for AI, revenues for semiconductor firms would probably have fallen during the second quarter, rather than risen 18%, according to S&P Global.
………………………………………………………………………………………………………………………………………………………..
A paper written by researchers from universities including Cambridge and Oxford found that the large language models (LLMs) behind some of today’s most exciting AI apps may have been trained on “synthetic data,” i.e., data generated by other AI. This revelation raises ethical and quality concerns. If an AI model is trained primarily, or even partially, on synthetic data, it might produce outputs that lack the richness and reliability of human-generated content. It could be a case of the blind leading the blind, with AI models reinforcing the limitations or biases inherent in the synthetic data they were trained on.
In this paper, the team coined the phrase “model collapse,” claiming that models trained this way will answer user prompts with low-quality outputs. The idea of “model collapse” suggests a sort of unraveling of the machine’s learning capabilities, where it fails to produce outputs with the informative or nuanced characteristics we expect. This poses a serious question for the future of AI development. If AI is increasingly trained on synthetic data, we risk creating echo chambers of misinformation or low-quality responses, leading to less helpful and potentially even misleading systems.
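The mechanism is easy to demonstrate with a toy model. The sketch below is our illustration, not the paper’s actual experiment: it repeatedly fits a one-dimensional Gaussian “model” to data, then trains each successive generation only on samples drawn from the previous generation’s fit. The fitted spread steadily collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(data):
    # "Train" a toy model: maximum-likelihood Gaussian fit
    # (mean and biased std estimate, ddof=0).
    return data.mean(), data.std()

n = 50                                # samples per generation
data = rng.normal(0.0, 1.0, n)        # generation 0: "real" data
stds = []
for generation in range(500):
    mu, sigma = fit(data)
    stds.append(sigma)
    data = rng.normal(mu, sigma, n)   # next model sees only synthetic data

print(f"fitted std, generation 1:   {stds[0]:.3f}")
print(f"fitted std, generation 500: {stds[-1]:.3f}")
```

Each refit loses a little of the tails (the maximum-likelihood variance estimate is biased low, and finite samples miss rare events), so the distribution narrows generation after generation; the paper argues an analogous loss of low-probability “knowledge” occurs in LLMs.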
……………………………………………………………………………………………………………………………………………
In a recent working paper, Massachusetts Institute of Technology (MIT) economist Daron Acemoglu argued that AI’s knack for easy tasks has led to exaggerated predictions of its power to enhance productivity in hard jobs. Also, some of the new tasks created by AI may have negative social value (such as design of algorithms for online manipulation). Indeed, data from the Census Bureau show that only a small percentage of U.S. companies outside of the information and knowledge sectors are looking to make use of AI.
References:
https://deepgram.com/learn/the-ai-echo-chamber-model-collapse-synthetic-data-risks
https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf
AI wave stimulates big tech spending and strong profits, but for how long?
AI winner Nvidia faces competition with new super chip delayed
SK Telecom and Singtel partner to develop next-generation telco technologies using AI
Telecom and AI Status in the EU
Vodafone: GenAI overhyped, will spend $151M to enhance its chatbot with AI
Data infrastructure software: picks and shovels for AI; Hyperscaler CAPEX
AI wave stimulates big tech spending and strong profits, but for how long?
Big tech companies have made it clear over the last week that they have no intention of slowing down their stunning levels of spending on artificial intelligence (AI), even though investors are getting worried that a big payoff is further down the line than most believe.
In the last quarter, Apple, Amazon, Meta, Microsoft and Google’s parent company Alphabet spent a combined $59 billion on capital expenses, 63% more than a year earlier and 161% more than four years ago. A large part of that was funneled into building data centers and packing them with new computer systems to build artificial intelligence. Only Apple has not dramatically increased spending, because it does not build the most advanced AI systems and is not a cloud service provider like the others.
At the beginning of this year, Meta said it would spend more than $30 billion in 2024 on new tech infrastructure. In April, the company raised that to $35 billion. On Wednesday, it increased the figure to at least $37 billion. CEO Mark Zuckerberg said Meta would spend even more next year. He said he’d rather build too fast “rather than too late” than allow his competitors to get a big lead in the AI race. Meta gives away the advanced AI systems it develops, but Zuckerberg said it was worth it: “Part of what’s important about AI is that it can be used to improve all of our products in almost every way,” he said.
………………………………………………………………………………………………………………………………………………………..
This new wave of generative AI is incredibly expensive. The systems work with vast amounts of data and require sophisticated computer chips and new data centers to develop the technology and serve it to customers. The companies are seeing some sales from their AI work, but it is barely moving the needle financially.
In recent months, several high-profile tech industry watchers, including Goldman Sachs’s head of equity research and a partner at the venture firm Sequoia Capital, have questioned when, or if, AI will ever produce enough benefit to bring in the sales needed to cover its staggering costs. It is not clear that AI will come close to having the same impact as the internet or mobile phones, Goldman’s Jim Covello wrote in a June report.
“What $1 trillion problem will AI solve?” he wrote. “Replacing low wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I’ve witnessed in my 30 years of closely following the tech industry.”
“The reality right now is that while we’re investing a significant amount in the AI space and in infrastructure, we would like to have more capacity than we already have today,” said Andy Jassy, Amazon’s chief executive. “I mean, we have a lot of demand right now.”
That means buying land, building data centers and all the computers, chips and gear that go into them. Amazon executives put a positive spin on all that spending. “We use that to drive revenue and free cash flow for the next decade and beyond,” said Brian Olsavsky, the company’s finance chief.
There are plenty of signs the boom will persist. In mid-July, Taiwan Semiconductor Manufacturing Company, which makes most of the in-demand chips designed by Nvidia (the ONLY tech company that is now making money from AI; much more below) that are used in AI systems, said those chips would be in scarce supply until the end of 2025.
Zuckerberg said AI’s potential is super exciting. “It’s why there are all the jokes about how all the tech CEOs get on these earnings calls and just talk about AI the whole time.”
……………………………………………………………………………………………………………………
Big tech profits and revenue continue to grow, but will massive spending produce a good ROI?
Last week’s Q2-2024 results:
- Google parent Alphabet reported $24 billion net profit on $85 billion revenue.
- Microsoft reported $22 billion net profit on $65 billion revenue.
- Meta reported $13.5 billion net profit on $39 billion revenue.
- Apple reported $21 billion net profit on $86 billion revenue.
- Amazon reported $13.5 billion net profit on $148 billion revenue.
………………………………………………………………………………………………………………………………………………………..
References:
https://www.nytimes.com/2024/08/02/technology/tech-companies-ai-spending.html
https://www.axios.com/2024/08/02/google-microsoft-meta-ai-earnings
https://www.nvidia.com/en-us/data-center/grace-hopper-superchip/
AI Frenzy Backgrounder; Review of AI Products and Services from Nvidia, Microsoft, Amazon, Google and Meta; Conclusions
Ericsson and Google Cloud expand partnership with Cloud RAN solution
Ericsson has announced an expansion of its successful and long-standing partnership with Google Cloud to develop an Ericsson Cloud RAN solution on Google Distributed Cloud (GDC) [1.] that offers integrated automation and orchestration and leverages AI/ML for additional communications service provider (CSP) benefits. The companies have successfully demonstrated the full implementation of the Ericsson vDU and vCU on GDC Edge, and the solution is running live in the Ericsson Open Lab in Ottawa, Canada, with a joint ambition for market development.
Note 1. GDC is a portfolio of fully managed hardware and software solutions that extends Google Cloud’s infrastructure and services to the edge and into customer data centers.
Deploying Ericsson Cloud RAN on GDC Edge enables the delivery of a fully automated, very large-scale distributed cloud, resulting in an efficient, reliable, high-performance and secure software-centric radio access network infrastructure. In this setup, the on-premises GDC Edge is managed from the public cloud, using functions such as fleet management, through a dedicated secure connection between on-prem hardware and the cloud, while also addressing the sovereignty and privacy requirements of CSP customers. This architecture provides CSPs a clear path toward the implementation of a fully hybrid cloud solution for RAN.
Cloud RAN comprises a BBU hotel, remote radio heads and a fronthaul network
C-RAN networks comprise three primary components:
- A BBU (baseband unit) hotel. This is a centralized site that functions as a data or processing center. Individual units can stack together without direct linkage or interconnection, and resources are allocated dynamically based on network needs. Communication between units requires high bandwidth and low latency.
- A remote radio unit (RRU) network. Also known as remote radio heads, RRUs are the radio transceivers that connect wireless devices to the network, much like access points.
- A fronthaul or transport network. The fronthaul is the connection layer between a BBU and a set of RRUs, using optical fiber, cellular or mmWave links.
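The pooling idea behind the BBU hotel can be sketched in a few lines. This is an illustrative model only (the class and field names are ours, not from any vendor’s API): baseband capacity sits in a shared pool and is assigned to radio sites on demand.

```python
from dataclasses import dataclass, field

@dataclass
class BBU:
    capacity: int            # baseband processing units in this unit
    allocated: int = 0

@dataclass
class RRU:
    site: str
    demand: int              # processing units this radio site needs

@dataclass
class BBUHotel:
    """Centralized BBU pool: capacity is allocated dynamically per site."""
    bbus: list = field(default_factory=list)

    def allocate(self, rru: RRU) -> bool:
        # First-fit allocation across the pool; pooling is what lets
        # C-RAN shift capacity to wherever demand currently is.
        for bbu in self.bbus:
            if bbu.capacity - bbu.allocated >= rru.demand:
                bbu.allocated += rru.demand
                return True
        return False

hotel = BBUHotel([BBU(capacity=100), BBU(capacity=100)])
sites = [RRU("downtown", 60), RRU("stadium", 70), RRU("suburb", 50)]
served = [hotel.allocate(r) for r in sites]
print(served)  # the third site exceeds remaining pooled capacity
```

The contrast with traditional RAN is that each cell site would own a fixed BBU; pooling lets the same total capacity serve uneven, shifting demand.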
………………………………………………………………………………………………………………………………………………………………………………………………………….
Running Ericsson Cloud RAN on GDC Edge will enable CSPs to use Google Cloud Vertex AI, BigQuery and other cloud services to improve the usability of the massive data sets produced by Cloud RAN applications. This, in turn, will open a number of opportunities for CSPs to control, inspect, configure, and optimize their RAN infrastructure.
Ericsson Cloud RAN provides CSPs additional choice for creating networks based on open standards and interfaces using multiple vendors. The Ericsson Cloud RAN solution is infrastructure agnostic, allowing RAN applications to be deployed on any infrastructure chosen by the CSP. Ericsson is continuously collaborating with ecosystem partners and adapting its Cloud RAN applications to work on different infrastructures and configurations.
To further a cloud-native automation approach to network workloads, Ericsson and Google Cloud are jointly enhancing the solution through the Linux Foundation open-source project Nephio (a Kubernetes-based automation platform for deploying and managing highly distributed, interconnected workloads such as 5G network functions), where the two companies jointly drive standardization of critical functionality.
Mårten Lerner, Head of Product Line Cloud RAN, Ericsson, says: “This partnership enables us to deepen and expand our valuable collaboration with Google Cloud, and it opens new opportunities for operators to utilize the benefits of cloud-native solutions and automation. Ericsson remains committed to ensuring the adaptability of its Cloud RAN applications on diverse cloud infrastructures, offering operators enhanced flexibility and choice in deploying Cloud RAN as well as supporting the evolving hybrid cloud architectures together with Google Cloud.”
Gabriele Di Piazza, Senior Director, Telecom Products, Google Cloud, says:
“We’re proud to enable Ericsson Cloud RAN to run on Google Distributed Cloud Edge infrastructure, which includes access to our AI/ML capabilities as well as cloud-native automations. We’re delighted to recognize Ericsson as a distinguished Google Cloud Partner and look forward to a continued strong partnership in support of our mutual customers.”
https://www.techtarget.com/searchnetworking/definition/cloud-radio-access-network-C-RAN
Ericsson and O2 Telefónica demo Europe’s 1st Cloud RAN 5G mmWave FWA use case
Cloud RAN with Google Distributed Cloud Edge; Strategy: host network functions of other vendors on Google Cloud
Vodafone Trials Nokia’s Cloud RAN; Other 5G Research Partnerships
Nokia launches anyRAN to drive CloudRAN partnerships for mobile network operators and enterprises
Omdia and Ericsson on telco transitioning to cloud native network functions (CNFs) and 5G SA core networks
Google Cloud infrastructure enhancements: AI accelerator, cross-cloud network and distributed cloud
Google is selling broad access to its most powerful artificial-intelligence technology for the first time as it races to make up ground in the lucrative cloud-software market. The cloud giant now has a global network of 38 cloud regions, with a goal to operate entirely on carbon-free energy 24/7 by 2030.
At the Google Cloud Next conference today, Google Cloud announced several key infrastructure enhancements for customers, including:
- Cloud TPU v5e: Google’s most cost-efficient, versatile, and scalable purpose-built AI accelerator to date. Now, customers can use a single Cloud TPU platform to run both large-scale AI training and inference. Cloud TPU v5e scales to tens of thousands of chips and is optimized for efficiency. Compared to Cloud TPU v4, it provides up to a 2x improvement in training performance per dollar and up to a 2.5x improvement in inference performance per dollar.
- A3 VMs with NVIDIA H100 GPU: A3 VMs powered by NVIDIA’s H100 GPU will be generally available next month. It is purpose-built with high-performance networking and other advances to enable today’s most demanding gen AI and large language model (LLM) innovations. This allows organizations to achieve three times better training performance over the prior-generation A2.
- GKE Enterprise: This enables multi-cluster horizontal scaling, which is required for the most demanding, mission-critical AI/ML workloads. Customers are already seeing productivity gains of 45%, while decreasing software deployment times by more than 70%. Starting today, the benefits that come with GKE, including autoscaling, workload orchestration, and automatic upgrades, are now available with Cloud TPU v5e.
- Cross-Cloud Network: A global networking platform that helps customers connect and secure applications across clouds. It is open, workload-optimized, and offers ML-powered security to deliver zero trust. Designed to enable customers to gain access to Google services more easily from any cloud, Cross-Cloud Network reduces network latency by up to 35%.
- Google Distributed Cloud: Designed to meet the unique demands of organizations that want to run workloads at the edge or in their data center. In addition to next-generation hardware and new security capabilities, the company is also enhancing the GDC portfolio to bring AI to the edge, with Vertex AI integrations and a new managed offering of AlloyDB Omni on GDC Hosted.
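As a quick illustration of how claims like “up to 2x training performance per dollar” combine throughput and price, here is a back-of-the-envelope sketch. All numbers are hypothetical placeholders, not Google’s published throughputs or prices:

```python
def perf_per_dollar(throughput: float, hourly_price: float) -> float:
    # Work completed per dollar spent = (work/hour) / ($/hour).
    return throughput / hourly_price

# Hypothetical figures chosen only to illustrate the arithmetic.
tpu_v4  = perf_per_dollar(throughput=100.0, hourly_price=3.22)
tpu_v5e = perf_per_dollar(throughput=62.0,  hourly_price=1.00)

print(f"v5e vs. v4 training perf-per-dollar: {tpu_v5e / tpu_v4:.2f}x")
```

The point is that a chip can deliver lower absolute throughput and still win on perf-per-dollar if its price drops faster, which is why cost-efficiency claims and raw-speed claims are not interchangeable.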
“Google’s launch on Tuesday puts it ahead of Microsoft in making AI-powered office software easily available for all customers,” wrote WSJ’s Miles Kruppa. Google will also open up availability of its large PaLM 2 model, which supports generative AI features, plus AI technology from Meta Platforms and startup Anthropic, Kruppa reported.
The efforts are Google’s latest attempt to spark growth in the cloud business, an important part of CEO Sundar Pichai’s attempts to reduce dependence on its cash-cow search engine. Recent advances in AI, and the computing resources they require, have added extra urgency to turn the technology into profitable products.
Google’s infrastructure and software offerings produce $32 billion in annual sales, about 10% of total revenue at parent company Alphabet. Its cloud unit turned a quarterly operating profit for the first time this year. That still leaves Google firmly in third place in the cloud behind AWS and Microsoft Azure. However, Google Cloud revenue is growing faster, at 31%, than that of its two bigger cloud rivals.
Google will make widely available its current large PaLM 2 model, which powers many of the company’s generative-AI features. It was previously only available for handpicked customers. The company also will make available AI technology developed by Meta Platforms and the startup Anthropic, in which it is an investor.
Google Cloud CEO Thomas Kurian gave the keynote speech at the Google Cloud Next 2023 conference. Image Credit: Alphabet (parent company of Google)
……………………………………………………………………………………………………………………………
Google Cloud’s comprehensive AI platform, Vertex AI, enables customers to build, deploy and scale machine learning (ML) models. Google Cloud has seen tremendous usage, with the number of gen AI customer projects growing more than 150 times from April to July this year. Customers have access to more than 100 foundation models, including third-party and popular open-source versions, in the Vertex AI Model Garden. They are all optimized for different tasks and different sizes, including text, chat, images, speech, software code, and more.
Google also offers industry-specific models, such as Sec-PaLM 2 for cybersecurity, which empowers global security providers like Broadcom and Tenable, and Med-PaLM 2, which assists leading healthcare and life sciences companies including Bayer Pharmaceuticals, HCA Healthcare, and Meditech.
Partners, including Box, Canva, Salesforce, UKG, and many others, are also using Vertex AI to build their own features for customers. Announcements at Next ‘23 included:
- DocuSign is working with Google to pilot how Vertex AI could be used to help generate smart contract assistants that can summarize, explain and answer what’s in complex contracts and other documents.
- SAP is working with Google to build new solutions that combine SAP data with Vertex AI, helping enterprises apply gen AI to important business use cases, like streamlining automotive manufacturing or improving sustainability.
- Workday’s applications for Finance and HR are now live on Google Cloud, and Workday is working with Google to develop new gen AI capabilities within the flow of Workday as part of its multi-cloud strategy. This includes the ability to generate high-quality job descriptions and to bring Google Cloud gen AI to app developers via the skills API in Workday Extend, while helping to ensure the highest levels of data security and governance for customers’ most sensitive information.
In addition, many of the world’s largest consulting firms, including Accenture, Capgemini, Deloitte, and Wipro, have collectively planned to train more than 150,000 experts to help customers implement Google Cloud GenAI.
………………………………………………………………………………………………………………………
“The computing capabilities are improving a lot, but the applications are improving even more,” said Character Technologies CEO Noam Shazeer, who pushed for Google to release a chatbot to the public before leaving the company in 2021. “There will be trillions of dollars worth of value and product chasing tens of billions of dollars worth of hardware.”
………………………………………………………………………………………………………………
References:
https://cloud.google.com/blog/topics/google-cloud-next/welcome-to-google-cloud-next-23
https://www.wsj.com/tech/ai/google-chases-microsoft-amazon-cloud-market-share-with-ai-tools-a7ffc449
https://cloud.withgoogle.com/next
Cloud Service Providers struggle with Generative AI; Users face vendor lock-in; “The hype is here, the revenue is not”
Everyone agrees that Generative AI has great promise and potential. Martin Casado of Andreessen Horowitz recently wrote in the Wall Street Journal that the technology has “finally become transformative:”
“Generative AI can bring real economic benefits to large industries with established and expensive workloads. Large language models could save costs by performing tasks such as summarizing discovery documents without replacing attorneys, to take one example. And there are plenty of similar jobs spread across fields like medicine, computer programming, design and entertainment….. This all means opportunity for the new class of generative AI startups to evolve along with users, while incumbents focus on applying the technology to their existing cash-cow business lines.”
A new investment wave caused by generative AI is starting to loom among cloud service providers, raising questions about whether Big Tech’s spending cutbacks and layoffs will prove to be short lived. Pressed to say when they would see a revenue lift from AI, the big U.S. cloud companies (Microsoft, Alphabet/Google, Meta/FB and Amazon) all referred to existing services that rely heavily on investments made in the past. These range from the AWS’s machine learning services for cloud customers to AI-enhanced tools that Google and Meta offer to their advertising customers.
Microsoft offered only a cautious prediction of when AI would result in higher revenue. Amy Hood, chief financial officer, told investors during an earnings call last week that the revenue impact would be “gradual,” as the features are launched and start to catch on with customers. The caution failed to match high expectations ahead of the company’s earnings, wiping 7% off Microsoft’s stock price (ticker: MSFT) over the following week.
When it comes to the newer generative AI wave, predictions were few and far between. Amazon CEO Andy Jassy said on Thursday that the technology was in its “very early stages” and that the industry was only “a few steps into a marathon.” While many customers of Amazon’s cloud arm, AWS, see the technology as transformative, Jassy noted that “most companies are still figuring out how they want to approach it, they are figuring out how to train models.” He insisted that every part of Amazon’s business was working on generative AI initiatives and that the technology was “going to be at the heart of what we do.”
There are a number of large language models that power generative AI, and many of the AI companies that make them have forged partnerships with big cloud service providers. As business technology leaders make their picks among them, they are weighing the risks and benefits of using one cloud provider’s AI ecosystem. They say it is an important decision that could have long-term consequences, including how much they spend and whether they are willing to sink deeper into one cloud provider’s set of software, tools, and services.
To date, AI large language model makers like OpenAI, Anthropic, and Cohere have led the charge in developing proprietary large language models that companies are using to boost efficiency in areas like accounting and writing code, or adding to their own products with tools like custom chatbots. Partnerships between model makers and major cloud companies include OpenAI and Microsoft Azure, Anthropic and Cohere with Google Cloud, and the machine-learning startup Hugging Face with Amazon Web Services. Databricks, a data storage and management company, agreed to buy the generative AI startup MosaicML in June.
If a company chooses a single AI ecosystem, it could risk “vendor lock-in” within that provider’s platform and set of services, said Ram Chakravarti, chief technology officer of Houston-based BMC Software. This paradigm is a recurring one, where a business’s IT system, software and data all sit within one digital platform, and it could become more pronounced as companies look for help in using generative AI. Companies say the problem with vendor lock-in, especially among cloud providers, is that they have difficulty moving their data to other platforms, lose negotiating power with other vendors, and must rely on one provider to keep its services online and secure.
Cloud providers, partly in response to complaints of lock-in, now offer tools to help customers move data between their own and competitors’ platforms. Businesses have increasingly signed up with more than one cloud provider to reduce their reliance on any single vendor. That is the strategy companies could end up taking with generative AI, where by using a “multiple generative AI approach,” they can avoid getting too entrenched in a particular platform. To be sure, many chief information officers have said they willingly accept such risks for the convenience, and potentially lower cost, of working with a single technology vendor or cloud provider.
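In practice, the “multiple generative AI approach” usually means hiding each provider behind a narrow interface so application code never depends on one vendor’s SDK. A minimal sketch (the vendor classes below are hypothetical stand-ins, not real client libraries):

```python
from typing import Protocol

class TextGenerator(Protocol):
    def generate(self, prompt: str) -> str: ...

# Hypothetical adapters; real ones would wrap each vendor's SDK calls.
class VendorAClient:
    def generate(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"

class VendorBClient:
    def generate(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"

def summarize(document: str, llm: TextGenerator) -> str:
    # Application code depends only on the narrow interface, so moving
    # between providers is a configuration change, not a rewrite.
    return llm.generate(f"Summarize: {document}")

print(summarize("Q2 earnings call transcript", VendorAClient()))
print(summarize("Q2 earnings call transcript", VendorBClient()))
```

The trade-off noted above still applies: the abstraction buys portability at the cost of forgoing each platform’s proprietary conveniences.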
A significant challenge in incorporating generative AI is that the technology is changing so quickly, analysts have said, forcing CIOs to not only keep up with the pace of innovation, but also sift through potential data privacy and cybersecurity risks.
A company using its cloud provider’s premade tools and services, plus guardrails for protecting company data and reducing inaccurate outputs, can more quickly implement generative AI off-the-shelf, said Adnan Masood, chief AI architect at digital technology and IT services firm UST. “It has privacy, it has security, it has all the compliance elements in there. At that point, people don’t really have to worry so much about the logistics of things, but rather are focused on utilizing the model.”
For other companies, it is a conservative approach to use generative AI with a large cloud platform they already trust to hold sensitive company data, said Jon Turow, a partner at Madrona Venture Group. “It’s a very natural start to a conversation to say, ‘Hey, would you also like to apply AI inside my four walls?’”
End Quotes:
“Right now, the evidence is a little bit scarce about what the effect on revenue will be across the tech industry,” said James Tierney of AllianceBernstein.
Brent Thill, an analyst at Jefferies, summed up the mood among investors: “The hype is here, the revenue is not. Behind the scenes, the whole industry is scrambling to figure out the business model [for generative AI]: how are we going to price it? How are we going to sell it?”
………………………………………………………………………………………………………………
References:
https://www.ft.com/content/56706c31-e760-44e1-a507-2c8175a170e8
https://www.wsj.com/articles/companies-weigh-growing-power-of-cloud-providers-amid-ai-boom-478c454a
https://www.techtarget.com/searchenterpriseai/definition/generative-AI?Offer=abt_pubpro_AI-Insider
Global Telco AI Alliance to progress generative AI for telcos
Curmudgeon/Sperandeo: Impact of Generative AI on Jobs and Workers
Bain & Co, McKinsey & Co, AWS suggest how telcos can use and adapt Generative AI
Generative AI Unicorns Rule the Startup Roost; OpenAI in the Spotlight
Generative AI in telecom; ChatGPT as a manager? ChatGPT vs Google Search
Generative AI could put telecom jobs in jeopardy; compelling AI in telecom use cases
Qualcomm CEO: AI will become pervasive, at the edge, and run on Snapdragon SoC devices
T-Mobile and Google Cloud collaborate on 5G and edge compute
T-Mobile and Google Cloud announced today they are working together to combine the power of 5G and edge compute, giving enterprises more ways to embrace digital transformation. T-Mobile will connect the 5G Advanced Network Solutions (ANS) [1.] suite of public, private and hybrid 5G networks with Google Distributed Cloud Edge (GDC Edge) to help customers embrace next-generation 5G applications and use cases — like AR/VR experiences.
Note 1. 5G ANS is an end-to-end portfolio of deployable 5G solutions, comprising 5G Connectivity, Edge Computing, and Industry Solutions, along with a partnership that simplifies creating, deploying and managing unique solutions to unique problems.
More companies are turning to edge computing as they focus on digital transformation. In fact, the global edge compute market is expected to grow at a 37.9% compound annual growth rate, reaching $155.9 billion by 2030. And the combination of edge computing with the low latency, high speeds, and reliability of 5G will be key to promising use cases in industries like retail, manufacturing, logistics, and smart cities. GDC Edge customers across industries will be able to easily leverage T-Mobile’s 5G ANS to get the low latency, high speeds, and reliability they will need for any use case that requires data-intensive computing, such as AR or computer vision.
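The growth figure above appears to be a compound annual growth rate. As a sanity check, assuming a 2022 base year (the base year is our assumption, not stated above), the implied starting market size is:

```python
# Implied base-year size if the market compounds at 37.9%/year to $155.9B.
final_size_busd = 155.9   # forecast for 2030, from the cited figure
cagr = 0.379
years = 8                 # assumed span: 2022 -> 2030

base_busd = final_size_busd / (1 + cagr) ** years
print(f"implied 2022 edge-compute market size: ${base_busd:.1f}B")
```

That works out to roughly $12 billion, i.e., the forecast implies a roughly thirteen-fold expansion over eight years.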
For example, manufacturing companies could use computer vision technology to improve safety by monitoring equipment and automatically notifying support personnel if there are issues. And municipalities could leverage augmented reality to keep workers at a safe distance from dangerous situations by using machines to remotely perform hazardous tasks.
To demonstrate the promise of 5G ANS and GDC Edge in a retail setting, T-Mobile created a proof of concept at T-Mobile’s Tech Experience 5G Hub called the “magic mirror” with the support of Google Cloud. This interactive display leverages cloud-based processing and image rendering at the edge to make retail products “magically” come to life. Users simply hold a product in front of the mirror to make interactive videos or product details — such as ingredients or instructions — appear onscreen in near real-time.
“We’ve built the largest and fastest 5G network in the country. This partnership brings together the powerful combination of 5G and edge computing to unlock the expansion of technologies such as AR and VR from limited applications to large-scale adoption,” said Mishka Dehghan, Senior Vice President, Strategy, Product, and Solutions Engineering, T-Mobile Business Group. “From providing a shopping experience in a virtual reality environment to improving safety through connected sensors or computer vision technologies, T-Mobile’s 5G ANS combined with Google Cloud’s innovative edge compute technology can bring the connected world to businesses across the country.”
“Google Cloud is committed to helping telecommunication companies accelerate their growth, competitiveness, and digital journeys,” said Amol Phadke, General Manager, Global Telecom Industry, Google Cloud. “Google Distributed Cloud Edge and T-Mobile’s 5G ANS will help businesses deliver more value to their customers by unlocking new capabilities through 5G and edge technologies.”
T-Mobile is also working with Microsoft Azure, Amazon Web Services and Ericsson on advanced 5G solutions.
References:
https://www.t-mobile.com/news/business/t-mobile-and-google-cloud-join-5g-advanced-network-solutions
https://www.t-mobile.com/business/solutions/networking/5G-advanced-solutions
Cloud RAN with Google Distributed Cloud Edge; Strategy: host network functions of other vendors on Google Cloud
At MWC 2023 Barcelona, Google Cloud announced that it can now run radio access network (RAN) functions as software on Google Distributed Cloud Edge, providing communications service providers (CSPs, a.k.a. telcos) with a common and agile operating model that extends from the core of the network to the edge, for a high degree of programmability and flexibility with low operating expenses. CSPs have already embraced open architectures, open-source software, disaggregation, automation, cloud, AI and machine learning, and new operational models, to name a few. The journey started in the last decade with Network Functions Virtualization, first with value-added services and then deeper with core network applications; in the past few years, that evolved into a push toward cloud-native. With significant progress in the core, the time for Cloud RAN is now, according to Google.
However, whether for industry- or region-specific compliance reasons, data sovereignty needs, or latency or local data-processing requirements, most of the network functions deployed in a mobile or wireline network may have to follow a hybrid deployment model, with functions placed flexibly across both on-premises locations and cloud regions. RAN, which is traditionally implemented with proprietary hardware, falls into that camp as well.
In 2021, the company launched Google Distributed Cloud Edge (GDC Edge), an on-premises offering that extends a consistent operating model from public Google Cloud regions to the customer’s premises. For CSPs, this hybrid approach makes it possible to modernize the network while enabling easy development, fast innovation, efficient scale and operational efficiency, all while helping to reduce technology risk and operational costs. GDC Edge became generally available in 2022.
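The hybrid placement decision described above (which network functions stay on-premises versus which can run in a public cloud region) can be sketched as a simple rule: compliance or data-sovereignty constraints pin a function to the edge; otherwise its latency budget decides. The field names and the 40 ms threshold below are assumptions for illustration, not Google guidance.

```python
def place_function(fn: dict, region_latency_ms: float = 40.0) -> str:
    """Decide where a network function should run: "gdc-edge" or "cloud-region".

    Illustrative only. Sovereignty/compliance or local-processing needs pin a
    function on-premises; otherwise the function's latency budget decides
    whether a cloud region (here assumed ~40 ms away) is acceptable.
    """
    if fn.get("data_sovereignty") or fn.get("local_processing"):
        return "gdc-edge"
    return "gdc-edge" if fn["latency_budget_ms"] < region_latency_ms else "cloud-region"

# Hypothetical examples: a RAN distributed unit has a millisecond-scale budget
# and lands on the edge; a billing function tolerates region latency.
ran_du = {"latency_budget_ms": 2}
billing = {"latency_budget_ms": 200}
```

This is why RAN, with its tight timing requirements, falls into the on-premises camp even once it is implemented as software.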
Google Cloud does not plan to develop its own private wireless networking services to sell to enterprise customers, nor does the company plan to develop its own networking software functions, according to Gabriele Di Piazza, an executive with Google Cloud who spoke at MWC 2023 in Barcelona. Instead, Google Cloud would like to host the networking software functions of other vendors like Ericsson and Mavenir in its cloud. It would also like to resell private networking services from operators and others.
Rather than develop its own cloud-native 5G SA core network or other cloud networking software (as Microsoft and AWS are doing), Google Cloud wants to “avoid partner conflict,” Di Piazza said. Google has been building its telecom cloud story around its Anthos platform. That platform is directly competing against the likes of AWS and Microsoft for telecom customers. According to a number of analysts, AWS appears to enjoy an early lead in the telecom industry – but its rivals, like Google, are looking for ways to gain a competitive advantage. One of Google’s competitive arguments is that it doesn’t have aspirations to sell network functions. Therefore, according to Di Piazza, the company can remain a trusted, unbiased partner.
Image Credit: Google Cloud
Last year, Di Piazza said that moving to a cloud-native architecture is mandatory, not optional, for telcos, adding that telecom operators are facing many challenges right now due to declining revenue growth, exploding data consumption and increasing capital requirements for 5G. Cloud-native networks face significant challenges of their own: for example, there is a lack of standardization among the various open-source groups, and there is fragmentation across parts of the cloud-native ecosystem, particularly among OSS vendors, cloud providers and startups.
In recent years, Google, Microsoft, Amazon, Oracle and other cloud computing service providers have been working to develop products and services specifically designed to let telecom network operators run their network functions inside a third-party cloud environment. For example, AT&T and Dish Network are running their 5G SA core networks on Microsoft Azure and AWS, respectively.
Matt Beal, a senior VP of software development for Oracle Communications, said his company offers both a substantial cloud computing service as well as a lengthy list of network functions. He maintains that Oracle is a better partner for telecom network operators because of it. Beal said Oracle has long offered a wide range of networking functions, from policy control to network slice management, that can be run inside its cloud or inside the cloud of other companies. He said that, because Oracle developed those functions itself, the company has more experience in running them in a cloud environment compared with a company that hasn’t done that kind of work. Beal’s implication is that network operators ought to partner with the best and most experienced companies in the market. That position runs directly counter to Google’s competitive stance on the topic. “When you know how these things work in real life … you can optimize your cloud to run these workloads,” he said.
While a number of other telecom network operators have put things like customer support or IT into the cloud, they have been reluctant to release critical network functions like policy control to a cloud service provider.
References:
https://cloud.google.com/solutions/telecommunications
https://cloud.google.com/blog/topics/telecommunications
Synergy: Q3 Cloud Spending Up Over $11 Billion YoY; Google Cloud gained market share in 3Q-2022
Synergy Research estimates the cloud infrastructure market at $57.5B in Q3-2022. That was up by well over $11 billion from the third quarter of last year despite two fierce headwinds: a historically strong U.S. dollar and a severely restricted Chinese market. The incremental spending represents year-on-year growth of 24%. If exchange rates had remained constant over the last year, the growth rate would have been over 30%.
As the market continues on a strong growth trajectory, Google is alone among the hyper-scaler giants in gaining market share. Google Cloud increased its market share in Q3 compared to the prior quarter, while Amazon’s and Microsoft’s market shares remained relatively unchanged. Compared to a year ago, all three have increased their market share by at least a percentage point. Amazon, Microsoft and Google combined had a 66% share of the worldwide market in the quarter, up from 61% a year ago. In aggregate, all other cloud providers have tripled their revenues since late 2017, though their collective market share has plunged from 50% to 34% as their growth rates remain far below the market leaders’.
Synergy estimates that quarterly cloud infrastructure service revenues (including IaaS, PaaS and hosted private cloud services) were $57.5 billion, with trailing twelve-month revenues reaching $217 billion. Public IaaS and PaaS services account for the bulk of the market and those grew by 26% in Q3. The dominance of the major cloud providers is even more pronounced in public cloud, where the top three control 72% of the market. Geographically, the cloud market continues to grow strongly in all regions of the world.
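The reported figures hang together arithmetically, and a quick back-of-envelope check makes the relationships explicit. All numbers below come straight from the text; the implied prior-year quarter is derived, not reported.

```python
# Back-of-envelope check of Synergy's Q3-2022 figures (all in $ billions,
# numbers taken from the text above).
q3_2022 = 57.5                         # cloud infrastructure service revenue
yoy_growth = 0.24                      # reported year-on-year growth

q3_2021 = q3_2022 / (1 + yoy_growth)   # implied prior-year quarter, ~ $46.4B
increment = q3_2022 - q3_2021          # ~ $11.1B: "well over $11 billion"

big3_share = 0.66                      # Amazon + Microsoft + Google, Q3-2022
big3_revenue = q3_2022 * big3_share    # roughly $38B of the quarter's total
```

The gap between the 24% reported growth and the 30%+ constant-currency figure is entirely a translation effect: the leaders report in U.S. dollars, so a strong dollar depresses the dollar value of non-U.S. revenue.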
“It is a strong testament to the benefits of cloud computing that despite two major obstacles to growth the worldwide market still expanded by 24% from last year. Had exchange rates remained stable and had the Chinese market remained on a more normal path then the growth rate percentage would have been well into the thirties,” said John Dinsdale, a Chief Analyst at Synergy Research Group. “The three leading cloud providers all report their financials in US dollars so their growth rates are all beaten down by the historic strength of their home currency. Despite that all three have increased their share of a rapidly growing market over the last year, which is a strong testament to their strategies and performance. Beyond these three, all other cloud providers in aggregate have been losing around three percentage points of market share per year but are still seeing strong double-digit revenue growth. The key for these companies is to focus on specific portions of the market where they can outperform the big three.”
References:
Synergy Research: public cloud service and infrastructure market hit $126B in 1Q-2022
Cloud Computing Giants Growth Slows; Recession Looms, Layoffs Begin
Casa Systems and Google Cloud strengthen partnership to progress cloud-native 5G SA core, MEC, and mobile private networks
Andover, MA-based Casa Systems [1.] today announced a strategic technology and distribution partnership with Google Cloud to further advance and differentiate the two companies’ integrated cloud-native software and service offerings. The partnership provides for formalized and coordinated global sales, marketing, and support engagement, whereby Casa Systems and Google Cloud will offer Communication Service Providers (CSPs) and major enterprises integrated Google Cloud-Casa Systems solutions for cloud-native 5G core, 5G SA multi-access edge computing (MEC), and enterprise mobile private network use cases. It is yet another partnership between a telecom company and a cloud service provider (AWS and Azure being the other two) to produce cloud-native services and software.
This new partnership enables Google Cloud and Casa Systems’ technical teams to engage deeply with one another to enable the seamless integration of Casa Systems’ cloud-native software solutions and network functions with Google Cloud, for best-in-class solution offerings with optimized ease-of-use and support for telecom and enterprise customers. Furthermore, Casa Systems and Google Cloud will also collaborate on the development of unique, new features and capabilities to provide competitive differentiation for the combined Google Cloud – Casa Systems solution offering. Additionally, this partnership provides the companies with a foundation on which to build more tightly coordinated and integrated sales efforts between Casa Systems and Google Cloud sales teams globally.
“We are delighted to formalize our partnership with Google Cloud and more quickly drive the adoption of our cloud-native 5G Core and 5G SA MEC solutions, as well as our other software solutions,” said Jerry Guo, Chief Executive Officer at Casa Systems. “This partnership provides the foundation for Casa Systems and Google Cloud’s continued collaboration, ensuring we remain at the cutting edge with our cloud-native, differentiated software solutions, and that the products and services we offer our customers are best-in-class and can be efficiently brought to market globally. We look forward to working with Google Cloud to develop and deliver the solutions customers need to succeed in the cloud, and to a long and mutually beneficial partnership.”
“We are pleased to formalize our relationship with Casa Systems with the announcement of this multifaceted strategic partnership,” said Amol Phadke, managing director and general manager, Global Telecom Industry, Google Cloud. “We have been working with Casa Systems for over two years and believe that they have a great cloud-native 5G software technology platform and team, and that they are a new leader in the cloud-native 5G market segment. The partnership will enable a much wider availability of premium solutions and services for our mutual telecommunications and enterprise customers and prospects.”
Casa also partnered with Google Cloud last year to integrate its 5G SA core with a hyperscaler public cloud, in order to deliver ultra-low latency applications.
Note 1. Casa Systems, Inc. delivers the core-to-customer building blocks to speed 5G transformation with future-proof solutions and cutting-edge bandwidth for all access types. In today’s increasingly personalized world, Casa Systems creates disruptive architectures built specifically to meet the needs of service provider networks. Our suite of open, cloud-native network solutions unlocks new ways for service providers to build networks without boundaries and maximizes revenue-generating capabilities. Commercially deployed in more than 70 countries, Casa Systems serves over 475 Tier 1 and regional service providers worldwide. For more information, please visit http://www.casa-systems.com.
Image Courtesy of Casa Systems
…………………………………………………………………………………………………………………………………………………………………………………
References:
https://www.fiercetelecom.com/cloud/casa-systems-google-cloud-tout-combined-cloud-native-offering
Celona’s 5G LAN solution for private 5G now on Google Distributed Cloud Edge
Celona, an innovator of 5G LAN solutions, today announced it has been selected by Google Cloud to accelerate the delivery of private 5G networks in the U.S. by making its 5G LAN solution available through Google Cloud’s recently announced private cellular network solutions running on Google Distributed Cloud Edge (GDC Edge) [1.]. Enterprises using Google Cloud will now enjoy unprecedented agility and economies of scale by bringing new private cellular network services closer to users, data, and applications at the mobile compute edge.
Note 1. Google Distributed Cloud Edge is a fully managed product that brings Google Cloud’s infrastructure and services closer to where data is being generated and consumed. Google Distributed Cloud Edge empowers communication service providers to run 5G Core and Radio Access Network (RAN) functions at the edge.
………………………………………………………………………………………………………………………………..
Google Cloud and Celona are working together to offer turnkey enterprise private 5G networks with the ability to run network management, control, and user plane functions on Google Distributed Cloud Edge wherever it resides. The combined solution addresses distinct performance, service-level, and economic needs of key industry verticals by combining dedicated network capabilities with full edge-computing application stacks. Celona’s solution delivers a complete end-to-end 5G LAN developed for enterprise environments.
“We are excited to partner with Celona to help IT professionals improve application connectivity through private 5G,” said Tanuj Raja, Global Director, Strategic Partnerships at Google Cloud. “Deployed on Google Distributed Cloud Edge, Celona’s end-to-end private network solution enables enterprises to simplify and operate private 5G networks at scale and with the flexibility they need.”
“The combination of 5G LANs and edge compute unlocks a new generation of enterprise IT architecture capable of keeping up with the immense rate of digital transformation and automation occurring across almost every industry,” said Rajeev Shah, Celona’s co-founder and CEO. “The intrinsic agility of our edgeless enterprise architecture gives organizations tremendous flexibility and scale by bringing closer together critical network services and mobile edge compute applications.”
Celona’s Edgeless Enterprise architecture is anchored by the Celona Edge, a cloud-native 4G/5G private core network operating system that delivers an all-in-one network service overlay with advanced policy-based routing, QoS, and security segmentation functions. Celona’s unique approach supports the convergence of radio access network (RAN), application, and network service traffic, automatically shifting the delivery route of services based on performance, policy requirements, and network paths’ real-time health. It is the only private 5G solution in the industry that has been purpose-built to help organizations easily deploy, operate, and integrate 5G cellular technology with their existing infrastructure.
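The policy-based routing behavior described above, shifting a service’s delivery route as path health and latency change in real time, can be sketched as a selection rule. This is a sketch in spirit only: the data shapes and the `pick_path` function are assumptions for illustration, not Celona’s actual API or implementation.

```python
def pick_path(paths, max_latency_ms):
    """Choose the lowest-latency healthy path that meets the service's policy.

    Illustrative stand-in for policy-based route selection: each path is a
    dict with "healthy" (bool) and "latency_ms" (float). Returns None when no
    path satisfies the policy, which a real system would treat as a failover
    or degradation event.
    """
    candidates = [p for p in paths
                  if p["healthy"] and p["latency_ms"] <= max_latency_ms]
    return min(candidates, key=lambda p: p["latency_ms"], default=None)

# Hypothetical paths: a local breakout, a WAN route, and an unhealthy backup.
paths = [
    {"name": "local-breakout", "healthy": True,  "latency_ms": 4.0},
    {"name": "wan",            "healthy": True,  "latency_ms": 30.0},
    {"name": "backup",         "healthy": False, "latency_ms": 1.0},
]
```

Re-evaluating this rule continuously as health probes update `healthy` and `latency_ms` is what “automatically shifting the delivery route” amounts to in practice.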
Since launching the first fully integrated 5G LAN platform in November 2020, Celona has seen strong demand from a range of enterprises, managed service providers, and mobile network operators looking to satisfy strategic digital business initiatives not adequately addressed today by Wi-Fi or other networking technologies. The company’s diverse customer base includes world-class organizations such as Verizon, NTT Ltd, SBA Communications, St. Luke’s Hospital System, Purdue Research Foundation, Stanislaus State University, and many other brand-named enterprises.
Celona’s 5G LAN platform is used by manufacturers, retailers, hospitals, schools, and supply-chain leaders to drive transformational results for a wide range of mission-critical use cases that require deterministic wireless connectivity over private, interference-free cellular spectrum and fast mobility for a new generation of highly mobile devices and robotics infrastructure.
ABOUT CELONA:
Celona, the enterprise 5G company, is focused on enabling organizations of all sizes to implement the latest generation of digital automation initiatives in enterprise wireless. Taking advantage of dynamic spectrum sharing options such as CBRS in the United States, Celona’s Edgeless Enterprise architecture is designed to automate the adoption of private cellular wireless by enterprise organizations and their technology partners. For more information, please visit celona.io and follow Celona on Twitter @celonaio.
References:
https://www.celona.io/resources/celona-partners-with-google-cloud-for-5g-lans
https://www.celona.io/resources/5g-lan-provider-celona-is-named-a-cool-vendor-by-gartner
Learn how organizations are using Google Distributed Cloud Edge with 5G LANs to streamline operations by visiting https://www.celona.io/community-stories/google-distributed-cloud-edge-and-celona-5g-lans