Canalys & Gartner: AI investments drive growth in cloud infrastructure spending
According to market research firm Canalys, global spending on cloud infrastructure services [1] increased by 21% year on year, reaching US$82.0 billion in the third quarter of 2024. Customer investment in the hyperscalers’ AI offerings fueled growth, prompting leading cloud vendors to escalate their investments in AI.
Note 1. Canalys defines cloud infrastructure services as services providing infrastructure (IaaS and bare metal) and platforms that are hosted by third-party providers and made available to users via the Internet.
The rankings of the top three cloud service providers – Amazon AWS, Microsoft Azure and Google Cloud – remained stable from the previous quarter, with these providers together accounting for 64% of total expenditure. Total combined spending with these three providers grew by 26% year on year, and all three reported sequential growth. Market leader AWS maintained a year-on-year growth rate of 19%, consistent with the previous quarter. That was outpaced by both Microsoft, with 33% growth, and Google Cloud, with 36% growth. In actual dollar terms, however, AWS outgrew both Microsoft and Google Cloud, increasing sales by almost US$4.4 billion on the previous year.
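The percentage-versus-dollar comparison can be checked with simple arithmetic from the Canalys figures: a US$82.0 billion total market, with the per-provider market shares and growth rates reported later in the article. A minimal sketch (the share and growth values are hard-coded from the text):

```python
# Back-of-the-envelope check of the Canalys Q3 2024 figures quoted above:
# total market US$82.0 billion, with market shares and year-on-year growth
# rates per provider as reported later in the article.
MARKET_Q3_2024 = 82.0  # US$ billion

PROVIDERS = {
    "AWS":          {"share": 0.33, "growth": 0.19},
    "Azure":        {"share": 0.20, "growth": 0.33},
    "Google Cloud": {"share": 0.10, "growth": 0.36},
}

def dollar_growth(share: float, growth: float, market: float = MARKET_Q3_2024) -> float:
    """Year-on-year revenue gain in US$ billion implied by a share and growth rate."""
    current = share * market
    prior = current / (1 + growth)   # back out last year's revenue
    return current - prior

for name, p in PROVIDERS.items():
    print(f"{name}: +US${dollar_growth(p['share'], p['growth']):.2f}B year on year")
```

This reproduces the article’s point: AWS added roughly US$4.3 billion, slightly more than Azure’s roughly US$4.1 billion and well above Google Cloud’s roughly US$2.2 billion, despite AWS’s lower percentage growth rate.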
In Q3 2024, the cloud services market saw strong, steady growth. All three cloud hyperscalers reported positive returns on their AI investments, which have begun to contribute to their overall cloud business performance. These returns reflect a growing reliance on AI as a key driver for innovation and competitive advantage in the cloud.
With the increasing adoption of AI technologies, demand for high-performance computing and storage continues to rise, putting pressure on cloud providers to expand their infrastructure. In response, leading cloud providers are prioritizing large-scale investments in next-generation AI infrastructure. To mitigate the risks associated with under-investment – such as being unprepared for future demand or missing key opportunities – they have adopted over-investment strategies, ensuring their ability to scale offerings in line with the growing needs of their AI customers. Enterprises are convinced that AI will deliver an unprecedented boost in efficiency and productivity, so they are pouring money into hyperscalers’ AI solutions. Accordingly, cloud service provider capital spending (CAPEX) is expected to sustain its rapid growth trajectory into 2025.
“Continued substantial expenditure will present new challenges, requiring cloud vendors to carefully balance their investments in AI with the cost discipline needed to fund these initiatives,” said Rachel Brindley, Senior Director at Canalys. “While companies should invest sufficiently in AI to capitalize on technological growth, they must also exercise caution to avoid overspending or inefficient resource allocation. Ensuring the sustainability of these investments over time will be vital to maintaining long-term financial health and competitive advantage.”
“On the other hand, the three leading cloud providers are also expediting the update and iteration of their AI foundational models, continuously expanding their associated product portfolios,” said Yi Zhang, Analyst at Canalys. “As these AI foundational models mature, cloud providers are focused on leveraging their enhanced capabilities to empower a broader range of core products and services. By integrating these advanced models into their existing offerings, they aim to enhance functionality, improve performance and increase user engagement across their platforms, thereby unlocking new revenue streams.”
Amazon Web Services (AWS) maintained its lead in the global cloud market in Q3 2024, capturing a 33% market share and achieving 19% year-on-year revenue growth. It continued to enhance and broaden its AI offerings by launching new models through Amazon Bedrock and SageMaker, including Anthropic’s upgraded Claude 3.5 Sonnet and Meta’s Llama 3.2. It reported a triple-digit year-on-year increase in AI-related revenue, outpacing its overall growth by more than three times. Over the past 18 months, AWS has introduced nearly twice as many machine learning and generative AI features as the combined offerings of the other leading cloud providers. In terms of capital expenditure, AWS announced plans to further increase investment, with projected spending of approximately US$75 billion in 2024. This investment will primarily be allocated to expanding technology infrastructure to meet the rising demand for AI services, underscoring AWS’ commitment to staying at the forefront of technological innovation and service capability.
Microsoft Azure remains the second-largest cloud provider, with a 20% market share and impressive annual growth of 33%. This growth was partly driven by AI services, which contributed approximately 12% to the overall increase. Over the past six months, use of Azure OpenAI has more than doubled, driven by increased adoption by both digital-native companies and established enterprises transitioning their applications from testing phases to full-scale production environments. To further enhance its offerings, Microsoft is expanding Azure AI by introducing industry-specific models, including advanced multimodal medical imaging models, aimed at providing tailored solutions for a broader customer base. Additionally, the company announced new cloud and AI infrastructure investments in Brazil, Italy, Mexico and Sweden to expand capacity in alignment with long-term demand forecasts.
Google Cloud, the third-largest provider, maintained a 10% market share, achieving robust year-on-year growth of 36%. It showed the strongest AI-driven revenue growth among the leading providers, with a clear acceleration compared with the previous quarter. As of September 2024, its revenue backlog increased to US$86.8 billion, up from US$78.8 billion in Q2, signaling continued momentum in the near term. Its enterprise AI platform, Vertex, has garnered substantial user adoption, with Gemini API calls increasing nearly 14-fold over the past six months. Google Cloud is actively seeking and developing new ways to apply AI tools across different scenarios and use cases. It introduced the GenAI Partner Companion, an AI-driven advisory tool designed to offer service partners personalized access to training resources, enhancing learning and supporting successful project execution. In Q3 2024, Google announced over US$7 billion in planned data center investments, with nearly US$6 billion allocated to projects within the United States.
Separate statistics from Gartner corroborate hyperscale CAPEX optimism. Gartner predicts that worldwide end-user spending on public cloud services is on course to reach $723.4 billion next year, up from a projected $595.7 billion in 2024. All segments of the cloud market – platform-as-a-service (PaaS), software-as-a-service (SaaS), desktop-as-a-service (DaaS), and infrastructure-as-a-service (IaaS) – are expected to achieve double-digit growth.
While SaaS will be the biggest single segment, accounting for $299.1 billion, IaaS will grow the fastest, jumping 24.8 percent to $211.9 billion.
Like Canalys, Gartner also singles out AI for special attention. “The use of AI technologies in IT and business operations is unabatedly accelerating the role of cloud computing in supporting business operations and outcomes,” said Sid Nag, vice president analyst at Gartner. “Cloud use cases continue to expand with increasing focus on distributed, hybrid, cloud-native, and multicloud environments supported by a cross-cloud framework, making the public cloud services market achieve a 21.5 percent growth in 2025.”
……………………………………………………………………………………………………………………………………………………………………………………………………..
References:
https://canalys.com/newsroom/global-cloud-services-q3-2024
https://www.telecoms.com/public-cloud/ai-hype-fuels-21-percent-jump-in-q3-cloud-spending
Cloud Service Providers struggle with Generative AI; Users face vendor lock-in; “The hype is here, the revenue is not”
MTN Consulting: Top Telco Network Infrastructure (equipment) vendors + revenue growth changes favor cloud service providers
IDC: Public Cloud software at 2/3 of all enterprise applications revenue in 2026; SaaS is essential!
IDC: Cloud Infrastructure Spending +13.5% YoY in 4Q-2021 to $21.1 billion; Forecast CAGR of 12.6% from 2021-2026
IDC: Worldwide Public Cloud Services Revenues Grew 29% to $408.6 Billion in 2021 with Microsoft #1?
Synergy Research: Microsoft and Amazon (AWS) Dominate IT Vendor Revenue & Growth; Popularity of Multi-cloud in 2021
Google Cloud revenues up 54% YoY; Cloud native security is a top priority
AI Echo Chamber: “Upstream AI” companies huge spending fuels profit growth for “Downstream AI” firms
According to the Wall Street Journal, the AI industry has become an “Echo Chamber,” where huge capital spending by AI infrastructure and application providers has fueled revenue and profit growth for everyone else. Market research firm Bespoke Investment Group has recently created baskets for “downstream” and “upstream” AI companies.
- The Downstream group involves “AI implementation” and consists of firms that sell AI development tools, such as the large language models (LLMs) popularized by OpenAI’s ChatGPT since the end of 2022, or run products that can incorporate them. It includes Google/Alphabet, Microsoft, Amazon and Meta Platforms (FB), along with IBM, Adobe and Salesforce.
- Higher up the supply chain is the Upstream group of “AI infrastructure” providers, which sell AI chips, applications, data centers and training software. The undisputed leader is Nvidia, whose sales have tripled in a year, but the group also includes other semiconductor companies, database developer Oracle, and data-center owners Equinix and Digital Realty.
The Upstream group of companies has posted profit margins far above what analysts expected a year ago. In the second quarter, and pending Nvidia’s results on Aug. 28th, Upstream AI members of the S&P 500 are set to have delivered a 50% annual increase in earnings. For the remainder of 2024, they will be increasingly responsible for the profit growth that Wall Street expects from the stock market – even accounting for Intel’s huge problems and restructuring.
It should be noted that the lines between the two groups can be blurry, particularly for giants such as Amazon, Microsoft and Alphabet, which provide both AI implementation (e.g., LLMs) and infrastructure. Their cloud-computing businesses turned these companies into the early winners of the AI craze last year and reported breakneck growth during this latest earnings season. Crucially, it is their role as the ultimate developers of AI applications that has led them to make enormous capital expenditures, which are responsible for the profit surge in the rest of the ecosystem. There is thus a definite trickle-down effect, where the big tech players’ AI-directed CAPEX is boosting revenue and profits for companies down the supply chain.
As the path for monetizing this technology gets longer and harder, the benefits seem to be increasingly accruing to companies higher up in the supply chain. Meta Platforms Chief Executive Mark Zuckerberg recently said the company’s coming Llama 4 language model will require 10 times as much computing power to train as its predecessor. Were it not for AI, revenues for semiconductor firms would probably have fallen during the second quarter, rather than risen 18%, according to S&P Global.
………………………………………………………………………………………………………………………………………………………..
A paper written by researchers from the likes of Cambridge and Oxford found that the large language models (LLMs) behind some of today’s most exciting AI apps may have been trained on “synthetic data,” i.e., data generated by other AI. This revelation raises ethical and quality concerns. If an AI model is trained primarily or even partially on synthetic data, it might produce outputs lacking the richness and reliability of human-generated content. It could be a case of the blind leading the blind, with AI models reinforcing the limitations or biases inherent in the synthetic data they were trained on.
In this paper, the team coined the phrase “model collapse,” claiming that models trained this way will answer user prompts with low-quality outputs. The idea of “model collapse” suggests an unraveling of the machine’s learning capabilities, in which it fails to produce outputs with the informative or nuanced characteristics we expect. This poses a serious question for the future of AI development. If AI is increasingly trained on synthetic data, we risk creating echo chambers of misinformation or low-quality responses, leading to less helpful and potentially even misleading systems.
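The mechanism can be illustrated with a deliberately simplified toy model (an assumption of this sketch, not the paper’s actual experiment): repeatedly fit a Gaussian to a finite sample drawn from the previous generation’s fit. Estimation error compounds across generations and the fitted distribution’s tails wither – a one-dimensional analogue of model collapse:

```python
# Toy illustration of "model collapse": each generation is fit only to
# synthetic samples drawn from the previous generation's fitted Gaussian.
# With a finite sample per round, the estimated variance drifts toward zero,
# i.e., rare/tail content disappears from the learned distribution.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
N = 50                 # small synthetic sample per generation exaggerates the effect

variances = [sigma ** 2]
for generation in range(200):
    samples = [random.gauss(mu, sigma) for _ in range(N)]
    mu = statistics.fmean(samples)      # refit on purely synthetic data
    sigma = statistics.pstdev(samples)
    variances.append(sigma ** 2)

print(f"variance: generation 0 = {variances[0]:.2f}, generation 200 = {variances[-1]:.4f}")
```

In this Gaussian setting the fitted variance collapses as generations accumulate, mirroring the loss of informative, low-probability content that the paper warns about when models train on their own output.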
……………………………………………………………………………………………………………………………………………
In a recent working paper, Massachusetts Institute of Technology (MIT) economist Daron Acemoglu argued that AI’s knack for easy tasks has led to exaggerated predictions of its power to enhance productivity in hard jobs. Also, some of the new tasks created by AI may have negative social value (such as design of algorithms for online manipulation). Indeed, data from the Census Bureau show that only a small percentage of U.S. companies outside of the information and knowledge sectors are looking to make use of AI.
References:
https://deepgram.com/learn/the-ai-echo-chamber-model-collapse-synthetic-data-risks
https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf
AI wave stimulates big tech spending and strong profits, but for how long?
AI winner Nvidia faces competition with new super chip delayed
SK Telecom and Singtel partner to develop next-generation telco technologies using AI
Telecom and AI Status in the EU
Vodafone: GenAI overhyped, will spend $151M to enhance its chatbot with AI
Data infrastructure software: picks and shovels for AI; Hyperscaler CAPEX
AI wave stimulates big tech spending and strong profits, but for how long?
Big tech companies have made it clear over the last week that they have no intention of slowing down their stunning levels of spending on artificial intelligence (AI), even though investors are getting worried that a big payoff is further down the line than most believe.
In the last quarter, Apple, Amazon, Meta, Microsoft and Google’s parent company Alphabet spent a combined $59 billion on capital expenses, 63% more than a year earlier and 161% more than four years ago. A large part of that was funneled into building data centers and packing them with new computer systems to build artificial intelligence. Only Apple has not dramatically increased spending, because it does not build the most advanced AI systems and is not a cloud service provider like the others.
At the beginning of this year, Meta said it would spend more than $30 billion in 2024 on new tech infrastructure. In April, it raised that to $35 billion. On Wednesday, it increased the figure to at least $37 billion. CEO Mark Zuckerberg said Meta would spend even more next year. He said he’d rather build too fast “rather than too late” than allow his competitors to get a big lead in the AI race. Meta gives away the advanced AI systems it develops, but Zuckerberg still said it was worth it. “Part of what’s important about AI is that it can be used to improve all of our products in almost every way,” he said.
………………………………………………………………………………………………………………………………………………………..
This new wave of generative AI is incredibly expensive. The systems work with vast amounts of data and require sophisticated computer chips and new data centers to develop the technology and serve it to customers. The companies are seeing some sales from their AI work, but it is barely moving the needle financially.
In recent months, several high-profile tech industry watchers, including Goldman Sachs’s head of equity research and a partner at the venture firm Sequoia Capital, have questioned when, or if, AI will ever produce enough benefit to bring in the sales needed to cover its staggering costs. It is not clear that AI will come close to having the same impact as the internet or mobile phones, Goldman’s Jim Covello wrote in a June report.
“What $1 trillion problem will AI solve?” he wrote. “Replacing low wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I’ve witnessed in my 30 years of closely following the tech industry.”

“The reality right now is that while we’re investing a significant amount in the AI space and in infrastructure, we would like to have more capacity than we already have today,” said Andy Jassy, Amazon’s chief executive. “I mean, we have a lot of demand right now.”
That means buying land, building data centers and all the computers, chips and gear that go into them. Amazon executives put a positive spin on all that spending. “We use that to drive revenue and free cash flow for the next decade and beyond,” said Brian Olsavsky, the company’s finance chief.
There are plenty of signs the boom will persist. In mid-July, Taiwan Semiconductor Manufacturing Company, which makes most of the in-demand chips designed by Nvidia (the ONLY tech company that is now making money from AI – much more below) that are used in AI systems, said those chips would be in scarce supply until the end of 2025.
Mr. Zuckerberg said AI’s potential is super exciting. “It’s why there are all the jokes about how all the tech CEOs get on these earnings calls and just talk about AI the whole time.”
……………………………………………………………………………………………………………………
Big tech profits and revenue continue to grow, but will massive spending produce a good ROI?
Last week’s Q2-2024 results:
- Google parent Alphabet reported $24 billion net profit on $85 billion revenue.
- Microsoft reported $22 billion net profit on $65 billion revenue.
- Meta reported $13.5 billion net profit on $39 billion revenue.
- Apple reported $21 billion net profit on $86 billion revenue.
- Amazon reported $13.5 billion net profit on $148 billion revenue.
This chart sums it all up:
………………………………………………………………………………………………………………………………………………………..
References:
https://www.nytimes.com/2024/08/02/technology/tech-companies-ai-spending.html
https://www.axios.com/2024/08/02/google-microsoft-meta-ai-earnings
https://www.nvidia.com/en-us/data-center/grace-hopper-superchip/
AI Frenzy Backgrounder; Review of AI Products and Services from Nvidia, Microsoft, Amazon, Google and Meta; Conclusions
Deutsche Telekom with AWS and VMware demonstrate a global enterprise network for seamless connectivity across geographically distributed data centers
Deutsche Telekom (DT) has partnered with AWS and VMware to demonstrate what the German network operator describes as a “globally distributed enterprise network,” combining Deutsche Telekom connectivity services in federation with third-party connectivity, compute and storage resources at campus locations in Prague, Czech Republic and Seattle, USA, and an Open Grid Alliance (OGA) grid node in Bonn, Germany.
The goal is to allow customers to book connectivity services directly from DT using a unified interface for the management of the network across its various locations.
The PoC demonstrates how the approach supports optimized resource allocation for advanced AI-based applications such as video analytics, autonomous vehicles and robotics. The demonstration use case is video analytics with distributed AI inference.
PoC setup:
The global enterprise network integrates Deutsche Telekom private 5G wireless solutions, AWS services and infrastructure, VMware’s multi-cloud telco platform, OGA grid nodes and Mavenir’s RAN/Core functions. Two 5G Standalone (SA) private wireless networks deployed at locations in Prague, Czech Republic and Seattle, USA are connected to a Mavenir 5G Core hosted on AWS Frankfurt Region leveraging the framework of the Integrated Private Wireless on AWS program. The convergence of the global network with local high-speed 5G connectivity is enabled by the AWS backbone and infrastructure.
The 5G SA private wireless network, with the User Plane Function (UPF) and RAN hosted at the Seattle location, operates on the VMware Telco Cloud Platform to enable low-latency services. VMware Service Management and Orchestration (SMO) is also deployed at the same location and serves as the global orchestrator. The SMO framework helps simplify, optimize and automate the RAN, Core and their applications in a multi-cloud environment.
To demonstrate the benefit of this approach, the deployed PoC used a video analytics application in which cameras installed at both the Prague and Seattle locations were connected through the private wireless global enterprise network. Operators were able to run AI components concurrently for immediate analysis and inferencing, demonstrating that customers can seamlessly connect devices across locations using the global enterprise network. Leveraging OGA architectural principles for Distributed Edge AI Networking, an OGA grid node was established on Dell infrastructure in Bonn, facilitating seamless connectivity across the European locations.
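The bandwidth argument behind distributed AI inference can be sketched in a few lines. This is hypothetical code, not the partners’ actual implementation; the `Detection` type, `infer_locally` placeholder and byte sizes are illustrative assumptions. Inference runs at the edge site next to the cameras, and only compact detection metadata crosses the global network:

```python
# Hypothetical sketch of the PoC pattern: each site runs AI inference next to
# its cameras and ships only compact detection metadata over the enterprise
# network, rather than streaming raw video to a central location.
import json
from dataclasses import dataclass, asdict

@dataclass
class Detection:
    site: str          # e.g. "prague" or "seattle"
    camera_id: str
    label: str
    confidence: float

def infer_locally(site: str, camera_id: str, frame: bytes) -> list:
    """Placeholder for an edge-hosted vision model (UPF-adjacent for low latency)."""
    # A real deployment would run an object detector here; we fabricate one result.
    return [Detection(site, camera_id, "person", 0.97)]

def to_wire(detections: list) -> str:
    """Only a few hundred bytes of metadata cross the global network per frame."""
    return json.dumps([asdict(d) for d in detections])

frame = b"\x00" * 1_000_000          # a ~1 MB raw frame stays at the edge
payload = to_wire(infer_locally("prague", "cam-01", frame))
print(len(payload), "bytes sent instead of", len(frame))
```

Keeping the heavy video processing co-located with the 5G UPF is what makes the low-latency claim plausible; the global network then only has to carry lightweight inference results between sites.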
Statements:
“As AI gets engrained deeper in the ecosystem of our lives, it necessitates equitable access to compute and connectivity for everyone, everywhere across the globe. Multi-national enterprises are seeking trusted and sovereign compute & connectivity constructs that underpin an equitable and seamless access. Deutsche Telekom is excited to partner with the OGA ecosystem for co-creation on these essential constructs and the enablement of the Distributed Edge AI Networking applications of the future,” – Kaniz Mahdi, Group Chief Architect and SVP Technology Architecture and Innovation at Deutsche Telekom.
“VMware is proud to support this Proof of Concept – contributing know-how and a modern and scalable platform that aims to offer the agility required in distributed environments. VMware Telco Cloud Platform is suited to deliver the compute resources on-demand wherever critical customer workloads are needed. As a founding member of the Open Grid Alliance, VMware embraces both the principles of this initiative and the opportunity to collaborate more deeply with fellow alliance members AWS and Deutsche Telekom to help meet the evolving needs of global enterprise customers.” – Stephen Spellicy, vice president, Service Provider Marketing, Enablement and Business Development, VMware
References:
https://www.telekom.com/en/media/media-information/archive/global-enterprise-network-1050910
Deutsche Telekom Global Carrier Launches New Point-of-Presence (PoP) in Miami, Florida
AWS Integrated Private Wireless with Deutsche Telekom, KDDI, Orange, T-Mobile US, and Telefónica partners
Deutsche Telekom Achieves End-to-end Data Call on Converged Access using WWC standards
Deutsche Telekom exec: AI poses massive challenges for telecom industry
Cloud Service Providers struggle with Generative AI; Users face vendor lock-in; “The hype is here, the revenue is not”
Everyone agrees that Generative AI has great promise and potential. Martin Casado of Andreessen Horowitz recently wrote in the Wall Street Journal that the technology has “finally become transformative:”
“Generative AI can bring real economic benefits to large industries with established and expensive workloads. Large language models could save costs by performing tasks such as summarizing discovery documents without replacing attorneys, to take one example. And there are plenty of similar jobs spread across fields like medicine, computer programming, design and entertainment….. This all means opportunity for the new class of generative AI startups to evolve along with users, while incumbents focus on applying the technology to their existing cash-cow business lines.”
A new investment wave driven by generative AI is starting to loom among cloud service providers, raising questions about whether Big Tech’s spending cutbacks and layoffs will prove to be short-lived. Pressed to say when they would see a revenue lift from AI, the big U.S. cloud companies (Microsoft, Alphabet/Google, Meta/FB and Amazon) all referred to existing services that rely heavily on investments made in the past. These range from AWS’s machine learning services for cloud customers to the AI-enhanced tools that Google and Meta offer their advertising customers.
Microsoft offered only a cautious prediction of when AI would result in higher revenue. Amy Hood, chief financial officer, told investors during an earnings call last week that the revenue impact would be “gradual,” as the features are launched and start to catch on with customers. The caution failed to match high expectations ahead of the company’s earnings, wiping 7% off Microsoft’s stock price (ticker: MSFT) over the following week.
When it comes to the newer generative AI wave, predictions were few and far between. Amazon CEO Andy Jassy said on Thursday that the technology was in its “very early stages” and that the industry was only “a few steps into a marathon.” While many customers of Amazon’s cloud arm, AWS, see the technology as transformative, Jassy noted that “most companies are still figuring out how they want to approach it, they are figuring out how to train models.” He insisted that every part of Amazon’s business was working on generative AI initiatives and that the technology was “going to be at the heart of what we do.”
There are a number of large language models that power generative AI, and many of the AI companies that make them have forged partnerships with big cloud service providers. As business technology leaders make their picks among them, they are weighing the risks and benefits of using one cloud provider’s AI ecosystem. They say it is an important decision that could have long-term consequences, including how much they spend and whether they are willing to sink deeper into one cloud provider’s set of software, tools, and services.
To date, AI large language model makers like OpenAI, Anthropic, and Cohere have led the charge in developing proprietary large language models that companies are using to boost efficiency in areas like accounting and writing code, or adding to their own products with tools like custom chatbots. Partnerships between model makers and major cloud companies include OpenAI and Microsoft Azure, Anthropic and Cohere with Google Cloud, and the machine-learning startup Hugging Face with Amazon Web Services. Databricks, a data storage and management company, agreed to buy the generative AI startup MosaicML in June.
If a company chooses a single AI ecosystem, it could risk “vendor lock-in” within that provider’s platform and set of services, said Ram Chakravarti, chief technology officer of Houston-based BMC Software. This paradigm is a recurring one, where a business’s IT system, software and data all sit within one digital platform, and it could become more pronounced as companies look for help in using generative AI. Companies say the problem with vendor lock-in, especially among cloud providers, is that they have difficulty moving their data to other platforms, lose negotiating power with other vendors, and must rely on one provider to keep its services online and secure.
Cloud providers, partly in response to complaints of lock-in, now offer tools to help customers move data between their own and competitors’ platforms. Businesses have increasingly signed up with more than one cloud provider to reduce their reliance on any single vendor. That is the strategy companies could end up taking with generative AI: by using a “multiple generative AI approach,” they can avoid getting too entrenched in a particular platform. To be sure, many chief information officers have said they willingly accept such risks for the convenience, and potentially lower cost, of working with a single technology vendor or cloud provider.
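One common hedge against lock-in is a thin portability layer: application code targets a neutral interface, and each vendor’s SDK is wrapped behind it so switching providers does not ripple through the codebase. A minimal sketch (the interface and both providers are invented for illustration; real wrappers would call actual vendor SDKs):

```python
# Illustrative "multiple generative AI" abstraction layer: business logic
# depends only on a structural Protocol, never on a specific vendor SDK.
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    """Stand-in for one cloud's hosted LLM endpoint (hypothetical)."""
    def complete(self, prompt: str) -> str:
        return f"[A] {prompt}"

class ProviderB:
    """Stand-in for a second provider, swappable without code changes."""
    def complete(self, prompt: str) -> str:
        return f"[B] {prompt}"

def summarize(model: TextModel, document: str) -> str:
    # Application code is written once against the Protocol.
    return model.complete(f"Summarize: {document}")

print(summarize(ProviderA(), "Q3 earnings"))
print(summarize(ProviderB(), "Q3 earnings"))
```

The trade-off the article describes is visible even here: the abstraction buys negotiating leverage and an exit path, at the cost of forgoing each vendor’s richest proprietary features.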
A significant challenge in incorporating generative AI is that the technology is changing so quickly, analysts have said, forcing CIOs to not only keep up with the pace of innovation, but also sift through potential data privacy and cybersecurity risks.
A company using its cloud provider’s premade tools and services, plus guardrails for protecting company data and reducing inaccurate outputs, can more quickly implement generative AI off-the-shelf, said Adnan Masood, chief AI architect at digital technology and IT services firm UST. “It has privacy, it has security, it has all the compliance elements in there. At that point, people don’t really have to worry so much about the logistics of things, but rather are focused on utilizing the model.”
For other companies, it is a conservative approach to use generative AI with a large cloud platform they already trust to hold sensitive company data, said Jon Turow, a partner at Madrona Venture Group. “It’s a very natural start to a conversation to say, ‘Hey, would you also like to apply AI inside my four walls?’”
End Quotes:
“Right now, the evidence is a little bit scarce about what the effect on revenue will be across the tech industry,” said James Tierney of Alliance Bernstein.
Brent Thill, an analyst at Jefferies, summed up the mood among investors: “The hype is here, the revenue is not. Behind the scenes, the whole industry is scrambling to figure out the business model [for generative AI]: how are we going to price it? How are we going to sell it?”
………………………………………………………………………………………………………………
References:
https://www.ft.com/content/56706c31-e760-44e1-a507-2c8175a170e8
https://www.wsj.com/articles/companies-weigh-growing-power-of-cloud-providers-amid-ai-boom-478c454a
https://www.techtarget.com/searchenterpriseai/definition/generative-AI?Offer=abt_pubpro_AI-Insider
Global Telco AI Alliance to progress generative AI for telcos
Curmudgeon/Sperandeo: Impact of Generative AI on Jobs and Workers
Bain & Co, McKinsey & Co, AWS suggest how telcos can use and adapt Generative AI
Generative AI Unicorns Rule the Startup Roost; OpenAI in the Spotlight
Generative AI in telecom; ChatGPT as a manager? ChatGPT vs Google Search
Generative AI could put telecom jobs in jeopardy; compelling AI in telecom use cases
Qualcomm CEO: AI will become pervasive, at the edge, and run on Snapdragon SoC devices
Bain & Co, McKinsey & Co, AWS suggest how telcos can use and adapt Generative AI
Generative Artificial Intelligence (AI) uncertainty is especially challenging for the telecommunications industry, which has a history of very slow adaptation to change and thus faces significant pressure to adopt generative AI in its services and infrastructure. Indeed, Deutsche Telekom stated that AI poses massive challenges for the telecom industry in this IEEE Techblog post.
Consulting firm Bain & Co. highlighted that inertia in a recent report titled “Telcos, Stop Debating Generative AI and Just Get Going.” Its three partner-authors state that network operators need to act fast in order to seize this opportunity. “Speedy action trumps perfect planning here,” Herbert Blum, Jeff Katzin and Velu Sinha wrote in the brief. “It’s more important for telcos to quickly launch an initial set of generative AI applications that fit the company’s strategy, and do so in a responsible way – or risk missing a window of opportunity in this fast-evolving sector.”
Generative AI use cases can be divided into phases based on ease of implementation, inherent risk, and value:
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….
Telcos can pursue generative AI applications across business functions, starting with knowledge management:
Separately, a McKinsey & Co. report observed that AI has risen to the top of business leaders’ priorities. The consulting firm cited organizations whose top executives champion the organization’s AI initiatives, including the need to fund those programs, in contrast to organizations that lack a clear directive on their AI plans, which results in wasted spending and stalled development. “Reaching this state of AI maturity is no easy task, but it is certainly within the reach of telcos,” the firm noted. “Indeed, with all the pressures they face, embracing large-scale deployment of AI and transitioning to being AI-native organizations could be key to driving growth and renewal. Telcos that are starting to recognize this is non-negotiable are scaling AI investments as the business impact generated by the technology materializes.”
Ishwar Parulkar, chief technologist for the telco industry at AWS, touted several areas of generative AI that should interest telecom operators. The first few were common ones tied to improving the customer experience, including building on machine learning (ML) to improve customer interactions and potentially reduce churn.
“We have worked with some leading customers and implemented this in production where they can take customer voice calls, translate that to text, do sentiment analysis on it … and then feed that into reducing customer churn,” Parulkar said. “That goes up another notch with generative AI, where you can have chat bots and more interactive types of interfaces for customers as well as for customer care agent systems in a call. So that just goes up another notch of generative AI.”
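The pipeline Parulkar describes (call audio to text, sentiment scoring, then a churn signal) can be sketched in miniature. This is a hypothetical illustration only: the keyword-based scorer below stands in for the managed transcription and sentiment services a production system would actually use.

```python
# Hypothetical sketch of the call-analysis pipeline: transcript ->
# sentiment score -> churn-risk flag. In production the transcription and
# sentiment steps would be managed ML services; a tiny lexicon scorer
# stands in here so the flow is runnable end to end.

NEGATIVE = {"cancel", "frustrated", "slow", "outage", "terrible"}
POSITIVE = {"great", "thanks", "resolved", "happy", "helpful"}

def sentiment_score(transcript: str) -> float:
    """Crude lexicon score in [-1, 1]; a stand-in for a real model."""
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

def churn_risk(transcripts: list[str], threshold: float = -0.3) -> bool:
    """Flag a customer whose average call sentiment falls below threshold."""
    scores = [sentiment_score(t) for t in transcripts]
    return sum(scores) / len(scores) < threshold

calls = ["I want to cancel, the service is slow and terrible",
         "Still frustrated, another outage"]
print(churn_risk(calls))  # -> True
```

In a real deployment the churn flag would feed a retention workflow; the point of the sketch is only the shape of the pipeline, not the scoring logic.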
The next step is using generative AI to help operators bolster their business operations and systems. This is for things like revenue assurance and finding revenue leakage, items that Parulkar noted were in a “more established space in terms of what machine learning can do.”
However, Parulkar said the bigger opportunity is in helping operators better design and manage network operations. This area remains the most immature, but it is the one Parulkar is “most excited about.” The benefits can begin in the planning and installation phase, for example by helping technicians as they install physical equipment.
“In installation of network equipment today, you have technicians who go through manuals and have procedures to install routers and base stations and connect links and fibers,” Parulkar said. “That all can be now made interactive [using] chat bot, natural language kind of framework. You can have a lot of this documentation, training data that can train foundational models that can create that type of an interface, improves productivity, makes it easier to target specific problems very quickly in terms of what you want to deploy.”
This can also help with network configuration by using large datasets to help automatically generate configurations. This could include the ability to help configure routers, VPNs and MPLS circuits to support network performance.
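As a toy illustration of the deterministic last mile of such config generation: a trained model would map natural-language intent to a set of validated parameters, which are then rendered into a snippet. The BGP template and field names below are invented for illustration and are not any vendor’s actual syntax.

```python
from string import Template

# Hypothetical router-config template; fields and layout are
# illustrative only.
BGP_TEMPLATE = Template(
    "router bgp $asn\n"
    " neighbor $peer_ip remote-as $peer_asn\n"
    " network $prefix\n"
)

def render_bgp(asn: int, peer_ip: str, peer_asn: int, prefix: str) -> str:
    """Render a validated parameter set into a config snippet."""
    if not 1 <= asn <= 4294967295:  # 32-bit ASN range
        raise ValueError(f"invalid ASN: {asn}")
    return BGP_TEMPLATE.substitute(asn=asn, peer_ip=peer_ip,
                                   peer_asn=peer_asn, prefix=prefix)

print(render_bgp(65001, "10.0.0.2", 65002, "192.0.2.0/24"))
```

Keeping the generative model on the intent side and the rendering deterministic is one way to get the productivity benefit without letting a model emit unvalidated device configuration.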
The final area of support could be in the running of those networks once they are deployed. Parulkar cited functions like troubleshooting failures that can be supported by a generative AI model.
“There are recipes that operators go through to troubleshoot and triage failure,” Parulkar said. “A lot of times it’s trial-and-error method that can be significantly improved in a more interactive, natural language, prompt-based system that guides you through troubleshooting and operating the network.”
This model could be especially compelling for operators as they integrate more routers to support disaggregated 5G network models for mobile edge computing (MEC), private networks and the use of millimeter-wave (mmWave) spectrum bands.
Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel this week also hinted at AI’s potential to help manage spectrum resources.
“For decades we have licensed large slices of our airwaves and come up with unlicensed policies for joint use in others,” Rosenworcel said during a speech at this week’s FCC and National Science Foundation Joint Workshop. “But this scheme is not truly dynamic. And as demands on our airwaves grow – as we move from a world of mobile phones to billions of devices in the internet of things (IoT) – we can take newfound cognitive abilities and teach our wireless devices to manage transmissions on their own. Smarter radios using AI can work with each other without a central authority dictating the best use of spectrum in every environment. If that sounds far off, it’s not. Consider that a large wireless provider’s network can generate several million performance measurements every minute. And consider the insights that machine learning can provide to better understand network usage and support greater spectrum efficiency.”
While generative AI has potential, Parulkar also left the door open for what he termed “traditional AI,” which he described as “supervised and unsupervised learning.”
“Those techniques still work for a lot of the parts in the network and we see a combination of these two,” Parulkar said. “For example, you might use anomaly detection for getting some insights into the things to look at and then followed by a generative AI system that will then give an output in a very interactive format and we see that in some of the use cases as well. I think this is a big area for telcos to explore and we’re having active conversations with multiple telcos and network vendors.”
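The combination Parulkar mentions (a traditional anomaly detector first, then a generative model for interactive output) might look like the following sketch. The z-score detector is real but deliberately simplistic, and the prompt is only constructed, not sent to any model; the metric name, threshold, and sample values are invented.

```python
from statistics import mean, stdev

def zscore_anomalies(samples: list[float], threshold: float = 2.0) -> list[int]:
    """Indices of samples more than `threshold` std deviations from the mean.
    A single large outlier inflates the sample stdev, hence the low default."""
    mu, sigma = mean(samples), stdev(samples)
    return [i for i, x in enumerate(samples)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

def build_prompt(metric: str, samples: list[float]) -> str:
    """Turn detected anomalies into a prompt for a generative model.
    (Actually calling a model is out of scope for this sketch.)"""
    idxs = zscore_anomalies(samples)
    if not idxs:
        return f"No anomalies detected in {metric}."
    points = ", ".join(f"t={i} value={samples[i]}" for i in idxs)
    return (f"Anomalies in {metric} at {points}. "
            f"Suggest likely causes and a troubleshooting sequence.")

latency = [10.1, 9.8, 10.3, 10.0, 58.0, 10.2, 9.9, 10.1]
print(build_prompt("link latency (ms)", latency))
```

The division of labor is the point: the statistical stage decides *what* to look at, and the generative stage turns that into the interactive, natural-language guidance Parulkar describes.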
Parulkar’s comments come as AWS has been busy updating its generative AI platforms. One of the most recent was the launch of its $100 million Generative AI Innovation Center, which is targeted at helping guide businesses through the process of developing, building and deploying generative AI tools.
“Generative AI is one of those technological shifts that we are in the early stages of that will impact all organizations across the globe in some form of fashion,” Sri Elaprolu, senior leader of generative AI at AWS, told SDxCentral. “We have the goal of helping as many customers as we can, and as we need to, in accelerating their journey with generative AI.”
References:
https://www.bain.com/insights/telcos-stop-debating-generative-ai-and-just-get-going/
Deutsche Telekom exec: AI poses massive challenges for telecom industry
Generative AI in telecom; ChatGPT as a manager? ChatGPT vs Google Search
Generative AI could put telecom jobs in jeopardy; compelling AI in telecom use cases
Generative AI Unicorns Rule the Startup Roost; OpenAI in the Spotlight
Forbes: Cloud is a huge challenge for enterprise networks; AI adds complexity
Qualcomm CEO: AI will become pervasive, at the edge, and run on Snapdragon SoC devices
Bloomberg: China Lures Billionaires Into Race to Catch U.S. in AI
AWS expanding in Southeast Asia, especially Malaysia and Philippines
The adoption of cloud computing is accelerating across different customer segments in Southeast Asia, a top Malaysia-based regional executive for Amazon Web Services told Nikkei Asia, as the company competes for business with other global providers descending on the region.
AWS is investing big in the race to develop cloud data centers in Southeast Asia. It announced in March a 25.5 billion ringgit ($6 billion) investment in Malaysia after pouring money into Singapore, Indonesia and Thailand. AWS’ investment in Association of Southeast Asian Nations (ASEAN) countries now stands at $22.5 billion.
Other big names that offer cloud data services have joined the fray, including Microsoft, Alibaba, Tencent, IBM, Oracle, and Google. Microsoft announced a five-year $1 billion investment in 2021 in Malaysia, while Google will be setting up a cloud region — the location where the public cloud data is stored — in the country, one of its 33 such systems worldwide. Malaysia topped real estate consultancy Knight Frank’s inaugural SEA-5 Data Centre Opportunity Index published last month as the most attractive destination for data center investment among five Southeast Asian countries, beating out Indonesia, Vietnam, the Philippines and Thailand.
Peter Murray, head of Malaysia and ASEAN Emerging Markets for AWS, described a noticeable pickup in the embrace of cloud technology, including by startups and enterprises in various industries as well as sectors such as financial services, natural resources and energy. “We are seeing significant growth across media and telecommunications as well and we believe that will continue to play a key role in helping Malaysia, ASEAN as well as global organizations who may have operations and be based in Malaysia to increase their productivity,” Murray said in a recent interview. AWS’ strategy amid the intensifying competition is “to build what our customers are telling us is the most important thing to them,” he said. “And 90% of what Amazon builds is driven by what customers are telling us matters the most to them.”
Murray cited two banks in Malaysia that AWS has worked with, Bank Islam and Al-Rajhi Bank, both of which are utilizing its cloud to launch digital banking services. Bank Islam’s Be U digital bank, for instance, was developed with AWS’ support to create new digital financial services like mobile applications, loan facilities and services that adhere to Islamic financial regulations.
Carsome, Southeast Asia’s largest integrated car e-commerce platform, is running its services on AWS’ serverless technologies and using its machine learning technology to digitalize and improve the customer experience. Carsome, Malaysia’s first unicorn (a startup valued at $1 billion or more), used Amazon SageMaker, AWS’ machine learning platform, to streamline customer service by developing machine learning systems that incorporate 175 car inspection points.
AWS is also helping Malaysia’s state-owned oil conglomerate Petronas commercialize its cloud-based logistics services via the Stear platform, launched in November last year and jointly developed by Petronas, professional services company Accenture and AWS. Murray said Stear supports offshore exploration, production and development, enabling improved fuel management, intelligent routing and better vessel scheduling with near-real-time voyage traffic tracking.
Murray said Petronas aims to use Stear to reduce carbon emissions associated with logistics operations. “That’s a really exciting future statement and intent that we will have with many customers, the way that they are able to build and run innovative new technology workloads, which are actually able to show a dividend in terms of the reduction in carbon consumption and the increase in energy efficiency as well,” he added.
“We see continued [cloud technology] adoption [and] we see continued growth and skills within the Philippines,” Eric Conrad, company regional managing director for Southeast Asia, told a press conference on the sidelines of the AWS ASEAN Summit in Singapore on Thursday.
AWS announced late last year its plan to launch a local zone in the Philippines, part of a bigger undertaking to establish 10 new local zones in the region. The local zones are meant, among other things, to help AWS customers reduce the latency of critical workloads and drive productivity. In the Philippines, AWS provides cloud services to companies like BDO Unibank, Globe, GCash and UnionBank.
The upcoming local zone in the country is a “reflection” of AWS’s optimism about the Philippines, Conrad said. The facility will complement AWS’s existing infrastructure in the country, which includes Amazon CloudFront and AWS Outposts.
“In the Philippines, we see continued acceleration in terms of the digitalization and the use of technology to drive sustainability, and good environmental practices,” he added.
“We’re really excited with the momentum that we’re seeing,” Conor McNamara, company managing director for Southeast Asia, said in his keynote address.
In Singapore, AWS has spent over $6.5 billion on infrastructure and jobs in the island state. One of AWS’s clients is Singapore-based superapp Grab, which has powered its mapping system with the help of AWS’s cloud technology.
“We estimate better ETAs, and all of it are powered by data,” Philipp Kandal, chief product officer at Grab, said during the opening session of the AWS Summit.
Meanwhile, AWS has also promised billions of dollars in investments in Indonesia, Malaysia and Thailand. Since 2017, the Amazon unit has trained over 1 million people across the region on cloud skills.
“We offer the most complete set of relational and purpose-built databases,” Laura Grit, VP/distinguished engineer at AWS, said during the summit. “Our goal is for you to focus on innovation that matters for your business,” she added.
References:
https://aws.amazon.com/government-education/worldwide/asean/
AWS and OneWeb agreement to combine satellite connectivity with cloud and edge computing
Amazon Web Services (AWS) signed an agreement this week with LEO satellite internet provider OneWeb to explore potential horizontal and vertical use cases that arise from bundling satellite connectivity with cloud and edge compute resources.
The objective is to develop a satellite constellation management solution as a service, making it available to both corporate clients and those already working in the space sector. OneWeb and AWS will work closely together on four key initiatives:
• Business Continuity: Bundling connectivity with cloud services and edge computing services, delivering continuity and resiliency through an integrated infrastructure backed by the LEO constellation.
• Virtualization of Mission Operations: Supporting virtual mission operations for customers through integrated and customizable solutions.
• Space Data Analytics: Aggregating and fusing new levels of predictive and trending big data analytics through data lakes to support space and ground operations.
• User Terminals & Edge Integration: Deploying seamless cloud to edge solutions with a LEO connected user terminal.
Image Credit: OneWeb
“We are incredibly excited to begin working with AWS to see cloud services extended even closer to the edge thanks to OneWeb’s network. This global agreement will change the market dynamics, with OneWeb’s high-speed, low-latency services powering connectivity that will enable customers to reach even the most remote edges of the world and everywhere in between,” said Maurizio Vanotti, VP for new markets at OneWeb, in a statement.
“We are excited to work with OneWeb in their efforts to provide cloud-based connectivity and deliver innovative services to customers worldwide. AWS is committed to helping customers reimagine space systems, accelerate innovation, and turn data into useful insights quickly. We look forward to working with OneWeb in their efforts to push the edge closer to where their customers need it most,” added Clint Crosier, director of aerospace and satellite solutions at AWS.
The agreement serves to highlight the importance of seamless connectivity to enterprise applications and data from just about anywhere. It also underscores just how far behind Amazon is with its own satellite strategy, Project Kuiper.
Its aim is to launch 1,500 LEO satellites over the next five years, increasing to precisely 3,236 over the longer term. So far though, it has launched zero. Amazon was due to launch a couple of prototypes late last year, but a last-minute change of rocket company pushed everything back. It was also waiting on the US Federal Communications Commission (FCC) to approve its ‘orbital debris mitigation plan’, which it eventually got in February.
Amazon’s new launch partner, United Launch Alliance (ULA), plans to include those two Kuiper prototypes on the inaugural flight of its brand new Vulcan Centaur rocket, but lift-off won’t take place until 4 May (May the fourth – geddit?) at the earliest.
This is a fairly long-winded way of saying Amazon is still a long way off from offering commercial LEO satellite broadband and cloud services via its own network, and so this OneWeb deal should give it some valuable real-world experience until its own constellation is ready.
This announcement is the latest effort by OneWeb in its mission to bridge the digital divide and bolster innovation through industry collaboration with best-in-class service providers, serving customers from government, telecommunications, airline, and shipping industries.
Meanwhile, AWS and OneWeb will need to have cloud security high on their mutual agenda, judging by some recent rumblings from the U.S. According to a Politico report last week, the White House plans to draw up cloud security regulations designed to prevent hackers from attacking cloud infrastructure. It will also roll out rules that aim to make it harder for foreign hackers to use US-based cloud providers as a staging point from which to conduct attacks.
With so many government bodies and private enterprises becoming increasingly reliant on public cloud for hosting their data and applications, the underlying infrastructure makes for a juicy target. The fear is that a successful attack could cause widespread disruption if important clients like hospitals and ports are suddenly and unexpectedly cut off.
“In the United States, we don’t have a national regulator for cloud. We don’t have a Ministry of Communication. We don’t have anybody who would step up and say, ‘it’s our job to regulate cloud providers,’” said Rob Knake, the deputy national cyber director for strategy and budget, in the Politico report, adding that this needs to change.
While the White House cracks on with working out how to regulate cloud security, it is also pushing ahead with implementing rules drawn up by the previous administration. The Trump-era executive order will impose ‘Know Your Customer’ (KYC) rules on cloud providers in an effort to stop foreign hackers from using US cloud infrastructure as a platform for their attacks.
References:
https://oneweb.net/resources/oneweb-announces-global-agreement-aws
https://telecoms.com/520618/oneweb-bags-aws-deal-as-cloud-security-comes-under-scrutiny/
Swisscom, Ericsson and AWS collaborate on 5G SA Core for hybrid clouds
Swiss network operator Swisscom has announced a proof-of-concept (PoC) collaboration with Ericsson, running the Ericsson 5G SA Core on AWS. The objective is to explore hybrid cloud use cases with AWS, beginning with 5G core applications; more applications will then gradually be added as the trial continues. With each cloud strategy (private, public, hybrid, multi) bringing its own drivers and challenges, the idea here seems to be to let the operator take advantage of the specific characteristics of both private and public cloud.
The PoC reconfirms Swisscom’s and Ericsson’s view of hybrid cloud’s potential as a complement to existing private cloud infrastructure. Both companies are on a common journey with AWS to explore how such use cases can benefit telecom operators.
The PoC will examine use cases that take advantage of the particular characteristics of hybrid and public cloud: in particular, the flexibility and elasticity they can offer customers, which can mean deployment efficiencies for use cases where capacity is not constantly needed. For example, when maintenance activities are undertaken in Swisscom’s private cloud, or when there are traffic peaks, AWS can be used to offload and complement the private cloud.
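At its simplest, the offload pattern described above (bursting to public cloud during maintenance windows or traffic peaks) reduces to a placement decision like the following sketch; the capacity figures and parameter names are invented for illustration.

```python
def place_workload(demand: float, private_free: float,
                   maintenance: bool = False) -> str:
    """Decide where a workload runs in a hybrid deployment.

    Burst to public cloud when the private cloud is in maintenance or
    lacks headroom; otherwise keep the workload on private infrastructure.
    """
    if maintenance or demand > private_free:
        return "public"
    return "private"

# Normal operation: plenty of private headroom.
print(place_workload(demand=40, private_free=100))                    # private
# Traffic peak exceeding private capacity: burst to public cloud.
print(place_workload(demand=140, private_free=100))                   # public
# Maintenance window: offload regardless of headroom.
print(place_workload(demand=10, private_free=100, maintenance=True))  # public
```

A real orchestrator would weigh latency, data-residency, and cost constraints as well, but the core economic argument for hybrid cloud is exactly this: pay for public capacity only when the private estate cannot serve the load.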
Swisscom had already been collaborating with AWS on migrating its 5G infrastructure towards standalone 5G, and it has also used the hyperscaler’s public cloud platform for its IT environments. While telco concerns linger [1.] around the use of public cloud in telecoms infrastructure (especially the core networks), hybrid cloud is seemingly gaining momentum among some operators as a transitional approach.
Note 1. Telco concerns over public cloud:
- In a recent survey by Telecoms.com, more than four in five industry respondents cited security concerns about running telco applications in the public cloud, including 37% who find it hard to make the business case for public cloud because private cloud remains vital in addressing security issues. This also means that any efficiency gains are offset by running the IT environment and the network over two cloud types.
- Many in the industry also fear vendor lock-in and a lack of orchestration from public cloud providers. Around a third of industry experts in the same survey see this as a compelling reason not to move workloads to the public cloud unless applications can run on all versions of public cloud and are portable among cloud vendors.
- There’s also a lack of interoperability and interconnectedness among public clouds. The services of different public cloud vendors are neither interconnected nor interoperable for the same types of workloads. This concern is one of the drivers to avoid public cloud, according to some network operators.
Quotes:
Mark Düsener, Executive Vice President Mobile Network & Services at Swisscom, says: “By bringing the Ericsson 5G Core onto AWS we will substantially change the way our networks will be built and operated. The elasticity of the cloud in combination with a new magnitude in automatization will support us in delivering even better quality more efficiently over time. In order to shape this new concept, we as Swisscom believe strategic and deep partnerships like the ones we have with Ericsson and AWS are the key for success.”
Monica Zethzon, Head of Solution Area Core Networks, Ericsson says: “5G innovation requires deep collaboration to create the foundations necessary for new and evolving use cases. This Proof-of-Concept project with Swisscom and AWS is about opening up the routes to innovation by using hybrid cloud’s flexible combination of private and public cloud resources. It demonstrates that through partnership, we can deliver a hybrid cloud solution which meets strict telecoms industry requirements and security while making best use of HCP agility and cloud economy of scale.”
Fabio Cerone, General Manager AWS Telco EMEA at AWS, says: “With this move, Swisscom is opening the door to cloud native networks, delivering full automation and elasticity at scale, with the ability to innovate faster and make 5G impactful to their customers. We are committed to working closely with partners, such as Ericsson, to explore new use cases and strategies that best support the needs of customers like Swisscom.”
“How to deploy software in different cloud environments – at a high level, it is hard making that work in practice,” said Per Narvinger, the head of Ericsson’s cloud software and services unit. “You have hyperscalers with their offering and groups trying to standardize and people trying to do it their own way. There needs to be harmonization of what is wanted.”
References:
https://telecoms.com/520337/swisscom-ericson-and-aws-collaborate-on-hybrid-cloud-poc-on-5g-core/
https://telecoms.com/520055/telcos-and-the-public-cloud-drivers-and-challenges/
AWS Telco Network Builder: managed network automation service to deploy, run, and scale telco networks on AWS
Omdia and Ericsson on telco transitioning to cloud native network functions (CNFs) and 5G SA core networks
AWS Integrated Private Wireless with Deutsche Telekom, KDDI, Orange, T-Mobile US, and Telefónica partners
In addition to Telco Network Builder, AWS today announced Integrated Private Wireless, which acts as an infrastructure bridge for network operators that want to offer end users a private network service tapping into AWS’ infrastructure. It allows AWS to connect customers interested in a private network platform with the #1 cloud service provider’s telecom partners.
“We are really just connecting the customer with the telco, then that relationship is between the two of them,” said Jan Hofmeyr, VP of Amazon EC2. Initial telecom partners include Deutsche Telekom, KDDI, Orange, T-Mobile US, and Telefónica. Enterprise customers shopping for private wireless services will be able to purchase an installation from one of those participating operators. “The relationship is directly between the customer and the telco,” Hofmeyr said, noting that the resulting private wireless network will then run atop the AWS cloud.
Hofmeyr said that AWS’ goal is to provide customers with an easy set of options that will allow them to deploy or operate a private network in a manner that meets their needs and abilities. “Right now this is their ask, [it’s] helping us make this onboarding easier, and that’s exactly what we’re focusing on. In the future, we’ll continue to listen to what their needs are and continue to support that,” Hofmeyr added.
This new private network offering is different from AWS’ Private 5G platform that it initially unveiled in late 2021, and has since updated. That platform integrates small cell radio units, AWS’ Outposts servers, a 5G core, and radio access network (RAN) software running on AWS-managed hardware. AWS also handles the spectrum management of this service.
AWS will act as the portal, but telcos will be the managed service providers for the network on behalf of those enterprises or smaller service providers, the company said. As with the telco network builder, AWS will provide a dashboard for monitoring performance and modifying it as needed.
“That’s one of the friction points we saw as we started looking at the private network space,” said Ishwar Parulkar, chief technologist for the telco industry at AWS, in an interview. “There are a lot of enterprise customers who really don’t care about all of this. They just want to be able to use the network and run some applications on top. That’s one of the primary values that we bring with this: lifting that undifferentiated work away from them and managing it in the cloud.”
For Amazon, telcos represent a prime business opportunity: as carriers build new networks with increasing reliance on software and cloud services, Amazon is positioning itself as a tech and cloud partner that can help run those services better and more cheaply. It has been interesting to watch Amazon work to build trust among a group of businesses that have at times been very wary of big tech and of the threat of being reduced to “dumb pipes” as tech companies lean on their own architecture and technology advances to build faster and cheaper services that compete directly with what carriers offer and plan to roll out. As one example, the company is careful to call these new products “offerings” and not “services,” to make clear that it is not the managed service provider; that remains the carriers’ role.
“We’ve been on this journey for a few years now in terms of really getting the cloud to run telco networks,” said Parulkar. “Our goal here is to make AWS the best place to host 5G networks for both public and private. And on that journey, we’ve been making steady progress.”
Carriers are now in a world where communications is arguably just another tech service, so many of them believe that running networks at lower cost and in more flexible ways will be the key to winning more business, introducing more services and getting better margins. Whether carriers will want to work closely with Amazon, or with any of the cloud providers, on such services will be the big question.
References:
https://www.sdxcentral.com/articles/news/aws-expands-5g-telecom-private-wireless-work/2023/02/
https://au.finance.yahoo.com/news/amazons-aws-cozies-carriers-launches-170645578.html
AWS Telco Network Builder: managed service to deploy, run, and scale telco networks on AWS