Ericsson has developed an AI system for automated network management, which has now been deployed in the wireless network of Saudi Arabian operator Mobily. The companies have successfully introduced the ‘Ericsson AI-based network solution’ into Mobily’s network in Saudi Arabia in order to enable ‘enhanced and smart end-user experiences.’
This AI system will provide 5G network diagnostics, root cause analysis and recommendations for ‘superior user experiences.’ The network diagnostics capabilities within the cognitive software suite provide ‘proactive network optimization’, allowing the operator to identify and resolve network anomalies and to deliver reliable connectivity, we are told.
The smart, automated diagnostics of Ericsson’s cognitive software suite are Machine Learning (ML) based, and support Mobily, the leading digital partner of the international technical conference LEAP 23, in identifying and resolving network anomalies and constantly providing reliable connectivity.
Ericsson is so enthusiastic about the product, in fact, that it says it ‘redefines the very nature of network operations,’ aided by Big Data and ‘ever-expanding and more accessible computing power.’
“From people in remote locations to large gatherings, individuals often expect uninterrupted and quality connectivity,” said Alaa Malki, Chief Technology Officer at Mobily. “Ericsson’s Artificial Intelligence (AI)-based solution enables our customers to enjoy superior and uninterrupted 5G connectivity to stay connected with loved ones or to document key moments anytime, anywhere. Our partnership with Ericsson has once more reinforced our commitment to Unlock Possibilities during times that matter most, and we look forward to carrying our mission forward. I want to thank Ericsson for its support which allowed us to use this data-driven concept to make all kinds of changes and optimizations within short timeframes.”
Ekow Nelson, Vice President at Ericsson Middle East and Africa said: “For numerous years, our partnership with Mobily has provided customers with assured and superior connectivity to stream live experiences and benefit from a multitude of services even in the most challenging environments. Our success relied on Ericsson’s Artificial Intelligence-based network solution built with Machine Learning models that learn from the live network using the multiple sources of data to deliver near real-time improvements, thus avoiding interruptions during critical and peak times.”
How AI interacts with and disrupts different industries looks likely to be an increasingly prominent issue in the years to come, for all sorts of reasons. In a recent interview with Telecoms.com, Beerud Sheth, CEO of conversational AI firm Gupshup, said, “Like almost any industry, telcos will also have to figure out how they see this disruption… it creates opportunities and threats. And I think you have to lean into the opportunities, and maybe mitigate the threats a little bit. It changes a lot of things, it changes consumer expectations, it changes what people expect and what they want to do and can do, and they have to keep pace with all of it. So, there’s a lot of work for telco executives.”
by Run Almog, Head of Product Strategy, DriveNets (edited by Alan J Weissberger)
In the Part I article, we covered the different attributes of AI/HPC workloads and their impact on the requirements from the network that serves these applications. This concluding Part II article focuses on an open standard solution that addresses these needs and enables these mega-sized applications to run larger workloads without compromising on network attributes. Various solutions are described and contrasted, along with a perspective from silicon vendors.
Networking for HPC/AI:
A networking solution serving HPC/AI workloads needs to carry certain attributes, starting with the scale of the network, which can reach thousands of high-speed endpoints, all running the same application in a synchronized manner. This requires the network to behave like a scheduled fabric that offers full bandwidth between any group of endpoints at any given time.
Distributed Disaggregated Chassis (DDC):
DDC is an architecture that was originally defined by AT&T and contributed to the Open Compute Project (OCP) as an open architecture in September 2019. DDC defines the components and internal connectivity of a network element that is intended to serve as a carrier-grade network router. As opposed to the monolithic chassis-based router, the DDC defines every component of the router as a standalone device.
- The line card of the chassis is defined as a distributed chassis packet-forwarder (DCP)
- The fabric card of the chassis is defined as a distributed chassis fabric (DCF)
- The routing stack of the chassis is defined as a distributed chassis controller (DCC)
- The management card of the chassis is defined as a distributed chassis manager (DCM)
- All devices are physically connected to the DCM via standard 10GbE interfaces to establish a control and a management plane.
- All DCPs are connected to all DCFs via 400G fabric interfaces in a Clos-3 topology to establish a scheduled and non-blocking data plane between all network ports in the DDC.
- A DCP hosts both fabric ports, for connecting to the DCFs, and network ports, for connecting to other network devices using standard Ethernet/IP protocols, while a DCF does not host any network ports.
- The DCC is in fact a server and is used to run the main base operating system (BaseOS) that defines the functionality of the DDC.
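The connectivity rules above can be sketched in a few lines of Python. This is illustrative only: the device counts and names (`dcp0`, `dcf0`, `dcc0`, `dcm0`) are hypothetical placeholders, not part of the OCP specification.

```python
# Illustrative sketch of DDC connectivity (hypothetical device counts/names).
# Data plane: every DCP connects to every DCF (Clos-3, non-blocking).
# Management plane: every device connects to the DCM over 10GbE.

def ddc_links(n_dcp: int, n_dcf: int):
    dcps = [f"dcp{i}" for i in range(n_dcp)]
    dcfs = [f"dcf{j}" for j in range(n_dcf)]
    # 400G fabric links: full mesh between the DCP layer and the DCF layer
    fabric = [(p, f) for p in dcps for f in dcfs]
    # 10GbE control/management links: all devices hang off the DCM
    mgmt = [(dev, "dcm0") for dev in dcps + dcfs + ["dcc0"]]
    return fabric, mgmt

fabric, mgmt = ddc_links(n_dcp=4, n_dcf=3)
print(len(fabric))  # 4 DCPs x 3 DCFs = 12 fabric links
print(len(mgmt))    # 4 DCPs + 3 DCFs + 1 DCC = 8 management links
```

The point of the sketch is that the data plane is a full bipartite mesh (which is what makes the fabric non-blocking), while the management plane is a simple star around the DCM.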
Advantages of the DDC are the following:
- Capacity: since there is no metal chassis enclosure that must hold all the components in a single machine, a wider Clos-3 topology can be built that expands beyond the boundaries of a single rack, making it possible for thousands of interfaces to coexist on the same network element (router).
- Openness: it is an open standard definition, which makes it possible for multiple vendors to implement the components; as a result, it is easier for the operator (telco) to establish a multi-source procurement methodology and stay in control of price and supply chain as the network evolves.
- Resiliency: it is a distributed array of components, each of which can exist standalone as well as act as part of the DDC. This gives services running over a DDC-based router a much higher level of resiliency than services running over a chassis-based router.
AT&T announced that it uses DDC clusters, in a DriveNets-based implementation, to run its MPLS core as well as standalone edge and peering IP networks, while other operators worldwide are also using DDC for such functions.
Figure 1: High level connectivity structure of a DDC
In the figure, LC corresponds to the DCP defined above, the fabric module to the DCF, RP to the DCC, and the Ethernet switch to the DCM.
Source: OCP DDC specification
DDC implements a concept of disaggregation: the decoupling of the control plane from the data plane enables sourcing the software and hardware from different vendors and assembling them back into a unified network element when deployed. The concept is rather new, but it had already seen many successful deployments before being used as part of the DDC.
Disaggregation in Data Centers:
The separation of the data plane from the control plane has seen major adoption in data center networks in recent years. Sourcing the software (control plane) from one vendor while the hardware (data plane) is sourced from a different vendor mandates that the interfaces between the software and hardware be very precise and well defined. This gave rise to a few components, developed by certain vendors and contributed to the community, that allow the concept of disaggregation to go beyond the boundaries of specific customers’ networks.
Such components include the Open Network Install Environment (ONIE), which enables mounting a software image onto a platform (typically a single-chip 1RU/2RU device), as well as the Switch Abstraction Interface (SAI), which enables the software to directly access the application-specific integrated circuit (ASIC) and operate directly on the data plane at line-rate speeds.
Two examples of implementing disaggregation networking in data centers are:
- Microsoft developed its network operating system (NOS) software, SONiC, to run on SAI and later contributed its source code to the networking community via OCP and the Linux Foundation.
- Meta defined devices called “Wedge,” which are purpose-built to run various NOS versions via standard interfaces.
These two hyperscaler examples are indicative of the engineering effort required to develop such interfaces and functions. The fact that these components have been made open is what enables smaller consumers to enjoy the benefits of disaggregation without needing to maintain large engineering groups.
The data center networking world today has a healthy ecosystem of hardware (ASIC and system) vendors as well as software (NOS and tools) vendors, which together form a valid and widely used alternative to the traditional monolithic model of vertically integrated systems.
The reasons for deploying a disaggregated networking solution are a combination of two factors. First is the clear financial advantage of buying white-box equipment versus branded devices, which carry a premium price. Second is the flexibility such a solution enables: the customer gains better control over the network and how it is run, and network administrators get plenty of room to innovate and adapt the network to their unique and changing needs.
The image below reflects a partial list of the potential vendors supplying components within the OCP networking community. The full OCP Membership directory is available at the OCP website.
Between DC and Telco Networking:
Data center networks are built to provide connectivity to multiple servers that contain data or answer user queries. Both the size of the data and the number of queries against it grow constantly as consumption of communication services keeps growing. Traffic in and out of these servers is divided into north/south traffic, which comes into and goes out of the data center, and east/west traffic, which runs inside the data center between different servers.
As a general pattern, north/south traffic represents most of the traffic flows within the network, while east/west traffic consumes most of the bandwidth. This is not a precise description of data center traffic, but it is accurate enough to explain the way data center networks are built and operated.
A data center switch connects to servers with a high-capacity link. This tier#1 switch is commonly known as a top of rack (ToR) switch and is a high capacity, non-blocking, low latency switch with some minimal routing capabilities.
- The ToR is then connected to a Tier#2 switch that enables it to reach other ToRs in the data center.
- The Tier#2 switches are connected to Tier#3 to further grow the connectivity.
- Traffic volumes are mainly east/west and best kept within the same Tier of the network to avoid scaling the routing tables.
- In theory, a Tier#4/5/6 of this network can exist, but this is not common.
- The highest tier of the data center network is also connected to routers that interface the data center to the outside world (primarily the Internet), and these routers follow a different design than the tiers of switching devices mentioned earlier.
- These externally facing routers are commonly connected in a dual-homed configuration to create a level of redundancy for traffic coming in and out of the data center. Traffic entering and leaving the data center is also firewalled, load-balanced, address-translated, etc.; these functions are sometimes carried out by the router and sometimes by dedicated appliances.
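As a rough illustration of why adding tiers multiplies capacity, the classic three-tier fat-tree formula gives the number of servers a Clos of k-port switches can attach. The fat-tree is one standard way to build such a topology, not necessarily what any given data center uses, and the radix values below are illustrative:

```python
# Server capacity of a classic 3-tier fat-tree built from k-port switches.
# The textbook result is k^3 / 4 hosts (k must be even). The radix values
# below are illustrative assumptions, not taken from a specific deployment.

def fat_tree_hosts(k: int) -> int:
    assert k % 2 == 0, "fat-tree radix must be even"
    return k ** 3 // 4

for radix in (32, 48, 64):
    print(radix, fat_tree_hosts(radix))
# 32-port switches ->  8192 hosts
# 48-port switches -> 27648 hosts
# 64-port switches -> 65536 hosts
```

The cubic growth in k is why a handful of tiers is enough for even the largest data centers, and why Tier#4/5/6 is rarely needed in practice.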
As data center density grew to allow a better service level to consumers, the amount of traffic running between data center instances also grew, and data center interconnect (DCI) traffic became predominant. A DCI router at the ingress/egress point of a data center instance is now common practice, and these devices typically connect over larger distances of fiber (tens to hundreds of km), either toward other DCI routers or toward the telco routers that form the infrastructure of the Internet.
Where data center network devices shine is in their high capacity and low latency; they are built, from the ASIC level up through the NOS they run, to optimize these attributes. What they lack is routing scale and distance between neighboring routers. Telco routers, by contrast, are built to host enough routes to carry the full Internet routing table (a ballpark figure used in the industry is 1M routes, according to CIDR) and a different buffer structure (in both size and allocation) to enable long-haul connectivity. A telco router has a superset of capabilities versus a data center switch and is priced differently, due both to the hardware it uses and to the higher software complexity it requires, which acts as a filter that narrows the number of vendors providing such solutions.
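A back-of-the-envelope sketch of that routing-scale gap follows. The bytes-per-entry figure is an assumption for illustration only; real FIB encodings vary widely by ASIC:

```python
# Rough FIB memory estimate: routes * bytes-per-entry.
# 1M routes is the ballpark Internet table size cited above;
# 64 bytes/entry is an assumed figure, purely for illustration.

def fib_memory_mb(routes: int, bytes_per_entry: int) -> float:
    return routes * bytes_per_entry / 2**20

print(round(fib_memory_mb(1_000_000, 64), 1))  # 61.0 (MB)
```

Even under these toy numbers, a full Internet table needs tens of megabytes of fast lookup memory, which is one concrete reason telco-class routers use different (and costlier) hardware than single-chip data center switches.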
Attributes of an AI Cluster:
As described in a previous article, HPC/AI workloads demand certain attributes from the network. Size, latency, losslessness, high bandwidth and scale are all mandatory requirements, and some available solutions are described in the next paragraphs.
Chassis Based Solutions:
This solution derives from Telco networking.
Chassis-based routers are built as a black box, with all internal connectivity concealed from the user. The architecture used to implement the chassis is often line cards and fabric cards in a Clos-3 topology, as described earlier for the structure of the DDC. As a result, the chassis behavior is predictable and reliable; it is in fact a lossless fabric wrapped in sheet metal, with only its network interfaces facing the user. The caveat of a chassis in this case is its size: while a well-orchestrated fabric is a great fit for the network needs of AI workloads, the limited capacity of a few hundred ports to connect to servers makes this solution fit only very small deployments.
If a deployment needs more ports than a single chassis provides, a Clos of chassis (in fact an unbalanced Clos-8 topology) is required, and this breaks the fabric behavior of the model.
Standalone Ethernet Solutions:
This solution derives from data center networking.
As described previously in this paper, data center solutions are fast and can carry high traffic bandwidth. They are, however, based on standalone single-chip devices connected in a multi-tiered topology, typically a Clos-5 or Clos-7. As long as traffic only runs within the same device in this topology, the behavior of traffic flows will be close to uniform. With the number of interfaces per device limited to roughly the number of servers physically located in one rack, a single ToR device cannot satisfy the requirements of a large infrastructure. Expanding the network to higher tiers means that traffic patterns begin to alter, and application run-to-completion time is impacted. Furthermore, add-on mechanisms must be mounted onto the network to turn the lossy network into a lossless one. Another attribute of AI workload traffic is the uniformity of the flows from the perspective of the packet header: the different packets of the same flow are identified by the data plane as the same traffic and carried over the exact same path regardless of the network’s congestion situation, leaving parts of the Clos topology poorly utilized while other parts can be overloaded to the point of traffic loss.
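The "same flow, same path" behavior described above comes from hash-based load balancing (ECMP): the data plane hashes the packet's 5-tuple and picks one of the equal-cost paths, so every packet of a long-lived AI elephant flow lands on the same link regardless of congestion. A minimal sketch follows; the hash function and path count are illustrative, not what any particular ASIC implements:

```python
import hashlib

# ECMP-style path selection: hash the flow 5-tuple, take it modulo the
# number of equal-cost paths. Real ASICs use simpler hardware hashes;
# sha256 here is only for illustration.

def ecmp_path(src_ip, dst_ip, src_port, dst_port, proto, n_paths):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

# Every packet of this flow hashes identically, so all of them take the
# same path even if that path is congested while others sit idle.
p1 = ecmp_path("10.0.0.1", "10.0.0.2", 40000, 4791, "udp", 8)
p2 = ecmp_path("10.0.0.1", "10.0.0.2", 40000, 4791, "udp", 8)
print(p1 == p2)  # True: the flow is pinned to one path
```

With the few, large, long-lived flows typical of AI training traffic, this per-flow pinning is exactly what leaves some Clos links overloaded while others are underutilized.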
Proprietary Locked Solutions:
Additional solutions in this field are implemented as a dedicated interconnect for a specific array of servers. This is more common in the scientific domain of heavy compute workloads, such as research labs, national institutes, and universities. As proprietary solutions, they lock the customer into one interconnect provider that serves the entire server array, starting from the server itself and ending on all other servers in the array.
The nature of this industry is such that a one-time budget is allocated to build a “super-computer,” which means the resulting compute array is not expected to grow further but only to be replaced or superseded by a newer model. This makes the vendor lock-in of choosing a proprietary interconnect solution more tolerable.
On the plus side, such solutions perform very well, and examples at the top of the list of the world’s strongest supercomputers use solutions from HPE (Slingshot), Intel (Omni-Path), Nvidia (InfiniBand) and more.
Perspective from Silicon Vendors:
DSF-like solutions were presented at the last OCP Global Summit, back in October 2022, as part of the networking project discussions. Both Broadcom and Cisco (separately) claimed superior silicon implementations, with improved power consumption or a superior implementation of a Virtual Output Queueing (VOQ) mechanism.
There are differences between AI and HPC workloads and the required network for each.
While the HPC market finds proprietary implementations of interconnect solutions acceptable for building secluded supercomputers for specific uses, the AI market requires solutions that allow more flexibility in their deployment and vendor selection. This boils down to Ethernet based solutions of various types.
Chassis-based and standalone Ethernet-based solutions are reasonable up to the scale of a single machine, but they fail to scale efficiently beyond a single interconnect machine while keeping the performance required by the running workloads.
A distributed fabric solution presents a standard approach that matches the forecasted industry need both in terms of scale and in terms of performance. Different silicon implementations that can construct a DSF are available; they differ slightly, but all show substantial benefits versus chassis or standard Ethernet solutions.
This paper does not cover the different silicon types implementing the DSF architecture, only the alignment of DSF attributes with the requirements of interconnect solutions built to run AI workloads, and the advantages of DSF versus other solutions that are predominant in this space.
–>Please post a comment in the box below this article if you have any questions or requests for clarification for what we’ve presented here and in part I.
Artificial Intelligence (AI) in telecom uses software and algorithms that approximate human perception to analyze big data, such as data consumption, call records, and application usage, in order to improve the customer experience. AI also helps telecommunication operators detect flaws in the network, improve network security, optimize the network, and offer virtual assistance. Moreover, AI enables the telecom industry to extract insights from its vast data sets, making it easier to manage daily business, resolve issues more efficiently, and provide improved customer service and satisfaction.
The growing adoption of AI solutions in various telecom applications is driving market growth. The rising number of AI-enabled smartphones, with features such as image recognition, robust security, voice recognition and many more compared to traditional phones, is boosting the growth of AI in the telecommunication market. Furthermore, for complex processes and telecom services, AI provides a simpler and easier interface in telecommunication. In addition, growing Over-The-Top (OTT) services, such as video streaming, have transformed the dissemination and consumption of audio and video content. With more consumers turning to OTT services, consumer demand for bandwidth has grown considerably, and carrying this ever-growing OTT traffic leads to high operational expenditure (OpEx) for the telecommunication industry. Hence, AI helps the telecom industry reduce operational costs by minimizing the human intervention needed for network configuration and maintenance.

However, the major restraint of the AI in telecommunication market is the incompatibility between telecommunication systems and AI technology. On the other hand, the increasing penetration of AI-enabled smartphones in the telecommunication industry and the advent of 5G technology in smartphones are expected to provide major growth opportunities, since advancements such as 5G in mobile, together with the rising need to monitor content on the telecommunication network to eliminate human error, are driving the growth of the market. For instance, as the Chinese government seeks to improve its network and telecommunication services, China Telecom Corporation has started a new 5G base station in Lanzhou city. These factors are expected to provide numerous opportunities for the expansion of the AI in telecommunication market during the forecast period.
Allied Market Research published a report, titled, “AI in Telecommunication Market by Component (Solution, Service), by Deployment Model (On-Premise, Cloud), by Technology (Machine Learning, Natural Language Processing (NLP), Data Analytics, Others), by Application (Customer Analytics, Network Security, Network Optimization, Self-Diagnostics, Virtual Assistance, Others): Global Opportunity Analysis and Industry Forecast, 2021-2031.”
According to the report, the global AI in telecommunication industry generated $1.2 billion in 2021, and is estimated to reach $38.8 billion by 2031, witnessing a CAGR of 41.4% from 2022 to 2031. The report offers a detailed analysis of changing market trends, top segments, key investment pockets, value chain, regional landscape, and competitive scenario.
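A quick compound-growth check shows the headline figures are roughly self-consistent (the small gap comes from rounding of the reported CAGR):

```python
# CAGR sanity check: $1.2B in 2021 compounding at 41.4%/yr over the
# 10-year span to 2031 should land near the reported $38.8B.

def project(base: float, cagr: float, years: int) -> float:
    return base * (1 + cagr) ** years

value_2031 = project(1.2, 0.414, 10)
print(round(value_2031, 1))  # 38.3, close to the reported $38.8B
```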
Drivers, Restraints, and Opportunities:
Growing adoption of AI solutions in various telecom applications, the ability of AI to provide a simpler and easier interface in telecommunication and to reduce the human intervention needed for network configuration and maintenance, and the growing demand for high bandwidth as more consumers turn to OTT services all drive the growth of the global AI in telecommunication market. However, the incompatibility between telecommunication systems and AI technology hampers global market growth. On the other hand, the increasing penetration of AI-enabled smartphones in the telecommunication industry and the advent of 5G technology in smartphones are likely to create potential growth opportunities for the global market in the coming years.
- The global artificial intelligence in telecommunication market saw stable growth during the COVID-19 pandemic, owing to increasing digital penetration and a rise in automation.
- Moreover, the pandemic required telecommunications infrastructure to keep businesses, governments, and communities connected and operational. The social and financial disruption caused by the pandemic forced people to depend on technologies such as AI for information and remote working.
- AI also helped the telecom industry to reinvent customer relationships by identifying personalized needs and engaging with customers through hyper-personalized one-to-one contacts. It also helped configure fixed-line and mobile-network bundles that combine VPN, teleconferencing, and productivity apps.
The solution segment to dominate in terms of revenue during the forecast period:
Based on component, the solution segment was the largest market in 2021, contributing to more than two-thirds of the global AI in telecommunication market, and is expected to maintain its leadership status during the forecast period. This is due to the adoption of solutions by various end users to automate processes. On the other hand, the service segment is projected to witness the fastest CAGR, of 44.9%, from 2022 to 2031, due to a surge in the adoption of managed and professional services.
The on-premise segment to garner the largest revenue during the forecast period:
Based on deployment model, the on-premise segment held the largest market share, of nearly three-fifths of the global AI in telecommunication market, in 2021 and is expected to maintain its dominance during the forecast period. This is because it provides added security of data. The cloud segment, however, is projected to witness the fastest CAGR, of 43.8%, from 2022 to 2031, as cloud provides flexibility, scalability, complete visibility, and efficiency to all processes.
The machine learning segment to exhibit a progressive revenue growth during the forecast period:
Based on technology, the machine learning segment held the largest market share, of more than two-fifths of the global AI in telecommunication market, in 2021, and would maintain its dominance during the forecast period. This is because machine learning algorithms are designed to keep improving in accuracy and efficiency. The data analytics segment, however, is projected to witness the fastest CAGR, of 46.1%, from 2022 to 2031, as it helps telecom companies increase profitability by optimizing network usage and services.
Purchase Inquiry: https://www.alliedmarketresearch.com/purchase-enquiry/9717
Asia-Pacific to take the lead in terms of revenue by 2031:
Based on region, North America was the largest market in 2021, capturing more than one-third of the global AI in telecommunication market. The growth in the region can be attributed to the infrastructure development and technology adoption in countries like the U.S. and Canada. However, the market in Asia-Pacific is expected to lead in terms of revenue and manifest the fastest CAGR of 45.7% during the forecast period, owing to the growing digital and economic transformation of the region.
Leading Market Players:
- Intel Corporation
- Nuance Communications, Inc.
- Infosys Limited
- ZTE Corporation
- IBM Corporation
- Google LLC
- Salesforce, Inc.
- Cisco Systems, Inc.
The report analyzes these key players of the global AI in telecommunication market. These players have adopted various strategies such as expansion, new product launches, partnerships, and others to increase their market penetration and strengthen their position in the industry. The report is helpful in determining the business performance, operating segments, product portfolio, and developments by every market player.
The case for and against AI in telecommunications; record quarter for AI venture funding and M&A deals
Many pundits believe that telcos will need AI-driven solutions. Some of the claimed benefits: enabling telcos to configure new offers and products in hours or days, failing fast/learning fast when 5G applications don’t gain market traction, servicing customers more effectively, and radically simplifying operations.
An AI-powered “decisioning engine” might help telcos take the correct action during every interaction in real time with customers, suppliers, and partners.
Proponents say that with AI-driven capabilities in place, telcos can:
Grow revenue through upsell and cross-sell of services: Telecom Providers (aka telcos or network operators) can increase average revenue per user (ARPU) by anticipating customer needs using real-time context, so they can make the right offer on the right channel when it is needed.
Accelerate subscriber growth: Net subscriber additions are critical to success. Key telecom industry partners can build customer interest in preferred channels, guide prospects to find the right bundle, and delight them with a flawless omni-channel experience.
Proactive digital customer service: By combining AI-driven decisioning with end-to-end automation, telcos can deliver proactive, personalized service across channels. This might give customers and agents a guided, intuitive experience that delivers the best outcomes for everyone seamlessly.
Resolve billing enquiries: To avoid costly calls to service centers and keep customers happy, telcos need to stay one step ahead. AI driven capabilities such as real-time monitoring and pattern detection can enable them to sense a potential billing issue, then send a proactive notification to the customer.
Guided service setup: To make a great first impression and reduce calls to the service center, AI can drive a self-serve guided setup for services like internet connectivity, making customers’ experience easy and frictionless. Step-by-step visual instructions can help customers get set up successfully, and troubleshooting tips allow them to easily navigate challenges along the way.
Intelligent automation: To increase network capacity, efficiently deploy new 5G and fiber networks, or simplify order fulfillment, telecoms providers can use AI in combination with robotics and end-to-end automation to streamline and digitize complex operations, keeping margins high and bringing value to customers fast. With intelligent automation and robotics, telecoms can:
Orchestrate, automate, and deliver customer orders: With a better connection between front and back offices, partners, and customers across all channels, telcos can optimize operations, reduce costs and boost customer satisfaction.
Build and deploy new networks faster: Telecoms providers can accelerate fiber and 5G mobile network rollout with intelligent automation. Case management, robotics, and low-code development capabilities can help them build out critical infrastructure more efficiently and faster at lower cost.
Automatically resolve network outages and events: Telcos can provide end-to-end visibility of complex processes and analyze live data related to business rules, costs, and other criteria. The most effective delivery methods, equipment, vendors, or contractors can be selected to address and resolve problems.
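As a hedged illustration of the "real-time monitoring and pattern detection" idea mentioned above for billing, a toy z-score check over a customer's recent bills could flag an outlier before it triggers a costly call to the service center. The threshold and the sample data are invented for illustration; production systems use far richer models:

```python
from statistics import mean, stdev

# Toy anomaly detector: flag the latest bill if it deviates from the
# customer's recent history by more than z_threshold standard deviations.
# The threshold and the sample data below are illustrative assumptions.

def is_billing_anomaly(history, latest, z_threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:                     # flat history: any change is notable
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

recent_bills = [48, 52, 50, 49, 51, 50, 47, 53, 50, 50]  # ~$50/month
print(is_billing_anomaly(recent_bills, 300))  # True: notify proactively
print(is_billing_anomaly(recent_bills, 51))   # False: normal variation
```

The design choice here is deliberately conservative: a high z-threshold keeps false positives low, so proactive notifications go out only for bills well outside the customer's normal pattern.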
However, the AI cheerleaders never talk about the shortcomings of this cyclically ultra-hyped technology. We call attention to the cover story of this month’s IEEE Spectrum (the flagship publication of the IEEE), “Why is AI so Dumb?” Here’s an excerpt:
AI has suffered numerous, sometimes deadly, failures. And the increasing ubiquity of AI means that failures can affect not just individuals but millions of people. Increasingly, the AI community is cataloging these failures with an eye toward monitoring the risks they may pose.
“There tends to be very little information for users to understand how these systems work and what it means to them,” says Charlie Pownall, founder of the AI, Algorithmic and Automation Incident & Controversy Repository.
“I think this directly impacts trust and confidence in these systems. There are lots of possible reasons why organizations are reluctant to get into the nitty-gritty of what exactly happened in an AI incident or controversy, not the least being potential legal exposure, but if looked at through the lens of trustworthiness, it’s in their best interest to do so.”
Part of the problem is that the neural network technology that drives many AI systems can break down in ways that remain a mystery to researchers.
“It’s unpredictable which problems artificial intelligence will be good at, because we don’t understand intelligence itself very well,” says computer scientist Dan Hendrycks at the University of California, Berkeley.
CB Insights: What you need to know about AI venture funding in Q3-2021:
- New record: $17.9B in global funding for AI startups across 841 deals in Q3-2021. This marks an 8% increase in funding and 43% increase in deals QoQ.
- At $50B, 2021 YTD funding has already surpassed 2020 levels by 55%, with 75% growth in mega-rounds YTD.
- The number of $100M+ mega-rounds has reached a record-high 138 in 2021 YTD.
- There were 45+ mega-deals in each of the first 3 quarters in 2021 — the highest quarterly numbers ever.
- 100+ AI acquisitions. Quarterly M&A deals have surpassed 100 for 2 consecutive quarters, putting total M&A exits at a record 253 in 2021 YTD.
- Annual IPOs and SPACs are also up this year. In Q3-2021, there were 3 SPACs and 8 IPOs.
- The largest M&A deal of Q3-2021 was PayPal’s acquisition of buy now, pay later startup Paidy for $2.7B — 370% bigger than the next largest deal. Paidy uses machine learning to determine consumer creditworthiness and underwrite transactions instantly.
- 43% QoQ increase in median US deal size. In Q3-2021, global markets saw strong QoQ growth in the median size of funding rounds: 43% in the US, 64% in Asia, and 67% in Europe.
- Across regions, median deal size was $7M, while average deal size reached a record $33M.
The Global AI in Telecommunication Market [1.] is estimated to be $1.2 Billion (B) in 2021 and is expected to reach $6.3B by 2026, growing at a CAGR of 38%, according to a report by Research and Markets.
For comparison, Valuates says the global AI in Telecommunication market size is projected to reach $14.99B by 2027, from $1.19B in 2020, at a CAGR of 42.6% during 2021-2027.
Note 1. Artificial Intelligence in Telecom includes handling large volumes of data using machine learning and analytics, automating detection and correction of failures in transmission, automating customer care services, and complementing Internet of Things (IoT), e-mail, voice call, and database storage services.
Key growth drivers for AI in telecom include the deployment of 5G mobile networks and the growing demand for effective and efficient network management solutions. The increasing number of AI-embedded smartphones and the growing adoption of AI solutions in various telecom applications are likely to further drive market growth.
- Increasing Adoption of AI for Various Applications in the Telecommunication Industry
- AI Can Be the Key to Self-Driving Telecommunication Networks
- Increased Need for Monitoring the Content Spread on Telecommunication Networks
- Growing Demand for Effective and Efficient Network Management Solutions
Telecom vendors commonly use AI for customer service applications, such as chatbots and virtual assistants, to address support requests for installation, maintenance, and troubleshooting, and telecom operators are adopting AI to improve the customer experience.
Other common uses of AI in Telecom include:
- Fraud detection and prevention
- Robotic process automation (RPA)
- Cloud-Based AI Offerings in the Telecommunication Industry
- Utilization of AI-Enabled Smartphones
Conversely, incompatibility between telecommunication systems and AI technology, which makes these solutions complex to integrate, is the major constraint on market growth. The lack of skilled expertise and individuals’ privacy and identity concerns are other factors hindering growth.
by Harikrishna Kundariya, CEO at eSparkbiz Technologies
Artificial Intelligence (AI) is a technology with the potential to shape our future, and today almost all business verticals are utilizing it in one way or another. AI is a large field with much still to be researched, but it has already been ground-breaking for many industries. New research findings emerge daily, most of them showing how AI can help businesses improve operations and become more productive.
AI is a black box for some, whereas it is a portal to unlocking great potential for others. Most businesses have started adopting AI as much as they can. It is predicted that by the end of 2023, companies will spend $10.83 billion on AI and automation.
Considering AI’s involvement in every business sector, the telecom industry isn’t far behind. Telecom companies are conducting their own research on AI to improve their business models. Using AI, telecom companies can make more accurate decisions; with reliable predictions from AI systems, they can gauge the impact of a decision before implementing it in real life. These predictive capabilities give telecom companies an edge over the competition.
To stay competitive, businesses try to keep up with market standards and trends, the widespread changes that companies follow because they deliver real benefits.
Here are some trends that are up and coming in the telecom industry.
Improve telecom network maintenance:
Telecom network maintenance is essential. When a network goes down, it is not only the users who suffer; the telecom company suffers a more significant loss. An outage signals carelessness toward customers and services, the business loses money during the breakdown, and if there is a significant fault, rectifying it quickly is costly too.
Hence, AI is being used to overcome this problem. With AI, telecom companies can quickly identify the point of failure, which is where most network maintenance time is otherwise spent. Telecom companies are also leveraging IoT alongside AI for maintenance.
Companies are working to develop context-aware AI systems. Such systems can quickly assess their own state and surroundings, and they follow the observe-orient-decide-act (OODA) model to make decisions.
Using AI, downtime can be minimized, and maintenance work can be carried out faster by combining context-aware systems with IoT data.
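The observe-orient-decide-act cycle mentioned above can be sketched as a simple control loop. This is only an illustration: the KPI names, thresholds, and actions below are invented, not taken from any vendor’s product.

```python
# Illustrative OODA (observe-orient-decide-act) loop for network maintenance.
# All metric names, thresholds, and actions are hypothetical examples.

def observe(cell):
    """Collect current KPIs for a cell site (stubbed with stored values)."""
    return {"packet_loss": cell["packet_loss"], "latency_ms": cell["latency_ms"]}

def orient(kpis):
    """Classify the network state from observed KPIs."""
    if kpis["packet_loss"] > 0.05 or kpis["latency_ms"] > 200:
        return "degraded"
    return "healthy"

def decide(state):
    """Choose an action for the diagnosed state."""
    return "dispatch_maintenance" if state == "degraded" else "no_action"

def act(action, cell):
    """Apply the chosen action (here, just record it on the cell)."""
    cell["last_action"] = action
    return action

def ooda_step(cell):
    return act(decide(orient(observe(cell))), cell)

cell = {"packet_loss": 0.08, "latency_ms": 150}
print(ooda_step(cell))  # -> dispatch_maintenance
```

In a real context-aware system, `observe` would pull live telemetry and `orient` would be a learned model rather than fixed thresholds; the loop structure is the part this sketch is meant to show.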
Many companies are carrying out network maintenance with the help of drones. Comarch is one such company that creates solutions for telecom network maintenance with the help of AI-enabled drones.
Optimize network performance:
Network performance is vital if you want to stay in the market. No user prefers a slow network, and if your cell towers perform poorly, you’ll have difficulty adding new customers as well as retaining current ones.
There are many solutions for optimizing network performance. With the advent of AI, telecom service providers are using AI to optimize their networks.
One of the most common approaches to network optimization is predicting network traffic and usage from past conditions. AI can identify trends in historical data, and those trends can then inform strategies to serve customers better.
Telecom service providers build intelligent AI and ML systems that can accurately predict network traffic for any region, and companies use those predictions to optimize network performance. Service providers already hold usage data for each area, so they can readily put this data to work.
Network performance can be optimized by increasing a tower’s capacity and range during certain peak hours when the area has high usage. Also, it can be decreased at a later stage to accommodate lower traffic levels.
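As a rough sketch of the idea above, predicting a cell’s hourly load from past usage and flagging the hours that need a capacity boost might look like the following. The data, units, and capacity threshold are invented for illustration; production systems use far richer models than a per-hour average.

```python
# Toy hourly traffic forecast: average the same hour across past days,
# then flag hours whose predicted load exceeds a capacity threshold.
# Data and threshold are invented for illustration only.

past_days = [  # load per hour (arbitrary units); 4 hours shown for brevity
    [20, 55, 80, 30],
    [22, 60, 85, 28],
    [18, 58, 90, 32],
]

def predict_hourly_load(history):
    """Forecast each hour's load as the mean of that hour across past days."""
    hours = len(history[0])
    return [sum(day[h] for day in history) / len(history) for h in range(hours)]

def peak_hours(prediction, capacity=70):
    """Hours where predicted load exceeds capacity -> boost the tower there."""
    return [h for h, load in enumerate(prediction) if load > capacity]

forecast = predict_hourly_load(past_days)
print(peak_hours(forecast))  # -> [2]: boost capacity during hour 2, relax later
```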
Using AI, network performance can be adjusted almost like a remote-controlled device, and service providers are embracing this capability; hence AI is being used extensively. Companies like AT&T and other telecom leaders use self-organizing network technologies, which have AI at their core and can work effectively under heavy traffic conditions.
Taking network performance a step further, Intel and Capgemini have joined forces to develop a one-of-a-kind solution and are already working on increasing the 5G spectrum’s capacity. Their Project Marconi aims to boost customers’ network experience using real-time predictive analytics. With this AI solution, every cell tower can handle more traffic than before, ultimately delivering better network performance under a heavy customer base.
Improve network security/authentication:
Security is a big concern in the telecom industry. Tower hijacking, wiretapping, and call-forwarding fraud pose severe risks to the telecom business. To protect user data from theft and cyberattacks, telecom service providers are adopting new and unique techniques, many of which have AI at their base.
AI can be used both to authenticate users and to secure towers. The chances of fraud are highest when users sign up for a new connection: applicants can submit fake addresses, proofs, images, and other documents, and identifying these manually is nearly impossible. Hence, telecom companies are using AI to authenticate new users.
AI systems are being trained to spot fake documents. Fake documents have certain well-known characteristics, and AI systems are trained to identify them. Once a system reaches a certain confidence level, it is used in everyday authentication work to ensure that no impostor is served.
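The confidence-gating idea described above can be sketched as a small routing rule: a model scores how likely a document is fake, and only high-confidence scores are acted on automatically. The scores and threshold here are hypothetical, standing in for a real document-forgery model.

```python
# Hypothetical document check: a model returns a probability that a
# submitted document is fake. Only act automatically when the model is
# confident either way; otherwise route the case to a human reviewer.

def route_document(fake_probability, threshold=0.9):
    """Decide what to do with a document given the model's fake-probability."""
    if fake_probability >= threshold:
        return "auto_reject"          # confidently fake
    if fake_probability <= 1 - threshold:
        return "auto_approve"         # confidently genuine
    return "human_review"             # model unsure: do not auto-decide

print(route_document(0.97))  # auto_reject
print(route_document(0.03))  # auto_approve
print(route_document(0.55))  # human_review
```

Keeping a human-review band like this is a common way to deploy a classifier before it is trusted for every case.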
Towers can be secured using preventive AI technologies. These models are trained to check the towers for defects periodically, and some systems even simulate attacks on the towers to test that security procedures hold up. With AI, telecom companies can easily find towers in need of attention by constantly monitoring them and reporting even a slight change in a tower’s characteristics.
End-user data protection is important because hackers are more active today than ever before, and they target places like telephone companies’ databases, where a lot of personally identifiable data can be obtained at once.
Many telecom service providers in the US already use Cujo.ai’s network security solutions. Companies like Verizon, AT&T, and Charter Communications rely on AI services from Cujo.ai to secure their networks.
Cujo.ai has a unique offering named Sentry that can process large datasets in seconds. This well-developed AI system can decide on its own whether there is a security issue, and because it is trained heavily on real-world data, it can readily detect and act on unauthorized activity over a telecom network.
As AI is leveraged, the need for better standards increases. Hence, many telecom service providers use end-to-end encryption and other newly created security protocols and encryption standards. With suitable security systems in place, data is far better protected from interference.
Many trends are visible within the telecom industry, and AI underpins most of them. The industry is being modernized at a large scale, so companies are trying to include AI in their business models wherever possible. The three major trends above are the ones now becoming benchmarks for the telecom industry.
About Harikrishna Kundariya:
Mr. Harikrishna Kundariya is a serial entrepreneur who has led eSparkBiz since 2010. Under his leadership the company has built its reputation as an excellent offshore development company. He values building relationships with clients rather than just focusing on the business at hand.
Nokia and Vodafone have partnered to jointly develop a new machine learning (ML) system designed to detect and remediate network anomalies before they impact customers. Based on Nokia’s Bell Labs algorithm, the Anomaly Detection Service product runs on Google Cloud and is already being rolled out across Vodafone’s pan-European network.
In a joint statement, the partners said the ML system quickly detects and troubleshoots irregularities, such as mobile site congestion and interference, as well as unexpected latency, that may have an impact on customer service quality. Following an initial deployment in Italy on more than 60,000 LTE cells, Vodafone said it will be extending the service to all its European markets by early 2022, and there are plans to eventually apply it on the company’s 5G and core networks.
Vodafone added that it expects around 80 percent of its anomalous mobile network issues and capacity demands to be automatically detected and addressed using the Anomaly Detection Service.
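The joint statement does not describe the algorithm internals, but a minimal illustration of KPI anomaly detection on a cell might use a z-score over recent history, flagging samples that deviate sharply from the norm. All numbers below are invented; the actual Nokia/Vodafone service is far more sophisticated.

```python
import statistics

# Toy anomaly detector: flag a KPI sample that deviates from its recent
# history by more than 3 standard deviations. Numbers are invented; this
# only illustrates the idea of statistical anomaly detection on cell KPIs.

def is_anomalous(history, sample, z_threshold=3.0):
    """Return True if the sample is a statistical outlier vs. history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return sample != mean
    return abs(sample - mean) / stdev > z_threshold

latency_history = [20, 22, 19, 21, 20, 23, 21, 20]  # ms, recent samples
print(is_anomalous(latency_history, 21))   # False: within normal range
print(is_anomalous(latency_history, 120))  # True: possible congestion/interference
```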
Vodafone’s deal with Nokia signed last year complements its recent six-year agreement with Google Cloud to jointly build integrated cloud-based capabilities backed by hubs of networking and software engineering expertise.
The platform, called ‘Nucleus’, will house a new system, ‘Dynamo’, which will drive data throughout Vodafone to enable it to more quickly offer its customers new, personalized products and services across multiple markets. Dynamo is expected to help Vodafone tailor new connectivity services for homes and businesses through the release of new features such as providing a sudden broadband speed boost.
Capable of processing around 50 TB of data per day, Nucleus and Dynamo are considered “industry firsts”. Being built in-house by Vodafone and Google Cloud specialist teams, the project involves up to 1,000 employees of both companies located in Spain, the UK and the US.
Vodafone said it has already identified more than 700 use-cases to deliver new products and services quickly across its markets, support fact-based decision-making, reduce costs, remove duplication of data sources, and simplify and centralize operations.
Johan Wibergh, Chief Technology Officer, Vodafone, said: “We are building an automated and programmable network that can respond quickly to our customers’ needs. As we extend 5G across Europe, it is important to match the speed and responsiveness of this new technology with a great service. With machine learning, we can ensure a consistently high-quality performance that is as smart as the technology behind it.”
Amol Phadke, Managing Director, Telecom Industry Solutions, Google Cloud, said:
“We are thrilled to partner with Nokia and Vodafone to deliver a data- and AI-driven solution that scales quickly and leverages automation to increase cost efficiency and ensures seamless customer experiences across Europe. As behaviors change and the data needed for analysis increases in velocity, volume, and complexity, automation and a cloud-based data platform are now key in making fast and informed decisions.”
Anil Rao, Research Director, Analysys Mason, said: “Vodafone’s anomaly detection use case, developed in partnership with Nokia and run on Google Cloud, automates root-cause analysis for efficient network planning, optimization, and operations. This type of partnership provides a new opportunity for operators to rethink data management and increase the focus on use cases and application development.”
Raghav Sahgal, President of Cloud and Network Services, Nokia, said: “This first commercial deployment of Anomaly Detection Service with Vodafone on Google Cloud provides a great boost to customer service. It not only addresses the critical need to quickly detect and remedy anomalies impacting network performance using machine learning-based algorithms, but it also highlights Nokia’s technology leadership and the deep technical expertise of Nokia Bell Labs.”
Vodafone said it will convert its entire SAP environment to Google Cloud, including the migration of its core SAP workloads and key corporate SAP modules such as SAP Central Finance.
At MWC today Intel and Capgemini Engineering unveiled the industry’s first Machine Learning-based RAN application to boost 5G spectrum capacity. Capgemini says their solution gives mobile network operators a significant advantage to monetize 5G services faster. Entitled “Project Marconi,” it conforms to O-RAN (Open Radio Access Network) guidelines to maximize spectrum efficiency. The solution intelligently boosts subscriber quality of experience (QoE) with real-time predictive analytics.
Project Marconi is the industry’s first Artificial Intelligence / Machine Learning (AI/ML)-based radio network application for the 5G Medium Access Control (MAC) scheduler, optimized with Intel AI software and 3rd Gen Intel Xeon Scalable processors.
Network providers globally have invested heavily in spectrum and are looking for solutions to deploy and monetize 5G services faster. According to the Global Mobile Suppliers Association, the total value of spectrum auctions reached over $27 billion in 2020.
Capgemini’s application (running on Intel Architecture) increases the amount of traffic each cell can handle. It allows operators to serve more subscribers and deliver an outstanding experience, while launching new Industry 4.0 services such as enhanced Mobile Broadband (eMBB) and Ultra Reliable Low Latency Communications (URLLC) use cases.
Walid Negm, Chief Research and Innovation Officer at Capgemini Engineering said: “Our teams worked closely with Intel to create a truly innovative solution that can really move the needle for operators. We gathered and utilized over one terabyte of data and conducted countless test runs with NetAnticipate5G to fine-tune the predictive analytics to meet diverse operator requirements. In short, machine learning can be deployed for intelligent decision-making on the RAN without any additional hardware requirement. This makes it cost efficient in the short run and future proof in the long run as we move into Cloud Native RAN implementations.”
Cristina Rodriguez, VP of Wireless Access Network Division at Intel said: “Our 3rd Gen Intel Xeon Scalable processors with built-in AI acceleration provide high performance for deep learning on the Net Anticipate 5G platform. Together, our collaboration delivered ultra-fast inference data to enhance the Open-Source ML libraries resulting in an intelligent RAN that can predict and quickly react to subscriber coverage requirements while reducing TCO.”
Capgemini deployed its NetAnticipate5G and RATIO O-RAN platforms to introduce advanced AI/ML techniques. The AI-powered predictive analytics solution accurately forecasts user signal quality and mobility patterns and assigns the appropriate MCS (modulation and coding scheme) values for signal transmission. In this way, the RAN can intelligently schedule MAC resources, achieving up to 40% more accurate MCS prediction and up to 15% better spectrum efficiency in case studies and testing. As a result, it delivers faster data speeds, better and more consistent QoE for subscribers, and robust coverage for use cases that rely on low-latency connectivity, such as robotics-based manufacturing and V2X (vehicle-to-everything).
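The idea of forecasting link quality and mapping it to an MCS can be illustrated with a toy lookup. The SINR thresholds, MCS indices, and naive moving-average forecast below are invented for illustration; they are not the 3GPP MCS tables or Capgemini’s actual model.

```python
# Toy MCS selection: forecast next-interval SINR with a simple moving
# average, then map it to an MCS index via a lookup table.
# Thresholds and indices are invented for illustration only.

MCS_TABLE = [  # (minimum SINR in dB, MCS index), highest first
    (22, 27),
    (15, 20),
    (8, 12),
    (0, 5),
]

def predict_sinr(recent_sinr_db):
    """Naive forecast: mean of the last few SINR measurements."""
    return sum(recent_sinr_db) / len(recent_sinr_db)

def select_mcs(sinr_db):
    """Pick the most aggressive MCS whose SINR floor is met."""
    for min_sinr, mcs in MCS_TABLE:
        if sinr_db >= min_sinr:
            return mcs
    return 0  # most robust fallback for very poor links

history = [16.0, 17.5, 18.5]  # dB
predicted = predict_sinr(history)
print(select_mcs(predicted))  # ~17.3 dB predicted -> MCS 20
```

The point of predicting rather than reacting is that the scheduler can pick the MCS for conditions the user is about to experience, which is where the claimed accuracy and spectrum-efficiency gains come from.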
More information can be found on Capgemini’s website.
Last week, Capgemini Research Institute released a report titled, “Accelerating the 5G Industrial Revolution: State of 5G and edge in industrial operations” stating that industrial 5G adoption is still at the ideation and planning stages, with only 30% of industrial organizations having moved to the pilot stage or beyond. This means there is a huge window of opportunity for telcos and those industrial organizations that are yet to make a move.
Signaling a paradigm shift, 40% of industrial organizations surveyed expect to roll out 5G at scale at a single site within two years, and the experience of early adopters could persuade others to make the move. 5G trials and early implementations are delivering strong business benefits: 60% of early adopters say that 5G has helped realize higher operational efficiency, and 43% say they have experienced increased flexibility.
The study also found that industrial organizations are optimistic that 5G will drive revenues by enabling the introduction of new products, services, and business models. In fact, 51% of industrial organizations plan to leverage 5G to offer new products, and 60% plan to offer new services enabled by 5G.
Furthermore, industrial organizations are aware of the role of edge computing in their 5G initiatives and view it as essential to realizing the full potential of 5G. 64% of organizations plan to adopt 5G-based edge computing services within three years, driven by the increased performance, reliability, data security and privacy it offers. More than a third of industrial organizations across sectors surveyed prefer to deploy private 5G networks, with interest in private 5G networks led by the semiconductor and high-tech sector (50%), followed by aerospace and defense (46%).
“Industrial 5G is a key catalyst in unlocking the potential of intelligent industry and accelerating data-driven digital transformation,” comments Fotis Karonis, Group Leader of 5G and Edge Computing at Capgemini. “Enterprises need to take advantage of the benefits of 5G by engaging with the ecosystem to tap into the shared expertise and co-create innovative, sustainable solutions for tomorrow. An element of iteration is required, but organizations should seek to leverage the 5G ecosystem to jointly test solutions and progress with full-scale 5G adoption, fine-tuning the approach as the ecosystem evolves.”
Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The Group is guided every day by its purpose of unleashing human energy through technology for an inclusive and sustainable future. It is a responsible and diverse organization of 270,000 team members in nearly 50 countries. With its strong 50-year heritage and deep industry expertise, Capgemini is trusted by its clients to address the entire breadth of their business needs, from strategy and design to operations, fueled by the fast evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering and platforms. The Group reported 2020 global revenues of €16 billion.
Network quality driven by significant investments in 5G and fiber:
AT&T believes that its recent and anticipated network investments will bolster its network foundation to compete as the need for high-quality connectivity only continues to increase. At a Morgan Stanley European Investor Conference, AT&T CFO John Stephens indicated that AT&T’s integrated fiber strategy is expected to improve the company’s connectivity offering for both consumer and enterprise markets and enhance its 5G network quality in a cost-efficient manner.
AT&T CTO Andre Fuetsch said: “Obviously what happened was everyone basically started working, started schooling from home, and all of a sudden we had to readjust our lives to work from home, learn from home, and all of a sudden we had to adapt very quickly to that. Within our homes, we had to have these different personas that we normally don’t do — whether it’s doing your day job, performing that duty, helping your children get online so they can do their schooling, and then all the other things in life. That was a blurring, in a way, of these sort of enterprise and consumer segments coming together.”
“All of this technology is great, but at the end of it, we are humans and anything we can do to help facilitate [and] build better, stronger human connections” will benefit society at large, Fuetsch added. “This year we’re really getting pushed and challenged to do that. I really think this type of technology is just going to make things better.”
Artificial Intelligence (AI) Improves Operations:
Some of these technologies, like Artificial Intelligence (AI), are already helping AT&T improve its operations, especially among its field technicians, he said, noting that AT&T’s entire routing and scheduling program relies heavily on AI.
“Any given day we have 35,000 network technicians driving around in trucks installing, and repairing, and maintaining our network. It’s essentially a very complex logistics algorithm and, as you can imagine with a company of our scale, just a single percentage improvement in efficiencies can lead to big, big dollars,” Fuetsch said.
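The logistics problem Fuetsch describes can be hinted at with a tiny greedy dispatcher. This is a deliberately simplified sketch: real routing uses maps, time windows, skills, and large-scale optimization, and nothing here reflects AT&T’s actual system.

```python
# Toy greedy dispatch: assign each job to the nearest available technician.
# Positions are 1-D coordinates for simplicity; every name and number here
# is invented, and real systems optimize far more than distance.

def dispatch(technicians, jobs):
    """technicians/jobs: dicts of name -> position. Returns job -> technician."""
    available = dict(technicians)  # copy so we can remove assigned techs
    assignment = {}
    for job, pos in jobs.items():
        if not available:
            break  # more jobs than technicians: leave the rest unassigned
        tech = min(available, key=lambda t: abs(available[t] - pos))
        assignment[job] = tech
        del available[tech]
    return assignment

techs = {"t1": 0, "t2": 10, "t3": 25}
jobs = {"repair_a": 9, "install_b": 2, "repair_c": 30}
print(dispatch(techs, jobs))  # repair_a -> t2, install_b -> t1, repair_c -> t3
```

Even this naive version shows why small algorithmic improvements matter at scale: shaving a little distance per assignment compounds across tens of thousands of daily dispatches.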
AT&T is also trialing the use of drones with computer vision analytics to help improve inspections of its roughly 70,000 cell sites. When those drones take flight, they are scanning towers, looking for excessive heat dissipation, corrosion, loose cables, and bird nests, among other signs that indicate a required repair.
“All of this is getting fed back into a neural network, which is basically AI based,” and that program identifies the repair checklist, the technician and skill sets required, and the parts needed to remedy the problem, Fuetsch said.
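The feedback loop Fuetsch describes, detected defects turning into a repair checklist with the right technician skills and parts, can be sketched as a simple mapping. The defect categories, skills, and parts lists below are hypothetical placeholders for whatever the real inspection model produces.

```python
# Hypothetical mapping from drone-detected tower defects to a repair plan.
# Categories, skills, and parts are invented for illustration only.

REPAIR_PLAYBOOK = {
    "corrosion":    {"skill": "structural", "parts": ["sealant", "bracket"]},
    "loose_cable":  {"skill": "rigging",    "parts": ["cable_clamp"]},
    "heat_anomaly": {"skill": "rf",         "parts": ["amplifier_module"]},
    "bird_nest":    {"skill": "general",    "parts": []},
}

def build_repair_plan(detections):
    """Turn a list of detected defects into a checklist with skills and parts."""
    plan = {"checklist": [], "skills": set(), "parts": []}
    for defect in detections:
        entry = REPAIR_PLAYBOOK.get(defect)
        if entry is None:
            continue  # unknown defect category: leave for manual triage
        plan["checklist"].append(defect)
        plan["skills"].add(entry["skill"])
        plan["parts"].extend(entry["parts"])
    return plan

plan = build_repair_plan(["loose_cable", "corrosion"])
print(plan["checklist"])       # ['loose_cable', 'corrosion']
print(sorted(plan["skills"]))  # ['rigging', 'structural']
```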
AT&T’s experiences here and elsewhere give him confidence that “the camera is still and will be the killer app” for the foreseeable future. However, the use of cameras is undergoing dramatic changes, he said.
“We carry about 400 petabytes a day across our network. About 50% of that traffic we carry is video traffic. Most of that is going out in a sort of downstream way. The future is going to be about upstream,” Fuetsch said.
Use of Video Cameras:
Fuetsch envisions new applications that “can help better manage our lives through a simple video camera” with the aid of video analytics and sensing. These advancements are occurring not just despite the scourge of COVID-19, but rather because of it in some ways as well, he said.
“This pandemic has really created some new norms here. I think the good news for operators is connectivity is so important and so relevant for everything we do. As we go into 2021, certainly with hopefully a light at the end of the tunnel here in terms of the pandemic with the latest news we’re hearing about vaccines, I’m actually very optimistic.”
ITU-T Study Group 13 Focus Group on Machine Learning for Future Networks including 5G (FG ML5G) has accomplished its mission. The FG ML5G was active from January 2018 until July 2020.
During its lifetime, FG ML5G delivered ten technical specifications. Four of those specifications have already been approved by ITU-T SG13 and published by ITU-T. Six further technical specifications are being considered by ITU-T SG13. These ten technical specifications are publicly available free of charge. Please refer to the ITU-T FG ML5G webpage to download the documents. [All ITU-T Focus Group publications are available for download at the ITU-T Focus Group webpage]
Deliverables processed by ITU-T SG13 and published by ITU-T are:
ITU-T Y-series Supplement 55: “Machine learning in future networks including IMT-2020: use cases”
This Supplement describes use cases of machine learning in future networks including IMT-2020. For each use case description, along with the benefits of the use case, the most relevant possible requirements related to the use case are provided. Classification of the use cases into categories is also provided.
ITU-T Y.3172: Architectural framework for machine learning in future networks including IMT-2020
ITU-T Y.3172 specifies an architectural framework for machine learning (ML) in future networks including IMT-2020. A set of architectural requirements and specific architectural components needed to satisfy these requirements are presented. These components include, but are not limited to, an ML pipeline as well as ML management and orchestration functionalities. The integration of such components into future networks including IMT-2020 and guidelines for applying this architectural framework in a variety of technology-specific underlying networks are also described.
ITU-T Y.3173: Framework for evaluating intelligence levels of future networks including IMT-2020
ITU-T Y.3173 specifies a framework for evaluating the intelligence of future networks including IMT-2020, and a method for evaluating the intelligence levels of future networks including IMT-2020 is introduced. An architectural view for evaluating network intelligence levels is also described according to the architectural framework specified in Recommendation ITU-T Y.3172.
In addition, the relationship between the framework described in this Recommendation and corresponding work in other standards or industry bodies, as well as the application of the method for evaluating network intelligence levels on several representative use cases are also provided.
ITU-T Y.3174: Framework for data handling to enable machine learning in future networks including IMT-2020
ITU-T Y.3174 describes a framework for data handling to enable machine learning in future networks including International Mobile Telecommunications (IMT)-2020. The requirements for data collection and processing mechanisms in various usage scenarios for machine learning in future networks including IMT-2020 are identified along with the requirements for applying machine learning output in the machine learning underlay network. Based on this, a generic framework for data handling and examples of its realization on specific underlying networks are described.
This document is at an advanced stage in ITU-T SG13:
Draft Recommendation ITU-T Y.3176: “ML marketplace integration in future networks including IMT-2020”
This document is a draft Recommendation under study by Q20 of SG13. This draft Recommendation provides the architecture for integration of ML marketplace in future networks including IMT-2020. The scope of this draft Recommendation includes: – Challenges and motivations for ML marketplace integration – High level requirements of ML marketplace integration – Architecture for integration of ML marketplace in networks.
The July 2020 ITU-T SG13 meeting started the approval process for this draft new Recommendation, which is largely based on the output of the FG ML5G.
Deliverables which FG ML5G submitted to ITU-T SG13 for consideration:
FG ML5G specification on the machine learning function orchestrator (MLFO):
This technical specification discusses the requirements for machine learning function orchestrator (MLFO). These requirements are derived from the use cases for machine learning in future networks including IMT-2020. Based on these requirements, an architecture and design for the machine learning function orchestrator is described.
FG ML5G specification: “Serving framework for ML models in future networks including IMT-2020”
This specification describes a serving framework for ML models in future networks including IMT-2020. The specification includes requirements and architecture components for such a framework.
FG ML5G specification: “Machine Learning Sandbox for future networks including IMT-2020: requirements and architecture framework”
Use cases for integrating machine learning (ML) into future networks including IMT-2020 have been documented in Supplement 55, and an architecture framework for this integration was specified in ITU-T Y.3172. However, network stakeholders are apprehensive about using ML-driven approaches directly in live networking systems because doing so can lead to unexpected situations that degrade KPIs. This is mostly due to the apparent complexity of ML mechanisms (e.g., deep learning), the incompleteness of available training data, the uncertainty produced by exploration-exploitation approaches (e.g., reinforcement learning), etc. In the face of such impediments, the ML Sandbox emerges as a potential solution that allows mobile network operators (MNOs) to improve their degree of confidence in ML solutions before applying them to the network infrastructure. This technical specification deals with the requirements, architecture, and implementation examples for the ML Sandbox in future networks including IMT-2020.
FG ML5G specification: “Machine learning based end-to-end network slice management and orchestration”
This document proposes the framework and requirements of machine learning based end-to-end network slice management and orchestration in multi-domain environments.
FG ML5G specification: “Vertical-assisted Network Slicing Based on a Cognitive Framework”
This technical specification proposes a new framework that enables vertical QoE-aware network slice management empowered by machine learning technologies.
The activities of the FG ML5G were concluded and its mandate accomplished. SG13 closed the FG ML5G while recognizing its chairman, Prof. Dr. Slawomir Stanczak (Fraunhofer HHI, Germany), his management team, active contributors, and all the FG members.