Bain & Co, McKinsey & Co, AWS suggest how telcos can use and adapt Generative AI

Uncertainty around generative Artificial Intelligence (AI) is especially challenging for the telecommunications industry, which has a history of very slow adaptation to change and now faces heavy pressure to adopt generative AI in its services and infrastructure.  Indeed, a Deutsche Telekom executive stated that AI poses massive challenges for the telecom industry in this IEEE Techblog post.

Consulting firm Bain & Co. highlighted that inertia in a recent brief titled “Telcos, Stop Debating Generative AI and Just Get Going.”  Three partners stated that network operators need to act fast in order to seize this opportunity. “Speedy action trumps perfect planning here,” Herbert Blum, Jeff Katzin and Velu Sinha wrote in the brief.  “It’s more important for telcos to quickly launch an initial set of generative AI applications that fit the company’s strategy, and do so in a responsible way – or risk missing a window of opportunity in this fast-evolving sector.”

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Telcos can pursue generative AI applications across business functions, starting with knowledge management, as illustrated by the sketch below.
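To make the knowledge-management starting point concrete, here is a minimal, hedged sketch of how a telco might index internal documentation and retrieve relevant passages to ground a generative AI assistant. It uses scikit-learn's TF-IDF vectorizer purely for illustration; the document snippets and the answer_with_llm stub are hypothetical placeholders, not part of any vendor product described in this post.

```python
# Minimal retrieval sketch for telco knowledge management (illustrative only).
# Assumes scikit-learn is installed; the documents and the LLM stub are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical internal knowledge-base snippets (e.g., install guides, runbooks).
documents = [
    "Router installation: mount the chassis, connect power, then attach uplink fiber.",
    "Base station commissioning checklist: verify backhaul, GPS sync, and antenna tilt.",
    "Billing dispute process: open a ticket, attach the invoice, escalate after 48 hours.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, top_k: int = 2) -> list:
    """Return the top_k most relevant knowledge-base snippets for a query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]

def answer_with_llm(query: str, context: list) -> str:
    """Hypothetical stub: a real system would send this grounded prompt to a generative model."""
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"
    return prompt  # placeholder: return the prompt instead of calling a model

if __name__ == "__main__":
    question = "How do I commission a new base station?"
    print(answer_with_llm(question, retrieve(question)))
```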

 

Separately, a McKinsey & Co. report argued that AI has sharpened the priorities of business leaders. The consulting firm cited organizations whose top executives champion the organization’s AI initiatives, including the need to fund those programs. This contrasts with organizations that lack a clear directive on their AI plans, which results in wasted spending and stalled development. “Reaching this state of AI maturity is no easy task, but it is certainly within the reach of telcos,” the firm noted. “Indeed, with all the pressures they face, embracing large-scale deployment of AI and transitioning to being AI-native organizations could be key to driving growth and renewal. Telcos that are starting to recognize this is non-negotiable are scaling AI investments as the business impact generated by the technology materializes.”

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Ishwar Parulkar, chief technologist for the telco industry at AWS, outlined several areas of generative AI that should interest telecom operators. The first few were familiar ones tied to improving the customer experience, including building on machine learning (ML) to improve that interaction and potentially reduce customer churn.

“We have worked with some leading customers and implemented this in production where they can take customer voice calls, translate that to text, do sentiment analysis on it … and then feed that into reducing customer churn,” Parulkar said. “That goes up another notch with generative AI, where you can have chat bots and more interactive types of interfaces for customers as well as for customer care agent systems in a call. So that just goes up another notch of generative AI.”
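Below is a minimal sketch of the call-center pipeline Parulkar describes: transcribed call text is scored for sentiment and fed into a simple churn-risk flag. The keyword-based scorer and the thresholds are illustrative assumptions, not the production system AWS built with its customers; a real deployment would use a speech-to-text service and a trained sentiment model.

```python
# Illustrative churn-risk scoring from call transcripts (not AWS's production pipeline).
from dataclasses import dataclass

NEGATIVE_TERMS = {"cancel", "frustrated", "outage", "overcharged", "switching", "slow"}
POSITIVE_TERMS = {"thanks", "resolved", "great", "helpful", "satisfied"}

@dataclass
class CallRecord:
    customer_id: str
    transcript: str  # output of a speech-to-text step, assumed done upstream

def sentiment_score(transcript: str) -> float:
    """Crude lexicon-based sentiment in [-1, 1]; a real system would use an ML model."""
    words = transcript.lower().split()
    neg = sum(w.strip(".,!?") in NEGATIVE_TERMS for w in words)
    pos = sum(w.strip(".,!?") in POSITIVE_TERMS for w in words)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

def churn_risk(call: CallRecord, threshold: float = -0.3) -> bool:
    """Flag the customer for a retention offer when sentiment is strongly negative."""
    return sentiment_score(call.transcript) <= threshold

if __name__ == "__main__":
    call = CallRecord("cust-123", "I am frustrated, the outage lasted days and I may be switching.")
    print(churn_risk(call))  # True -> route to a retention workflow
```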

The next step is using generative AI to help operators bolster their business operations and systems. This is for things like revenue assurance and finding revenue leakage, items that Parulkar noted were in a “more established space in terms of what machine learning can do.”

However, Parulkar said the bigger opportunity lies in helping operators better design and manage network operations. This area remains the most immature, but it is the one Parulkar is “most excited about.”  It can begin at the planning and installation phase, for example by helping technicians as they install physical equipment.

“In installation of network equipment today, you have technicians who go through manuals and have procedures to install routers and base stations and connect links and fibers,” Parulkar said. “That all can be now made interactive [using] chat bot, natural language kind of framework. You can have a lot of this documentation, training data that can train foundational models that can create that type of an interface, improves productivity, makes it easier to target specific problems very quickly in terms of what you want to deploy.”

Generative AI can also help with network configuration, using large datasets to automatically generate configurations for routers, VPNs and MPLS circuits in support of network performance (a hedged sketch follows below).
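As a hedged illustration of the configuration-generation idea, the sketch below builds a natural-language prompt from a structured intent and sends it to a foundation model on Amazon Bedrock via boto3. The bedrock-runtime client and the invoke_model call are real boto3 interfaces, but the model ID and the request/response JSON fields are assumptions that vary by model, and any generated configuration would still need human validation before touching a live router.

```python
# Sketch: generate a candidate router configuration from a structured intent (assumptions noted).
import boto3
import json

def build_prompt(intent: dict) -> str:
    """Turn a structured networking intent into an instruction for a foundation model."""
    return (
        "You are a network engineer assistant. Produce a candidate configuration only.\n"
        f"Device: {intent['device']}\n"
        f"Task: {intent['task']}\n"
        f"Constraints: {intent['constraints']}\n"
    )

def generate_config(intent: dict, model_id: str = "example-model-id") -> str:
    # bedrock-runtime and invoke_model are real boto3 APIs; the model_id and the
    # request/response payload fields below are placeholders that differ per model.
    client = boto3.client("bedrock-runtime")
    body = json.dumps({"prompt": build_prompt(intent), "max_tokens": 512})
    response = client.invoke_model(modelId=model_id, body=body,
                                   contentType="application/json",
                                   accept="application/json")
    payload = json.loads(response["body"].read())
    return payload.get("completion", "")  # field name is model-specific (assumption)

if __name__ == "__main__":
    intent = {
        "device": "edge router R7 (vendor CLI assumed)",
        "task": "configure an MPLS L3VPN for customer ACME on interface ge-0/0/1",
        "constraints": "VRF name ACME-VRF, route-target 65000:100, no internet breakout",
    }
    print(generate_config(intent))  # human review required before deployment
```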

The final area of support could be in the running of those networks once they are deployed. Parulkar cited functions like troubleshooting failures that can be supported by a generative AI model.

“There are recipes that operators go through to troubleshoot and triage failure,” Parulkar said. “A lot of times it’s [a] trial-and-error method that can be significantly improved in a more interactive, natural language, prompt-based system that guides you through troubleshooting and operating the network.”
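To show how a troubleshooting “recipe” could be made interactive, here is a minimal sketch that walks an operator through a fault-isolation decision tree step by step. The tree contents are invented examples; in the prompt-based system Parulkar describes, a generative model would propose the next question and interpret free-text answers instead of this hard-coded structure.

```python
# Illustrative interactive troubleshooting guide (hard-coded tree; contents are invented).
from dataclasses import dataclass

@dataclass
class Step:
    question: str
    yes: "Step | str | None" = None   # next step or final recommendation when answer is yes
    no: "Step | str | None" = None    # next step or final recommendation when answer is no

# A tiny fault-isolation recipe for a down link (example content only).
RECIPE = Step(
    question="Is the interface administratively up?",
    no="Enable the interface, then re-check link status.",
    yes=Step(
        question="Are optical light levels within vendor thresholds?",
        no="Clean or replace the fiber/optic; re-test light levels.",
        yes="Escalate: capture interface counters and open a ticket with transport.",
    ),
)

def run(step) -> None:
    """Walk the decision tree interactively until a recommendation is reached."""
    while isinstance(step, Step):
        answer = input(step.question + " [y/n] ").strip().lower()
        step = step.yes if answer.startswith("y") else step.no
    print("Recommended action:", step)

if __name__ == "__main__":
    run(RECIPE)
```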

This model could be especially compelling for operators as they integrate more routers to support disaggregated 5G network models for mobile edge computing (MEC), private networks and the use of millimeter-wave (mmWave) spectrum bands.

Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel this week also hinted at the ability for AI to help manage spectrum resources.

“For decades we have licensed large slices of our airwaves and come up with unlicensed policies for joint use in others,” Rosenworcel said during a speech at this week’s FCC and National Science Foundation Joint Workshop. “But this scheme is not truly dynamic. And as demands on our airwaves grow – as we move from a world of mobile phones to billions of devices in the internet of things (IoT)– we can take newfound cognitive abilities and teach our wireless devices to manage transmissions on their own. Smarter radios using AI can work with each other without a central authority dictating the best of use of spectrum in every environment. If that sounds far off, it’s not. Consider that a large wireless provider’s network can generate several million performance measurements every minute. And consider the insights that machine learning can provide to better understand network usage and support greater spectrum efficiency.”

While generative AI does have potential, Parulkar also left the door open for what he termed “traditional AI,” which he described as “supervised and unsupervised learning.”

“Those techniques still work for a lot of the parts in the network and we see a combination of these two,” Parulkar said. “For example, you might use anomaly detection for getting some insights into the things to look at and then followed by a generative AI system that will then give an output in a very interactive format and we see that in some of the use cases as well. I think this is a big area for telcos to explore and we’re having active conversations with multiple telcos and network vendors.”
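Here is a minimal sketch of the combination Parulkar describes: an unsupervised model (scikit-learn's IsolationForest, used here only for illustration) flags anomalous network KPI samples, and the flagged rows are packaged into a prompt that a generative model could summarize for an operator. The KPI fields and the summarize_with_llm stub are assumptions, not a vendor implementation.

```python
# Sketch: classic anomaly detection feeding a generative-AI explanation step (illustrative).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical KPI samples per cell site: [latency_ms, packet_loss_pct, throughput_mbps]
normal = rng.normal(loc=[20.0, 0.1, 300.0], scale=[3.0, 0.05, 25.0], size=(500, 3))
faulty = np.array([[95.0, 4.0, 40.0], [120.0, 6.5, 15.0]])  # injected anomalies
kpis = np.vstack([normal, faulty])

detector = IsolationForest(contamination=0.01, random_state=0).fit(kpis)
labels = detector.predict(kpis)          # -1 marks anomalous samples
anomalies = kpis[labels == -1]

def summarize_with_llm(rows: np.ndarray) -> str:
    """Hypothetical stub: build the prompt a generative model would turn into a narrative."""
    lines = [f"latency={r[0]:.0f} ms, loss={r[1]:.1f}%, throughput={r[2]:.0f} Mbps" for r in rows]
    return "Explain likely root causes for these anomalous KPI samples:\n" + "\n".join(lines)

print(f"{len(anomalies)} anomalous samples detected")
print(summarize_with_llm(anomalies))
```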

Parulkar’s comments come as AWS has been busy updating its generative AI platforms. One of the most recent was the launch of its $100 million Generative AI Innovation Center, which is targeted at helping guide businesses through the process of developing, building and deploying generative AI tools.

“Generative AI is one of those technological shifts that we are in the early stages of that will impact all organizations across the globe in some form of fashion,” Sri Elaprolu, senior leader of generative AI at AWS, told SDxCentral. “We have the goal of helping as many customers as we can, and as we need to, in accelerating their journey with generative AI.”

References:

https://www.sdxcentral.com/articles/analysis/aws-sees-a-role-for-generative-ai-in-the-telecom-space/2023/07/

https://www.bain.com/insights/telcos-stop-debating-generative-ai-and-just-get-going/

https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-ai-native-telco-radical-transformation-to-thrive-in-turbulent-times

Deutsche Telekom exec: AI poses massive challenges for telecom industry

Generative AI in telecom; ChatGPT as a manager? ChatGPT vs Google Search

Generative AI could put telecom jobs in jeopardy; compelling AI in telecom use cases

Generative AI Unicorns Rule the Startup Roost; OpenAI in the Spotlight

Forbes: Cloud is a huge challenge for enterprise networks; AI adds complexity

Qualcomm CEO: AI will become pervasive, at the edge, and run on Snapdragon SoC devices

Bloomberg: China Lures Billionaires Into Race to Catch U.S. in AI

 

Generative AI Unicorns Rule the Startup Roost; OpenAI in the Spotlight

Introduction:

Despite mounting pressure on venture capital in a difficult economic environment, money is still flowing into generative Artificial Intelligence (AI) startups.  Indeed, AI startups have emerged as a bright spot for VC investments this year amid a wider slowdown in funding caused by rising interest rates, a slowing economy and high inflation.

VCs have already poured $10.7 billion into Generative AI [1.] start-ups within the first three months of this year, a thirteen-fold increase from a year earlier, according to PitchBook, which tracks start-ups.

Note 1. Generative AI is a type of artificial intelligence that can create new content, such as text, synthetic data, images, and audio.  The recent buzz around Generative AI has been driven by the simplicity of new user interfaces for creating high-quality content in a matter of seconds.

….………………………………………………………………………………….

Tech giants have poured effort and billions of dollars into what they say is a transformative technology, even amid rising concerns about A.I.’s role in spreading misinformation, killing jobs and one day matching human intelligence.  What they don’t publicize is that the results (especially from ChatGPT) may be incorrect or inconclusive.

We take a close look at Generative AI Unicorns with an emphasis on OpenAI (the creator of ChatGPT) and the competition it will face from Google DeepMind.

Generative AI Unicorns and OpenAI:

AI startups make up half of all new unicorns (startups valued at more than $1B) in 2023, says CBInsights.

Generative AI startups are reaching $1 billion valuations at lightning speed.  There are currently 13 Generative AI unicorns (see chart below), according to CBInsights, which said they attained their unicorn status nearly twice as fast as the average $1 billion startup.

Across the 13 Generative AI unicorns, the average time to reach unicorn status was 3.6 years but for the unicorn club as a whole the average is 7 years — almost twice as long.

OpenAI, the poster child for Generative AI with its ChatGPT app, tops the list with a valuation of almost $30 billion.  Microsoft is the largest investor, having provided OpenAI with a $1 billion investment in 2019 and a $10 billion investment in 2023.  Bloomberg reported that the company recently closed an investment fund that exceeded expectations, with a value surpassing $175 million.

However, OpenAI may have a formidable competitor in Google DeepMind (more details in DeepMind section below).

….……………………………………………………………………………………………………………………………………………………….

Anthropic is #2 with a valuation of $4.4B. It’s an AI safety and research company based in San Francisco, CA.  The company says they “develop large-scale AI systems so that we can study their safety properties at the technological frontier, where new problems are most likely to arise. We use these insights to create safer, steerable, and more reliable models, and to generate systems that we deploy externally, like Claude (to be used with Slack).”

In Q1-2023, Generative AI companies accounted for three of the entrants to the unicorn club with Anthropic, Adept, and Character.AI all gaining valuations of $1B or above.

New Generative AI Unicorns in May:

Ten companies joined the Crunchbase Unicorn Board in May 2023 — double the count for April 2023. Among them were several AI startups:

  • Toronto-based Cohere, a generative AI large language model developer for enterprises, raised $270 million in its Series C funding. The round was led by Inovia Capital, valuing the 4-year-old company at $2.2 billion.
  • Generative video AI company Runway, based out of New York, raised a $100 million Series D led by Google. The funding valued the 5-year-old company at $1.5 billion.
  • Synthesia, a UK-based artificial intelligence (AI) startup, has raised about $90 million at a valuation of $1 billion from a funding round led by venture capital firms Accel and Nvidia-owned NVentures.  “While we weren’t actively looking for new investment, Accel and NVIDIA share our vision for transforming traditional video production into a digital workflow,” said Victor Riparbelli, co-founder and CEO of Synthesia.

….…………………………………………………………………………………………………………………………………………………..

Google DeepMind:

Alphabet CEO Sundar Pichai said in a blog post, “we’ve been an AI-first company since 2016, because we see AI as the most significant way to deliver on our mission.”

In April, Alphabet Inc. created “Google DeepMind,” in order to bring together two leading research groups in the AI field: the Brain team from Google Research, and DeepMind (the AI startup Google acquired in 2014). Their collective accomplishments in AI over the last decade span AlphaGo, Transformers, word2vec, WaveNet, AlphaFold, sequence to sequence models, distillation, deep reinforcement learning, and distributed systems and software frameworks like TensorFlow and JAX for expressing, training and deploying large scale Machine Learning (ML) models.

With Google DeepMind positioned as Google’s generative AI powerhouse, a new battle front could open in quantum computing, machine learning perception, gaming and mobile systems, NLP and human-computer interaction, and visualization.

A recent DeepMind paper says the Alphabet unit has used AI to discover faster sorting algorithms for creating ordered lists.  The paper says the work shows “how artificial intelligence can go beyond the current state of the art,” because AlphaDev’s sorts use fewer lines of code for sorting sequences of between three and eight elements — for every number of elements except four. And these shorter algorithms “do indeed lead to lower latency,” the paper points out, “as the algorithm length and latency are correlated.”

Their researchers created a program based on DeepMind’s AlphaZero, which beat the world’s best players in chess and Go. That program trained solely by playing games against itself, getting better and better through a kind of massively automated trial and error that eventually converges on the best approach.

DeepMind’s researchers modified it into a new coding-oriented program called AlphaDev, calling this an important next step. “With AlphaDev, we show how this model can transfer from games to scientific challenges, and from simulations to real-world applications,” they wrote on the DeepMind blog.  The newly discovered sorting algorithms “contain new sequences of instructions that save a single instruction each time they’re applied. AlphaDev skips over a step to connect items in a way that looks like a mistake, but is actually a shortcut.”
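For readers unfamiliar with what a “shorter sorting algorithm for a fixed, small number of elements” looks like, here is a minimal Python illustration of a three-element sorting network built from branch-free compare-exchange steps. This is a textbook network shown only for intuition; the specific instruction sequences AlphaDev discovered are described in DeepMind’s paper and are not reproduced here.

```python
# A classic 3-element sorting network, shown only to illustrate the kind of
# fixed-size, comparison-minimal routines AlphaDev optimizes at the assembly level.

def compare_exchange(values: list, i: int, j: int) -> None:
    """Place the smaller of values[i], values[j] at index i (one 'comparator')."""
    if values[j] < values[i]:
        values[i], values[j] = values[j], values[i]

def sort3(values: list) -> list:
    """Sort exactly three elements with three comparators: (0,1), (1,2), (0,1)."""
    assert len(values) == 3
    compare_exchange(values, 0, 1)
    compare_exchange(values, 1, 2)  # after this, the maximum sits at index 2
    compare_exchange(values, 0, 1)  # order the remaining two elements
    return values

if __name__ == "__main__":
    print(sort3([7, 2, 5]))  # [2, 5, 7]
```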

….………………………………………………………………………………………………………………………………………………………..

Conclusions:

While many luminaries, such as Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, have lauded Generative AI as the greatest invention since the printing press, the technology has yet to prove itself worthy of the enormous praise.  Their central thesis, that a computer program could “transform the human cognitive process” in a way tantamount to the Enlightenment, is a huge stretch.

Gary Marcus, a well-known professor and frequent critic of A.I. technology, said that OpenAI hasn’t been transparent about the data it uses to develop its systems. He expressed doubt about CEO Sam Altman’s prediction that new jobs will replace those killed off by A.I.

“We have unprecedented opportunities here but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability,” Dr. Marcus said.

The promise and potential of Generative AI will not be realized for many years.  Think of it as a “research work in progress” with many twists and turns along the way.

….………………………………………………………………………………………………………………………………..

References:

https://www.cbinsights.com/research/generative-ai-unicorns-valuations-revenues-headcount/

https://pitchbook.com/news/articles/Amazon-Bedrock-generative-ai-q1-2023-vc-deals

Curmudgeon/Sperandeo:  Impact of Generative AI on Jobs and Workers

Generative AI in telecom; ChatGPT as a manager? ChatGPT vs Google Search

Generative AI could put telecom jobs in jeopardy; compelling AI in telecom use cases

 

 

 

Generative AI in telecom; ChatGPT as a manager? ChatGPT vs Google Search

Generative AI is probably the most hyped technology in the last 60 years [1.].  While the potential and power of microprocessors, Ethernet, WiFi, Internet, 4G, and cloud computing all lived up to or exceeded expectations, generative AI has yet to prove itself worthy of its enormous praise.  Simply put, Generative AI is a type of artificial intelligence that can create new content, such as text, images, and audio.

Note 1.  This author has been observing computer and communications technologies for 57 years.  His first paid tech job was in the summer of 1966 in Dallas, TX, where he did mathematical simulations of: 1) worst-case data load on three Large Screen Displays (LSDs), each 7 ft x 7 ft, and 2) the efficiency of manual rate-aided radar tracking.  In the summer of 1967 he helped install and test electronic modules for the central command and control system for the Atlantic Fleet Weapons Range at Roosevelt Roads Naval Air Station in Puerto Rico.  While there, he also did a computer simulation of a real-time naval air exercise (battleships, aircraft carriers, jets, helicopters, drones, etc.) and displayed the results on the three LSDs.  Skipping over his career in academia, industry, and as a volunteer officer/chairman at IEEE ComSoc and IEEE SV Tech History, Alan has overseen the IEEE Techblog for over 14 years (since he was asked to do so in March 2009 by the IEEE ComSoc NA Chairman at that time).

………………………………………………………………………………………………………………………………………………………………………………..

Interest in Generative A.I. has exploded. Tech giants have poured effort and billions of dollars into what they say is a transformative technology, even amid rising concerns about A.I.’s role in spreading misinformation, killing jobs and one day matching human intelligence.

It’s been claimed that Generative AI can be used to optimize telecom networks and make them more efficient. This can lead to faster speeds, better reliability, and lower costs.  Another way that generative AI is changing telecommunications is by improving customer service. Generative AI can be used to create virtual assistants that can answer customer questions and provide support. This can free up human customer service representatives to focus on more complex issues.

Generative AI is also being used to improve network security. Generative AI can be used to detect and prevent fraud and other security threats. This can help to protect customers and their data.

Here are some specific examples of how generative AI is expected to be used in the telecommunications industry:

  • Network optimization: Generative AI can be used to analyze network traffic and identify patterns. This information can then be used to optimize the network and improve performance. For example, generative AI can be used to route traffic more efficiently or to add capacity to areas of the network that are experiencing congestion.
  • Predictive maintenance: Generative AI can be used to analyze data from network equipment to identify potential problems before they occur. This information can then be used to schedule preventive maintenance, which can help to prevent outages and improve reliability. For example, generative AI can be used to monitor the temperature of network equipment and identify components that are at risk of overheating (a minimal sketch follows this list).
  • Fraud detection: Generative AI can be used to analyze customer behavior and identify patterns that may indicate fraud. This information can then be used to prevent fraud and protect customers. For example, generative AI can be used to identify customers who are making suspicious calls or sending large amounts of text messages.
  • Customer service: Generative AI can be used to create virtual assistants that can answer customer questions and provide support. This can free up human customer service representatives to focus on more complex issues. For example, generative AI can be used to create a virtual assistant that can answer questions about billing or troubleshoot technical issues.
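A minimal sketch of the temperature-monitoring example in the predictive-maintenance bullet above: a rolling average over recent readings is compared with an overheating threshold so maintenance can be scheduled before a failure. The threshold, window size, and readings are illustrative assumptions.

```python
# Illustrative overheat early-warning for network equipment (values are assumptions).
from collections import deque

OVERHEAT_THRESHOLD_C = 75.0   # assumed safe operating ceiling for this component
WINDOW = 5                    # number of recent readings to average

def overheating(readings_c, window=WINDOW, threshold=OVERHEAT_THRESHOLD_C):
    """Return True when the rolling average of the last `window` readings exceeds the threshold."""
    recent = deque(maxlen=window)
    for reading in readings_c:
        recent.append(reading)
        if len(recent) == window and sum(recent) / window > threshold:
            return True
    return False

if __name__ == "__main__":
    telemetry = [61.0, 63.5, 66.0, 70.2, 74.8, 77.1, 79.4, 81.0]  # rising trend (example)
    if overheating(telemetry):
        print("Schedule preventive maintenance: sustained overheating trend detected")
```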

Postscript: Gary Marcus, a well-known professor and frequent critic of A.I. technology, said that OpenAI hasn’t been transparent about the data it uses to develop its systems. He expressed doubt about CEO Sam Altman’s prediction that new jobs will replace those killed off by A.I.

“We have unprecedented opportunities here but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability,” Dr. Marcus said.

References:

The AI-native telco: Radical transformation to thrive in turbulent times; https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-ai-native-telco-radical-transformation-to-thrive-in-turbulent-times#/

Generative AI in Telecom Industry | The Ultimate Guide; https://www.xenonstack.com/blog/generative-ai-telecom-industry#:~:text=Generative%20AI%20can%20predict%20equipment,equipment%20failures%20before%20they%20occur.

Microsoft dangles generative AI for telcos and slams ‘DIY’ clouds; https://www.lightreading.com/aiautomation/microsoft-dangles-generative-ai-for-telcos-and-slams-diy-clouds/d/d-id/783438

Deutsche Telekom exec: AI poses massive challenges for telecom industry

Arista Networks unveils cloud-delivered, AI-driven network identity service

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

ChatGPT (from OpenAI) is the poster child for Generative AI. Here is a study which showed many ways in which generative AI cannot properly replace a manager.  JobSage wanted to see how ChatGPT performed in sensitive management scenarios and had its responses ranked by experts.

Key takeaways:

  • Sensitive management scenarios: 60% found to be acceptable while 40% failed.

  • ChatGPT was better at addressing diversity and worse at addressing compensation and underperforming employees.

  • ChatGPT earned its strongest marks addressing an employee being investigated for sexual harassment and a company switching healthcare providers to cut costs.

  • ChatGPT performed weakest when asked to respond to an employee concerned about pay equity, a company that needs people to work harder than ever, and a company’s freeze of raises despite record payout to the CEO.

ChatGPT showed inconsistent performance in management situations:

Using the same scoring scale, the evaluation revealed that while ChatGPT could provide balance and empathy in some employee-specific and company-wide communication, at other times that empathy and balance were missing, making it appear tone deaf.

ChatGPT even gave responses that many would deem inappropriate, while other responses highlighted a broader limitation of ChatGPT: its inability to provide detailed, tailored information about company policies and the scenarios that arise.

This section details where the chatbot failed to deliver, with responses scored from negative to very negative.

Negative: Notifying an employee they were being terminated for not working hard enough

Our experts had issues with ChatGPT’s response in this scenario. It emphasized the employee’s performance relative to peers and carried an overall negative tone that could make its recipient feel quite terrible about themselves.

Negative: Notifying an employee that a complaint had been filed against them for being intoxicated on the job

For this response, ChatGPT employs a severe tone, which may discourage the employee from sharing the underlying issue that is motivating them to drink on the job. Management did deem this to be an outstanding response, though one wonders if this would be a conversation better conducted in person than over email.

Negative: Notifying an employee that they’ve worn clothing that’s revealing and inappropriate 

ChatGPT failed to understand how language can be judgmental, and its response was less than informative. Its use of the word “revealing” to describe the clothing is subjective and the human resources expert provided the feedback that it “screams sexism and provides no meaningful detail about what the policy is and what part they violated.”

Very negative: Notifying the company to let them know they need to work harder

ChatGPT again came up short on necessary detail. The email neglects to include examples or benchmarks of what working harder actually looks like, which sets an unrealistic bar to measure employees against. And according to the human resources expert, “it assumes individuals aren’t already striving for excellence.”

Very negative: Notifying a company about a freeze on raises despite a record payout to the CEO

Legal found ChatGPT’s response “too direct in summarizing the record compensation received by the CEO.” With a very personal issue like compensation at play, an employee would hope this would be handled with more empathy than what’s displayed here.

Very negative: Addressing an employee’s concerns about pay equity.

ChatGPT’s response focuses primarily on the company’s situation while failing to show empathy for the employee’s situation. It also discloses details about another employee’s compensation and how it was determined, which many would consider inappropriate. Since compensation was found to be the top factor in work-related stress in a 2023 survey, trusting this to ChatGPT would not be wise.

Conclusions: 

Based on the Jobsage.com study, one might find ways to utilize ChatGPT to draft emails that serve as announcements or notifications. However, those emails still need a human touch.  If you trust ChatGPT to draft correspondence, be sure those are reviewed by the company’s internal team to ensure they accurately reflect the company’s protocols and policies.

While ChatGPT may be useful in some ways, it simply can’t replace the empathy and human touch of a good manager, especially when dealing with sensitive situations. Employees want to feel respected, and receiving a blanket response full of corporate jargon likely won’t make them feel that way. Employees want to know a human is communicating with them, not a computer.

If you want to find a workplace that treats you with empathy and provides a safe, inclusive workspace that addresses your unique needs, JobSage provides an open forum for employees to leave honest, authentic company reviews. You can really get to know a company’s culture and see if its style of management and communication carves a meaningful place for you.

Addendum – ChatGPT vs Google Search Engine:

I asked ChatGPT: “What is the IEEE Techblog?”  Here is the result I got:

“As of my last knowledge update in September 2021, the IEEE (Institute of Electrical and Electronics Engineers) Techblog was not a specific entity or platform associated with IEEE. However, IEEE is a renowned professional association dedicated to advancing technology and innovation across various fields, including electrical engineering, computer science, and telecommunications.”

When I did a Google search on the same keywords, I got the CORRECT answer as the 1st search result:

Technology Blog – ComSoc Technology Blog – IEEE …

IEEE Communications Society

https://techblog.comsoc.org

The result also included sitelinks: About the Blog (“NOTE: IEEE Techblog posts may not be reposted without …”), About the Techblog Editorial … (“Alan J Weissberger, ScD. Content Manager and principal …”), IEEE Techblog recognized by … (“The IEEE ComSoc Techblog was voted #2 best broadband blog …”), and Contact Us.

 

……………………………………………………………………………………………………………………………………………………………………….
Next, I asked ChatGPT: “Who is the author of the IEEE ComSoc Techblog?” Here is the result I got:

“The IEEE ComSoc Techblog features contributions from various authors who are experts, researchers, and professionals in the field of communications engineering. The blog does not have a single author, but rather includes posts from multiple individuals who are associated with the IEEE Communications Society (ComSoc) and its technical community.”  No mention of Alan J Weissberger!

When I asked Google the same thing, I got:

Alan J Weissberger, ScD

Content Manager and principal contributing author to IEEE ComSoc blog sites since March 2009. IEEE volunteer and technical conference session organizer since March 1974.

(From the “About the Techblog Editorial Team” page on the Technology Blog)

References:

Would ChatGPT Be a Good Manager?

Curmudgeon/Sperandeo:  Impact of Generative AI on Jobs and Workers

 

Deutsche Telekom exec: AI poses massive challenges for telecom industry

Deutsche Telekom’s VP of technology strategy, Ahmed Hafez, co-hosted the DSP Leaders World Forum 2023 session entitled “Creating a framework for the AI-native telco” this week in the UK.  He said that AI will deliver the telecom sector its biggest-ever challenges and opportunities, but that to take advantage of the benefits AI will bring, the industry needs to figure out how to evolve from being opportunistic to becoming AI-native.

To date, the telecom sector has been exploring the potential of AI without looking at the bigger picture, and that holistic view needs to be taken in order to figure out the best way to go, Hafez believes.

Like so many other pundits and cheerleaders, Hafez regards the impact of AI as “the biggest transformation we will ever encounter.” And this is not only about the magnitude of what AI will do, but also the pace – it will outpace our understanding of things so fast, so we need to be ready…

“Previous transformations have [happened at an] accommodating pace – they were not changing so fast that we couldn’t comprehend or adapt to them. In order for us to adapt to AI, we need to transform as individuals, not [just as] companies. On an individual level you need to be able to comprehend what’s going on and pick the right information.”

To illustrate the magnitude of the challenges that AI will deliver to the telecom sector, Hafez presented a few supporting statistics:

  • The AI market was worth $136bn in 2022 and is set to be worth $1.8tn by 2030
  • The telecom AI market alone was worth $2.2bn in 2022
  • Global private investment in AI reached $91.9bn in 2022
  • AI delivers a 40% increase in business productivity, according to a study by Accenture (Hafez thinks that number is too low, that productivity gains will be much higher)
  • There are already thousands of AI-focused companies – by 2018 there were nearly 3,500
  • AI will drive the need for 500x compute power between now and 2030 (“What does that mean for telcos? How can we deal with that?” asked Hafez)
  • In terms of human resources, 63% of executives believe their biggest skills shortage is in AI expertise
  • Three in every four CEOs believe they don’t have enough transparency when it comes to AI and are concerned about skewed bias in the AI sector

These are eye-opening trends that should give the telecom industry food for thought, especially when it comes to attracting employees with AI skills. “How will we get the people we need if there are thousands of AI companies” attracting the experts, he asked.

Hafez also related how he encountered what he described as some “depressing” information about how unattractive telecom operators are to potential employees, especially those of a younger generation. Of the top-50 most attractive companies in advanced economies for employees, none of them are telcos: “This is a worrying trend… we need to become more attractive to the younger generations,” he noted.

The telecom industry began exploring the use of AI in earnest less than 10 years ago, noted the DT executive, when it started looking into its potential with proofs of concept and trials. “Then we took the opportunistic approach to AI – use case-based, where you find a good use case, you implement it and it’s concrete. There’s nothing bad about that, as it’s the right thing to do… and we’ve been doing that for a while and it’s delivering value. That’s fine as long as you are doing a few tens of use cases.”

But using AI at scale, which is what the industry needs to do to become AI-native, where AI is fully integrated into everything and becomes part of all operations and decision-making processes, throws up a lot of new questions about how the sector progresses from being opportunistic to becoming AI-native – what are the missing steps, Hafez asked?

[Image omitted. Source: Deutsche Telekom]

“Once we start to ask, what would the future be with AI in everything we do, in every appliance, in every application, in every network component, it would be over the top. You would have data that is being worked on by five or six AI engines, creating different things…. You would have not just tens of use cases, but hundreds, or thousands. Are we prepared for that? Are we ready to embrace such scale? Are we building AI for scale? I don’t think so.

“We are building AI trying to get things done – which is okay. But in order for us to get through this journey, through this transformation, what stages do we need to pass through? What are the steps that we need to take to… make sure that the problem is clear. If we have a huge amount of AI, do we run the risk of conflicting AI? So if I have AI for energy efficiency and I have another one that actually improves network quality, could they create conflicts? Can they be a problem? If I have AI that is on the optical layer and AI on the IP layer, can they make different decisions because they consume data differently?

“If we look at things from this perspective, do we need, within our organisations, another stream of hiring people and the need to upskill leadership? Do we need to upskill ourselves to help our teams? What do we need to do? If you look at technologies, do we need to change the perspective of how, for example, the 3GPP is building the standards in order to make sure the standards are AI friendly? Do we need separate standard bodies to look at AI? What would be their functions? What would be their scope?” asked Hafez.

And does the industry need a framework that can provide guidance so that the telecom sector can develop in the same direction with its use of AI?

“This is the discussion we want to have, and I hope the message is clear – we have a great opportunity, but opportunities do not come without challenges,” he cautioned.

Hafez set the scene for a great discussion with his fellow speakers, Juniper’s chief network strategist Neil McRae, Rakuten Symphony CMO Geoff Hollingworth, Nokia’s CTO for Europe Azfar Aslam, and Digital Catapult’s CTO Joe Butler – and it’s fair to say there were differences of opinion! You can view the full session on demand here.

…………………………………………………………………………………………………………………………………………………………………….

Here are some specific examples of how AI is being used in the telecom industry in 2023:

Network optimization:


AI is being used to analyze data from network sensors to identify potential problems before they occur, allowing telecom providers to take proactive steps to fix issues and prevent outages. For example, companies are using AI to predict network congestion and proactively reroute traffic. 5G networks began to roll out in 2019 and are predicted to have more than 1.7 billion subscribers worldwide (20% of global connections) by 2025.  AI is essential for helping CSPs build self-optimizing networks (SONs) to support this growth, which automatically optimize network quality based on traffic information by region and time zone. AI in the telecom industry uses advanced algorithms to look for patterns within the data, enabling telecoms to both detect and predict network anomalies and to fix problems before customers are negatively impacted (a minimal anomaly-check sketch follows).
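As a hedged sketch of the anomaly-detection idea behind self-optimizing networks, the snippet below flags a region whose current traffic deviates sharply from its historical mean (a simple z-score test) and calls a placeholder rebalancing hook. The traffic figures and the rebalance stub are illustrative assumptions, not any CSP's SON implementation.

```python
# Illustrative SON-style congestion anomaly check per region (standard library only).
from statistics import mean, stdev

# Hypothetical hourly traffic history (Gbps) per region, plus the latest measurement.
HISTORY = {
    "region-north": [12.1, 11.8, 12.4, 12.0, 11.9, 12.3],
    "region-south": [8.2, 8.5, 8.1, 8.4, 8.3, 8.6],
}
LATEST = {"region-north": 12.2, "region-south": 14.9}  # south is congested in this example

def is_anomalous(history, latest, z_threshold=3.0):
    """Flag traffic that sits more than z_threshold standard deviations above the mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (latest - mu) / sigma > z_threshold

def rebalance(region):
    """Placeholder hook: a real SON would retune neighbor lists, tilt antennas, or reroute."""
    print(f"Rebalancing traffic away from {region}")

for region, samples in HISTORY.items():
    if is_anomalous(samples, LATEST[region]):
        rebalance(region)
```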

Customer service automation and Virtual Assistants:

AI-powered chatbots can answer customer questions and resolve issues without the need for human intervention. This can free up customer service representatives to focus on more complex issues. For example, Verizon is using AI to power its Virtual Assistant, which can answer customer questions about billing, service plans, and technical support.

Predictive Maintenance:

AI-driven predictive analytics are helping telecoms provide better services by utilizing data, sophisticated algorithms, and machine learning techniques to predict future results based on historical data. This means operators can use data-driven insights to monitor the state of equipment and anticipate failure based on patterns. Implementing AI in telecoms also allows CSPs to proactively fix problems with communications hardware, such as cell towers, power lines, data center servers, and even set-top boxes in customers’ homes. In the short term, network automation and intelligence will enable better root cause analysis and prediction of issues. Long term, these technologies will underpin more strategic goals, such as creating new customer experiences and dealing efficiently with emerging business needs.
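A minimal sketch of the predictive-maintenance pattern described above, using scikit-learn (chosen only for illustration): a classifier is trained on historical equipment telemetry labeled with past failures, then scores current units by failure probability so crews can be dispatched proactively. The feature columns and the synthetic data are assumptions.

```python
# Illustrative failure-probability model for telecom equipment (synthetic data, assumed features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Assumed features per unit: [temperature_c, error_rate, age_years]
healthy = np.column_stack([rng.normal(55, 5, 300), rng.normal(0.5, 0.2, 300), rng.uniform(0, 6, 300)])
failing = np.column_stack([rng.normal(78, 6, 60), rng.normal(3.0, 0.8, 60), rng.uniform(4, 10, 60)])
X = np.vstack([healthy, failing])
y = np.concatenate([np.zeros(300), np.ones(60)])  # 1 = failed within the following month

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score current units and rank the riskiest for proactive maintenance.
current_units = np.array([[58.0, 0.4, 2.0],   # looks healthy
                          [81.0, 2.7, 7.5]])  # looks at risk
failure_prob = model.predict_proba(current_units)[:, 1]
for unit, prob in zip(["unit-A", "unit-B"], failure_prob):
    print(f"{unit}: estimated failure probability {prob:.2f}")
```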

Robotic Process Automation (RPA) for Telecoms:

CSPs have vast numbers of customers engaged in millions of daily transactions, each susceptible to human error. Robotic Process Automation (RPA) is a form of business process automation technology based on AI. RPA can bring greater efficiency to telecom functions by allowing telcos to more easily manage their back-office operations and large volumes of repetitive and rules-based actions. RPA frees up CSP staff for higher value-add work by streamlining the execution of complex, labor-intensive, and time-consuming processes, such as billing, data entry, workforce management, and order fulfillment. According to Statista, the RPA market is forecast to grow to 13 billion USD by 2030, with RPA achieving almost universal adoption within the next five years. Telecom, media, and tech companies expect cognitive computing to “substantially transform” their companies within the next few years.

Fraud Prevention:

Telecoms are harnessing AI’s powerful analytical capabilities to combat instances of fraud. AI and machine learning algorithms can detect anomalies in real-time, effectively reducing telecom-related fraudulent activities, such as unauthorized network access and fake profiles. The system can automatically block access to the fraudster as soon as suspicious activity is detected, minimizing the damage. With industry estimates indicating that 90% of operators are targeted by scammers on a daily basis – amounting to billions in losses every year –  this AI application is especially timely for CSPs.
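To make the real-time fraud idea concrete, here is a minimal rule-plus-statistics sketch that scores call detail records against a subscriber baseline and blocks the line when suspicious patterns appear, such as a burst of very short international calls. The thresholds, record fields, and the block_line stub are illustrative assumptions, not an operator's actual fraud engine.

```python
# Illustrative real-time fraud scoring over call detail records (thresholds are assumptions).
from dataclasses import dataclass

@dataclass
class CallDetailRecord:
    subscriber: str
    destination_country: str
    duration_s: int

BASELINE_INTL_CALLS_PER_HOUR = 2      # assumed typical behavior for this subscriber segment
SHORT_CALL_S = 20                     # very short calls can indicate call-pumping schemes

def fraud_score(records: list) -> float:
    """Return a 0..1 score from two simple signals: international burst and short-call ratio."""
    if not records:
        return 0.0
    intl = [r for r in records if r.destination_country != "home"]
    burst = min(len(intl) / (BASELINE_INTL_CALLS_PER_HOUR * 5), 1.0)
    short_ratio = sum(r.duration_s < SHORT_CALL_S for r in intl) / max(len(intl), 1)
    return 0.6 * burst + 0.4 * short_ratio

def block_line(subscriber: str) -> None:
    """Placeholder hook: a real system would suspend outbound service and open a case."""
    print(f"Blocking outbound calls for {subscriber} pending review")

last_hour = [CallDetailRecord("sub-42", "ZZ", 8) for _ in range(12)]  # synthetic burst
if fraud_score(last_hour) > 0.7:
    block_line("sub-42")
```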

Revenue Growth:

AI in telecommunications has a powerful ability to unify and make sense out of a wide range of data, such as devices, networks, mobile applications, geolocation data, detailed customer profiles, service usage, and billing data. Using AI-driven data analysis, telecoms can increase their rate of subscriber growth and average revenue per user (ARPU) through smart upselling and cross-selling of their services. By anticipating customer needs using real-time context, telecoms can make the right offer at the right time over the right channel.

………………………………………………………………………………………………………………………………………………………

References:

https://www.telecomtv.com/content/network-automation/towards-the-ai-native-telco-47596/

https://www.telecomtv.com/content/dsp-leaders-forum/

6 Common Uses of AI in Telecommunications

Generative AI could put telecom jobs in jeopardy; compelling AI in telecom use cases

Allied Market Research: Global AI in telecom market forecast to reach $38.8 billion by 2031 with CAGR of 41.4% (from 2022 to 2031)

SK Telecom inspects cell towers for safety using drones and AI

The case for and against AI in telecommunications; record quarter for AI venture funding and M&A deals

Global AI in Telecommunication Market at CAGR ~ 40% through 2026 – 2027

Cybersecurity threats in telecoms require protection of network infrastructure and availability

China to launch world’s first 5G cruise ship via China Telecom Corp Ltd Shanghai Branch

 

Generative AI could put telecom jobs in jeopardy; compelling AI in telecom use cases

Forecasts of job losses due to AI are very grim.  IBM CEO Arvind Krishna this week announced a hiring freeze while speculating that 7,800 jobs could be replaced by AI in the next few years.  A new report from the World Economic Forum (WEF) states that AI will cause a net loss of 14 million jobs by 2027. The organization’s Future of Jobs Report 2023 shows that 590 million jobs will not change, while 69 million will be created and 83 million positions will be lost.

Even scarier, Goldman Sachs issued a report in March predicting AI would “replace” 300 million jobs, citing the recent impact of generative AI.  Generative AI, able to create content indistinguishable from human work, is “a major advancement,” the report says.  However, such predictions usually don’t forecast, let alone mention, the new jobs that will be created in an AI-prevalent world.

According to Light Reading’s Iain Morris, new types of AI like Hawk-Eye, ChatGPT, GitHub Copilot and other permutations threaten a jobs apocalypse.  The telecom sector looks extraordinarily exposed. For one thing, it’s stocked with people in sales, marketing and customer services, including high-street stores increasingly denuded of workers, like those coffee chains where you select your beverage on a giant touchscreen instead of telling somebody what you want. Chatbots have already replaced some roles. One very big (unnamed) network operator is known to be exploring the use of ChatGPT in customer services for added efficiency – a move that could turn thinned ranks anorexic.

The scenario is that a telco network could feasibly become a self-operating, self-healing entity, stripped clean of people, run by an AI that has probably been developed by Google or Microsoft even though it lives in facilities owned by the telco to keep GDPR watchdogs and other regulatory authorities on side. All those fault-monitoring, trouble-ticketing and other routine technical jobs would be gone. If staff have been “freed up,” it’s not to do other jobs at the telco.

Opinion:  This author strongly disagrees, as these new versions of AI have not proven themselves to be that effective at telecom network tasks.  Meanwhile, chatbots are somewhere between ineffective and totally dysfunctional, so they won’t replace live-person chat or call centers until they improve.

……………………………………………………………………………………………………………………………………………………………….

At big telcos tracked by Light Reading, collective headcount fell nearly 58,000 last year. Across AT&T, T-Mobile and Verizon, the big three of the U.S. mobile telecom market, around 45,000 jobs disappeared in 2022, more than 11% of the end-2021 total.

[Chart omitted. Source: companies tracked by Light Reading]

Outside the U.S., around 11,000 jobs were cut at Deutsche Telekom, Orange, Telecom Italia and Telefónica. That was a much smaller 2.5% of the earlier total, and yet more than 67,000 non-US jobs have been cut from the payrolls of these companies since 2018, a figure equal to 13.5% of headcount at the end of the previous year.

Much of this attrition has very little if anything to do with technology. Instead, it’s the result of more routine efficiency measures and the disposal of assets, including geographical units, infrastructure (such as towers) once but no longer deemed strategically important, and IT resources farmed out to the public cloud. This is a frightening thought for employees.

Morris asks, “If jobs were disappearing this fast before the arrival of ChatGPT, what does the future hold?”

AI Can Improve Telecom Industry without causing major job losses:

We think AI has the potential to improve various aspects of the telecommunications industry without causing major job losses.  For example, Ericsson has reported that the implementation of AI-powered solutions in networks can lead to a 35 percent decrease in critical incidents and a 60 percent decrease in network performance problems. Additionally, energy costs can be reduced by 15 percent through automation, making the network more environmentally sustainable.

AI can help telcos optimize their networks by automatically adjusting network settings and configurations to improve performance and reduce costs. AI algorithms can further be used to analyze vast amounts of data generated by telecommunication networks, providing valuable insights into network performance, and helping to identify and resolve issues in real-time. This can significantly improve network reliability and reduce downtime, ultimately leading to enhanced customer satisfaction.

Some of the other compelling AI use cases in telecom are:

  • Fraud detection and prevention: AI algorithms can play a crucial role by analyzing massive amounts of data to detect and prevent various forms of fraudulent activities in real time, such as SIM-swapping, unauthorized network access, fake profiles, and bill fraud.
  • Predictive maintenance: AI can analyze data from telecom equipment to predict when it will require maintenance—reducing downtime and costs associated with maintenance.
  • Personalized marketing: AI can analyze customer data to create targeted marketing campaigns—improving customer engagement and reducing the costs associated with marketing efforts. Machine learning models can, for example, recommend products or services to customers based on their usage patterns and preferences (a minimal sketch follows this list).
  • Automated decision making: Using deep learning models to automate decisions such as network routing, dynamic pricing, and more.
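A minimal sketch of the usage-based recommendation idea in the personalized-marketing bullet above: customer usage profiles are compared by cosine similarity, and the add-on bought by the most similar customer is suggested. The feature columns, customer data, and catalogue are illustrative assumptions.

```python
# Illustrative usage-based upsell recommendation via cosine similarity (numpy only).
import numpy as np

# Assumed usage features per customer: [data_gb, intl_minutes, streaming_hours, roaming_days]
profiles = {
    "cust-1": np.array([45.0, 10.0, 60.0, 0.0]),
    "cust-2": np.array([5.0, 300.0, 2.0, 12.0]),
    "cust-3": np.array([50.0, 5.0, 75.0, 1.0]),
}
# Add-ons each existing customer already bought (hypothetical catalogue).
purchases = {"cust-1": "unlimited-streaming-pack", "cust-2": "international-bundle"}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(target: str) -> str:
    """Suggest the add-on owned by the most similar customer who has bought one."""
    candidates = [(cosine(profiles[target], profiles[c]), c) for c in purchases if c != target]
    _, most_similar = max(candidates)
    return purchases[most_similar]

print(recommend("cust-3"))  # cust-3 resembles cust-1 -> "unlimited-streaming-pack"
```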

References:

https://www.lightreading.com/aiautomation/after-hefty-cuts-ai-puts-thousands-more-telco-jobs-on-line/a/d-id/784674?

https://www.microsoft.com/en-us/industry/blog/telecommunications/2023/04/11/unlock-the-potential-of-ai-in-the-telecommunications-industry/

https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-ai-native-telco-radical-transformation-to-thrive-in-turbulent-times

Allied Market Research: Global AI in telecom market forecast to reach $38.8 billion by 2031 with CAGR of 41.4% (from 2022 to 2031)

The case for and against AI in telecommunications; record quarter for AI venture funding and M&A deals

Global AI in Telecommunication Market at CAGR ~ 40% through 2026 – 2027

SK Telecom inspects cell towers for safety using drones and AI

Cybersecurity threats in telecoms require protection of network infrastructure and availability

 

Arista Networks unveils cloud-delivered, AI-driven network identity service

At the RSA Conference today, Arista Networks announced a cloud-delivered, AI-driven network identity service for enterprise security and IT operations. Based on Arista’s flagship CloudVision platform, Arista Guardian for Network Identity (CV AGNI™) expands Arista’s zero trust networking approach to enterprise security. CV AGNI helps to secure IT operations with simplified deployment and cloud scale for all enterprise network users, their associated endpoints, and Internet of Things (IoT) devices.

“Proliferation of IoT devices in the healthcare network creates a huge management and security challenge for our IT and security operations. The ease of securely onboarding devices on the network by CV AGNI and its integration with Medigate by Claroty for device profiling greatly simplifies this problem for a healthcare network,” said Aaron Miri, CIO of Baptist Healthcare.

AI-Driven Network Identity brings Simplicity and Security at Scale

While enterprise networks have seen massive transformation in recent years with the adoption of cloud and the acceleration of a post-pandemic, perimeter-less enterprise, Network Access Control (NAC) solutions have changed little for decades. Traditional NAC solutions continue to suffer from the complexity of on-premises deployment and administration and have been unable to adapt to the explosion of SaaS-based identity stores, users, devices and their associated profiles across the enterprise.

CloudVision AGNI takes a novel approach to enterprise network identity management. Built on a modern, cloud-native microservices architecture, the CV AGNI solution leverages AI/ML to greatly simplify the secure onboarding and troubleshooting for users and devices and the management of ever-expanding security policies.

CV AGNI is based on Arista’s foundational NetDL architecture and leverages AVA™ (Autonomous Virtual Assist) for a conversational interface that removes the complexity inherent in managing network identity from a traditional legacy NAC solution. AVA codifies real-world network and security operations expertise and leverages supervised and unsupervised ML models into an ‘Ask AVA’ service, a chat-like interface for configuring, troubleshooting and analyzing enterprise security policies and device onboarding. CV AGNI also adds user context into Arista’s network data lake (NetDL), greatly simplifying the integration of device and user information across Arista’s products and third-party systems.

CloudVision AGNI delivers key attributes from client to cloud across the cognitive enterprise:

  • Simplicity: CV AGNI is a cloud service that eliminates the complexity of planning and scaling the compute resources for an on-premises solution. Administrative actions take a fraction of the time compared to a traditional NAC solution. It also natively integrates with industry-leading identity stores.
  • Security: CV AGNI leapfrogs legacy NAC solutions by redefining and greatly simplifying how enterprise networks can be secured and segmented by leveraging user and device context in the security policies.
  • Scale: With a modern microservices-based architecture, the CV AGNI solution scales elastically with the growing needs of any enterprise.

CloudVision Delivers Network Identity as-a-Service

Based on the CloudVision platform, CV AGNI delivers network identity as a service to any standards-based wired or wireless network.

CloudVision AGNI’s key features include the following:

  • User self-service onboarding for wireless with per-user unique pre-shared keys (UPSK) and 802.1X digital certificates.
  • Certificate management with a cloud-native PKI infrastructure
  • Enterprise-wide visibility of all connected devices. Devices are discovered, profiled and classified into groups for single-pane-of-glass control.
  • Security policy enforcement that goes beyond the traditional inter-group macro-segmentation and includes intra-group micro-segmentation capabilities when combined with Arista networking platforms through VLANs, ACLs, Unique-PSK and Arista MSS-Group techniques.
  • AI-driven network policy enforcement based on AVA for behavioral anomalies. When a threat is detected by Arista NDR, it will work with CV AGNI to quarantine the device or reduce its level of access.

Tailored for Multi-vendor Integration

CloudVision AGNI leverages cognitive context from third-party systems, including solutions for mobile device management, endpoint protection, and security information and event management. This greatly simplifies the identification and onboarding process and application of segmentation policies. Examples include:

  • Endpoint Management: Medigate by Claroty, CrowdStrike XDR, Palo Alto Cortex XDR
  • Identity Management: Okta, Google Workspace, Microsoft Azure, Ping Identity and OneLogin.
  • MDM: Microsoft Intune, JAMF
  • SIEM: Splunk
  • Networking devices: Multi-vendor interoperability in addition to Arista platforms

Availability

CV AGNI is integrated into Arista CloudVision to provide a complete identity solution. CV AGNI is in trials now with general availability in Q2 2023.


About Arista

Arista Networks is an industry leader in data-driven, client to cloud networking for large data center, campus and routing environments. Arista’s award-winning platforms deliver availability, agility, automation, analytics and security through an advanced network operating stack. For more information, visit www.arista.com.

Competing Product:

SailPoint’s AI driven Identity Security Platform

 

References:

https://www.arista.com/en/company/news/press-release/17244-pr-20230424

https://www.sailpoint.com/platform/?campaignid=11773644133

Arista’s WAN Routing System targets routing use cases such as SD-WANs

 

AT&T realizes huge value from AI; will use full suite of NVIDIA AI offerings

Executive Summary:

AT&T Corp. and NVIDIA today announced a collaboration in which AT&T will continue to transform its operations and enhance sustainability by using NVIDIA-powered AI for processing data, optimizing service-fleet routing and building digital avatars for employee support and training.

AT&T is the first telecommunications provider to explore the use of a full suite of NVIDIA AI offerings. This includes enhancing its data processing using the NVIDIA AI Enterprise software suite, which includes the NVIDIA RAPIDS Accelerator for Apache Spark; enabling real-time vehicle routing and optimization with NVIDIA cuOpt; adopting digital avatars with NVIDIA Omniverse Avatar Cloud Engine and NVIDIA Tokkio; and utilizing conversational AI with NVIDIA Riva.

“We strive each day to deliver the most efficient global network, as we drive towards net zero emissions in our operations,” said Andy Markus, chief data officer at AT&T. “Working with NVIDIA to drive AI solutions across our business will help enhance experiences for both our employees and customers.”  He said it’s AT&T’s goal to make AI part of the fabric of the company, to have “all parts of the business leveraging AI and creating AI” rather than limit its use to creation of AI by its specialist data scientists.

“Industries are embracing a new era in which chatbots, recommendation engines and accelerated libraries for data optimization help produce AI-driven innovations,” said Manuvir Das, vice president of Enterprise Computing at NVIDIA. “Our work with AT&T will help the company better mine its data to drive new services and solutions for the AI-powered telco.”

The Data Dilemma:
AT&T, which has pledged to be carbon neutral by 2035, has instituted broad initiatives to make its operations more efficient. A major challenge is optimizing energy consumption while providing network infrastructure that delivers data at high speeds.  AT&T processes more than 590 petabytes of data on average a day. That is the equivalent of about 6.5 million 4K movies or more than 8x the content housed in the U.S. Library of Congress if all its collections were digitized.

Telecoms aiming to reduce energy consumption face challenges across their operations. Within networks, the radio access network (RAN) consumes 73% of energy, while core network services, data centers and operations use 13%, 9% and 5%, respectively, according to the GSMA, a mobile industry trade group.

AT&T first adopted NVIDIA RAPIDS Accelerator for Apache Spark to capitalize on energy-efficient GPUs across its AI and data science pipelines. This helped boost its operational efficiency across everything from training AI models and maintaining network quality and optimization, to reducing customer churn and improving fraud detection.

For the data and AI pipelines targeted with Spark RAPIDS, AT&T saves about half of its cloud computing spend and sees faster performance, while also reducing its carbon footprint.

Enhanced Field Dispatch Services:
AT&T, which operates one of the largest field dispatch teams to service its customers, is currently testing NVIDIA cuOpt software to enhance its field dispatch capabilities to handle more complex technician routing and optimization challenges.  AT&T has a fleet of roughly 30,000 vehicles with over 700 million options in how they can be dispatched and routed. The operator would run dispatch optimization algorithms overnight to get plans for the next day, but it took too long and couldn’t account for the realities that would crop up the next morning: Workers calling in sick, vehicles breaking down, and so on.

“It wasn’t as good at noon as it was at 8 in the morning,” Markus said. Using Nvidia GPUs and software, he said, AT&T was able to speed up its processing 60x so that it could run the scenario in near-real-time, as often as needed, complete more jobs in a day, and reduce its cloud-related costs by 40%.

Routing requires trillions of computations to account for variables ranging from traffic and weather conditions to customer schedule changes and technician skill levels, where a complicated job might require an additional truck roll.

In early trials, cuOpt delivered solutions in 10 seconds, while the same computation on x86 CPUs took 1,000 seconds. The results yielded a 40% reduction in cloud costs and allowed technicians to complete more service calls each day. NVIDIA cuOpt lets AT&T run nearly continuous dispatch optimization by combining NVIDIA RAPIDS with local-search heuristics and metaheuristics such as Tabu search.
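
To illustrate the class of local-search heuristics mentioned above (and only that; this toy sketch is not cuOpt or AT&T's solver, and real dispatch optimization must also handle time windows, skills and fleet constraints), here is a minimal nearest-neighbor route improved by 2-opt moves:

# Toy illustration of the local-search idea behind dispatch routing:
# build a greedy route, then improve it with 2-opt moves.
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(route, jobs):
    return sum(dist(jobs[route[i]], jobs[route[i + 1]]) for i in range(len(route) - 1))

def nearest_neighbor(jobs, start=0):
    unvisited = set(range(len(jobs))) - {start}
    route = [start]
    while unvisited:
        last = route[-1]
        nxt = min(unvisited, key=lambda j: dist(jobs[last], jobs[j]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

def two_opt(route, jobs):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for k in range(i + 1, len(route) - 1):
                # Reverse the segment route[i..k] and keep it if the route shortens.
                candidate = route[:i] + route[i:k + 1][::-1] + route[k + 1:]
                if route_length(candidate, jobs) < route_length(route, jobs):
                    route, improved = candidate, True
    return route

random.seed(0)
jobs = [(random.random() * 100, random.random() * 100) for _ in range(30)]  # synthetic job sites
route = two_opt(nearest_neighbor(jobs), jobs)
print(f"route length after local search: {route_length(route, jobs):.1f}")

The article's point is that running searches of this kind on GPUs at fleet scale is what makes near-real-time re-optimization feasible.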

Pleasing Customers, Speeding Network Design:
As part of its efforts to improve productivity for its more than 150,000 employees, AT&T is moving to adopt NVIDIA Omniverse ACE and NVIDIA Tokkio, cloud-native AI microservices, workflows and application frameworks that let developers easily build, customize and deploy interactive avatars that see, perceive, converse intelligently and provide recommendations to enhance the customer service experience.

For conversational AI, the carrier also uses the NVIDIA Riva software development kit and is examining other customer service and operations use cases for digital twins and generative AI.

AT&T also is taking advantage of fast 5G and its fiber network to deliver NVIDIA GeForce NOW™ cloud gaming at 120 frames per second on mobile and 240 FPS at home.

Markus added that AI-powered Nvidia tools are helping AT&T serve its customers better across channels, from sales recommendations to customer care, and that its internal processes leverage AI as well to help employees work more efficiently. The company is embracing Nvidia’s AI solutions as a foundation for developing interactive and intelligent customer service avatars.

In the past 12 months, AI has created more than $2.5 billion in value for AT&T. About half of that came via Markus’ team, but the other half came from what he calls “citizen data scientists” across the company, who have been able to leverage AI to solve problems in their respective areas, whether that was marketing, network operations, software development or finance.

“As we mobilize that citizen data-scientist across the company, we’re doing that via a self-service platform that we call AI-as-a-service, where we’re bringing a unified experience together. But behind the experience, we’re allowing those users to leverage AI in a curated way for their use case,” he explained. “So they bring their subject matter expertise to the problem that they’re trying to solve, and we … enable the technology [and processes for them to create] robust AI. But we also govern it with some guardrails, so the AI we’re creating is ethical and responsible.”

In AT&T’s automation development, 92% of its automation is created by employees via self-service to solve a problem. “The goal is that over time, we bake in incredible functionality like Nvidia, so that AI-as-a-service is delivering that self-service functionality so that we do most of our routine AI creation via the platform, where you don’t have to have a professional data scientist, a code warrior, to be your sherpa,” Markus concluded.

References:

https://nvidianews.nvidia.com/news/at-t-supercharges-operations-with-nvidia-ai

AT&T leans into AI, and leans on Nvidia to do it

Using a distributed synchronized fabric for parallel computing workloads- Part II

by Run Almog​ Head of Product Strategy, Drivenets (edited by Alan J Weissberger)

Introduction:

In the previous Part I article, we covered the different attributes of AI/HPC workloads and the impact they have on the requirements for the network that serves these applications. This concluding Part II article focuses on an open-standard solution that addresses these needs and enables these mega-sized applications to run larger workloads without compromising on network attributes. Various solutions are described and contrasted, along with a perspective from silicon vendors.

Networking for HPC/AI:

A networking solution serving HPC/AI workloads needs certain attributes, starting with the scale of the network, which can reach thousands of high-speed endpoints, all running the same application in a synchronized manner. This requires the network to behave like a scheduled fabric that offers full bandwidth between any group of endpoints at any given time.

Distributed Disaggregated Chassis (DDC):

DDC is an architecture that was originally defined by AT&T and contributed to the Open Compute Project (OCP) as an open architecture in September 2019. DDC defines the components and internal connectivity of a network element purpose-built to serve as a carrier-grade network router. As opposed to a monolithic chassis-based router, the DDC defines every component of the router as a standalone device.

  • The line card of the chassis is defined as a distributed chassis packet-forwarder (DCP)
  • The fabric card of the chassis is defined as a distributed chassis fabric (DCF)
  • The routing stack of the chassis is defined as a distributed chassis controller (DCC)
  • The management card of the chassis is defined as a distributed chassis manager (DCM)
  • All devices are physically connected to the DCM via standard 10GbE interfaces to establish the control and management planes.
  • All DCPs are connected to all DCFs via 400G fabric interfaces in a Clos-3 topology to establish a scheduled, non-blocking data plane between all network ports in the DDC (modeled in the short sketch after this list).
  • The DCP hosts both fabric ports for connecting to DCFs and network ports for connecting to other network devices using standard Ethernet/IP protocols, while the DCF does not host any network ports.
  • The DCC is in fact a server and runs the main base operating system (BaseOS) that defines the functionality of the DDC.
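
The connectivity rules above can be summarized in a few lines of code. The sketch below is purely illustrative; the component counts (four DCPs, two DCFs) are arbitrary and not taken from any specific deployment.

# Illustrative model of DDC connectivity per the rules above; component
# counts are arbitrary examples only.
from itertools import product

dcps = [f"DCP-{i}" for i in range(4)]   # packet forwarders (line cards)
dcfs = [f"DCF-{i}" for i in range(2)]   # fabric elements (fabric cards)
dcc, dcm = "DCC-0", "DCM-0"             # controller (server) and manager

links = []
# Control/management plane: every device connects to the DCM over 10GbE.
for dev in dcps + dcfs + [dcc]:
    links.append((dev, dcm, "10GbE mgmt"))
# Data plane: every DCP connects to every DCF over 400G fabric links (Clos-3).
for dcp, dcf in product(dcps, dcfs):
    links.append((dcp, dcf, "400G fabric"))

print(f"{len(links)} links total")
for a, b, kind in links:
    print(f"{a} <-> {b} [{kind}]")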

Advantages of the DDC are the following:

  • Its capacity: since there is no metal chassis enclosure that must hold all the components in a single machine, a wider Clos-3 topology can be built that expands beyond the boundaries of a single rack, making it possible for thousands of interfaces to coexist on the same network element (router).
  • It is an open-standard definition, which makes it possible for multiple vendors to implement the components. As a result, it is easier for the operator (telco) to establish a multi-source procurement methodology and stay in control of pricing and the supply chain as its network evolves.
  • It is a distributed array of components, each of which can operate standalone as well as act as part of the DDC. This gives services running over a DDC-based router a much higher level of resiliency than services running over a chassis-based router.

AT&T announced that it uses DDC clusters to run its MPLS core in a DriveNets-based implementation, as well as standalone edge and peering IP networks, while other operators worldwide are also using DDC for such functions.

Figure 1: High level connectivity structure of a DDC

……………………………………………………………………………………………………………………………………………………..

Legend: LC = DCP, Fabric module = DCF, RP = DCC, Ethernet SW = DCM (as defined above).

Source: OCP DDC specification

DDC implements the concept of disaggregation. Decoupling the control plane from the data plane allows the software and hardware to be sourced from different vendors and assembled into a unified network element at deployment. The concept is relatively new, yet it had many successful deployments before being used as part of DDC.

Disaggregation in Data Centers:

Detaching the data plane from the control plane has seen major adoption in data center networks in recent years. Sourcing the software (control plane) from one vendor while the hardware (data plane) comes from another mandates that the interfaces between the two be precise and well defined. This gave rise to components developed by certain vendors and contributed to the community, allowing the concept of disaggregation to extend beyond implementations in specific customers’ networks.

Such components include the Open Network Install Environment (ONIE), which enables installing the software image onto a platform (typically a single-chip 1RU/2RU device), as well as the Switch Abstraction Interface (SAI), which enables the software to access the application-specific integrated circuit (ASIC) directly and operate on the data plane at line-rate speeds.

Two examples of implementing disaggregation networking in data centers are:

  • Microsoft, which developed its network operating system (NOS) SONiC to run on SAI and later contributed its source code to the networking community via the OCP and the Linux Foundation.
  • Meta, which defined devices called “Wedge” that are purpose-built to accept various NOS versions via standard interfaces.

These two hyperscaler examples are indicative of the engineering effort required to develop such interfaces and functions. The fact that these components have been made open is what enables smaller consumers to enjoy the benefits of disaggregation without needing to maintain large engineering groups.

The data center networking world today has a healthy ecosystem of hardware (ASIC and system) vendors as well as software (NOS and tools) vendors, which together form a valid and widely used alternative to the traditional monolithic model of vertically integrated systems.

Reasons for deploying a disaggregated networking solution come down to two. The first is the clear financial advantage of buying white-box equipment versus branded devices that carry a premium price. The second is flexibility: such a solution gives the customer better control over the network and how it is run, and gives network administrators room to innovate and adapt the network to their unique and changing needs.

The image below reflects a partial list of the potential vendors supplying components within the OCP networking community. The full OCP Membership directory is available at the OCP website.

Between DC and Telco Networking:

Data center networks are built to provide connectivity to many servers that hold data or answer user queries. The volume of data, as well as the number of queries against it, grows constantly as humanity’s consumption of communication services grows. Traffic in and out of these servers is divided into north/south traffic, which enters and leaves the data center, and east/west traffic, which runs inside the data center between servers.

As a general pattern, north/south traffic represents most of the traffic flows within the network, while east/west traffic consumes most of the bandwidth. This is a simplification of data center traffic, but it is accurate enough to explain how data center networks are built and operated.

A data center switch connects to servers over high-capacity links. This Tier#1 switch is commonly known as a top-of-rack (ToR) switch and is a high-capacity, non-blocking, low-latency switch with minimal routing capabilities.

  • The ToR is then connected to a Tier#2 switch that enables it to connect to other ToRs in the data center.
  • The Tier#2 switches are connected to Tier#3 switches to further grow the connectivity.
  • Traffic volumes are mainly east/west and are best kept within the same tier of the network to avoid scaling the routing tables.
  • In theory, Tier#4/5/6 of this network can exist, but this is not common (a simple sizing sketch of such a tiered fabric follows this list).
  • The highest tier of the data center network is also connected to routers that interface the data center to the outside world (primarily the Internet); these routers are of a different design than the tiers of switching devices mentioned earlier.
  • These externally facing routers are commonly connected in a dual-homed fashion to create a level of redundancy for traffic entering and leaving the data center. Traffic at the data center’s ingress and egress is also firewalled, load-balanced, address-translated, etc.; these functions are sometimes carried out by the router and can also be handled by dedicated appliances.
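
To make that tiering concrete, here is a back-of-the-envelope sizing sketch of a non-blocking three-tier fat-tree (a folded Clos) built from identical k-port switches. The radices below are illustrative examples, not figures from the article; the k³/4 server count is the textbook fat-tree result.

# Back-of-the-envelope sizing of a non-blocking 3-tier fat-tree (folded Clos)
# built from identical k-port switches. Radix values are illustrative only.
def fat_tree_capacity(k: int) -> dict:
    return {
        "pods": k,                      # k pods, each with k/2 ToR + k/2 aggregation switches
        "tor_switches": k * k // 2,     # total ToR switches
        "servers": k ** 3 // 4,         # classic fat-tree server count
    }

for k in (32, 64, 128):
    cap = fat_tree_capacity(k)
    print(f"k={k:3d}: {cap['pods']} pods, {cap['tor_switches']} ToRs, "
          f"up to {cap['servers']:,} servers (non-blocking)")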

As data center density grew to provide better service levels to consumers, the amount of traffic running between data center instances also grew, and data center interconnect (DCI) traffic became predominant. A DCI router at the ingress/egress point of a data center instance is now common practice; these devices typically connect over longer distances of fiber (tens to hundreds of km), either to other DCI routers or to telco routers that form the infrastructure of the world wide web (a.k.a. the Internet).

Where data center network devices shine is in their high capacity and low latency; they are optimized for these attributes from the ASIC level up through the NOS they run. They fall short, however, in routing scale and in the distances they can span between neighboring routers. Telco routers, by contrast, are built to hold enough routes to “host” the Internet (a ballpark industry figure is 1M routes, per CIDR) and use a different buffer structure (both size and allocation) to enable long-haul connectivity. A telco router has a superset of capabilities compared with a data center switch and is priced differently, due to the hardware it uses and the higher software complexity it requires, which acts as a filter that narrows the number of vendors providing such solutions.

Attributes of an AI Cluster:

As described in the Part I article, HPC/AI workloads demand certain attributes from the network. Size, latency, losslessness, high bandwidth and scale are all mandatory requirements; some available solutions are described in the next paragraphs.

Chassis Based Solutions:

This solution derives from Telco networking.

Chassis-based routers are built as black boxes with all their internal connectivity concealed from the user. The architecture used to implement the chassis is often line cards and fabric cards in a Clos-3 topology, as described earlier for the structure of the DDC. As a result, chassis behavior is predictable and reliable; it is in fact a lossless fabric wrapped in sheet metal with only its network interfaces facing the user. The caveat of a chassis in this case is its size. While a well-orchestrated fabric is a great fit for the network needs of AI workloads, its limited capacity of a few hundred server-facing ports makes this solution suitable only for very small deployments.

If chassis are used at a scale larger than the number of ports a single chassis provides, a Clos of chassis (in fact a non-balanced Clos-8 topology) is required, and this breaks the fabric behavior of the model.

Standalone Ethernet Solutions:

This solution derives from data center networking.

As described previously in this paper, data center solutions are fast and can carry high-bandwidth traffic. They are, however, based on standalone single-chip devices connected in a multi-tiered topology, typically a Clos-5 or Clos-7. As long as traffic runs only within the same device in this topology, traffic flow behavior will be close to uniform. But with the number of interfaces per device limited to the number of servers physically located in one rack, a single ToR device cannot satisfy the requirements of a large infrastructure. Expanding to higher tiers of the network also means traffic patterns begin to alter, and application run-to-completion time is impacted. Furthermore, add-on mechanisms must be mounted onto the network to turn the lossy network into a lossless one. Another attribute of AI workloads is the uniformity of their traffic flows from the perspective of the packet header: different packets of the same flow are identified by the data plane as the same traffic and carried over the exact same path regardless of the network’s congestion state, leaving parts of the Clos topology poorly utilized while other parts can be overloaded to the point of traffic loss.
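
The “same flow, same path” behavior comes from hash-based load balancing (ECMP-style), in which the forwarding hardware hashes the packet header and always picks the same uplink for a given flow. The toy sketch below illustrates the effect; the hash function, port numbers and flow set are illustrative assumptions, not any vendor's implementation.

# Illustration of why hash-based (ECMP-style) path selection handles AI traffic
# poorly: every packet of a flow hashes to the same uplink, so a handful of
# large, long-lived flows can pile onto a few links while others sit idle.
import hashlib
from collections import Counter

NUM_UPLINKS = 8

def pick_uplink(src_ip, dst_ip, src_port, dst_port, proto="TCP"):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % NUM_UPLINKS

# A few long-lived "elephant" flows, typical of synchronized AI collectives.
flows = [("10.0.0.1", "10.0.1.1", 40000 + i, 4791) for i in range(8)]

load = Counter(pick_uplink(*f) for f in flows)
for uplink in range(NUM_UPLINKS):
    print(f"uplink {uplink}: {load.get(uplink, 0)} flow(s)")

With only a handful of long-lived, high-bandwidth flows, as in synchronized AI training traffic, it is easy for several flows to land on the same uplink while other uplinks stay idle.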

Proprietary Locked Solutions:

Additional solutions in this field are implemented as dedicated interconnects for a specific array of servers. This is more common in the scientific domain of heavy compute workloads, such as research labs, national institutes, and universities. As proprietary solutions, they force the customer into one interconnect provider that serves the entire server array, starting from the server itself and ending at all other servers in the array.

The nature of this industry is that a one-time budget is allocated to build a “supercomputer,” which means the resulting compute array is not expected to grow further but only to be replaced or superseded by a newer model. This makes the vendor lock-in of choosing a proprietary interconnect solution more tolerable.

On the plus side, such solutions perform very well; examples at the top of the list of the world’s most powerful supercomputers use interconnects from HPE (Slingshot), Intel (Omni-Path), Nvidia (InfiniBand) and others.

Perspective from Silicon Vendors:

Solutions like the distributed synchronized fabric (DSF) were presented at the most recent OCP Global Summit in October 2022 as part of the networking project discussions. Both Broadcom and Cisco (separately) made claims of superior silicon implementations, citing improved power consumption or a superior implementation of the Virtual Output Queueing (VOQ) mechanism.

Conclusions:

There are differences between AI and HPC workloads and the required network for each.

While the HPC market finds proprietary implementations of interconnect solutions acceptable for building secluded supercomputers for specific uses, the AI market requires solutions that allow more flexibility in deployment and vendor selection. This boils down to Ethernet-based solutions of various types.

Chassis and standalone Ethernet-based solutions are reasonable up to the scale of a single machine, but fail to scale efficiently beyond a single interconnect machine while maintaining the performance required by the running workloads.

A DSF presents a standard solution that matches the forecasted industry need both in terms of scale and in terms of performance. Different silicon implementations that can construct a DSF are available; they differ slightly, but all show substantial benefits versus chassis or standard Ethernet solutions.

This paper does not cover the different silicon types implementing the DSF architecture, only the alignment of DSF attributes with the requirements for interconnect solutions built to run AI workloads, and the advantages of DSF versus other solutions that are predominant in this space.

–>Please post a comment in the box below this article if you have any questions or requests for clarification for what we’ve presented here and in part I.

References:

Using a distributed synchronized fabric for parallel computing workloads- Part I

SK Telecom inspects cell towers for safety using drones and AI

SK Telecom, South Korea’s largest wireless carrier, announced on Tuesday that it has developed a new cell tower safety inspection system using drones and image analysis artificial intelligence (AI). The newly developed image analysis AI model checks the status of nuts and bolts by analyzing images taken by drones.

Cell towers with antennas for sending and receiving telecommunications signals are installed across the country, reaching heights of up to an estimated 75 meters. Since cell towers require regular maintenance to prevent accidents caused by deterioration such as corrosion or loosening of nuts and bolts, specialized personnel previously had to climb them and inspect their condition with the naked eye.

Engineers from a subsidiary of SK Telecom Co. inspect a cell tower in this photo provided by the wireless carrier on Jan. 31, 2023.

…………………………………………………………………………………………………………………………………………….

Now with an intelligent safety inspection system in place, not only can SK Telecom prevent accidents due to aging cell towers, but it can also ensure the safety of workers by minimizing the need to go up the cell towers. Moreover, the company can drive up work productivity through the application of an AI model that automatically identifies defects by analyzing images taken by drones.

Previously, safety inspectors had to study around 100 drone-captured images to complete the inspection of a single cell tower. With the adoption of the new AI analysis model, SK Telecom has reduced the time required for the process by 95%, while increasing the reliability and consistency of the analysis results.
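
SK Telecom has not published details of its model, but the inference loop for this kind of drone-image defect check typically looks like the hypothetical sketch below; the model weights file, class labels and image folder are assumptions for illustration only, not SKT's system.

# Hypothetical sketch of a drone-image defect check: run a binary
# "fastener OK / defect" classifier over the images captured for one tower.
from pathlib import Path
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["ok", "defect"]          # assumed label order
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("tower_fastener_classifier.pt", map_location="cpu"))  # hypothetical weights
model.eval()

flagged = []
with torch.no_grad():
    for img_path in sorted(Path("drone_images/tower_001").glob("*.jpg")):  # ~100 images per tower
        x = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
        probs = model(x).softmax(dim=1)[0]
        if probs[CLASSES.index("defect")] > 0.5:
            flagged.append((img_path.name, float(probs[1])))

print(f"{len(flagged)} image(s) flagged for human review")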

The company says, going forward, it will enhance the system even further by adding inspection items such as wind pressure safety/inclination. It will also look to improve the AI model and link the application with the safety management system.

In addition to drone-based cell tower inspections, the telecom company is actively applying AI to other areas of its network, including equipment error/anomaly detection, power cost reduction, and work completion inspection.

Park Myung-soon, SKT’s vice president and head of Infra DT Office, said: “By building an intelligent safety inspection system that can complement the existing visual inspection, we have secured greater safety for workers. We will continue to make efforts to achieve AI transformation of our telecommunication networks, while focusing on developing our field workers into experts who can develop and operate AI.”

References:

http://koreabizwire.com/sk-telecom-inspects-cell-towers-using-drones-and-ai/239441

South Korean telecom giant innovates safety inspection with drones

 

 

Allied Market Research: Global AI in telecom market forecast to reach $38.8 billion by 2031 with CAGR of 41.4% (from 2022 to 2031)

Executive Summary:

Artificial Intelligence (AI) in telecom uses software and algorithms that approximate human perception to analyze big data such as data consumption, call records, and application usage in order to improve the customer experience. AI also helps telecom operators detect network flaws and supports network security, network optimization and virtual assistance. Moreover, AI enables the telecom industry to extract insights from its vast data sets, making it easier to manage daily business, resolve issues more efficiently, and provide improved customer service and satisfaction.

The growing adoption of AI solutions in various telecom applications is driving market growth. The rising number of AI-enabled smartphones, with features such as image recognition, robust security and voice recognition that traditional phones lack, is boosting the growth of AI in the telecommunication market. Furthermore, AI provides a simpler, easier interface to complex telecom processes and services. In addition, growing over-the-top (OTT) services, such as video streaming, have transformed how audio and video content is disseminated and consumed; with more consumers turning to OTT services, demand for bandwidth has grown considerably, and carrying this ever-growing OTT traffic leads to high operational expenditure (OpEx) for the telecommunication industry. AI helps the telecom industry reduce operational costs by minimizing the human intervention needed for network configuration and maintenance. The major restraint on the AI in telecommunication market, however, is the incompatibility between telecommunication systems and AI technology. Conversely, the increasing penetration of AI-enabled smartphones and the advent of 5G technology in smartphones are expected to provide major growth opportunities, since advancements such as 5G in mobile and the rising need to monitor content on telecommunication networks to eliminate human error are driving market growth. For instance, with the Chinese government working to improve its network and telecommunication services, China Telecom Corporation has started up a new 5G base station in Lanzhou city. These factors are expected to provide numerous opportunities for the expansion of the AI in telecommunication market during the forecast period.

Allied Market Research published a report, titled, “AI in Telecommunication Market by Component (Solution, Service), by Deployment Model (On-Premise, Cloud), by Technology (Machine Learning, Natural Language Processing (NLP), Data Analytics, Others), by Application (Customer Analytics, Network Security, Network Optimization, Self-Diagnostics, Virtual Assistance, Others): Global Opportunity Analysis and Industry Forecast, 2021-2031.”

According to the report, the global AI in telecommunication industry generated $1.2 billion in 2021, and is estimated to reach $38.8 billion by 2031, witnessing a CAGR of 41.4% from 2022 to 2031. The report offers a detailed analysis of changing market trends, top segments, key investment pockets, value chain, regional landscape, and competitive scenario.
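
Those headline numbers are internally consistent: compounding the 2021 base at the stated CAGR for ten years lands close to the 2031 forecast, as the quick check below shows (the small gap suggests the report's precise 2021 base is slightly above $1.2 billion).

# Quick consistency check of the forecast figures quoted above.
base_2021_usd_bn = 1.2      # reported 2021 market size, $ billion
cagr = 0.414                # reported CAGR, 2022-2031
years = 2031 - 2021

forecast_2031 = base_2021_usd_bn * (1 + cagr) ** years
print(f"Implied 2031 market size: ${forecast_2031:.1f}B")   # ~$38.3B vs. the reported $38.8B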

Drivers, Restraints, and Opportunities:

Growing adoption of AI solutions in various telecom applications, AI’s ability to provide a simpler interface to telecom services and to reduce the human intervention needed for network configuration and maintenance, and the growing demand for bandwidth as more consumers turn to OTT services all drive growth of the global AI in telecommunication market. However, incompatibility between telecommunication systems and AI technology hampers global market growth. On the other hand, the increasing penetration of AI-enabled smartphones and the advent of 5G technology in smartphones are likely to create growth opportunities for the global market in the coming years.

Covid-19 Scenario:

  • The global artificial intelligence in telecommunication market saw stable growth during the COVID-19 pandemic, owing to increasing digital penetration and the rise in automation.
  • Moreover, during the pandemic, telecommunications infrastructure kept businesses, governments, and communities connected and operational. The social and financial disruption caused by the pandemic forced people to depend on technologies such as AI for information and remote working.
  • AI also helped the telecom industry reinvent customer relationships by identifying personalized needs and engaging with customers through hyper-personalized, one-to-one contact. It also helped configure fixed-line and mobile-network bundles that combine VPN, teleconferencing, and productivity apps.

The solution segment to dominate in terms of revenue during the forecast period:

Based on component, the solution segment was the largest in 2021, accounting for more than two-thirds of the global AI in telecommunication market, and is expected to maintain its leadership during the forecast period, owing to the adoption of AI solutions by various end users to automate processes. The service segment, on the other hand, is projected to witness the fastest CAGR of 44.9% from 2022 to 2031, due to a surge in the adoption of managed and professional services.

The on-premise segment to garner the largest revenue during the forecast period:

Based on deployment model, the on-premise segment held the largest market share, nearly three-fifths of the global AI in telecommunication market in 2021, and is expected to maintain its dominance during the forecast period because it provides added data security. The cloud segment, however, is projected to witness the fastest CAGR of 43.8% from 2022 to 2031, as cloud deployment provides flexibility, scalability, complete visibility, and efficiency to all processes.

The machine learning segment to exhibit a progressive revenue growth during the forecast period:

Based on technology, the machine learning segment held the largest market share, more than two-fifths of the global AI in telecommunication market in 2021, and is expected to maintain its dominance during the forecast period, because machine learning algorithms are designed to keep improving in accuracy and efficiency. The data analytics segment, however, is projected to witness the fastest CAGR of 46.1% from 2022 to 2031, as it helps telecom companies increase profitability by optimizing network usage and services.


Asia-Pacific to lead in terms of revenue by 2031:

Based on region, North America was the largest market in 2021, capturing more than one-third of the global AI in telecommunication market. The growth in the region can be attributed to the infrastructure development and technology adoption in countries like the U.S. and Canada. However, the market in Asia-Pacific is expected to lead in terms of revenue and manifest the fastest CAGR of 45.7% during the forecast period, owing to the growing digital and economic transformation of the region.

Leading Market Players:

  • Intel Corporation
  • Nuance Communications, Inc.
  • AT&T
  • Infosys Limited
  • ZTE Corporation
  • IBM Corporation
  • Google LLC
  • Microsoft
  • Salesforce, Inc.
  • Cisco Systems, Inc.

The report analyzes these key players of the global AI in telecommunication market. These players have adopted various strategies such as expansion, new product launches, partnerships, and others to increase their market penetration and strengthen their position in the industry. The report is helpful in determining the business performance, operating segments, product portfolio, and developments by every market player.

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Download free sample of this report at:

https://www.alliedmarketresearch.com/request-sample/9717

You may buy this report at:

https://www.alliedmarketresearch.com/checkout-final/a6dc279b20c4a61f8a7f328812bfd76c

……………………………………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.prnewswire.com/news-releases/ai-in-telecommunication-market-to-reach-38-8-billion-globally-by-2031-at-41-4-cagr-allied-market-research-301722277.html

https://www.alliedmarketresearch.com/ai-in-telecommunication-market-A09352

Global AI in Telecommunication Market at CAGR ~ 40% through 2026 – 2027

The case for and against AI in telecommunications; record quarter for AI venture funding and M&A deals

Emerging AI Trends In The Telecom Industry

 

 
