Co-Packaged Optics to play an important role in data center switches

The commercialization of co-packaged optics (CPO) has been long anticipated but is becoming increasingly desirable as data needs accelerate. Co-packaged optics is an advanced heterogeneous integration of optics and silicon on a single packaged substrate, aimed at addressing next-generation bandwidth and power challenges.

As the bandwidth of data center switches increases, a disproportionate amount of power is being dedicated to the switch-to-optics interface. Reducing the physical separation between these two components by co-packaging them enables system power savings, which are essential to continued bandwidth scaling.

CPO brings together a wide range of expertise in fiber optics, digital signal processing (DSP), switch ASICs, and state-of-the-art packaging and test to provide disruptive system value for the data center and cloud infrastructure.

The companies and institutions working on CPO have made great strides in developing suitable electronic components. But hundreds of meters of fiber will be packed into the switch box for the first time, and faceplate connections will have unprecedented densities. As a result, the design and development of optical system solutions will also be critical to the success of CPO. Optical components with performance tailored to the CPO application, and effective solutions for managing the fiber in the switch box, are vital in optimizing the complete optical system. Three aspects of CPO deployment, in particular, hinge on the properties of the fiber and the optical interfaces: optical power loss, the trade-off between minimizing bend loss and controlling multipath interference (MPI), and maintaining the polarization state if external lasers are used.

Image Courtesy of Broadcom

……………………………………………………………………………………………………………………………………….

Data centers face substantial challenges as they scale, particularly in reducing power dissipation and cost per bit. CPO will play a significant role in helping to meet those challenges.  In today’s data center switches, external fiber optic connections that carry data terminate on pluggable transceivers on the housing faceplate. The optical data stream is coupled to the electrical signals at that interface.

Today, pluggable transceivers communicate with the switch application-specific integrated circuit (ASIC) via copper traces on printed circuit boards. Under the CPO paradigm, the optoelectronic conversion is pushed back from the faceplate to the switch substrate, and those long electrical traces are replaced with virtually loss-free optical fiber.

In a CPO realization of a 51.2 Tbps switch, the substrate connects a central switch ASIC to 16 optoelectronic (O/E) tiles on the substrate perimeter. These tiles are connected to optical fiber signal cables that run to the switch box faceplate, and they receive optical power from external lasers, which they modulate to produce the outgoing optical signal stream.

With CPO, the fiber path continues past a connector at the faceplate and into the switch box, ending at photonic integrated circuits (PICs) on optical tiles attached to the switch substrate. This shift presents the novel challenge of routing and connecting hundreds of optical fibers within a compact and crowded space, creating a need to minimize the footprint of the optics while still achieving performance and reliability targets.


Minimizing the optics footprint could mean routing fiber on the shortest path – consistent with the fiber properties – between the optical tile and its associated faceplate connector, but this would lead to at least eight different cable lengths for a 51.2 Tbps switch with 16 optical tiles and mirror symmetry. This proliferation of parts might be undesirable from a manufacturing point of view. If a reduced set of cable lengths were to be used, then the “constant length” routing would have to accommodate excess cable in some paths.

Image Courtesy of LightWave 

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

With space inside the switch box at a premium, the risk of mechanical interference with other components should be reduced as much as possible. When building switch boxes containing hundreds of fibers, it will be essential to have them deployed predictably while minimizing trouble spots like crossings and avoiding issues such as cable buckling.

This management goal will be greatly facilitated by using tightly bent fiber to follow short paths between the faceplate and chip. With typical telecommunications-grade single-mode fiber, too much light may be lost at these bends, but we can mitigate this by using bend-insensitive fiber designs.  However, in using such designs, care will be required to control multipath interference (MPI).

Power can be coupled into and propagated in more than one fiber mode at each optical interface in the switch box (e.g., connectors and fiber array units (FAUs)). Given the short fiber lengths likely to be used in CPO, power in higher-order modes (HOMs) will not be extinguished before the following interface, where the multiple modes will interfere with each other – the phenomenon known as MPI – ultimately causing wavelength-dependent power at the detector. At some wavelengths, the reduction in detected power, expressed in decibels, can be as much as twice the loss that the interfaces would cause independently.
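The wavelength-dependent ripple can be illustrated with a simple two-path interference model: a small fraction of the field couples into a HOM at one interface and recombines with the fundamental mode at the next, so the detected power oscillates with wavelength. Below is a minimal sketch; the coupling, length, and index values are illustrative assumptions, not CPO design numbers.

```python
import numpy as np

# Two-path MPI model: fundamental mode plus one parasitic HOM path
# created at one interface and recombined at the next (all values assumed).
wavelengths = np.linspace(1.30e-6, 1.32e-6, 2000)  # scan band, meters
span_m = 0.5            # fiber length between the two interfaces
delta_n = 1e-3          # effective-index difference, fundamental vs. HOM
crosstalk_db = -20.0    # per-interface power coupling into the HOM

a = 10 ** (crosstalk_db / 20)                           # field coupling per interface
phase = 2 * np.pi * delta_n * span_m / wavelengths      # differential phase of the HOM path
power = np.abs(1 + (a ** 2) * np.exp(1j * phase)) ** 2  # detected power (fundamental = 1)

ripple_db = 10 * np.log10(power.max() / power.min())
print(f"Peak-to-peak MPI ripple: {ripple_db:.3f} dB")
```

Raising the loss of the HOM path (for example, by lowering the fiber cut-off wavelength, as discussed below) shrinks the parasitic term and hence the ripple.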

Thus, unmitigated MPI could undermine some of the benefits of using fiber with low bend loss. For these systems, a bend-insensitive fiber that also suppresses MPI over very short fiber lengths is needed. One potential approach is to reduce the fiber cut-off wavelength to increase HOM loss substantially.

Even if MPI is reduced to insignificance, the coupling losses at those interfaces matter, too. The redesigned bend-insensitive fiber must maintain low coupling loss to Corning® SMF-28® Ultra or other fiber used in the data center to connect switches. This imposes constraints on the mode-field diameter of the CPO signal fiber.

To permit practical, low-cost provisioning of the switch box optical cables, the fiber management approach must include some means to accommodate length variations introduced by the cable manufacturing process. One strategy is to tie down the cable at points along its path and allow it to take a relatively unconstrained path between these tie-down points. The smaller the radius of curvature of that path, the less a bundle of cables will spread out for a given length variation.

An alternative is to provide specific accumulator structures to contain excess cable length. To keep such structures unobtrusive, the fiber should tolerate deployment in very tight loops, as small as 10 mm in diameter, while retaining its low bend loss and high reliability. These attributes are required of fiber that lends itself to both “shortest path” and “constant length” routing.
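A quick calculation shows why tolerance of tight loops keeps such accumulators compact. Each full turn of a 10-mm-diameter loop stores only about 31 mm of fiber, so absorbing a realistic length variation takes a number of turns; the sketch below (with an assumed 0.5 m of excess cable) makes the arithmetic concrete.

```python
import math

# How many turns of a tight loop absorb a given excess cable length?
# The 10-mm loop diameter is from the article; the excess length is assumed.
loop_diameter_m = 0.010
excess_length_m = 0.50

per_turn_m = math.pi * loop_diameter_m          # fiber stored per turn (~31.4 mm)
turns = math.ceil(excess_length_m / per_turn_m)
print(f"{per_turn_m * 1e3:.1f} mm per turn -> {turns} turns for {excess_length_m} m excess")
```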

Conclusions:

CPO will soon be a reality that relies on a system of complex, interconnected components working well together. For optimum overall performance, these components must be designed with the specific requirements of CPO in mind, which for the optical subsystem include efficient and unobtrusive deployment within a crowded switch box, low power losses, absence of MPI impairments, and good reliability. Some CPO realizations also need optical polarization state control.

The familiar fiber and connectivity products, while having impressive attributes, are not optimum for the CPO application, and there is great scope for enhancing the performance of the optics by moving beyond default solutions to those specifically designed for the role.

References:

https://www.broadcom.com/info/optics/cpo

https://www.lightwaveonline.com/data-center/article/14300451/datacenter-providers-see-future-proofed-possibilities-in-co-packaged-optics

Coherent Optics: Synergistic for telecom, Data Center Interconnect (DCI) and inter-satellite Networks

Heavy Reading: Coherent Optics for 400G transport and 100G metro edge

ABI Research: Telco transformation measured via patents and 3GPP contributions; 5G accelerating in China

Every single telecom operator in the world is now attempting to transform from telco to techco, to break free from their antiquated, legacy, and stale connectivity business and evolve to sell technology platforms, a considerably more lucrative and promising business. Their success is not guaranteed, and many find it difficult – if not impossible – to unshackle themselves from their history and comfort zone.

ABI Research now says it’s measuring the progress of telco transformation by quantifying the number of patents that telcos hold and also measuring their involvement in standards-setting initiatives like 3GPP (whose specs are standardized by ETSI and ITU-R).

“Telecom operators from China and Japan are currently at the forefront of technology transformation, which shows in their involvement in 3GPP and patent holdings,” says Dimitris Mavrakis, Senior Research Director at ABI Research. “China Mobile, NTT Docomo, and China Telecom have invested time, effort, and capital in both domains, which now translates to significant expertise, knowledge, and recognition in the industry. Although this is not the only metric for innovation, these leading network operators are well suited to transforming their business, technology, and strategic platforms to look to the future.”

The findings of the latest ABI Research report on telecom operator innovation indicate that operators consistently contribute to 3GPP work, accounting for approximately 8% of total contributions. Of these telecom operator contributions to 3GPP, 43% originate from China, 29% from Japan, 14% from Europe, and 12% from the United States. Leading operators are China Mobile, NTT Docomo, China Telecom, Orange, Vodafone, and Deutsche Telekom. Their Standards Essential Patent (SEP) holdings are similar, with China Mobile and NTT Docomo leading the market.

Standards contributions and patent holdings are good measures of willingness to innovate and get involved in leading the market. “Telecom operators must get involved and not let other companies lead the direction of the market – especially when geopolitics and semiconductor supply constraints are affecting the market. With 5G Advanced and upcoming 6G, they have the technology to innovate, but they must now take more risks and lead the market,” Mavrakis concludes.

Fierce Wireless asked why T-Mobile didn’t rank, given the good progress that it’s made with its 5G SA network and network slicing trials. Mavrakis said, “T-Mobile US is part of Deutsche Telekom, which is represented in the chart above. They are indeed making progress toward network slicing, but our report measures 3GPP standards activities and patents, which is a different area of innovation.”

These findings are from ABI Research’s Telco versus Techco: Operators’ Role in Shaping Cellular Innovation and 3GPP Standards application analysis report. This report is part of the company’s Cellular Standards & IPR research service, which includes research, data, and ABI Insights. Based on extensive primary interviews, Application Analysis reports present in-depth analysis on key market trends and factors for a specific application, which could focus on an individual market or geography.

…………………………………………………………………………………………………………………………………

Separately, ABI Research says 5G end-user services deployment continues to accelerate in China, which is very much leaving other markets in its wake. Not only does the country have 3.2 million 5G base stations up and running, but it also has a wide range of 5G-to-Business (5GtoB) applications.

According to China’s Ministry of Industry and Information Technology (MIIT), the country has built or upgraded more than 3.2 million 5G base stations—accounting for 30% of the overall mobile base stations nationwide—which has already exceeded the initial target of deploying 2.9 million 5G base stations by the end of 2023. A fourth mobile operator, China Broadnet, has also been issued a 5G mobile cellular license to help stimulate consumer and enterprise competition.

5G subscriber adoption has been robust. At the end of 1Q 2023, the number of 5G subscriptions in the country had increased to around 1.3 billion, which is an increase of more than 53% from approximately 850 million 5G subscribers as of March 2022. The China Telecom Research Institute reported that the average download speed for 5G is a very robust 340 Megabits per Second (Mbps).

China’s mobile operators have seen an overall increase in service revenue. China Mobile reported an 8.1% Year-over-Year (YoY) increase in telecommunication service revenue, with mobile Average Revenue per User (ARPU) up 0.4% to CNY49 (US$6.9). China Telecom also reported a 3.7% YoY increase in mobile communications service revenue with mobile ARPU up 0.4% to CNY45.2 (US$6.3), whereas China Unicom saw a 3-year consecutive growth in mobile ARPU to CNY44.3 (US$6.2).

Growth in revenue has been primed by an expansion in the revenue models the telcos can offer. Revenue for China Mobile’s 5G private networks grew 107.4% YoY, reaching RMB2.55 billion (US$365.5 million) by December 2022. Meanwhile, China Unicom experienced a spike in 5G industry virtual private network customers from 491 to 5,816 between June 2022 and June 2023. Across the board, the three operators have collectively reached a cumulative total of more than 49,000 5G commercial enterprise projects, with China’s MIIT reporting that the operators have built more than 6,000 5G private networks to date.

China’s mobile cellular ecosystem is not resting on its laurels. Urged on by China’s government, the sector has been embracing 5G-Advanced, as underpinned by The 3rd Generation Partnership Project’s (3GPP) Release 18. Included in Release 18 are greater support for Artificial Intelligence (AI) integration, 10 Gigabits per Second (Gbps) peak downlink and 1 Gbps peak uplink experiences, support for a wider range of Internet of Things (IoT) scenarios, and integrated sensing and communication. Information gathered through sensors can make communication more deterministic, which improves the accuracy of channel condition assessment. Another example is dynamic beam alignment for vehicle communications using Millimeter Wave (mmWave).

China’s mobile operators and vendors are keen to adopt 5G-Advanced due to its ability to support a 10X densification of IoT devices compared to 5G. There is also support for passive 5G IoT devices that can be queried by campus and/or indoor small cells to provide telemetry-related data. Instead of a field or warehouse worker, or even an Autonomous Guided Vehicle (AGV), carrying a portable Radio Frequency Identification (RFID) reader, the campus cellular network can track asset tags in real time and remotely, eliminating the need to check up and down warehouse aisles individually.

5G-Advanced (not yet standardized) deployments are materializing in China. China Mobile Hangzhou launched its Dual 10 Gigabit City project in early 2023. This project focuses on using 5G-Advanced technologies to support applications such as glasses-free Three-Dimensional (3D) experiences on different devices during the Asian Games. Such early experimental projects are not limited to one city in China. To the northeast of Hangzhou, China Mobile Shanghai has also started its own project to build the first 5G-Advanced intelligent 10 Gigabit Everywhere City (10 GbE City). The network initially uses the 2.6 Gigahertz (GHz) band in the main urban areas before coverage expands to the entirety of Shanghai.

5G deployment, integration, and usage are accelerating. The China Academy of Information and Communications Technology anticipates that US$232 billion will have been invested in 5G by 2025. An additional US$37.9 billion (RMB3.5 trillion) of investment will also take place in the upstream and downstream segments of the industrial chain. During a 2023 Science and Technology Week and Strategic Emerging Industries Co-creation and Development Conference, MIIT stated that 5G connectivity has been integrated into “60 out of 97 national economic categories, covering over 12,000 application themes.” ABI Research has not verified all the use cases reported by MIIT, but ABI Research’s ongoing research into the 5G-to-Business (5GtoB) market in Asia has validated that there are a wide range of 5GtoB trials, pilots, and commercial rollouts taking place in China.

A further ABI Insight that you may find interesting is “China Telecom Is the First Operator Worldwide to Launch a “Device-to-Device” Service on a Smartphone to Improve Coverage.”

About ABI Research:

ABI Research is a global technology intelligence firm delivering actionable research and strategic guidance to technology leaders, innovators, and decision makers around the world. Our research focuses on the transformative technologies that are dramatically reshaping industries, economies, and workforces today.

References:

https://www.prnewswire.com/news-releases/3gpp-activities-and-patent-holdings-paint-a-bleak-picture-of-telcos-failure-to-innovate-toward-techco-status-301981014.html

https://www.fiercewireless.com/5g/abi-research-praises-china-mobile-ntt-docomo-5g-innovation

https://www.abiresearch.com/market-research/insight/7782761-china-is-leaving-the-rest-of-the-world-in-/

6th Digital China Summit: China to expand its 5G network; 6G R&D via the IMT-2030 (6G) Promotion Group

ABI Research: 5G Network Slicing Market Slows; T-Mobile says “it’s time to unleash Network Slicing”

ABI Research: Expansion of 5G SA Core Networks key to 5G subscription growth

ABI Research: Major contributors to 3GPP; How 3GPP specs become standards

ABI Research: 5G-Advanced (not yet defined by ITU-R) will include AI/ML and network energy savings

Proposed solutions to high energy consumption of Generative AI LLMs: optimized hardware, new algorithms, green data centers

Introduction:

Many generative AI tools rely on a type of natural-language processing called large language models (LLMs) to first learn, and then make inferences about, the languages and linguistic structures (such as code or legal-case prediction) used throughout the world. Companies that use LLMs include Anthropic (now collaborating with Amazon), Microsoft, OpenAI, Google, Amazon/AWS, Meta (FB), SAP, and IQVIA. Examples of LLMs include Google’s BERT, Amazon’s Bedrock, Falcon 40B, Meta’s Galactica, OpenAI’s GPT-3 and GPT-4, Google’s LaMDA, Hugging Face’s BLOOM, and Nvidia’s NeMo LLM.

The training process of the large language models (LLMs) used in generative artificial intelligence (AI) is a cause for concern: training an LLM can consume many terabytes of data and over 1,000 megawatt-hours of electricity.

Alex de Vries, a Ph.D. candidate at VU Amsterdam and founder of the digital-sustainability blog Digiconomist, published a report in Joule predicting that current AI technology could be on track to annually consume as much electricity as the entire country of Ireland (29.3 terawatt-hours per year).

“As an already massive cloud market keeps on growing, the year-on-year growth rate almost inevitably declines,” John Dinsdale, chief analyst and managing director at Synergy, told CRN via email. “But we are now starting to see a stabilization of growth rates, as cloud provider investments in generative AI technology help to further boost enterprise spending on cloud services.”

Hardware vs Algorithmic Solutions to Reduce Energy Consumption:

Roberto Verdecchia, an assistant professor at the University of Florence, is the first author of a paper on developing green AI solutions. He says that de Vries’s predictions may even be conservative when it comes to the true cost of AI, especially considering the non-standardized regulation surrounding this technology. AI’s energy problem has historically been approached through optimizing hardware, says Verdecchia. However, continuing to make microelectronics smaller and more efficient is becoming “physically impossible,” he added.

In his paper, published in the journal WIREs Data Mining and Knowledge Discovery, Verdecchia and colleagues highlight several algorithmic approaches that experts are taking instead. These include improving data-collection and processing techniques, choosing more-efficient libraries, and improving the efficiency of training algorithms.  “The solutions report impressive energy savings, often at a negligible or even null deterioration of the AI algorithms’ precision,” Verdecchia says.

……………………………………………………………………………………………………………………………………………………………………………………………………………………

Another Solution – Data Centers Powered by Alternative Energy Sources:

The immense amount of energy needed to power these LLMs, like the one behind ChatGPT, is creating a new market for data centers that run on alternative energy sources like geothermal, nuclear and flared gas, a byproduct of oil production.  Supply of electricity, which currently powers the vast majority of data centers, is already strained from existing demands on the country’s electric grids. AI could consume up to 3.5% of the world’s electricity by 2030, according to an estimate from IT research and consulting firm Gartner.

Amazon, Microsoft, and Google were among the first to explore wind and solar-powered data centers for their cloud businesses, and are now among the companies exploring new ways to power the next wave of AI-related computing. But experts warn that given their high risk, cost, and difficulty scaling, many nontraditional sources aren’t capable of solving near-term power shortages.

Exafunction, maker of the Codeium generative AI-based coding assistant, sought out energy startup Crusoe Energy Systems to train its large language models because it offered better prices and availability of graphics processing units (GPUs), the advanced AI chips primarily produced by Nvidia, said Exafunction’s chief executive, Varun Mohan.

AI startups are typically looking for five to 25 megawatts of data center power, or as much as they can get in the near term, according to Pat Lynch, executive managing director for commercial real-estate services firm CBRE’s data center business. Crusoe will have about 200 megawatts by year’s end, said Crusoe CEO Chase Lochmiller. Training one AI model like OpenAI’s GPT-3 can use up to 10 gigawatt-hours, roughly equivalent to the amount of electricity 1,000 U.S. homes use in a year, University of Washington research estimates.

Major cloud providers capable of providing multiple gigawatts of power are also continuing to invest in renewable and alternative energy sources to power their data centers, and use less water to cool them down. By some estimates, data centers account for 1% to 3% of global electricity use.

An Amazon Web Services spokesperson said the scale of its massive data centers means it can make better use of resources and be more efficient than smaller, privately operated data centers. Amazon says it has been the world’s largest corporate buyer of renewable energy for the past three years.

Jen Bennett, a Google Cloud leader in technology strategy for sustainability, said the cloud giant is exploring “advanced nuclear” energy and has partnered with Fervo Energy, a startup beginning to offer geothermal power for Google’s Nevada data center. Geothermal, which taps heat under the earth’s surface, is available around the clock and not dependent on weather, but comes with high risk and cost.

“Similar to what we did in the early days of wind and solar, where we did these large power purchase agreements to guarantee the tenure and to drive costs down, we think we can do the same with some of the newer energy sources,” Bennett said.

References:

https://aws.amazon.com/what-is/large-language-model/

https://spectrum.ieee.org/ai-energy-consumption

https://www.wsj.com/articles/ais-power-guzzling-habits-drive-search-for-alternative-energy-sources-5987a33a

https://www.crn.com/news/cloud/microsoft-aws-google-cloud-market-share-q3-2023-results/6

Amdocs and NVIDIA to Accelerate Adoption of Generative AI for $1.7 Trillion Telecom Industry

SK Telecom and Deutsche Telekom to Jointly Develop Telco-specific Large Language Models (LLMs)

AI Frenzy Backgrounder; Review of AI Products and Services from Nvidia, Microsoft, Amazon, Google and Meta; Conclusions

 

Amdocs and NVIDIA to Accelerate Adoption of Generative AI for $1.7 Trillion Telecom Industry

Amdocs and NVIDIA today announced they are collaborating to optimize large language models (LLMs) to speed adoption of generative AI applications and services across the $1.7 trillion telecommunications and media industries.(1)

Amdocs and NVIDIA will customize enterprise-grade LLMs running on NVIDIA accelerated computing as part of the Amdocs amAIz framework. The collaboration will empower communications service providers to efficiently deploy generative AI use cases across their businesses, from customer experiences to network provisioning.

Amdocs will use NVIDIA DGX Cloud AI supercomputing and NVIDIA AI Enterprise software to support flexible adoption strategies and help ensure service providers can simply and safely use generative AI applications.

Aligned with the Amdocs strategy of advancing generative AI use cases across the industry, the collaboration with NVIDIA builds on the previously announced Amdocs-Microsoft partnership. Service providers and media companies can adopt these applications in secure and trusted environments, including on premises and in the cloud.

With these new capabilities — including the NVIDIA NeMo framework for custom LLM development and guardrail features — service providers can benefit from enhanced performance, optimized resource utilization and flexible scalability to support emerging and future needs.

“NVIDIA and Amdocs are partnering to bring a unique platform and unmatched value proposition to customers,” said Shuky Sheffer, Amdocs Management Limited president and CEO. “By combining NVIDIA’s cutting-edge AI infrastructure, software and ecosystem and Amdocs’ industry-first amAIz AI framework, we believe that we have an unmatched offering that is both future-ready and value-additive for our customers.”

“Across a broad range of industries, enterprises are looking for the fastest, safest path to apply generative AI to boost productivity,” said Jensen Huang, founder and CEO of NVIDIA. “Our collaboration with Amdocs will help telco service providers automate personalized assistants, service ticket routing and other use cases for their billions of customers, and help the telcos analyze and optimize their operations.”

Amdocs counts more than 350 of the world’s leading telecom and media companies as customers, including 27 of the world’s top 30 service providers.(2) With more than 1.7 billion daily digital journeys, Amdocs platforms impact more than 3 billion people around the world.

NVIDIA and Amdocs are exploring a number of generative AI use cases to simplify and improve operations by providing secure, cost-effective and high-performance generative AI capabilities.

Initial use cases span customer care, including accelerating customer inquiry resolution by drawing information from across company data. On the network operations side, the companies are exploring how to proactively generate solutions that aid configuration, coverage or performance issues as they arise.

(1) Source: IDC, OMDIA, Factset analyses of Telecom 2022-2023 revenue.
(2) Source: OMDIA 2022 revenue estimates, excludes China.

Editor’s Note:

Generative AI uses a variety of AI models, including: 

  • Language models: These models, like OpenAI’s GPT-3, generate human-like text. The most popular examples of language-based generative models are large language models (LLMs), which are being leveraged for a wide variety of tasks, including essay generation, code development, translation, and even understanding genetic sequences.
  • Generative adversarial networks (GANs): These models use two neural networks: a generator and a discriminator.
  • Unimodal models: These models only accept one data input format.
  • Multimodal models: These models accept multiple types of inputs and prompts. For example, GPT-4 can accept both text and images as inputs.
  • Variational autoencoders (VAEs): These deep learning architectures are frequently used to build generative AI models.
  • Foundation models: These models generate output from one or more inputs (prompts) in the form of human language instructions.
Other types of generative AI models include:  Neural networks, Genetic algorithms, Rule-based systems, Transformers, LaMDA, LLaMA, BLOOM, BERT, RoBERTa. 
…………………………………………………………………………………………………………………………………
References:

https://nvidianews.nvidia.com/news/amdocs-and-nvidia-to-accelerate-adoption-of-generative-ai-for-1-7-trillion-telecom-industry

https://www.nvidia.com/en-us/glossary/data-science/generative-ai/

https://blogs.nvidia.com/blog/2023/01/26/what-are-large-language-models-used-for/

Cloud Service Providers struggle with Generative AI; Users face vendor lock-in; “The hype is here, the revenue is not”

Global Telco AI Alliance to progress generative AI for telcos

Bain & Co, McKinsey & Co, AWS suggest how telcos can use and adapt Generative AI

Generative AI Unicorns Rule the Startup Roost; OpenAI in the Spotlight

Generative AI in telecom; ChatGPT as a manager? ChatGPT vs Google Search

Generative AI could put telecom jobs in jeopardy; compelling AI in telecom use cases

 

Intentional or Accident: Russian fiber optic cable cut (1 of 3) by Chinese container ship under Baltic Sea

From Reuters:

A Russian fiber optic cable under the Baltic Sea was completely severed last month when a Chinese container ship passed over it, state company Rostelecom said on Tuesday.

Finnish investigators have already said they suspect the vessel, the NewNew Polar Bear, of causing serious damage to the nearby Balticconnector gas pipeline by dragging its anchor over the sea bed during the same voyage.

Two other Baltic telecoms cables were damaged on the same night of October 7th, along the route that the ship was travelling, according to shipping data reviewed by Reuters.

The incidents have highlighted the vulnerability of marine cables and pipelines at a time when security fears are running high because of the Ukraine war. Investigators have yet to establish who was responsible for blowing up Russia’s Nord Stream gas pipelines under the Baltic last year.

A Rostelecom spokesperson, responding to emailed questions from Reuters, said the double armored fiber optic cable, with a thickness of 40.4 mm (1.6 inches), had been cut completely.

Asked if the company believed the Chinese ship had caused the damage, the spokesperson said: “At the time of the damage to the fiber optic cable, the Chinese ship New Polar Bear was at a point with coordinates coinciding with the route of the communication line.”

China has said it is willing to provide necessary information on the incident in accordance with international law. NewNew Shipping, the owner and operator of the NewNew Polar Bear, has previously declined to comment when contacted by Reuters.

In a statement earlier on Tuesday, Rostelecom publicly acknowledged the damage to its cable for the first time, describing it as an accident and without mentioning the cause.  It said the site of the damage was only 28 km (17 miles) from where the Balticconnector gas pipeline was ruptured soon afterwards.

In total, three Baltic telecoms cables and one pipeline were damaged in the space of less than nine hours.

Data from shipping intelligence firm MarineTraffic, reviewed by Reuters, showed that the NewNew Polar Bear passed over a Swedish-Estonian telecoms cable at 1513 GMT, then over the Russian cable at around 2020 GMT, the Balticconnector at 2220 GMT and a Finland-Estonia telecoms line at 2349 GMT.

Rostelecom said the damage to its cable was recorded at 2030 GMT.

As far back as Oct. 13, President Vladimir Putin dismissed as “complete rubbish” suggestions that Russia might have been to blame for the Balticconnector damage and floated the possibility that a ship’s anchor could have caused it.

On Tuesday, the Kremlin referred further questions to the Communications Ministry, which did not respond to a Reuters request for comment.

Finnish police announced on Oct. 24 that they had found a ship’s anchor near the broken gas pipeline. They have not concluded whether the damage was caused accidentally or deliberately.  Operator Gasgrid has said the pipeline could be out of commission until April or longer.

Rostelecom said a specialised vessel had started repairs on the fiber optic cable on Sunday and that the work was expected to take 10 days, depending on weather conditions.

The cable runs from St Petersburg to Russia’s Baltic exclave of Kaliningrad. The company said users had not been affected because data was transmitted via terrestrial routes and backup satellite channels.

References:

https://www.reuters.com/world/europe/russia-says-telecoms-cable-damaged-last-month-just-before-nearby-baltic-gas-2023-11-07/

China seeks to control Asian subsea cable systems; SJC2 delayed, Apricot and Echo avoid South China Sea

Sabotage or Accident: Was Russia or a fishing trawler responsible for Shetland Island cable cut?

Geopolitical tensions arise in Asia over subsea fiber optic cable projects; U.S. intervened to flip SeaMeWe-6 contractor

 

Verizon once again delays 5G Standalone (SA) commercial service

Like AT&T, Verizon has promised a 5G standalone (SA) core network for a very long time. The mostly wireless U.S. carrier initially said it would launch standalone 5G in 2020, and some in the industry thought it did so in 2022. But the company now says the technology “is in testing now” and is still not available commercially.

“We have it in trials only at this point. We don’t have it commercially available for our customers,” Verizon’s chief networking executive, Joe Russo, said on a podcast last month hosted by Recon Analytics. “So more to come in the next several months as Verizon will be entering the standalone core game.”

“It is absolutely a capability that we think will be another enabler to new use cases. But … the reliability and performance of Verizon’s network is what we stand for, and I don’t put technology out into the network that is a step back. It has to be a step forward. And all of the data that I see – both internal testing and with external testing that happens out there in the market – tells me that SA [standalone] needs a little bit more time.”

“We’re doing significant developing and testing to make sure that both the data session and the voice sessions in a standalone world are as good or better than what you would expect in our 4G network today. So we see that in the next several months we’re going to get there, but it was not my goal to be first in deploying standalone. It’s my goal to be best in deploying standalone.”

……………………………………………………………………………………………………………………………..

Verizon spokesperson Kevin King clarified that “we have commercial traffic running on our 5G non standalone core. That is what we announced earlier in the year. Joe was referring to our 5G standalone core which is in testing now.”

That cop-out was contradicted by a statement made during a webinar for analysts on September 29th, which was obtained by Light Reading. “People talk about the standalone core. Just terminology-wise, that’s the 5G core essentially. If you guys have read the stuff we’ve said publicly, certainly we serve some customers on portions of our 5G core,” said Mike Haberman, Verizon’s SVP of strategy and transformation. “And then we have some internal stuff going on with other functionality on the core. We’re in the process of rolling out [5G SA] in a very smart fashion.”

“Here’s the deal: When you go to the standalone core, you can’t aggregate your LTE carriers. With the non standalone core I’m aggregating together both 5G and 4G. So when you go standalone you start to bifurcate the spectrum. So that’s the impact to the RAN [radio access network]. So you better be sure that your mobile [customer] distribution, where they are geographically, makes sense. Or what will happen is those customers will experience a lower service level. No good. We want to be careful of that. So that’s why, when you do the standalone core, you have to pay very close attention to your radio access network because they are directly attached.”

On April 27th, Verizon issued a press release describing the benefits of 5G standalone (SA) technology and how it’s “what sets Verizon apart.” However, the release doesn’t specifically say that Verizon launched the technology. That is despite Verizon announcing last year that it had begun moving traffic onto its new 5G core, which supports both the non-standalone (NSA) and standalone (SA) versions of the technology.

Last year, Mobile World Live reported that Verizon was migrating “commercial traffic onto SA 5G core.” The article cited an unnamed Verizon representative. Mobile World Live also reported that Ericsson, Casa Systems, Oracle and Nokia supply Verizon’s 5G core.

Dell’Oro Group, in January 2023, listed Verizon among the few North American wireless providers that had commercially launched the technology.

“This is a moving target,” Recon Analytics analyst Roger Entner told Light Reading. But Entner said Verizon’s position on the standalone version of 5G makes sense. “The benefits you can get today from standalone are limited.”

–>This author totally disagrees with Mr. Entner, because TRUE 5G=5G SA.  IN OTHER WORDS, ALL OF THE 3GPP DEFINED 5G FEATURES REQUIRE 5G SA!  That includes 5G security and network slicing.

…………………………………………………………………………………………………………………………………

Light Reading’s Mike Dano wrote:

Verizon now appears to be roughly three years behind its initial standalone 5G rollout plans. In the summer of 2020, Verizon said it would begin moving traffic onto its standalone 5G core “in the second half of 2020 with full commercialization in 2021.”

Then, in early 2022, Verizon CTO Kyle Malady suggested that the operator would begin moving some of its fixed wireless access (FWA) traffic onto its standalone 5G core by June of that year. He also said at the time that Verizon would start putting smartphone traffic onto that core in 2023.

………………………………………………………………………………………………………….

T-Mobile US and Dish Wireless are the only two U.S. carriers that have launched commercial 5G SA. AT&T has made a lot of noise about its 5G SA plans but has yet to launch.

AT&T’s chief networking executive, Chris Sambar, wrote in a September 29th blog post that AT&T was moving some customers to standalone 5G. “Many of the newest mobile devices are ready for 5G standalone, and we continue to move thousands of customers every day. We also recently launched AT&T Internet Air home fixed wireless service, and from the start, this product rides on standalone 5G.”

References:

https://www.lightreading.com/5g/verizon-surprises-with-ongoing-delays-in-5g-standalone-rollout

https://www.verizon.com/about/news/5g-standalone-why-it-matters

https://about.att.com/blogs/2023/network-ready.html

AT&T touts 5G advances; will deploy Standalone 5G when “the ecosystem is ready”- when will that be?

Analysys Mason: 40 operational 5G SA networks worldwide; Sub-Sahara Africa dominates new launches

GSA 5G SA Core Network Update Report

5G subscription prices rise in U.S. without killer applications or 5G features (which require a 5G SA core network)

 

SpaceX has majority of all satellites in orbit; Starlink achieves cash-flow breakeven

SpaceX accounts for roughly one-half of all orbital space launches around the world, and it’s growing its launch frequency. It also has a majority of all the satellites in orbit around the planet.  This Thursday, majority owner & CEO Elon Musk tweeted, “Excited to announce that SpaceX Starlink has achieved breakeven cash flow! Starlink (a SpaceX subsidiary) is also now a majority of all active satellites and will have launched a majority of all satellites cumulatively from Earth by next year.”

There are some 5,000 Starlink satellites in orbit. Starlink satellites are small, lower-cost satellites built by SpaceX that deliver high-speed, space-based internet service to customers on Earth. Starlink can cost about $120 a month and there is some hardware to buy as well.

Starlink ended 2022 with roughly 1 million subscribers. The subscriber count now isn’t known, but it could be approaching 2 million users based on prior growth rates. SpaceX didn’t return a request for comment.

In 2021, Musk said SpaceX would spin off and take Starlink public once its cash flow was reasonably predictable.

A SpaceX rocket carries Starlink satellites into orbit. PHOTO CREDIT: SPACEX

Starlink has been in the spotlight since last year as it helps provide Ukraine with satellite communications key to its war efforts against Russia.

Last month, Musk said Starlink will support communication links in Gaza with “internationally recognized aid organizations” after a telephone and internet blackout isolated people in the Gaza Strip from the world and from each other.

Musk has sought to establish the Starlink business unit as a crucial source of revenue to fund SpaceX’s more capital-intensive projects such as its next-generation Starship, a giant reusable rocket the company intends to fly to the moon for NASA within the next decade.

Starlink posted a more than six-fold surge in revenue last year to $1.4 billion, but fell short of targets set by Musk, the Wall Street Journal reported in September, citing documents.

SpaceX is valued at about $150 billion and is one of the most valuable private companies in the world.

References:

https://www.reuters.com/technology/elon-musk-says-starlink-has-achieved-breakeven-cash-flow-2023-11-02/

https://www.barrons.com/articles/elon-musk-spacex-starlink-86fe99ec?

Verizon transports 1.2 terabytes per second of data across a single wavelength

Verizon has upgraded its optical-to-electrical conversion cards to send data at speeds of 1.2 Tbps on a single wavelength through the carrier’s live production network. The trials demonstrated increased reliability and overall capacity as well, Verizon said.

The trials, which were conducted in metro Long Island, N.Y., were in partnership with Cisco and included technology from Acacia, as well. They utilized Cisco’s NCS 1014 transceiver shelf and Acacia’s Coherent Interconnect Module (CIM 8). Verizon said the module features silicon semiconductor chips with 5nm complementary metal-oxide semiconductor (CMOS) digital processing and 140 Gbaud silicon photonics using 3D packaging technology. In short, digital processing capabilities and transistor density both are increased.

Verizon said that it transmitted a 1.0 Tbps single wavelength through the Cisco NCS 2000 line system for more than 205 km. It traversed 14 fiber central offices (COs). The carrier said this is significant because progressive filtering and signal-to-noise degradation impact wavelengths as they pass through each CO. The trials also featured 800 Gbps transmission over 305 km through 20 COs, and a 1.2 Tbps wavelength that traversed three offices.
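As a rough plausibility check on those figures (our arithmetic, not Verizon’s or Cisco’s): a 1.2 Tbps line rate at 140 Gbaud implies roughly 8.6 bits per symbol, or about 4.3 bits per symbol per polarization for dual-polarization coherent optics, which lands in the range of shaped high-order QAM once forward error correction overhead is included.

```python
# Back-of-the-envelope spectral-efficiency check (assumed relationships):
# line rate = symbol rate x bits/symbol, split across two polarizations.
line_rate_bps = 1.2e12     # 1.2 Tbps wavelength from the trial
symbol_rate_bd = 140e9     # 140 Gbaud from the Acacia CIM 8 module

bits_per_symbol = line_rate_bps / symbol_rate_bd   # ~8.6 total
per_polarization = bits_per_symbol / 2             # dual-polarization transmission
print(f"{bits_per_symbol:.2f} bits/symbol total, {per_polarization:.2f} per polarization")
```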

“We have bet big on fiber. Not only does it provide an award-winning broadband experience for consumers and enterprises, it also serves as the backbone of our wireless network. As we continue to see customers using more data in more varied ways, it is critical we continue to stay ahead of our customers’ demands by using the resources we have most efficiently,” said Adam Koeppe, SVP of Technology Planning at Verizon.

Image courtesy of Verizon

In addition to increasing data rates, the new optics technology from Cisco reduces the need for regeneration of the light signal (conversion to electrical and back to optical signals) along the path by compensating for the degradation of the light signal traveling through the fiber cable. This adds reliability and leads to a reduced cost per bit operating expense for more efficient network management.

Bill Gartner, senior vice president and general manager of Cisco Optical Systems and Optics, added, “This trial demonstrates our commitment to continuous innovation aimed at increasing wavelength capacity and reducing costs. The Verizon infrastructure built with the Cisco NCS 2000 open line system supports multiple generations of optics, thus protecting investments as technology evolves.”

In March, Windstream Wholesale said that it sent a 1 Tbps wave across its Intelligent Converged Optical Network (ICON) between Dallas and Tulsa, a distance of 541 km.

References:

https://www.verizon.com/about/news/verizon-fiber-technology-advancement-results

Verizon Touts 1.2 Tbps Wavelengths Over Production Network – Telecompetitor

https://www.verizon.com/about/news/verizon-transports-800-gbps

AT&T, Verizon and Comcast all lost fixed broadband subscribers in 2Q-2023

Tutorial: Utilizing Containers for Cloud-Native and Elastic SBCs

by Bhupen Chauhan (edited by Alan J Weissberger)

Introduction:

A recent advancement in Session Border Controllers (SBCs) involves moving them to the cloud and utilizing elasticity. The use of containers (defined below) intensifies this change even more. SBC solutions are essential for businesses and service providers that depend on Voice over IP (VoIP), because they provide security, interoperability, quality, and compliance in the communication infrastructure. We delve further into the world of Cloud Native/Elastic SBCs using containers in this post.

What is a SBC?

A Session Border Controller (SBC) is a specialized hardware device or software program that controls how VoIP phone calls are set up, conducted, and terminated. SBCs act as gatekeepers between internal and external networks, managing the media streams and signaling necessary for establishing, conducting, and terminating calls.

An SBC establishes and keeps track of each session’s quality of service (QoS) state, helping guarantee that calls are handled correctly and that urgent calls take precedence over all other calls. Additionally, an SBC can act as a firewall for session traffic, enforcing its QoS policies and recognizing particular inbound threats to the communication environment.

SBC’s Significance in Communications:

By providing security, guaranteeing interoperability, and enabling the effective use of network resources, SBCs strengthen communication networks. There are several reasons to have a Session Border Controller in a VoIP phone system: SBCs protect IP communications, defending against intrusions and offering essential functions, including the following (a minimal routing-policy sketch in Python follows this list):

  • Security: An SBC’s most important responsibility is to defend the network against hostile assaults, including fraud, eavesdropping, and denial of service (DoS) attacks. They add a layer of security by concealing the network topology.
  • Quality of Service (QoS): SBCs ensure that voice calls have the capacity and resources to remain high-quality by prioritizing voice traffic over other kinds of traffic.
  • Interoperability: SBCs provide smooth communication between various devices, protocols, and signaling in IP networks by offering the required protocol translations.
  • NAT Traversal: VoIP communication may encounter problems due to Network Address Translation (NAT). By fixing NAT traversal issues, SBCs guarantee reliable and continuous communications.
  • Call Routing and Policy Enforcement: SBCs effectively route calls according to rules and specifications. They can also control bandwidth usage and implement various payment methods.
  • Regulatory Compliance: Communication service providers are required in some areas to offer the ability to intercept communications legally. SBCs help VoIP service providers in fulfilling these kinds of legal obligations.
  • Media services: Comprise functions like tone production, DTMF (dual-tone multi-frequency) interworking, and transcoding (changing one codec to another).
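As promised above, here is a minimal sketch of the call routing and policy enforcement function. The rules, trunk names, and number prefixes are hypothetical, for illustration only; a production SBC applies far richer dial plans and security policy.

```python
from dataclasses import dataclass

@dataclass
class CallRequest:
    caller: str
    callee: str
    is_emergency: bool = False

# Hypothetical policy: block premium-rate prefixes, prioritize emergency calls.
BLOCKED_PREFIXES = ("+1900",)
EMERGENCY_TRUNK = "trunk-priority"
DEFAULT_TRUNK = "trunk-least-cost"

def route_call(req: CallRequest) -> str:
    """Apply dial-plan policy, then choose an outbound trunk."""
    if req.callee.startswith(BLOCKED_PREFIXES):
        raise PermissionError("call rejected by dial-plan policy")
    # Emergency sessions bypass least-cost routing, as the QoS role requires.
    return EMERGENCY_TRUNK if req.is_emergency else DEFAULT_TRUNK

print(route_call(CallRequest("+14085551234", "+14085559876")))  # trunk-least-cost
```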

SBCs are essential to today’s communications environment, particularly for IP-based voice and video communications. Cloud-based and elastic SBCs will play a more significant role as communications change and more services move to the cloud. Next, we explain the concept of Cloud Native and Elastic SBCs.

What are Cloud Native and Elastic SBCs?

Cloud-native refers to a strategy for developing and running programs that takes full advantage of the cloud computing model. Elastic, in turn, describes a system’s capacity to adjust automatically to variations in workload by allocating and releasing resources.

Traditional SBCs and Cloud Native/Elastic SBCs are different as we now explain.

Traditional SBCs vs. Cloud Native/Elastic SBCs:

Traditional SBCs lack flexibility and are usually hardware-based. They might be expensive and difficult to expand or change. In comparison, Cloud Native/Elastic SBCs are highly adaptable. They provide cost-effectiveness and agility by effortlessly scaling up or down in response to demand.

Elasticity’s Function in Communication Services:

Elasticity guarantees continuous communication services that can adjust to heavy demands, particularly during peak hours. It implies that networks may scale resources without human intervention, ensuring service quality without taxing the infrastructure or adding needless expenses.
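To make “scale resources without human intervention” concrete, the sketch below uses the official Kubernetes Python client to attach a Horizontal Pod Autoscaler to a containerized SBC media service. The deployment name, namespace, and thresholds are illustrative assumptions, not a prescribed configuration.

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in a pod

# Hypothetical autoscaling policy for an "sbc-media" Deployment.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="sbc-media-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="sbc-media"),
        min_replicas=2,                         # baseline capacity for normal call load
        max_replicas=20,                        # ceiling for peak-hour growth
        target_cpu_utilization_percentage=70,   # scale out before media CPU saturates
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="voice", body=hpa)
```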

………………………………………………………………………………………………………………………………………

What are Containers?

Containers are small, independent, executable software packages containing all the necessary components to run a piece of software, guaranteeing that it performs consistently in various computing settings.

The Function of Containers in Modern Application Deployment:

Containers offer an unequaled level of consistency and speed, revolutionizing how programs are delivered. They contain an application and all its dependencies, guaranteeing that it functions the same everywhere it is deployed.

Kubernetes and Docker:

Docker technology builds, ships, and runs applications inside containers. Meanwhile, Kubernetes’ container orchestration technology ensures that massive container deployments are effectively scaled and managed.
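As a small illustration of the Docker side, the snippet below uses Docker’s Python SDK (docker-py) to launch a hypothetical containerized SBC signaling image; the image name, port mapping, and environment variable are assumptions for illustration.

```python
import docker

client = docker.from_env()
container = client.containers.run(
    "example/sbc-signaling:1.0",      # hypothetical image
    detach=True,
    ports={"5060/udp": 5060},         # expose the SIP signaling port on the host
    environment={"SBC_MODE": "signaling"},
)
print(container.short_id)
```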

Research and Markets predicts the global Kubernetes market will expand significantly, rising from USD 1.8 billion in 2022 to USD 7.8 billion by 2030, a compound annual growth rate (CAGR) of 23.4%.

Use of Containers in SBCs:

SBCs benefit from unrivaled scalability, flexibility, and agility thanks to containers. Containers enable rapid deployments, guarantee consistency across many environments, and dramatically cut overheads, both in cost and in time. Using containers involves:

  • Individual microservices are packaged and deployed using containers such as Docker, which guarantees scalability, isolation, and effective resource use.
  • Services can be easily updated, rolled back, and versioned.

Architectural Considerations for Creating Cloud-Native Applications:

Scalability, redundancy, resilience, and performance considerations are crucial when constructing Cloud Native/Elastic SBCs with containers. Architectures must be modular, and provision must be made for smooth upgrades and patches that don’t interfere with running services. The following architectural factors should be taken into account while creating and deploying Cloud Native/Elastic SBCs using containers:

Disentanglement of Elements:

  • Conventional SBCs frequently integrate several features into a single monolithic system.
  • Micro-services design, in which every function (signaling, media processing, transcoding, security, etc.) is a separate, independent service, is the best way to create cloud-native SBCs.

Coordination:

  • Containerized SBC micro-services can be managed and orchestrated by tools such as Kubernetes, which guarantees their efficient scheduling, scaling, and maintenance.
  • Consider putting service mesh technologies into practice for enhanced traffic management and security.

State Administration:

  • Active call sessions require careful management.
  • Consider utilizing StatefulSets in Kubernetes or distributed databases to maintain session state (see the sketch below).
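As referenced above, one common pattern is to externalize session state to a shared store so that any SBC replica can pick up an active call after scaling or a pod restart. The sketch below uses Redis via redis-py; the hostname, key schema, and fields are illustrative assumptions.

```python
import json
import redis

r = redis.Redis(host="redis.voice.svc", port=6379, decode_responses=True)

def save_session(call_id: str, state: dict, ttl_s: int = 3600) -> None:
    """Persist call state with a TTL so abandoned sessions expire."""
    r.setex(f"session:{call_id}", ttl_s, json.dumps(state))

def load_session(call_id: str):
    """Return the stored state dict, or None if the session is unknown."""
    raw = r.get(f"session:{call_id}")
    return json.loads(raw) if raw else None

save_session("abc-123", {"caller": "+14085551234", "codec": "OPUS", "leg": "A"})
print(load_session("abc-123"))
```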

High Redundancy and Availability:

  • The cloud-native design should guarantee redundancy over several zones or regions.
  • Incorporate self-healing procedures and health checks to ensure uninterrupted service availability.

Converting SBCs to a containerized, cloud-native architecture has benefits for maintainability, scalability, and flexibility. However, careful architectural considerations are necessary to guarantee cost-effectiveness, security, and performance.

Containers Networking for Real-Time Communication:

The networking element of containers is essential in the area of real-time communication. It must support seamless switching between media streams, satisfy low-latency specifications, and ensure that Quality of Service (QoS) standards are met.

Elastic and Cloud Native SBC Security and Compliance:

SBCs play a crucial role in addressing security. Strong security measures are required for Cloud Native/Elastic SBCs to thwart threats, prevent unwanted access, and guarantee data privacy. They must also adhere to industry norms and laws to ensure reliable communication connections.

Prospects for the Future and the Changing Scene for Cloud Native SBCs:

SBCs will likely interact more deeply with cloud ecosystems in the future, utilizing AI and machine learning to provide more innovative and adaptive features. To support developing IoT and 5G use cases, edge computing may also play a more significant part in Cloud Native SBCs.

Conclusions:

A significant step in the evolution of telecommunications and network security has been realized through cloud-native and elastic principles and the power of containers. This new paradigm is poised to reshape communication networks, since it provides agility, scalability, and efficiency. As we embrace containerization and the cloud, the possibilities are vast, opening the prospect of a more efficient, safe, and networked world.

References:

Session Border Controller (SBC) for Enterprises and VoIP Service Providers

Ericsson’s India 6G Research Program at its Chennai R&D Center

Today at the India Mobile Congress 2023, Ericsson announced the launch of its ‘India 6G‘ program with the establishment of an India 6G Research team at its Chennai R&D Center. Ericsson stated that the ‘India 6G’ team consists of senior research members and a team of experienced researchers in Radio, Networks, AI, and Cloud. They have been tasked with developing fundamental solutions for the future.

The India Research team, in collaboration with Ericsson research teams in Sweden and the US, will work together to develop the technology that will enable the delivery of a cyber-physical continuum. In this continuum, networks will provide critical services, immersive communications, and omnipresent IoT, all while ensuring the integrity of the delivered information.

The 6G Research team in India, in collaboration with Ericsson Global Research teams, will develop novel solutions. The teams are working on various projects, including Channel Modelling and Hybrid Beamforming, Low-energy Networks, Cloud Evolution and Sustainable Computing, Trustworthy, Explainable, and Bias-Free AI algorithms, Autonomous Agents for Intent Management Functions, Integrated Sensing and Communication Functions for the Man-Machine Continuum, and Compute Offload to Edge-Computing Cloud, among others.

“By establishing a dedicated 6G research team for in-country research, contextual to India’s needs, and collaborating with the world-class research programs across international research labs, we look forward to incorporating the needs of India into the mainstream of telecommunication technology evolution,” stated Magnus Frodigh, Head of Research at Ericsson.

Ericsson says it is partnering with premier institutes in India for Radio, AI and Cloud Research. The company said, “AI Research is of high importance to Ericsson as the 6G networks would be autonomously driven by AI algorithms. Ericsson is also looking to partner with other premier engineering institutes in India for 6G related research.”

The Centre for Responsible AI is an interdisciplinary research centre that envisions becoming a premier research centre for both fundamental and applied research in Responsible AI, with immediate impact in deploying AI systems in the Indian ecosystem.

Ericsson in India:

Ericsson is reportedly partnering with communication service providers Bharti Airtel and Reliance Jio to deploy 5G in the country.

According to the statement, Ericsson has been present in India since 1903, and the Ericsson Research team was established in 2010. With the establishment of 6G Research in India, Ericsson looks forward to playing a key role in advancing this technology in the country.

The company has three R&D centres in India, located in Chennai, Bengaluru, and Gurgaon.

References:

https://www.ericsson.com/en/press-releases/2/2023/10/ericsson-initiates-india-6g-program-in-india

https://telecomtalk.info/ericsson-announces-india-6g-program-at-imc2023/889172/

India unveils Bharat 6G vision document, launches 6G research and development testbed

Enable-6G: Yet another 6G R&D effort spearheaded by Telefónica de España

Nokia to open 5G and 6G research lab in Amadora, Portugal

6th Digital China Summit: China to expand its 5G network; 6G R&D via the IMT-2030 (6G) Promotion Group

China to introduce early 6G applications by 2025- way in advance of 3GPP specs & ITU-R standards

 
