Light Source Communications Secures Deal with Major Global Hyperscaler for Fiber Network in Phoenix Metro Area

Light Source Communications is building a 140-mile fiber middle-mile network in the Phoenix, AZ metro area, covering nine cities: Phoenix, Mesa, Tempe, Chandler, Gilbert, Queen Creek, Avondale, Coronado and Cashion. The company already has a major hyperscaler as the first anchor tenant.

There are currently 70 existing and planned data centers in the area that Light Source will serve. As one might expect, the increase in data centers stems from the boom in artificial intelligence (AI).

The network will include a big ring, which will be divided into three separate rings. In total, Light Source will be deploying 140 miles of fiber. The company has partnered with engineering and construction provider Future Infrastructure LLC, a division of Primoris Services Corp., to make it happen.

“I would say that AI happens to be blowing up our industry, as you know. It’s really in response to the amount of data that AI is demanding,” said Debra Freitas [1.], CEO of Light Source Communications (LSC).

Note 1. Debra Freitas has led LSC since co-founding the company in 2014. She has owned and operated a network with a global OTT provider as a customer, developed key customer relationships, and secured funding for growth. She currently sits on the Executive Board of Incompas.


Light Source plans for the entire 140-mile route to be underground. It’s currently working with the city councils and permitting departments of the nine cities as it goes through its engineering and permit approval processes. Freitas said the company expects to receive approvals from all the city councils and to begin construction in the third quarter of this year, concluding by the end of 2025.

Primoris delivers a range of specialty construction services to the utility, energy, and renewables markets throughout the United States and Canada. Its communications business is a leading provider of critical infrastructure solutions, including program management, engineering, fabrication, replacement, and maintenance. With over 12,700 employees, Primoris had revenue of $5.7 billion in 2023.

“We’re proud to partner with Light Source Communications on this impactful project, which will exceed the growing demands for high-capacity, reliable connectivity in the Phoenix area,” said Scott Comley, president of Primoris’ communications business. “Our commitment to innovation and excellence is well-aligned with Light Source’s cutting-edge solutions and we look forward to delivering with quality and safety at the forefront.”

Light Source is a carrier-neutral owner-operator of networks serving enterprises throughout the U.S. In addition to Phoenix, several new dark fiber routes are in development in major markets throughout the Central and Western United States. For more information, visit the Light Source Communications website.

The city councils in the Phoenix metro area have been pretty busy with fiber-build applications the past couple of years because the area is also a hotbed for companies building fiber-to-the-premises (FTTP) networks. In 2022 the Mesa City Council approved four different providers to build fiber networks. AT&T and BlackRock have said their joint venture would also start deploying fiber in Mesa.

Light Source is focusing on middle-mile, rather than FTTP because that’s where the demand is, according to Freitas. “Our route is a unique route, meaning there are no other providers where we’re going. We have a demand for the route we’re putting in,” she noted.

The company says it already has “a major, global hyperscaler” anchor tenant, but it won’t divulge who that tenant is. Its network will also touch Arizona State University at Tempe and the University of Arizona.

Light Source doesn’t light any of the fiber it deploys. Rather, it is carrier neutral and sells the dark fiber to customers who light it themselves and who may resell it to their own customers.

Light Source began operations in 2014 and is backed by private equity. It did not receive any federal grants for the new middle-mile network in Arizona.


Bill Long, Zayo’s chief product officer, told Fierce Telecom recently that data centers are preparing for an onslaught of demand for more compute power, which will be needed to handle AI workloads and train new AI models.


About Light Source Communications:

Light Source Communications (LSC) is a carrier neutral, customer agnostic provider of secure, scalable, reliable connectivity on a state-of-the-art dark fiber network. The immense amounts of data businesses require to compete in today’s global market require access to an enhanced fiber infrastructure that allows them to control their data. With over 120 years of telecom experience, LSC offers an owner-operated network for U.S. businesses to succeed here and abroad. LSC is uniquely positioned and is highly qualified to build the next generation of dark fiber routes across North America, providing the key connections for business today and tomorrow.





CoreSite Enables 50G Multi-cloud Networking with Enhanced Virtual Connections to Oracle Cloud Infrastructure FastConnect

CoreSite [1.], a colocation services provider, announced the launch of a 50 gigabits per second (Gbps) connection through Oracle Cloud Infrastructure FastConnect [2.]. The connection will be available on the Open Cloud Exchange, CoreSite’s software-defined networking (SDN) platform. It will allow Oracle customers to provision bandwidth to Oracle Cloud Infrastructure (OCI) with virtual connections, and will also support virtual connections to Google Cloud and between CoreSite data centers. Oracle customers can use these connections to harness the power of OCI locally, including services like Oracle Autonomous Database, to unlock innovation and drive business growth.

Note 1. CoreSite is a subsidiary of American Tower Corporation and a member of Oracle PartnerNetwork (OPN). 

Note 2.  Oracle FastConnect enables customers to bypass the public internet and connect directly to Oracle Cloud Infrastructure and other Oracle Cloud services. With connectivity available at CoreSite’s data centers, FastConnect provides a flexible, economical private connection to higher bandwidth options for your hybrid cloud architecture.  Oracle FastConnect is accessible at CoreSite’s data center facilities in Northern Virginia and Los Angeles through direct fiber connectivity. FastConnect is also available via the CoreSite Open Cloud Exchange® in seven CoreSite markets, including Los Angeles, Silicon Valley, Denver, Chicago, New York, Boston and Northern Virginia.

The integration of Oracle FastConnect and the CoreSite Open Cloud Exchange offers on-demand, virtual connectivity and access to a best-in-class, end-to-end, fully redundant connection architecture.

Image Credit: CoreSite


The combination of FastConnect and the OCX can offer customers deploying artificial intelligence (AI) and data-intensive applications the ability to transfer large datasets securely and rapidly from their network edge to machine learning (ML) models and big data platforms running on OCI. With the launch of the new OCX capabilities for FastConnect, businesses gain greater flexibility to provision on-demand, secure bandwidth to OCI with virtual connections of up to 50 Gbps.

With OCI, customers benefit from best-in-class security, consistent high performance, simple predictable pricing, and the tools and expertise needed to bring enterprise workloads to cloud quickly and efficiently. In addition, OCI’s distributed cloud offers multicloud, hybrid cloud, public cloud, and dedicated cloud options to help customers harness the benefits of cloud with greater control over data residency, locality, and authority, even across multiple clouds. As a result, customers can bring enterprise workloads to the cloud quickly and efficiently while meeting the strictest regulatory compliance requirements.

“The digital world requires faster connections to deploy complex, data-intense workloads. The simplified process offered through the Open Cloud Exchange enables businesses to rapidly scale network capacity between the enterprise edge and cloud providers,” said Juan Font, President and CEO of CoreSite, and SVP of U.S. Tower. “These enhanced, faster connections with FastConnect can provide businesses with a competitive advantage by ensuring near-seamless and reliable data transfers at massive scale for real-time analysis and rapid data processing.”

OCI’s extensive network of more than 90 FastConnect global and regional partners offer customers dedicated connectivity to Oracle Cloud Regions and OCI services – providing customers with the best options anywhere in the world. OCI is a deep and broad platform of cloud infrastructure services that enables customers to build and run a wide range of applications in a scalable, secure, highly available, and high-performance environment. From application development and business analytics to data management, integration, security, AI, and infrastructure services including Kubernetes and VMware, OCI delivers unmatched security, performance, and cost savings.

The new Open Cloud Exchange capabilities on FastConnect will be available in Q4 2023.


About CoreSite:

CoreSite, an American Tower company (NYSE: AMT), provides hybrid IT solutions that empower enterprises, cloud, network, and IT service providers to monetize and future-proof their digital business. Our highly interconnected data center campuses offer a native digital supply chain featuring direct cloud onramps to enable our customers to build customized hybrid IT infrastructure and accelerate digital transformation. For more than 20 years, CoreSite’s team of technical experts has partnered with customers to optimize operations, elevate customer experience, dynamically scale, and leverage data to gain competitive edge. For more information, visit the CoreSite website and follow us on LinkedIn and Twitter.




Using a distributed synchronized fabric for parallel computing workloads – Part I

by Run Almog, Head of Product Strategy, DriveNets (edited by Alan J Weissberger)


Different networking attributes are needed for different use cases.  An endpoint can be the source of a service provided via the internet, or a handheld device streaming live video from anywhere on the planet. Between endpoints sit network nodes that carry this continuous and ever-growing traffic flow on to its destination, maintain knowledge of the network topology, apply service-level assurance, handle interruptions and failures, and support a wide range of additional attributes that together enable the network service to operate.

This two-part article will focus on the use case of running artificial intelligence (AI) and/or high-performance computing (HPC) applications, with the resulting networking aspects described.  The HPC industry is now integrating AI and HPC, improving support for AI use cases. HPC has been successfully used to run large-scale AI models in fields like cosmic theory, astrophysics, high-energy physics, and data management for unstructured data sets.

In this Part I article, we examine: HPC/AI workloads, disaggregation in data centers, the role of the Open Compute Project, telco data center networking, AI clusters, and AI networking.

HPC/AI Workloads, High Performance Compute Servers, Networking:

HPC/AI workloads are applications that run over an array of high performance compute servers. Those servers typically host a dedicated computation engine like a GPU/FPGA/accelerator in addition to a high performance CPU, which by itself can act as a compute engine, and some storage capacity, typically a high-speed SSD. The HPC/AI application running on such servers is not running on a specific server but on multiple servers simultaneously. This can range from a few servers or even a single machine to thousands of machines, all operating in sync and running the same application, which is distributed amongst them.

The interconnect (networking) between these computation machines needs to allow any-to-any connectivity between all machines running the same application, and must also cater for the different traffic patterns associated with the type of application running and with the stages of the application’s run. An interconnect solution for HPC/AI would consequently be different from a network built to serve connectivity to residential households or a mobile network, and different from a network built to serve an array of servers answering queries from multiple users, as in a typical data center.
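To get a feel for the scale that any-to-any connectivity implies, here is a quick back-of-the-envelope sketch (the cluster sizes are illustrative, not from this article):

```python
# Any-to-any connectivity between N servers means the fabric may have to
# carry traffic between any of the N*(N-1)/2 distinct server pairs.
def server_pairs(n: int) -> int:
    return n * (n - 1) // 2

for n in (8, 256, 4096):
    print(f"{n:>5} servers -> {server_pairs(n):,} possible pairs")
# 8 -> 28, 256 -> 32,640, 4096 -> 8,386,560
```

The quadratic growth in possible communication pairs is one reason a fabric sized for a residential or mobile network cannot simply be reused for a large AI cluster.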

Disaggregation in Data Centers (DCs):

Disaggregation has been successfully used as a solution for solving challenges in cloud resident data centers.  The Open Compute Project (OCP) has generated open source hardware and software for this purpose.  The OCP community includes hyperscale data center operators and industry players, telcos, colocation providers and enterprise IT users, working with vendors to develop and commercialize open innovations that, when embedded in products, are deployed from the cloud to the edge.

High-performance computing (HPC) is a term used to describe computer systems capable of performing complex calculations at exceptionally high speeds. HPC systems are often used for scientific research, engineering simulations and modeling, and data analytics.  The term high performance refers to both speed and efficiency. HPC systems are designed for tasks that require large amounts of computational power, so that they can perform these tasks more quickly than other types of computers, and they are engineered to do so more energy-efficiently per computation than general-purpose systems.

HPC clusters commonly run batch calculations. At the heart of an HPC cluster is a scheduler used to keep track of available resources. This allows for efficient allocation of job requests across different compute resources (CPUs and GPUs) over high-speed networks.  Several HPC clusters have integrated Artificial Intelligence (AI).
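As a rough illustration of the scheduler’s role described above, here is a minimal toy sketch in Python. It is not modeled on any real scheduler (Slurm, PBS, etc.); the class names, node sizes and job parameters are invented for illustration:

```python
from collections import deque

# Toy batch scheduler: jobs queue up and are placed on compute nodes
# only when enough CPUs/GPUs are free (first-come, first-served).
class Node:
    def __init__(self, name, cpus, gpus):
        self.name, self.free_cpus, self.free_gpus = name, cpus, gpus

class Scheduler:
    def __init__(self, nodes):
        self.nodes = nodes
        self.queue = deque()       # pending (job_id, cpus, gpus) requests
        self.placements = {}       # job_id -> node name

    def submit(self, job_id, cpus, gpus):
        self.queue.append((job_id, cpus, gpus))
        self._dispatch()

    def _dispatch(self):
        while self.queue:
            job_id, cpus, gpus = self.queue[0]
            node = next((n for n in self.nodes
                         if n.free_cpus >= cpus and n.free_gpus >= gpus), None)
            if node is None:
                break              # head-of-line job waits for resources
            node.free_cpus -= cpus
            node.free_gpus -= gpus
            self.placements[self.queue.popleft()[0]] = node.name

sched = Scheduler([Node("n1", cpus=64, gpus=8), Node("n2", cpus=64, gpus=8)])
sched.submit("train-a", cpus=32, gpus=8)   # fills n1's GPUs
sched.submit("train-b", cpus=32, gpus=8)   # goes to n2
sched.submit("train-c", cpus=8,  gpus=4)   # queued: no free GPUs anywhere
print(sched.placements)   # {'train-a': 'n1', 'train-b': 'n2'}
print(list(sched.queue))  # [('train-c', 8, 4)]
```

Real schedulers add priorities, backfill and fairness policies, but the core loop is the same: match queued resource requests against free CPUs/GPUs across the cluster.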

While hyperscale, cloud resident data centers and HPC/AI clusters have a lot of similarities between them, the solution used in hyperscale data centers is falling short when trying to address the additional complexity imposed by the HPC/AI workloads.

Large data center implementations may scale to thousands of connected compute servers.  Those servers are used for an array of different applications, and traffic patterns shift between east/west (inside the data center) and north/south (in and out of the data center). This variety boils down to the fact that every such application handles itself, so the network does not need to guarantee delivery of packets to and from application endpoints; these issues are solved with standards-based retransmission or buffering of traffic to prevent traffic loss.

An HPC/AI workload, on the other hand, is measured by how fast a job is completed, and it interfaces to machines, so latency and accuracy become critical factors. A delayed packet, or a packet being lost, with or without the resulting retransmission, has a huge impact on the application’s measured performance. In the HPC/AI world, it is the responsibility of the interconnect to make sure these mishaps do not happen, while the application simply “assumes” that it is getting all the information “on time” and “in sync” with all the other endpoints it shares the workload with.

–> More about how data centers use disaggregation, and how it benefits HPC/AI, in the second part of this article (Part II).

Telco Data Center Networking:

Telco data centers/central offices are traditionally less supportive of deploying disaggregated solutions than hyperscale, cloud resident data centers.  They are characterized by large monolithic, chassis-based and vertically integrated routers. Every such router is a well-structured, scheduled machine built to carry packets between every group of ports with constant latency and without losing any packets. A chassis-based router could potentially pose a valid solution for HPC/AI workloads if it could be built with a scale of thousands of ports and be distributed throughout a warehouse with ~100 racks filled with servers.

However, some tier 1 telcos, like AT&T, use disaggregated core routing via white box switch/routers and DriveNets Network Cloud (DNOS) software.  AT&T’s open disaggregated core routing platform was carrying 52% of the network operator’s traffic at the end of 2022, according to Mike Satterlee, VP of AT&T’s Network Core Infrastructure Services.  The company says it is now exploring a path to scale the system to 500Tbps and then expand to 900Tbps.

“Being entrusted with AT&T’s core network traffic – and delivering on our performance, reliability and service availability commitments to AT&T– demonstrates our solution’s strengths in meeting the needs of the most demanding service providers in the world,” said Ido Susan, DriveNets founder and CEO. “We look forward to continuing our work with AT&T as they continue to scale their next-gen networks.”

Satterlee said AT&T is running a nearly identical architecture in its core and edge environments, though the edge system runs Cisco’s disaggregated software. Cisco and DriveNets have been active parts of AT&T’s disaggregation process, though DriveNets’ earlier push provided it with more maturity compared to Cisco.

“DriveNets really came in as a disruptor in the space,” Satterlee said. “They don’t sell hardware platforms. They are a software-based company and they were really the first to do this right.”

AT&T began running some of its network backbone on DriveNets core routing software beginning in September 2020. The vendor at that time said it expected to be supporting all of AT&T’s traffic through its system by the end of 2022.

Attributes of an AI Cluster:

Artificial intelligence is a general term that indicates the ability of computers to run logic which assimilates the thinking patterns of a biological brain. The fact is that humanity has yet to understand how a biological brain behaves: how memories are stored and accessed, why different people have different capacities and/or memory malfunctions, how conclusions are deduced and why they differ between individuals, and how split-second decisions are made. All this and more is observed by science but not really understood to a level where it can be related to an explicit cause.

With the evolution of compute capacity came the ability to create a computing function that can factor in large data sets, and the field of AI focuses on identifying such data sets and their resulting outcomes to educate the compute function with as many conclusion points as possible. The compute function is then required to identify patterns within these data sets to predict the outcome of new data sets it has not encountered before. This is not the most accurate description of what AI is (it is a lot more than this), but it is sufficient to explain why networks built to run AI workloads differ from regular data center networks, as mentioned earlier.

Some example attributes of AI networking are listed here:

  • Parallel computing – AI workloads run over a unified infrastructure of multiple machines running the same application and the same computation task
  • Size – such a task can reach thousands of compute engines (e.g., GPU, CPU, FPGA, etc.)
  • Job types – different tasks vary in their size, duration, the size and number of data sets they need to consider, the type of answer they need to generate, etc. This, along with the different languages used to code the application and the type of hardware it runs on, contributes to a growing variance of traffic patterns within a network built for running AI workloads
  • Latency & jitter – some AI workloads produce a response that is anticipated by a user. Job completion time is then a key factor for user experience, which makes latency important. However, since such parallel workloads run over multiple machines, latency is dictated by the slowest machine to respond. This means that while latency is important, jitter (or latency variation) is in fact as much a contributor to achieving the required job completion time
  • Lossless – following on the previous point, a response arriving late delays the entire application. Whereas in a traditional data center a dropped message results in retransmission (which is often not even noticed), in an AI workload a dropped message means the entire computation is either wrong or stuck. It is for this reason that networks running AI workloads require lossless behavior. IP networks are lossy by nature, so for an IP network to behave losslessly, certain additions need to be applied. These will be discussed in a follow-up to this article.
  • Bandwidth – large data sets require high bandwidth in and out of servers for the application to feed on. AI and other high-performance computing functions are reaching interface speeds of 400Gbps per compute engine in modern deployments.
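The latency & jitter point above lends itself to a quick simulation: a synchronized parallel job advances at the pace of its slowest worker, so the tail of the latency distribution, not its mean, sets job completion time. The worker counts and latency figures below are invented for illustration:

```python
import random

# Toy model: each job step ends only when the SLOWEST of N workers
# responds (barrier synchronization), so per-step time is the max
# across workers. Two networks with the SAME mean latency but
# different jitter produce very different job completion times.
random.seed(7)
N_WORKERS, STEPS = 512, 100

def job_time(mean_ms, jitter_ms):
    total = 0.0
    for _ in range(STEPS):
        # step time = worst worker latency this step
        total += max(random.gauss(mean_ms, jitter_ms) for _ in range(N_WORKERS))
    return total

low_jitter  = job_time(mean_ms=10.0, jitter_ms=0.1)
high_jitter = job_time(mean_ms=10.0, jitter_ms=3.0)
print(f"low jitter : {low_jitter:.0f} ms")
print(f"high jitter: {high_jitter:.0f} ms")
# Same 10 ms mean, yet the high-jitter network takes far longer overall.
```

This is why jitter, not just average latency, is a first-order design parameter for AI fabrics.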

The narrowed-down conclusion from these attributes is that a network purposed to run AI workloads differs from a traditional data center network in that it needs to operate “in-sync.”

There are several such “in-sync” solutions available.  The main options are chassis-based solutions, standalone Ethernet solutions, and proprietary locked solutions. –> These will be briefly described, with their key advantages and deficiencies, in our Part II article.


There are a few differences between AI and HPC workloads and how this translates to the interconnect used to build such massive computation machines.

While the HPC market finds proprietary implementations of interconnect solutions acceptable for building secluded supercomputers for specific uses, the AI market requires solutions that allow more flexibility in their deployment and vendor selection.

AI workloads have a greater variance of consumers of outputs from the compute cluster, which makes job completion time the primary metric for measuring the efficiency of the interconnect. However, unlike HPC, where faster is always better, some AI consumers will only detect improvements up to a certain level, which gives interconnect jitter a higher impact than latency.

Traditional solutions work reasonably well up to the scale of a single machine (either standalone or chassis) but fail to scale beyond a single interconnect machine while keeping the performance required to satisfy the running workloads. Further conclusions and merits of the possible solutions will be discussed in a follow-up article.


About DriveNets:

DriveNets is a fast-growing software company that builds networks like clouds. It offers communications service providers and cloud providers a radical new way to build networks, detaching network growth from network cost and increasing network profitability.

DriveNets Network Cloud uniquely supports the complete virtualization of network and compute resources, enabling communication service providers and cloud providers to meet increasing service demands much more efficiently than with today’s monolithic routers. DriveNets’ software runs over standard white-box hardware and can easily scale network capacity by adding additional white boxes into physical network clusters. This unique disaggregated network model enables the physical infrastructure to operate as a shared resource that supports multiple networks and services. This network design also allows faster service innovation at the network edge, supporting multiple service payloads, including latency-sensitive ones, over a single physical network edge.






Equinix to deploy Nokia’s IP/MPLS network infrastructure for its global data center interconnection services

Today, Nokia announced that Equinix will deploy a new Nokia IP/MPLS network infrastructure to support its global interconnection services. As one of the largest data center and colocation providers, Equinix currently runs services on multiple networks from multiple vendors. With the new network, Equinix will be able to consolidate into one, efficient web-scale infrastructure to provide FP4-powered connectivity to all data centers – laying the groundwork for customers to deploy 5G networks and services.

Equinix currently provides metro, national and international interconnectivity and cloud services to its customers to distribute content that delivers the best user experience. As 5G rollouts continue, a fundamental shift in network design is critical to support 5G service capabilities such as ultra-low latency, high capacity and the power to connect multiple devices and systems into one seamless and automated whole. In response, Equinix is replacing its older multi-vendor networks with a single global IP/MPLS network from Nokia, powered by its innovative FP4 routing silicon and Network Services Platform (NSP). Equinix will now be able to deliver all of its interconnection services worldwide, saving its customers money, streamlining their operations and easing their unique 5G transformations.

With a presence in 24 countries across five continents, Equinix connects its hyperscale, communication service provider and enterprise customers with their end users in 52 markets worldwide, extending their digital infrastructure to wherever in the world they need to do business.  Equinix recently completed its $750 million all-cash deal to buy 13 data centers in Canada from Bell (BCE).

Muhammad Durrani, Director of IP Architecture for Equinix, said, “We see tremendous opportunity in providing our customers with 5G services, but this poses special demands for our network, from ultra-low latency to ultra broadband performance, all with business- and mission-critical reliability. Nokia’s end-to-end router portfolio will provide us with the highly dynamic and programmable network fabric we need, and we are pleased to have the support of the Nokia team every step of the way.”

“We’re pleased to see Nokia getting into the data center networking space and applying the same rigor to developing a next-generation open and easily extendible data center network operating system while leveraging its IP routing stack that has been proven in networks globally. It provides a platform that network operations teams can easily adapt and build applications on, giving them the control they need to move fast.”

Sri Reddy, Co-President of IP/Optical Networks, Nokia, said, “We are working closely with Equinix to help advance its network and facilitate the transformation and delivery of 5G services. Our end-to-end portfolio was designed precisely to support this industrial transformation with a highly flexible, scalable and programmable network fabric that will be the ideal platform for 5G in the future. It is exciting to work with Equinix to help deliver this to its customers around the world.”

With an end-to-end portfolio, including the Nokia FP4-powered routing family, Nokia is working in partnership with operators to deliver real 5G. The FP4 chipset is the industry’s leading network processor for high-performance routing, setting the bar for density and scale. Paired with Nokia’s Service Router Operating System (SR OS) software, it will enable Equinix to offer additional capabilities driven by routing technologies such as Ethernet VPNs (EVPNs) and segment routing (SR).


Image Credit: Nokia


This latest deal comes just two weeks after Equinix said it will host Nokia’s Worldwide IoT Network Grid (WING) service on its data centers. WING is an Infrastructure-as-a-Service offering that provides low-latency and global reach to businesses, hastening their deployment of IoT and utilizing solutions offered by the Edge and cloud.

Equinix operates more than 210 data centers across 55 markets. It is unclear which of these data centers will first offer Nokia’s services and when WING will be available to customers.

“Nokia needed access to multiple markets and ecosystems to connect to NSPs and enterprises who want a play in the IoT space,” said Jim Poole, VP at Equinix. “By directly connecting to Nokia WING, mobile network operators can capture business value across IoT, AI, and security, with a connectivity strategy to support business transformation.”



About Nokia:

We create the technology to connect the world. Only Nokia offers a comprehensive portfolio of network equipment, software, services and licensing opportunities across the globe. With our commitment to innovation, driven by the award-winning Nokia Bell Labs, we are a leader in the development and deployment of 5G networks.

Our communications service provider customers support more than 6.4 billion subscriptions with our radio networks, and our enterprise customers have deployed over 1,300 industrial networks worldwide. Adhering to the highest ethical standards, we transform how people live, work and communicate. For our latest updates, please visit us online and follow us on Twitter @nokia.



NeoPhotonics demonstrates 90 km 400ZR transmission in 75 GHz DWDM channels enabling 25.6 Tbps per fiber

NeoPhotonics completed experimental verification of the transmission of 400Gbps data over a data center interconnect (DCI) link in a 75 GHz-spaced Dense Wavelength Division Multiplexing (DWDM) channel.

NeoPhotonics achieved two milestones using its interoperable pluggable 400ZR [1.] coherent modules and its specially designed athermal arrayed waveguide grating (AWG) multiplexers (MUX) and de-multiplexers (DMUX).

Note 1. ZR stands for extended reach; the original 10G ZR interfaces supported an 80km reach over single mode fiber using 1550nm lasers.

  • Data rate per channel increases from today’s non-interoperable 100Gbps direct-detect transceivers to 400Gbps interoperable coherent 400ZR modules.
  • The current DWDM infrastructure can be increased from 32 channels of 100 GHz-spaced DWDM signals to 64 channels of 75 GHz-spaced DWDM signals.
  • The total DCI fiber capacity can thus be increased from 3.2 Tbps (100Gbps/ch. x 32 ch.) to 25.6 Tbps (400Gbps/ch. x 64 ch.) – an eight-fold capacity increase.
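The capacity arithmetic in the bullets above can be checked in a couple of lines:

```python
# DCI fiber capacity before and after, per the bullet list above
channels_old, gbps_old = 32, 100   # 100 GHz-spaced, 100Gbps direct-detect
channels_new, gbps_new = 64, 400   # 75 GHz-spaced, 400ZR coherent

old_tbps = channels_old * gbps_old / 1000
new_tbps = channels_new * gbps_new / 1000
print(old_tbps, "Tbps ->", new_tbps, "Tbps,", f"{new_tbps / old_tbps:.0f}x")
# 3.2 Tbps -> 25.6 Tbps, 8x
```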

NeoPhotonics said its technology overcomes multiple challenges in transporting 400ZR signals within 75 GHz-spaced DWDM channels.

The filters used in NeoPhotonics MUX and DMUX units are designed to limit ACI [2.] while at the same time having a stable center frequency against extreme temperatures and aging.

Note 2.  ACI stands for Adjacent Channel Interference (not to be confused with Application Centric Infrastructure).


NeoPhotonics has demonstrated 90km DCI links using three in-house 400ZR pluggable transceivers with their tunable laser frequencies tuned to 75GHz spaced channels, and a pair of passive 75GHz-spaced DWDM MUX and DMUX modules designed specifically for this application. The optical signal-to-noise ratio (OSNR) penalty due to the presence of the MUX and DMUX and the worst-case frequency drifts of the lasers, as well as the MUX and DMUX filters, is less than 1dB. The worst-case component frequency drifts were applied to emulate the operating conditions for aging and extreme temperatures, the company said in a press release.

“The combination of compact 400ZR silicon photonics-based pluggable coherent transceiver modules with specially designed 75 GHz channel spaced multiplexers and de-multiplexers can greatly increase the bandwidth capacity of optical fibers in a DCI application and consequently greatly decrease the cost per bit,” said Tim Jenks, Chairman and CEO of NeoPhotonics. “These 400ZR coherent techniques pack 400Gbps of data into a 75 GHz wide spectral channel, placing stringent requirements on the multiplexers and de-multiplexers. We are uniquely able to meet these requirements because we do both design and fabrication of planar lightwave circuits and we have 20 years of experience addressing the most challenging MUX/DMUX applications,” concluded Mr. Jenks.

About NeoPhotonics

NeoPhotonics is a leading developer and manufacturer of lasers and optoelectronic solutions that transmit, receive and switch high-speed digital optical signals for Cloud, hyperscale data center, internet content provider and telecom networks. The Company’s products enable cost-effective, high-speed over distance data transmission and efficient allocation of bandwidth in optical networks. NeoPhotonics maintains headquarters in San Jose, California and ISO 9001:2015 certified engineering and manufacturing facilities in Silicon Valley (USA), Japan and China.


Zayo’s largest capacity wavelengths deal likely for cloud data center interconnection (DCI)

Zayo Group Holdings announced it has signed a deal for the largest amount of capacity sold on any fiber route in the company’s history. The deal with the unnamed customer will provide approximately 5 terabits of capacity that can be used to connect mega-scale data centers. While Zayo didn’t disclose the customer, large hyperscale cloud providers, such as Amazon Web Services, Microsoft Azure and Google Cloud Platform, and webscale companies such as Facebook, seem to be likely candidates.

Zayo provides a 133,000-mile fiber network in the U.S., Canada and Europe.  Earlier this year it agreed to be acquired by affiliates of Digital Colony Partners and the EQT Infrastructure IV fund.  That deal is slated to close in the first half of next year.

“Our customers [1] are no longer talking gigabits — they’re talking terabits on multiple diverse routes,” said Julia Robin, senior vice president of Transport at Zayo. “Zayo’s owned infrastructure, scalable capacity on unique routes and ability to turn up services quickly positions us to be the provider of choice for high-capacity infrastructure.”

Note 1. Zayo’s primary customer segments include data centers, wireless carriers, national carriers, ISPs, enterprises and government agencies.


Zayo to extend fiber-optic network in central Florida: The new fiber network infrastructure, comprising more than 2,300 route miles, will open Tampa and Orlando as new markets for the fiber-optic network services company.


Zayo’s extensive wavelength network provides dedicated bandwidth to major data centers, carrier hotels, cable landing stations and enterprise locations across its long-haul and metro networks. Zayo continues to invest in the network, adding new routes and optronics to eliminate local stops, reduce the distance between essential markets and minimize regeneration points. Options include express, ultra-low and low-latency routes and private dedicated networks.

Zayo says it “leverages its deep, dense fiber assets in almost all North American and Western European metro markets to deliver a premier metro wavelength offering. Increasingly, enterprises across multiple sectors including finance, retail, pharma and others, are leveraging this network for dedicated connectivity as they seek ways to have more control over their growing bandwidth needs.”

According to a report by market research firm IDC, data created, captured and replicated worldwide will be 175 zettabytes by 2025 and 30% of it will be in real time. A large chunk of that amount will be driven by webscale, content and cloud providers that require diverse, high capacity connections between their data centers. In order to provision high bandwidth amounts, service providers and webscale companies are turning to dedicated wavelength solutions.

Zayo’s communications infrastructure offerings include dark fiber, private data networks, wavelengths, Ethernet, dedicated internet access and data center co-location services. The company also owns and operates a Tier 1 IP backbone and 51 carrier-neutral data centers.




IHS Markit: Data Center Interconnect (DCI) is Fastest-growing Application for Optical Networking

A significant driver for innovation in the optical market, data center interconnect (DCI) is the fastest-growing application for optical networking equipment, according to a new study from business information provider IHS Markit. Eighty-six percent of service providers polled for the Optical Network Applications Survey have plans to support DCI applications in their networks.

“Data center interconnect is enjoying a meteoric rise as the hottest segment in the optical networking applications space,” said Heidi Adams, senior research director for transport networks at IHS Markit. “Service providers are becoming increasingly invested in the DCI market, both for providing interconnect between their own data centers and for offering DCI services to internet content providers and enterprises. We estimate that service providers will account for around half of all DCI equipment spending in 2018.”

The optical data center equipment market reached $1.4 billion in sales in the first half of 2018, posting 19 percent year-over-year growth, according to IHS Markit. A key driver of the market is the compact DCI sub segment, which notched a 173 percent growth rate during this same time period.

“‘Compact’ DCI equipment is designed to fit within a data center environment from the form factor, power consumption and operational perspectives,” Adams said. “It’s optimized to meet the requirements of internet content providers like Google, AWS, Facebook, Microsoft and Apple.”

The top three vendors in the compact DCI sub segment are Ciena, Infinera and Cisco, who collectively account for three-quarters of the market.

Additional DCI highlights

  • Cost per port is the leading criterion among survey respondents for the selection of equipment for DCI applications.
  • 100G is the main currency for line-side DCI interfaces in 2018, declining in favor of 400G by 2021.
  • IHS Markit forecasts the total DCI market to grow at a 15 percent compound annual growth rate (CAGR) from 2017 to 2022, representing a higher rate of growth than the overall WDM market.
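The forecast in the last bullet can be translated into a growth multiple with a quick CAGR sketch (IHS Markit does not publish the absolute figures here, so only the multiple is shown; Python purely for illustration):

```python
# What a 15% CAGR from 2017 to 2022 implies for the total DCI market.
# CAGR compounding: end = start * (1 + r) ** years

def project(start_value: float, cagr: float, years: int) -> float:
    """Compound start_value at rate cagr for the given number of years."""
    return start_value * (1.0 + cagr) ** years

growth_multiple = project(1.0, 0.15, 5)  # 2017 -> 2022 is 5 years
print(f"Market size multiple over 5 years: {growth_multiple:.2f}x")  # ~2.01x
```

In other words, a 15 percent CAGR roughly doubles the market over the five-year window.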

Optical Network Applications Service Provider Survey – 2018

This survey analyzes the trends and assesses the needs of service providers using emerging optical networking architectures. It covers data center interconnect, packet-optical equipment and software-defined networking for transport networks. For the survey, IHS Markit interviewed 22 service providers who have deployed packet-optical transport, optical DCI and/or transport SDNs or will do so in the future.

DCI, Packet-Optical & OTN Equipment Market Tracker

This biannual report provides worldwide and regional vendor market share, market size, forecasts through 2022, analysis and trends for data center interconnect equipment, packet-optical transport systems, and OTN transport and switching hardware.



Cignal AI: Record Cloud and Colo Optical Hardware Spending in 2Q18

by Andrew Schmitt

Ciena Leads Sales to North American Cloud/Colo Operators; Huawei Sees Strong Demand from Chinese Cloud Giants

Sales of optical equipment to the cloud and colo market grew rapidly, reaching record levels in 2Q18, according to the most recent Optical Customer Markets Report issued by networking component and equipment market research firm Cignal AI. Cloud and colo operators such as Google, Microsoft, and Amazon still account for only a fraction of global optical equipment spending but were nearly a quarter of all North American operator purchases during 2Q18.

“While cloud and colo spending is still not near traditional telco demand for optical transport equipment, the balance is shifting. This is particularly true in North America, where cloud and colo operators now provide both technical and financial leadership to the supply chain,” said Andrew Schmitt, Directing Analyst at Cignal AI.

Released quarterly, the Optical Customer Markets Report quantifies optical equipment sales to five key customer markets – incumbent, wholesale, cable MSO, cloud and colo, and enterprise and government. The current report includes results through the 2Q18 and details equipment vendor market share for sales to cloud operators. Regional forecasts, based on expected spending trends by customer market, are also updated.

Additional key findings in the 2Q18 Optical Customer Markets Report include:

  • Incumbent spending accounts for the largest share of all optical spending in the market. In fact, incumbent spending in China is as much as all spending by other incumbent operators worldwide, combined. Outlays by EMEA incumbents increased again in the most recent quarter.
  • Cable MSO spending in North America continues to be very strong and grew both quarter-over-quarter and year-over-year.
  • Ciena led all vendors in direct sales to the cloud/colo market, driven by the strength of its WaveServer platform. The newly combined Infinera and Coriant became the second-largest supplier of optical equipment to these customers, while Huawei continues to gain market share on growing demand from Baidu, Alibaba and Tencent.

About the Optical Customer Markets Report

The Optical Customer Markets Report tracks optical equipment spending by end customer market type and provides forecasts based on expected spending trends on a regional basis. Deliverables include an Excel file with complete data set, PowerPoint summary and Optical Equipment Active Insight.

The report includes revenue-based market size for all end customer markets across all regions, with market share for sales to the cloud and colo segment broken out on a worldwide basis. Vendors examined include Adtran, ADVA, Ciena, Cisco, Coriant, Cyan, ECI, Ekinops, Fiberhome, Fujitsu Networks, Huawei, Infinera, Juniper Networks, NEC, Nokia, Padtec, TE Conn, Transmode, Xtera and ZTE.

Full report details, as well as other articles and presentations, are available to users who register for a free account on the Cignal AI website.

About Cignal AI

Cignal AI provides active and insightful market research for the networking component and equipment market and the market’s end customers. Our work blends expertise from a variety of disciplines to create a uniquely informed perspective on the evolution of networking communications.

Addendum:  Data Center Interconnect (DCI) Market Share

Moore’s law alive in the Data Center; Ethernet adapter revenue up 43% YoY

by Cliff Grossner, Senior Research Director & Advisor Cloud & Data Center Research Practice at IHS-Markit


We cannot measure Moore’s law simply in time between generations. Even though it took Intel longer than 2 years to move from 14nm to 10nm silicon, the number of transistors in their 10nm CPUs exceeded Moore’s Law expectation of 2x per 2 years, according to new research by IHS Markit.

For example, improved transistor and IC design helped Intel grow transistor density from 37.5 million transistors per square millimeter (MTr/mm²) to 100.8 MTr/mm² between 2014 and 2017.
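A quick check of those density figures shows what pace they imply. (Note that the article's "exceeded Moore's Law" claim concerns transistor count per CPU, where die size also contributes; density alone works out to roughly a two-year doubling. Python purely for illustration.)

```python
import math

# Compare Intel's published density gain with a 2x-per-2-years pace.
# Figures (37.5 and 100.8 MTr/mm^2, 2014 -> 2017) are from the article.

d_2014, d_2017, years = 37.5, 100.8, 3
factor = d_2017 / d_2014                          # ~2.69x over 3 years
doubling_time = years * math.log(2) / math.log(factor)

print(f"Density grew {factor:.2f}x in {years} years")
print(f"Implied doubling time: {doubling_time:.2f} years")  # ~2.1 years
```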

“Since 2007 we’ve seen an immense growth in consumer devices, apps, user-generated content and streaming services, as smart phones and social media gained popularity, driving the need for additional data center (DC) server computational capacity to support them. Connected devices and data-intensive applications will continue to fuel global demand for DC compute and push it up significantly ahead of the average growth of the number of transistors on a CPU,” said Cliff Grossner, Ph.D., Senior Research Director and Advisor for the Cloud and Data Center Research Practice at IHS Markit.

“Strong growth in the demand for DC server computation will compel designers of server hardware to think beyond general purpose compute and consider new server architectures purpose-built for parallel computation that will enable artificial intelligence, advanced driver assistance systems and real-time rendering for virtual and augmented reality amongst others,” Cliff added.

More Data Center Compute Market Highlights:

  • Cloud service providers are expected to buy 37% of 2017 DC servers shipped, telcos 15% and enterprises 48%.
  • White Box – including all vendors that produce rack server hardware with OS software sold separately, such as QCT, Wiwynn and Inventec – was #1 in units shipped in 3Q17 (23% share) for DC servers.
  • HPE took the #1 spot in server revenue market share (23%), Dell was #2 (19%), and White Box was #3 (17%) in 2Q17.
  • Programmable Ethernet adapter revenue was up 7% QoQ and up 43% YoY, hitting $22M in 3Q17.
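The growth rates in the last bullet can be worked backwards to the prior-period revenues (approximate values derived from the reported figures; Python purely for illustration):

```python
# Back-calculating prior-period programmable Ethernet adapter revenue
# from the 3Q17 figure and the reported growth rates.

rev_3q17 = 22.0  # $M, from the report
qoq_growth, yoy_growth = 0.07, 0.43

rev_2q17 = rev_3q17 / (1 + qoq_growth)  # ~$20.6M a quarter earlier
rev_3q16 = rev_3q17 / (1 + yoy_growth)  # ~$15.4M a year earlier

print(f"2Q17 ~ ${rev_2q17:.1f}M, 3Q16 ~ ${rev_3q16:.1f}M")
```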

Research Synopsis:

The IHS Markit Data Center Compute Intelligence Service provides quarterly worldwide and regional market size, vendor market share, forecasts through 2021, analysis and trends for (1) data center servers by form factor [rack, blade, open compute and tower], server class [entry-level, enterprise, large-scale enterprise, large-scale compute and high-performance compute], and market segment [Enterprise, Telco and Cloud Service Provider], and (2) Ethernet network adapters by CPU offload [Basic, Offload and Programmable NIC], port speed [1/10/25/40/50/100GE], form factor [stand-up, piggyback and open compute], usage case [storage and server] and market segment. Vendors tracked include Dell, HPE, Lenovo, Cisco, Huawei, Inspur, IBM, Supermicro, Cray, Intel, Broadcom, Mellanox, Cavium, and others.


In a separate IHS Markit report:

Hyperscale data center owners are driving growth of renewable energy in data centers

By Maggie Shillington, analyst, cloud and data centers, IHS Markit


  • Between 2 percent and 3 percent of developed countries’ electricity consumption is currently attributed to data centers. For most data centers, the largest operational cost is the electricity used for cooling.
  • Onsite generation is the ideal way to implement renewable energy in data centers. The two most popular renewable energy methods are solar and wind power, due to their high-energy production and relative ease of implementation.
  • Offsite renewable energy sources — primarily utility companies and renewable energy suppliers — are typically the easiest way for data centers to obtain renewable energy. Offsite generation removes the large upfront capital expenses to produce onsite renewable energy and the geographical limitations of renewable energy production methods.
  • Although small data centers have a distinct advantage in using onsite options, owners of hyperscale data centers (i.e., Apple, Google, Microsoft, Amazon and Facebook) are driving the growth of renewable energy for data centers.

Cignal AI & Dell’Oro: Optical Network Equipment Market Decline Continues

Executive Summary & Overview:

Does anyone remember the fiber optic build-out boom of the late 1990s through early 2001? And the subsequent bust, from which the industry still has not recovered!

Fast forward to today, where we hear more and more about huge fiber demand from mega cloud service providers/Internet companies for intra- and inter-data center connections, plus the enormous amount of fiber backhaul needed for small cells and cell towers.

Yet two respected market research firms, Cignal AI and Dell’Oro Group, both say that optical network transport equipment revenue declined yet again.

Cignal AI said: “global spending on optical network equipment dropped for a third consecutive quarter, led by a larger than normal seasonal decline in China and weakening trends in EMEA.”  However, Cignal AI (Andrew Schmitt) stated that “North American spending increased again quarter-over-quarter, with positive results reported by most vendors. Spending on Metro WDM continues to grow at the expense of LH WDM.”

Dell’Oro Group reported in a press release: “revenues for Optical Transport equipment in North America continued to decline in the third quarter of 2017.”

“Optical Transport equipment purchases in North America was about 10 percent lower in the first nine months of 2017,” said Jimmy Yu, Vice President at Dell’Oro Group. “This has been one of the more challenging years for optical equipment manufacturers selling into North America. However, a few vendors in the region performed really well considering the tough market environment. For the first nine months of the year, Ciena was able to hold revenues steady, Cisco was able to grow revenues 14 percent, and Fujitsu experienced only a slight revenue decline,” Mr. Yu added.

–>Please see Editor’s Notes below for additional optical network equipment market insight and vendor perspective.


Cignal AI Report Summary:

  • North American spending increased again quarter-over-quarter, with positive results reported by most vendors.  Spending on Metro WDM continues to grow at the expense of LH WDM.
  • EMEA revenue fell sharply though this was the result of weakness at larger vendors – smaller vendors performed better. As in North America, LH WDM bore the brunt of the decline.
  • Last quarter saw the weakest YoY revenue growth recorded in China in over four years, as momentum from 2Q17 spending failed to carry into the third quarter. Spending trends in the region remain difficult to predict.
  • Revenue in the rest of Asia (RoAPAC) eased following breakout results in India during 2Q17, though spending remains at historically high levels.
  • Quarterly coherent 100G+ port shipments broke 100k units for the first time on a global basis. 100G+ port shipments in China were flat QoQ and substantially up YoY.

Cignal AI’s October 29, 2017 Optical Customer Markets Report discovered an unexpected weakness in 2017 optical transport equipment spending from cloud and co-location (colo) operators (see Cignal AI Reports Unexpected Drop in Cloud and Colo Spending). This surprising trend was then further supported by public comments later made by Juniper and Applied Optoelectronics.

Contact Info: 

Cignal AI – 225 Franklin Street FL26 Boston, MA – 02110 – (617) 326-3996

Email:  [email protected]


Editor’s Notes:

1. One prominent Optical Transport Network Equipment vendor evidently feels the effect of the market slowdown. On November 8, 2017, Infinera reported a GAAP net loss for the quarter of $(37.2) million, or $(0.25) per share, compared to a net loss of $(42.8) million, or $(0.29) per share, in the second quarter of 2017, and a net loss of $(11.2) million, or $(0.08) per share, in the third quarter of 2016.

Infinera also announced it is implementing a plan to restructure its worldwide operations in order to reduce its expenses and establish a more cost-efficient structure that better aligns its operations with its long-term strategies. As part of this restructuring plan, Infinera will reduce headcount, rationalize certain products and programs, and close a remote R&D facility.

2. Astonishingly, there’s an India-based optical network equipment vendor on the rise. Successful homegrown Indian telecom vendors are hard to come by, which makes Bengaluru-based Tejas Networks something of an anomaly. Started 17 years ago (in 2000), Tejas is one of India’s few hardware producers.

Tejas Networks India Ltd. has made a name for itself in the optical networking market, especially within India, which looks poised for a boom in this sector (mainly due to fiber backhaul of 4G and 5G mobile data traffic). Nearly two thirds of its sales come from India, with the rest earned overseas.

“We are growing at 35% year-on-year and we hope to grow by at least 20% over the next two to three years,” says Sanjay Nayak, the CEO and managing director of Tejas, during an interview with Light Reading. “Overseas, we mainly target south-east Asian, Latin America and African markets.” Telcos in these markets have similar concerns to those in India, explains Nayak, making it easy for Tejas to address their demands.

“R&D is in our DNA and we believe that unless you come up with a differentiated product the market will not take you seriously,” says Nayak. “We have a huge advantage as an Indian player … [which] allows us to provide the product at a lesser price.”

Nayak believes that the experience of developing solutions for the problems faced by Indian telcos has helped the company to address overseas markets as well.

“Our products do very well for networks evolving from TDM to packet, which is a key concern of the Indian telcos,” he explains. “We realized that the US-based service providers were facing a similar problem of cross connect, which we were able to resolve. So, as we say, you can address any market if you are able to handle the Indian market.”

Read more at:

3.  The long haul optical transport market is dominated by OTN (Optical Transport Network) equipment (which this editor worked on from 2000 to 2002 as a consultant to Ciena, NEC, and other optical network equipment and chip companies).

The OTN wraps client payloads (video, image, data, voice, etc) into containers or “wrappers” that are transported across wide area fiber optic networks.  That helps maintain native payload structure and management information. OTN offers key benefits such as reduction in transport cost and optimal utilization of the optical spectrum.
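As a reference sketch of the wrapper hierarchy described above, the table below lists the standard nominal ITU-T G.709 line rates; the wrapper adds management overhead and FEC on top of each client payload (Python purely for illustration):

```python
# Approximate OTN (ITU-T G.709) line rates, illustrating the "digital
# wrapper" hierarchy. Rates in Gbps are the standard nominal values;
# the OTUk line rate exceeds the client rate because of overhead + FEC.

OTU_LINE_RATE_GBPS = {   # signal: approximate line rate
    "OTU1": 2.666,       # wraps ~2.5G clients (e.g., STM-16/OC-48)
    "OTU2": 10.709,      # wraps ~10G clients (e.g., STM-64, 10GbE WAN)
    "OTU3": 43.018,      # wraps ~40G clients
    "OTU4": 111.810,     # wraps 100GbE
}

for signal, rate in OTU_LINE_RATE_GBPS.items():
    print(f"{signal}: ~{rate} Gbps")
```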

OTN technology includes both WDM and DWDM. The service segment includes network maintenance and support services and network design & optimization services. On the basis of component, the market is divided into optical switch and optical transport. Based on end user, it is classified into government and enterprises.

According to Allied Market Research, OTN equipment market leaders include: Adtran, Inc., ADVA Optical Networking, Advanced Micro Devices Inc., Fujitsu, Huawei Technologies, ZTE Corporation, Belkin Corporation, Ciena Corporation, Coriant, and Allied Telesyn.

Above illustration courtesy of Allied Market Research


Note that Cisco offers OTN capability on its Network Convergence System (NCS) 4000 400 Gbps Universal line card. Despite that and other OTN-capable gear, Cisco is not covered in the above-mentioned Allied Market Research OTN report.


Global Switching & Router Market Report:

Separately, Synergy Research Group said in a press release that:

Worldwide switching and router revenues were well over $11 billion in Q3 and $44 billion over the last four quarters, representing 3% growth on a rolling annualized basis. Ethernet switching is the largest of the three segments, accounting for almost 60% of the total, and it is also the segment with by far the highest growth rate, propelled by aggressive deployment of 100 GbE and 25 GbE switches.

In Q3 North America remained the biggest region accounting for over 41% of worldwide revenues, followed by APAC, EMEA and Latin America. The APAC region has been the fastest growing and this was again the case in Q3, with growth being driven in large part by spending in China, which benefited Huawei in particular.

Cisco’s share of the total worldwide switching and router market was 51%, with shares in the individual segments ranging from 63% for enterprise routers to 38% for service provider routers. Cisco is followed by Huawei, Juniper, Nokia and HPE. Their overall switching and router market shares were in the 4-10% range in Q3. There is then a reasonably long tail of other vendors, with Arista and H3C being the most prominent challengers.


“The big picture is that total switching and router revenues are still growing and Cisco continues to control half of the market,” said John Dinsdale, a Chief Analyst at Synergy Research Group. “Some view SDN and NFV as existential threats to Cisco’s core business, with own-design networking gear from the hyperscale cloud providers posing another big challenge. While these are genuine issues which erode growth opportunities for networking hardware vendors, there are few signs that these are substantially impacting Cisco’s competitive market position in the short term.”

Contact Info: 

To speak to a Synergy analyst or to find out more about how to access Synergy’s market data, please contact Heather Gallo @ [email protected] or at 775-852-3330 extension 101.
