New Linux Foundation white paper: How to integrate AI applications with telecom networks using standardized CAMARA APIs and the Model Context Protocol (MCP)

The Linux Foundation’s CAMARA project [1] released a significant white paper, “In Concert: Bridging AI Systems & Network Infrastructure through MCP: How to Build Network-Aware Intelligent Applications.” The open source software organization says, “Telco network capabilities exposed through APIs provide a large benefit for customers. By simplifying telco network complexity with APIs and making the APIs available across telco networks and countries, CAMARA enables easy and seamless access.”

Note 1. CAMARA is an open source project within the Linux Foundation that defines, develops, and tests network APIs. CAMARA works in close collaboration with the GSMA Operator Platform Group to align API requirements and publish API definitions. API harmonization is achieved through rapid, agile development of working code with developer-friendly documentation. API definitions and reference implementations are free to use (Apache 2.0 license).

…………………………………………………………………………………………………………………………………………………………….

The white paper outlines how the Model Context Protocol (MCP) and CAMARA’s network APIs can provide AI systems with real-time network intelligence, enabling the development of more efficient and network-aware applications. This is seen as a critical step toward future autonomous networks that can manage and fix their own data discrepancies.

CAMARA facilitates the development of operator-agnostic network APIs, adhering to a “write once” paradigm to mitigate fragmentation and provide uniform access to essential network capabilities, including Quality on Demand (QoD), Device Location, Edge Discovery, and fraud prevention signals. The new technical paper details an architecture in which an MCP server functions as an abstraction layer, translating CAMARA APIs into MCP-compliant “tools” that AI applications can seamlessly discover and invoke. This integration bridges the historical operational gap between AI systems and the underlying communication networks that power modern digital services. Because tools are discovered at runtime, AI agents can access the latest API capabilities as soon as they are released, avoiding continuous code refactoring and putting emerging network functionality to use immediately, without implementation bottlenecks.
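
The tool-abstraction pattern the paper describes can be sketched in a few lines. This is a minimal illustration, not CAMARA’s reference code or the actual MCP SDK: the `CamaraMCPServer` class, the tool name, and the response shape are all assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Hypothetical sketch: an MCP server acting as an abstraction layer that
# registers each CAMARA API as a "tool" an AI agent can discover and invoke.

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., Any]

class CamaraMCPServer:
    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, name: str, description: str):
        def decorator(fn):
            self._tools[name] = Tool(name, description, fn)
            return fn
        return decorator

    def list_tools(self):
        # Discovery: agents see whatever tools the operator currently exposes,
        # so newly released CAMARA APIs appear without client code changes.
        return [(t.name, t.description) for t in self._tools.values()]

    def call_tool(self, name: str, **kwargs):
        return self._tools[name].handler(**kwargs)

server = CamaraMCPServer()

@server.register("quality_on_demand",
                 "Request a temporary QoS boost for a device session (CAMARA QoD)")
def quality_on_demand(device_id: str, profile: str) -> dict:
    # In production this handler would call the operator's CAMARA QoD endpoint;
    # here it just returns a mock session record.
    return {"device": device_id, "profile": profile, "status": "ACTIVE"}
```

In this sketch an agent would first call `list_tools()` to discover what the network exposes, then `call_tool("quality_on_demand", ...)` by name, which is the runtime-discovery property the paper emphasizes.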

“AI agents increasingly shape the digital experiences people rely on every day, yet they operate disconnected from network capabilities – intelligence, control, and real-time source of truth,” said Herbert Damker, CAMARA TSC Chair and Lead Architect, Infrastructure Cloud at Deutsche Telekom. “CAMARA and MCP bring AI and network infrastructure into concert, securely and consistently across operators.”

The paper includes practical example scenarios for “network-aware” intelligent applications/agents, including:

  • Intelligent video streaming with AI-powered quality optimization
  • Banking fraud prevention using network-verified security context
  • Local/edge-optimized AI deployment informed by network and edge resource conditions
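
The fraud-prevention scenario, for example, reduces to cross-checking a claimed location against a network-verified one. A minimal sketch follows, assuming a hypothetical `verify_location` helper modeled loosely on CAMARA’s Device Location verification; the function names, arguments, and mock data are illustrative, not actual API code.

```python
# Illustrative only: names and response shapes are assumptions, not the
# real CAMARA Device Location API.

def verify_location(device_id: str, latitude: float, longitude: float,
                    radius_km: float) -> bool:
    """Stand-in for an MCP tool call asking the network whether the device
    is really within radius_km of the claimed coordinates."""
    network_fix = {"dev-42": (48.137, 11.575)}  # mock network-side location
    lat, lon = network_fix[device_id]
    # Crude flat-earth distance approximation (1 degree ~ 111 km),
    # adequate for a sketch.
    dist_km = ((lat - latitude) ** 2 + (lon - longitude) ** 2) ** 0.5 * 111
    return dist_km <= radius_km

def assess_transaction(device_id: str, claimed_lat: float,
                       claimed_lon: float) -> str:
    # An agent flags the transaction when network-verified context
    # contradicts the location the banking app reports.
    ok = verify_location(device_id, claimed_lat, claimed_lon, radius_km=10)
    return "allow" if ok else "step-up-auth"
```

The point of the scenario is that the network, not the application, is the source of truth: the agent verifies rather than infers.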

In addition to the architecture and use cases, the paper outlines CAMARA’s objectives for supporting MCP, including security guidelines; standardized MCP tooling for CAMARA APIs; and the quality requirements and success factors needed for production-grade implementations. The white paper is available for download on the CAMARA website.

Collaboration with the Agentic AI Foundation

The release of this work aligns with a major ecosystem milestone: MCP now lives under the Linux Foundation’s newly formed Agentic AI Foundation (AAIF), a sister initiative that provides neutral, open governance for key agentic AI building blocks. The Linux Foundation announced AAIF on December 9, 2025, with founding project contributions including Anthropic’s MCP, Block’s goose, and OpenAI’s AGENTS.md. AAIF’s launch emphasizes MCP’s role as a broadly adopted standard for connecting AI models to tools, data, and applications, with more than 10,000 published MCP servers cited by the Linux Foundation and Anthropic. 

“With MCP now under the Linux Foundation’s Agentic AI Foundation, developers can invest with confidence in an open, vendor-neutral standard,” said Arpit Joshipura, general manager, Networking, Edge and IoT at the Linux Foundation. “CAMARA’s work demonstrates how MCP can unlock powerful new classes of network-aware AI applications.”

“The Agentic AI Foundation calls for trustworthy infrastructure. CAMARA answers that call. As AI shifts from conversation to orchestration, agentic workflows demand synchronization with reality,” said Nick Venezia, CEO and Founder, Centillion.AI, CAMARA End User Council Representative to the TSC. “We provide the contextual lens that allows AI to verify rather than infer, moving from guessing to knowing.”

References:

https://camaraproject.org/

https://camaraproject.org/news/

https://camaraproject.org/2026/01/12/camara-charts-a-path-for-network-aware-ai-applications-with-mcp/

TC3 Update on CORD (Central Office Re-architected as a Datacenter)

Introduction:

Timon Sloane of the Open Networking Foundation (ONF) provided an update on project CORD on November 1st at the Telecom Council’s Carrier Connections (TC3) summit in Mountain View, CA. The session was titled:

Spotlight on CORD: Transforming Operator Networks and Business Models

After the presentation, Sandhya Narayan of Verizon and Tom Tofigh of AT&T came up to the stage to answer a few audience member questions (there was no real panel session).

The basic premise of CORD is to re-architect a telco/MSO central office with the same or similar architecture as a cloud-resident data center. Not only the central office, but also remote networking equipment in the field (like an Optical Line Termination unit, or OLT) is decomposed and disaggregated such that all but the most primitive functions are executed by open source software running on a compute server. The only remaining purpose-built hardware is the physical-layer transmission system, which could be optical fiber, copper, or cellular/mobile.

Author’s Note: Mr. Sloane didn’t mention that ONF became involved in project CORD when it merged with ON.Lab earlier this year. At that time, the ONOS and CORD open source projects became ONF priorities. The Linux Foundation still lists CORD as one of its open source projects, but it appears the heavy lifting is being done by the new ONF as per this press release.

………………………………………………………………………………………………………………

Backgrounder:

A reference implementation of CORD combines commodity servers, white-box switches, and disaggregated access technologies with open source software to provide an extensible service delivery platform. This gives network operators (telcos and MSOs) the means to configure, control, and extend CORD to meet their operational and business objectives. The reference implementation is sufficiently complete to support field trials.

Illustration above is from the OpenCord website

……………………………………………………………………………………………………………………….

Highlights of Timon Sloane’s CORD Presentation at TC3:

  • ONF has transformed over the last year into a network operator-led consortium.
  • SDN, OpenFlow, ONOS, and CORD are all important ONF projects.
  • “70% of worldwide network operators are planning to deploy CORD,” according to IHS-Markit senior analyst Michael Howard (who was in the audience; see his question to Verizon below).
  • 80% of carrier spending is at the network edge (which includes the line-terminating equipment and the central office accessed).
  • The central office (CO) is the most important network infrastructure for service providers (AKA telcos, carriers and network operators, MSOs or cablecos, etc.).
  • The CO is the service provider’s gateway to customers.
  • End-to-end user experience is controlled by the ingress and egress COs (local and remote) accessed.
  • Transforming the outdated CO is a great opportunity for service providers. The challenge is to turn the CO into a cloud-like data center.
  • CORD’s mission is to enable the “edge cloud.” Note that this differs from the mission stated on the OpenCord website:

    “Our mission is to bring datacenter economies and cloud agility to service providers for their residential, enterprise, and mobile customers using an open reference implementation of CORD with an active participation of the community. The reference implementation of CORD will be built from commodity servers, white-box switches, disaggregated access technologies (e.g., vOLT, vBBU, vDOCSIS), and open source software (e.g., OpenStack, ONOS, XOS).” 

  • A CORD-like CO infrastructure is built using commodity hardware, open source software, and white boxes (e.g., switch/routers and compute servers).
  • The agility of a cloud service provider depends on software platforms that enable rapid creation of new services in a “cloud-like” way. Network service providers need to adopt this same model.
  • White boxes provide subscriber connections, with control functions virtualized in cloud-resident compute servers.
  • A PON Optical Line Termination unit (OLT) was the first candidate chosen for CORD. It’s at the “leaf of the cloud,” according to Timon.
  • The three markets for CORD are: Mobile (M-), Enterprise (E-), and Residential (R-). There is also the Multi-Service edge, which is a new concept.
  • CORD is projected to be a $300B market (source not stated).
  • CORD provides opportunities for: application vendors (VNFs, network services, edge services, mobile edge computing, etc.), white box suppliers (compute servers, switches, and storage), and systems integrators (educate, design, deploy, support customers, etc.).
  • The CORD Build Event was held November 7-9, 2017 in San Jose, CA. It explored CORD’s mission, market traction, use cases, and technical overview as per this schedule.
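
The disaggregation idea in the bullets above — a white-box OLT keeping only primitive line functions while control is virtualized in cloud-resident servers — can be sketched as a thin hardware-abstraction layer in the spirit of VOLTHA. All class and method names here are illustrative assumptions, not VOLTHA’s actual API.

```python
# Sketch of the disaggregation pattern: the physical box retains only
# primitive port operations; provisioning, policy, and management run as
# software against an abstract device model. Names are illustrative.

class WhiteBoxOLT:
    """Minimal stand-in for the physical hardware: ports up or down,
    nothing more."""
    def __init__(self, num_ports: int):
        self.ports = {p: "DOWN" for p in range(num_ports)}

    def set_port(self, port: int, state: str):
        self.ports[port] = state

class VirtualOLT:
    """Cloud-resident control logic, decoupled from the hardware
    underneath — the 'virtualized control functions' of the bullets."""
    def __init__(self, hw: WhiteBoxOLT):
        self.hw = hw
        self.subscribers = {}

    def provision_subscriber(self, sub_id: str, port: int, profile: str):
        # All service state lives in software; only the primitive
        # port action ever touches the white box.
        self.subscribers[sub_id] = {"port": port, "profile": profile}
        self.hw.set_port(port, "UP")

olt = WhiteBoxOLT(num_ports=16)
volt = VirtualOLT(olt)
volt.provision_subscriber("sub-001", port=3, profile="1G-residential")
```

The design point is that swapping the white box for another vendor’s hardware only requires a new `WhiteBoxOLT`-style shim, while the service logic in `VirtualOLT` is untouched.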

Service Providers active in CORD project:

  • AT&T: R-CORD (PON and G.fast), Multi-service edge CORD, VOLTHA (Virtual OLT Hardware Abstraction)
  • Verizon: M-CORD
  • Sprint: M-CORD
  • Comcast: R-CORD
  • CenturyLink: R-CORD
  • Google: Multi-access CORD

Author’s Note: NTT (Japan) and Telefonica (Spain) have deployed CORD and presented their use cases at the CORD Build event. Deutsche Telekom, China Unicom, and Turk Telecom are active in the ONF and may also have plans to deploy CORD.

……………………………………………………………

Q&A Session:

  • This author questioned the partitioning of CORD tasks and responsibility between the ONF and the Linux Foundation. No clear answer was given. Perhaps in a follow-up comment?
  • AT&T is bringing use cases into the ONF for reference platform deployments.
  • CORD is a reference architecture, with systems integrators needed to put the pieces together (commodity hardware, white boxes, open source software modules).
  • Michael Howard asked Verizon to provide commercial deployment status: number, location, use cases, etc. Verizon said it can’t talk about commercial deployments at this time.
  • Biggest challenge for CORD: disaggregating the purpose-built, vendor-specific hardware that exists in COs today. Many COs are router/switch centric, but they have to be opened up if CORD is to gain market traction.
  • Future tasks for project CORD include: virtualized Radio Access Network (RAN), open radio (perhaps “new radio” from 3GPP Release 15?), systems integration, and inclusion of micro-services (which were discussed at the very next TC3 session).

Addendum from Marc Cohn, formerly with the Linux Foundation:

Here’s an attempt to clarify the CORD project responsibilities: