Highlights of 2017 Telecom Infrastructure Project (TIP) Summit

Executive Summary:

The Telecom Infra Project (TIP) is gaining a lot of awareness and market traction, judging by last week’s very well attended TIP Summit at the Santa Clara Convention Center. The number of telecom network operators that presented was very impressive, especially considering that none were from the U.S., with the exception of AT&T, which presented on behalf of the Open Compute Project (OCP) Networking Group. It was announced at the summit that the OCP Networking Group had formed an alliance with TIP.

The network operators that presented or were panelists included representatives from Deutsche Telekom AG, Telefonica, BT, MTN Group (Africa), Bharti Airtel Ltd (India), Reliance Jio (India), Vodafone, Turkcell (Turkey), Orange, SK Telecom, and TIM Brasil, among others. Telecom Italia, NTT, and others were present too. CableLabs – the R&D arm of the MSOs/cablecos – was represented on a panel, where it announced a new TIP Community Lab (details below).

Facebook co-founded TIP along with Intel, Nokia, Deutsche Telekom, and SK Telecom at the 2016 Mobile World Congress event. Like the OCP (also started by Facebook), its mission is to disaggregate network hardware into modules and define open source software building blocks. As its name implies, TIP’s work is specific to telecom infrastructure: developing and deploying new networking technologies. TIP members include more than 500 companies – telcos, Internet companies, vendors, consulting firms and system integrators – and membership has grown dramatically in the last year.


During his opening keynote speech, Axel Clauberg, VP of technology and innovation at Deutsche Telekom and chairman of the TIP Board of Directors, announced that three more operators had joined the TIP Board: BT, Telefonica, and Vodafone.

“TIP is truly operator-focused,” Clauberg said. “It’s called Telecom Infrastructure Project, and I really count on the operators to continue contributing to TIP and to take us to new heights.” That includes testing and deploying the new software and hardware contributed to TIP, he added.

“My big goal for next year is to get into the deployment stage,” Clauberg said. “We are working on deployable technology. [In 2018] I want to be measured on whether we are successfully entering that stage.”

Jay Parikh, head of engineering and infrastructure at Facebook, echoed that TIP’s end goal is deployments, whether it is developing new technologies, or supporting the ecosystem that will allow them to scale.

“It is still very early. Those of you who have been in the telco industry for a long time know that it does not move lightning fast. But we’re going to try and change that,” Parikh said.

…………………………………………………………………………………………………………………….

TIP divides its work into three areas — access, backhaul, and core & management — and each of the project groups falls under one of those three areas.  Several new project groups were announced at the summit:

  • Artificial Intelligence and applied Machine Learning (AI/ML): will focus on using machine learning and automation to help carriers keep pace with the growth in network size, traffic volume, and service complexity. It will also work to accelerate deployment of new over-the-top services, autonomous vehicles, drones, and augmented reality/virtual reality.
  • End-to-End Network Slicing (E2E-NS): aims to create multiple networks that share the same physical infrastructure. That would allow operators to dedicate a portion of their network to a certain functionality and should make it easier for them to deploy 5G-enabled applications.
  • openRAN: will develop RAN technologies based on General Purpose Processing Platforms (GPPP) and disaggregated software.
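The slicing idea above – multiple logical networks sharing one physical infrastructure, each dedicated to a service class – can be sketched as partitioning a capacity pool into isolated slices. This is only an illustration; the class names, slice names, and capacity figures below are invented and are not from any TIP specification:

```python
# Minimal sketch of end-to-end network slicing: one physical network's
# capacity is partitioned into isolated slices, each dedicated to a
# service class (all names and numbers here are illustrative only).

class PhysicalNetwork:
    def __init__(self, capacity_gbps):
        self.capacity_gbps = capacity_gbps
        self.slices = {}  # slice name -> reserved capacity in Gbps

    def allocate_slice(self, name, gbps):
        used = sum(self.slices.values())
        if used + gbps > self.capacity_gbps:
            # Admission control: a new slice may not steal capacity
            # already committed to existing slices.
            raise ValueError(f"insufficient capacity for slice {name!r}")
        self.slices[name] = gbps
        return name

net = PhysicalNetwork(capacity_gbps=100)
net.allocate_slice("eMBB-video", 60)      # high-throughput slice
net.allocate_slice("URLLC-vehicles", 10)  # low-latency slice
net.allocate_slice("mMTC-sensors", 5)     # massive-IoT slice
print(net.slices)
```

Real slicing additionally isolates latency, compute and radio resources per slice, but the admission-control pattern is the same: a slice request is accepted only if the shared substrate can honor it.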

The other projects/working groups are the following:

  • Edge Computing: This group is addressing system integration requirements with innovative, cost-effective and efficient end-to-end solutions that serve rural and urban regions in optimal and profitable ways.
  • mmWave Networks: This group is pioneering a 60 GHz wireless networking system to deliver gigabits of capacity in dense urban environments more quickly, easily, and at lower cost than deploying fiber. A contribution on use cases for mmWave backhaul was made to the IEEE 802.11ay task force this year.


Above illustration courtesy of TIP mmW Networks Group

  • Open Optical Packet Transport:  This project group will define Dense Wavelength Division Multiplexing (DWDM) open packet transport architecture that triggers new innovation and avoids implementation lock-ins. Open DWDM systems include open line system & control, transponder & network management and packet-switch and router technologies.
  • This working group is focused on enabling carriers to deliver new services and applications more efficiently by using mobile edge computing (MEC) to turn the RAN edge (mobile, fixed, licensed and unlicensed spectrum) into an open media and service hub.
  • The project is pioneering a virtualized RAN (vRAN) solution composed of low-cost remote radio units that can be managed and dynamically reconfigured by centralized infrastructure over non-ideal transport.
  • This project group will develop an open RAN architecture by defining open interfaces between internal components and focusing on lab activity with various companies for multi-vendor interoperability. The goal is to broaden the mobile ecosystem of related technology companies to drive a faster pace of innovation.

A complete description, with pointers/hyperlinks to the respective project/work group charters, is in the TIP Company Member Application here.

TEACs –  Innovation Centers for TIP:

Also of note was the announcement of several new TEACs – TIP Ecosystem Acceleration Centers, where start-ups and investors can work together with incumbent network operators to progress their respective agendas for telecom infrastructure.

The TIP website comments on the mission of the TEACs:

“By bringing together the key actors – established operators, cutting-edge startups, and global & local investors – TEACs establish the necessary foundation to foster collaboration, accelerate trials, and bring deployable infrastructure solutions to the telecom industry.”

TEACs are located in London (BT), Paris (Orange), and Seoul (SK Telecom).

TIP Community Labs:

TIP Community Labs are physical spaces that enable collaboration between member companies in a TIP project group to develop telecom infrastructure solutions. While the labs are dedicated to TIP projects and host TIP project teams, the space and basic equipment are sponsored by the individual TIP member companies hosting them. The labs are located in Seoul, South Korea (sponsored by SK Telecom); Bonn, Germany (sponsored by Deutsche Telekom); and Menlo Park, California, USA (sponsored by Facebook). Coming soon: Rio de Janeiro, Brazil, to be sponsored by TIM Brasil. At this summit, CableLabs announced it will soon open a TIP Community Lab in Louisville, CO.

…………………………………………………………………………………………………………………………..

Selected Quotes:

AT&T’s Tom Anschutz (a very respected colleague) said during his November 9th – 1pm keynote presentation:

“Network functions need to be disaggregated and ‘cloudified.’  We need to decompose monolithic, vertically integrated systems into building blocks; create abstraction layers that hide complexity.  Design code and hardware as independent modules that don’t bring down the entire IT system/telecom network if they fail.”
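The failure-isolation idea in Anschutz’s quote – independent modules that degrade one service rather than bringing down the whole system – can be sketched with a simple boundary wrapper. The module names and fallback value below are invented for illustration, not AT&T’s actual decomposition:

```python
# Sketch of failure isolation between decomposed network functions:
# every module is called through a boundary that catches its failure
# and substitutes a fallback, so one crashed module cannot take the
# whole system down with it.

def isolated(fn, fallback=None):
    """Wrap a module entry point in a failure boundary."""
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            # Contain the fault: log it and degrade gracefully.
            print(f"{fn.__name__} failed ({exc}); serving fallback")
            return fallback
    return wrapper

def dns_lookup(host):
    raise RuntimeError("resolver crashed")  # simulate a module failure

safe_lookup = isolated(dns_lookup, fallback="0.0.0.0")
print(safe_lookup("example.net"))  # the rest of the system keeps running
```

Production systems use the same principle at larger scale (process isolation, circuit breakers, watchdogs), but the contract is identical: a module’s failure is converted into a degraded answer instead of a cascading outage.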

Other noteworthy quotes:

“We’re going to build these use-case demonstrations,” said Mansoor Hanif, director of converged networks and innovation at BT. “If you’re going to do something as difficult and complex as network slicing, you might as well do it right.”

“This is the opening of a system that runs radio as software on top of general purpose processors and interworks with independent radios,” said Santiago Tenorio, head of networks at Vodafone Group. The project will work to reduce the costs associated with building mobile networks and make it easier for smaller vendors to enter the market. “By opening the system will we get a lower cost base? Definitely yes, absolutely yes,” Tenorio added.

“Opening up closed, black-box systems enables innovation at every level, so that customers can meet the challenges facing their networks faster and more efficiently,” said Josh Leslie, CEO of Cumulus Networks. “We’re excited to work with the TIP community to bring open systems to networks beyond the data center.” [See reference press release from Cumulus below].

“Open approaches are key to achieving TIP’s mission of disaggregating the traditional network deployment approach,” said Hans-Juergen Schmidtke, Co-Chair of the TIP Open Optical Packet Transport project group. “Our collaboration with Cumulus Networks to enable Cumulus Linux on Voyager (open packet DWDM architecture framework and white box transponder design) is an important contribution that will help accelerate the ecosystem’s adoption of Voyager.”

……………………………………………………………………………………………………………………………

Closing Comments:  Request for Reader Inputs!

  1. What’s really interesting is that there are no U.S. telco members of TIP. Bell Canada is the only North American telecom carrier among its 500+ members. Equinix and CableLabs are the only quasi-network-operator members in the U.S.
  2. Rather than write a voluminous report which few would read, we invite readers to contact the author or post a comment on areas of interest after reviewing the 2017 TIP Summit agenda.

 

References:

TIP Summit 2017

An Update from TIP Summit 2017

News

http://www.businesswire.com/news/home/20171108005571/en/Cumulus-Networks-Telecom-Infra-Project-TIP-Collaborate

https://www.devex.com/news/telecom-industry-tries-new-tactics-to-connect-the-unconnected-91492

 

 

AT&T’s Perspective on Edge Computing from Fog World Congress

Introduction:

In her October 31st keynote at the Fog World Congress, Alicia Abella, PhD, Vice President of Advanced Technology Realization at AT&T, discussed the implications of edge computing (EC) for network service providers, emphasizing that it will make the business case for 5G realizable where low latency is essential for real-time applications (see illustration below).

The important trends and key drivers for edge computing were described along with AT&T’s perspective of its “open network” edge computing architecture emphasizing open source software modules.

Author’s Note:  Ms. Abella did not distinguish between edge and fog computing nor did she even mention the latter term during her talk.  We tried to address definitions and fog network architecture in this post.  An earlier blog post quoted AT&T as being “all in” for edge computing to address low latency next generation applications.

………………………………………………………………………………………………………………..

AT&T Presentation Highlights:

  • Ms. Abella defined EC as the placement of processing and storage resources at the perimeter of a service provider’s network in order to deliver low latency applications to customers.  That’s consistent with the accepted definition.

“Edge compute is the next step in getting more out of our network, and we are busy putting together an edge computing (network) architecture,” she said.
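Why proximity matters can be shown with back-of-the-envelope arithmetic: light in optical fiber travels at roughly 200,000 km/s, about 5 microseconds per km one way, before any processing or queuing delay is added. A quick sketch (the distances are illustrative):

```python
# Back-of-the-envelope check of why low-latency services force compute
# toward the network edge: fiber propagation delay alone grows with
# distance, independent of any processing or queuing delay.

SPEED_IN_FIBER_KM_S = 200_000  # approximate speed of light in glass

def propagation_rtt_ms(distance_km):
    """Round-trip propagation delay over fiber, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

for km in (10, 100, 1000):
    print(f"{km:>5} km -> {propagation_rtt_ms(km):.2f} ms round trip")
```

A ~1 ms propagation budget is exhausted roughly 100 km out, so a distant centralized data center cannot meet millisecond-scale targets for real-time applications no matter how fast its servers are; the compute has to move closer to the user.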

  • “5G-like” applications will be the anchor tenant for a network provider’s EC strategy. Augmented reality/virtual reality, multi-person real-time video conferencing, and autonomous vehicles were a few of the applications cited in the illustration below:

Above illustration courtesy of AT&T.

“Size, location, configuration of EC resources will vary, depending on capacity demand and use cases,”  said Ms. Abella.

…………………………………………………………………………………………………………..

  • Benefits of EC to network service providers include:
  1. Reduce backhaul traffic
  2. Maintain quality of experience for customers
  3. Reduce cost by decomposing and disaggregating access functions
  4. Optimize current central office infrastructure
  5. Improve reliability of the network by distributing content between the edge and centralized data centers
  6. Deliver innovative services not possible without edge compute, e.g., Industrial IoT, autonomous vehicles, smart cities, etc.

“In order to achieve some of the latency requirements of these [5G applications?] services a service provider needs to place IT resources at the edge of the network. Especially, when looking at autonomous vehicles where you have mission critical safety requirements. When we think about the edge, we’re looking at being able to serve these low latency requirements for those [real time] applications.”

  • AT&T has “opened our network” to enable new services and reduce operational costs.  The key attributes are the following:
  1. Modular architecture
  2. Robust network APIs
  3. Policy management
  4. Shared infrastructure for simplification and scaling
  5. Network Automation platform achieved using INDIGO on top of ONAP
  • AT&T will offer increased network value and adaptability as traffic volumes change:
  1. Cost/performance leadership
  2. Improved speed to innovation
  3. Industry leading security, performance, reliability

“We are busy thinking about and putting together what that edge compute architecture would look like. It’s being driven by the need for low latency.”

In terms of where, physically, edge computing and storage is located:

“It depends on the use case. We have to be flexible when defining this edge compute architecture. There’s a lot of variables and a lot of constraints. We’re actually looking at optimization methods.  We want to deploy edge compute nodes in mobile data centers, in buildings, at customers’ locations and in our central offices. Where it will be depends on where there is demand, where we have spectrum, we are developing methods for optimizing the locations.  We want to be able to place those nodes in a place that will minimize cost to us (AT&T), while maintaining quality of experience. Size, location and configuration is going to depend on capacity demand and the use cases,” Alicia said.
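The placement problem Abella describes – choose edge-node sites that minimize cost while every demand point stays within reach – is a form of facility location. A hedged sketch follows; the sites, coverage sets, and costs are entirely invented, and AT&T’s actual optimization methods were not disclosed in the talk:

```python
# Toy facility-location sketch of edge-node placement: pick the set of
# candidate sites that covers every demand point at minimum cost.
# Exhaustive search is fine at this toy scale; real deployments use
# integer programming or heuristics over thousands of sites.

from itertools import combinations

sites = {"CO-A": 3, "CO-B": 2, "tower-C": 4}  # candidate site -> cost
coverage = {                                  # site -> demand points within latency bound
    "CO-A": {"d1", "d2"},
    "CO-B": {"d2", "d3"},
    "tower-C": {"d1", "d2", "d3"},
}
demands = {"d1", "d2", "d3"}

def cheapest_feasible(sites, coverage, demands):
    best = None
    for r in range(1, len(sites) + 1):
        for combo in combinations(sites, r):
            covered = set().union(*(coverage[s] for s in combo))
            if covered >= demands:  # every demand point is served
                cost = sum(sites[s] for s in combo)
                if best is None or cost < best[1]:
                    best = (combo, cost)
        if best:
            return best  # smallest feasible set size, then cheapest
    return None

print(cheapest_feasible(sites, coverage, demands))
```

In practice the coverage sets would be derived from the latency budget (distance from site to demand point), and quality-of-experience constraints would appear as additional feasibility checks, but the structure of the trade-off is the same.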

  • Optimization of EC processing to meet latency constraints may require GPUs and FPGAs in addition to conventional microprocessors. One application cited was running video analytics for surveillance cameras.
  • Real time control of autonomous vehicles would require a significant investment in roadside IT infrastructure but have an uncertain return-on-investment. AT&T now has 12 million smart cars on its network, a number growing by a million per quarter.
  • AT&T needs to support different types of connectivity to the core network and use “SDN” within the site.
  • Device empowerment at the edge must consider that while mobile devices (e.g. smart phones and tablets) are capable of executing complex tasks, they have been held back by battery life and low power requirements.
  • Device complexity means higher cost to manufacturers and consumers.
  • The future of EC may include “crowd sourcing computing power in your pocket.” The concept is to distribute the needed computation over many people’s mobile devices and compensate them via Bitcoin, another cryptocurrency, or some other asset class. Blockchain may play a role here.

Fog Computing Definition, Architecture, Market and Use Cases

Introduction to Fog Computing, Architecture and Networks:

Fog computing is an extension of cloud computing which deploys data storage, computing and communications resources, control and management, and data analytics closer to the endpoints. It is especially important for the Internet of Things (IoT) continuum, where low latency and low cost are needed.

Fog computing architecture is the arrangement of physical and logical network elements, hardware, and software to implement a useful IoT network. Key architectural decisions involve: the physical and geographical positioning of fog nodes; their arrangement in a hierarchy; the numbers, types, topology, protocols, and data bandwidth capacities of the links between fog nodes, things, and the cloud; the hardware and software design of individual fog nodes; and how a complete IoT network is orchestrated and managed. To optimize the architecture of a fog network, one must first understand the critical requirements of the general use cases that will take advantage of fog and of the specific software application(s) that will run on them. These requirements must then be mapped onto a partitioned network of appropriately designed fog nodes. Certain clusters of requirements are difficult to implement on networks built with heavy reliance on the cloud (intelligence at the top) or on intelligent things (intelligence at the bottom), and these are particularly influential in the decision to move to fog-based architectures.

From a systematic perspective, fog networks provide a distributed computing system with a hierarchical topology. Fog networks aim at meeting stringent latency requirements, reducing power consumption of end devices, providing real-time data processing and control with localized computing resources, and decreasing the burden of backhaul traffic to centralized data centers.  And of course, excellent network security, reliability and availability must be inherent in fog networks.
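The hierarchical topology described above can be sketched as a tiered placement rule: run a workload at the lowest tier (closest to the things) that has both the capacity and a latency within the application’s budget, falling back toward the cloud otherwise. The tier names, latencies, and capacities below are illustrative assumptions, not values from the OpenFog reference architecture:

```python
# Minimal sketch of hierarchical fog placement: tiers are ordered from
# the edge toward the cloud, and a workload lands at the first tier
# that satisfies both its compute need and its latency budget.

TIERS = [  # (tier name, round-trip latency ms, free CPU units) - illustrative
    ("gateway-fog", 2, 1),
    ("neighborhood-fog", 10, 4),
    ("regional-fog", 30, 16),
    ("cloud", 80, 1000),
]

def place(cpu_needed, max_latency_ms):
    for name, latency, free in TIERS:
        if free >= cpu_needed and latency <= max_latency_ms:
            return name, latency
    return None  # request cannot be satisfied anywhere in the hierarchy

print(place(cpu_needed=2, max_latency_ms=15))  # fits in the neighborhood tier
print(place(cpu_needed=2, max_latency_ms=5))   # budget too tight for free edge capacity
```

This also shows why fog reduces backhaul: work absorbed at the gateway or neighborhood tier never reaches the regional or cloud tiers at all.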

Figure 1

Fog computing network architecture

Illustration courtesy of August 2017 IEEE Communications Magazine article: “Architectural Imperatives for Fog Computing: Use Cases, Requirements, and Architectural Techniques for Fog-Enabled IoT Networks”  (IEEE Xplore or IEEE Communications magazine subscription required to view on line)

………………………………………………………………………………………………………………………..

Fog Computing Market:

The fog computing market opportunity will exceed $18 billion worldwide by 2022, according to a new report by 451 Research. Commissioned by the OpenFog Consortium, the “Size and Impact of Fog Computing Market” report projects that the largest markets for fog computing will be, in order, energy/utilities, transportation, healthcare and the industrial sectors.

“Through our extensive research, it’s clear that fog computing is on a growth trajectory to play a crucial role in IoT, 5G and other advanced distributed and connected systems,” said Christian Renaud, research director, Internet of Things, 451 Research, and lead author of the report. “It’s not only a technology path to ensure the optimal performance of the cloud-to-things continuum, but it’s also the fuel that will drive new business value.”

Key findings from the report were presented during an opening keynote on October 30th at the Fog World Congress conference. In addition to projecting an $18 billion fog market and identifying the top industry-specific market opportunities, the report also identified:

  • Key market transitions fueling the growth include investments in energy infrastructure modernization, demographic shifts and regulatory mandates in transportation and healthcare.
  • Hardware will account for the largest share of overall fog revenue (51.6%), followed by fog applications (19.9%) and services (15.7%). By 2022, spending will shift to apps and services as fog functionality is incorporated into existing hardware.
  • Cloud spend is expected to increase 147% to $6.4 billion by 2022.

“This is a seminal moment that not only validates the magnitude of fog, but also provides us with a first-row seat to the opportunities ahead,” said Helder Antunes, chairman of the OpenFog Consortium and Senior Director, Cisco. “Within the OpenFog community, we’ve understood the significance of fog—but with its growth rate of nearly 500 percent over the next five years—consider it a secret no more.”

The fog market report includes the sizing and impact of fog in the following verticals: agriculture, datacenters, energy and utilities, health, industrial, military, retail, smart buildings, smart cities, smart homes, transportation, and wearables.

Fog computing is the system-level architecture that brings computing, storage, control, and networking functions closer to the data-producing sources along the cloud-to-thing continuum. Applicable across industry sectors, fog computing effectively addresses issues related to security, cognition, agility, latency and efficiency.

Download the full report at www.openfogconsortium.org/growth.

………………………………………………………………………………………………………………

Fog Use Cases:

According to the OpenFog Consortium, fog architectures offer several unique advantages over other approaches, including but not limited to:

  • Security: additional security to ensure safe, trusted transactions
  • Cognition: awareness of client-centric objectives to enable autonomy
  • Agility: rapid innovation and affordable scaling under a common infrastructure
  • Latency: real-time processing and cyber-physical system control
  • Efficiency: dynamic pooling of local unused resources from participating end-user devices

New use cases created by the OpenFog Consortium were also released that showcase how fog works in industry.  These use cases provide fog technologists with detailed views of how fog is deployed in autonomous driving, energy, healthcare and smart buildings.

The August 2017 IEEE Communications magazine article lists various IoT vertical markets and example fog use cases for each one:

Table 1.

It also delineates several application examples and allowable latency for each one:

Table 2.
………………………………………………………………………………………………………………………

IEEE to Standardize Fog Network Architecture based on Open Fog Consortium Reference Model

The OpenFog Consortium has announced that its OpenFog Reference Architecture will serve as the basis for a new working group formed by the IEEE Standards Association (IEEE-SA) to accelerate the creation and adoption of industry standards for fog computing and networking.  This and other future standards on Fog computing and networking will serve as a significant catalyst to propel the digital revolution occurring as a result of advanced Internet of Things (IoT), 5G and embedded artificial intelligence (AI) applications.

Fog computing and networking is an advanced distributed architecture that brings computing, storage, control, and networking functions closer to the data-producing sources along the cloud-to-thing continuum. Applicable across industry sectors, fog computing effectively addresses issues related to security, cognition, agility, latency and efficiency (SCALE).

The inaugural meeting of the IEEE ComSoc Standards Working Group on Fog Computing and Networking Architecture Framework- Project P1934 [1] is scheduled for November 2017, with its work expected to be complete by April 2018.  Additional details were presented at two Fog World Congress sessions I attended on October 31st and November 1st in Santa Clara, CA (see below).

Note 1.  IEEE P1934 proposed standard: OpenFog Reference Architecture for Fog Computing:

-Working Group: Fog Computing Architecture Framework
-Working Group Chair: John Zao  –  jkzao@openfogconsortium.org
-Working Group Vice-Chair:  Tao Zhang  –  taozhang1@yahoo.com

-Sponsoring Society and Committee: IEEE Communications Society/Standards Development Board (COM/SDB)
-Sponsor Chair: Mehmet Ulema –  m.ulema@ieee.org

http://standards.ieee.org/develop/wg/FOG.html

………………………………………………………………………………………………………………………………………………….

The OpenFog Reference Architecture is a universal technical framework designed to enable the data-intensive requirements of IoT, 5G and AI applications.  It is a structural and functional prescription of an open, inter-operable, horizontal system architecture for distributing computing, storage, control and networking functions closer to the users along a cloud-to-thing continuum. The framework encompasses various approaches to disperse information technology (IT), communication technology (CT) and operational technology (OT) services through an information messaging infrastructure as well as legacy and emerging multi-access networking technologies.

“This represents a giant step forward for fog computing and for the industry, which will soon have the specifications for use in developing industrial strength fog-based hardware, software and services,” said John Zao, Chair, IEEE Standards Working Group on Fog Computing and Networking Architecture Framework (and Associate Professor at National Chiao Tung University, Taiwan). “The objective from the beginning was that the OpenFog Reference Architecture would serve as the high-level basis for industry standards, and the IEEE is looking forward to the collaboration in this effort.”

“The standards work produced by this new working group will be crucial in the continued growth of fog computing innovation and things-to-cloud systems,” said Dr. Mehmet Ulema, Director, Standards Development, IEEE Communications Society, and Professor at Manhattan College, New York. “This also is an outstanding example of the strategic alliance between IEEE and OpenFog to co-create and co-promote fog networking concepts and architectures.”

“The mandate for fog computing is growing stronger, driven by the recognition that traditional architectures can’t deliver on the operational challenges for today’s advanced digital applications,” said Helder Antunes, chairman of the OpenFog Consortium and Senior Director at Cisco.  “On behalf of the members of the OpenFog technical community, I’m pleased to see the recognized value of the OpenFog Reference Architecture and IEEE’s commitment to fog computing and networking via the formation of this new working group.”

…………………………………………………………………………………………

IEEE ComSoc Rapid Reaction Standards Activities – RRSA

On November 1st at the Fog World Congress, IEEE ComSoc Standards Chair Alex Gelman, PhD, explained the RRSA mechanism for defining new IEEE ComSoc standards for fog computing/networking and other projects related to communications technologies. Special targets for IEEE standardization are emerging technologies.

Methodology:

  • Invite industry practitioners that have ideas for specific standardization projects or areas of standardization
  • Identify relevant leading experts in the target field, e.g., industrial and academic researchers
  • Leverage IEEE ComSoc Technical Committees
  • Issue a call for participation; solicit project proposals and/or position statements
  • Select participants based on submitted proposals/position statements
  • Proposals that can be clustered into 1-3 groups are typically selected
  • Hold a one-day face-to-face meeting to reach agreement on a proposed new standard
  • If approved, the process culminates in a PAR – Project Authorization Request

Some observations made during OpenFog RRSA:

  • Scholarly nature of Fog Technologies
    • Fog/Edge technologies are still, at least in part, in conceptual phase
    • It is critical to engage Industrial and academic researchers in discussion and standardization
  • Multiplicity of standards
    • Notable complementary efforts, e.g., MEC
    • The bad news about standards is that there are many to choose from
    • The good news about standards is that there are many to choose from
    • “Legislating” any particular technology will impede innovation
  • Properly architecting standards is key to harmonization of efforts
    • Early cooperation with IEEE and external standards groups is highly desirable for harmonization
    • Proper modularity of standards is critical for future Interoperability, Interworking, or Coexistence mechanisms
  • Viable Standardization Strategy
    • Harmonizing the IEEE standardization method with OpenFog entity-based membership is a good idea
    • Deploy adoption and standard development methods as appropriate
    • Position OpenFog Standardization among IEEE Strategic projects for 5G and Beyond

Related IEEE Standards Projects:

  • IEEE P1934 “Open Fog Reference Architecture for Fog Computing”
  • IEEE P2413™: Draft Standard for an Architectural Framework for the Internet of Things

…………………………………………………………………………………………………………………………………………………………..

Future Fog Computing and Networking Standards:

During a November 1st late-afternoon discussion on fog/IEEE standards, Professor Zao said that in the future the OpenFog Consortium would work with IEEE and other standards bodies/entities on other fog computing standards. This author suggested that future fog networking standards follow the CCITT (now ITU-T) model adopted for ISDN in the early-to-mid 1980s: define the reference architecture, functional groupings, and reference points between functional groupings; then standardize the interfaces, protocols, and message sets via pointers to existing standards (where applicable) or new standards. Several attendees agreed with that approach, with the goal of being able to certify compliance at exposed fog networking interfaces.

References:

OpenFog Reference Architecture

https://www.openfogconsortium.org/wp-content/uploads/OpenFog_Reference_Architecture_2_09_17-FINAL.pdf

 

IEEE Standards Association:  http://standards.ieee.org/

IEEE Standards for 5G and Beyond: https://5g.ieee.org/standards

IEEE IoT Initiative: https://iot.ieee.org/

IEEE SDN/NFV Initiative: https://sdn.ieee.org/

IEEE 5G Initiative: https://5g.ieee.org/

…………………………………………….

 

Preview of Fog World Congress: October 30th to November 1st, Santa Clara, CA

The Fog World Congress (FWC), to be held October 30th to November 1st in Santa Clara, CA, provides an innovative forum for industry and academia in the field of fog computing and networking to define terms, discuss critical issues, formulate strategies, and organize collaborative efforts to address the challenges, as well as to share and showcase research results and industry developments.

FWC is co-sponsored by IEEE ComSoc and the OpenFog Consortium. It is the first conference that brings industry and research together to explore the technologies, challenges, industry deployments and opportunities in fog computing and networking.


Don’t miss the fog tutorial sessions, which aim to clarify misconceptions and bring the communities up to speed on the latest research, technical developments and industry implementations of fog. FWC research sessions will cover a comprehensive range of topics. There will also be sessions designed to debate controversial issues, such as why and where fog will be necessary, what would happen in a future world without fog, and how fog could disrupt the industry.

Here are a few featured sessions:

  • Fog Computing & Networking: The Multi-Billion Dollar opportunity before us
  • Driving through the Fog: Transforming Transportation through Autonomous vehicles
  • From vision to practice: Implementing Fog in Real World environments
  • Fog & Edge: A panel discussion
  • Fog over Denver: Building fog-centricity in a Smart City from the ground up
  • Fog Tank: Venture Capitalists take on the Fog startups
  • 50 Fog Design & Implementation Tips in 50 Minutes
  • Fog at Sea: Marine Use Cases For Fog Technology
  • NFV and 5G in a Fog computing environment
  • Security Issues, Approaches and Practices in the IoT-Fog Computing Era: A panel discussion

View the 5 track conference program here.

Finally, register here.

For general information about the conference, including registration, please email: info@fogworldcongress.com

About the Open Fog Consortium:

The OpenFog Consortium bridges the continuum between Cloud and Things in order to solve the bandwidth, latency and communications challenges associated with IoT, 5G and artificial intelligence.  Its work is centered around creating an open fog computing architecture for efficient and reliable networks and intelligent endpoints combined with identifiable, secure, and privacy-friendly information flows between clouds, endpoints, and services based on open standard technologies.  While not a standards organization, OpenFog drives requirements for fog computing and networking to IEEE.  The global nonprofit was founded in November 2015 and today represents the leading researchers and innovators in fog computing.

For more information, visit www.openfogconsortium.org; Twitter @openfog; and LinkedIn /company/openfog-consortium.

Reference:

http://techblog.comsoc.org/2017/07/20/att-latency-sensitive-next-gen-apps-need-edge-computing/

ABI Research: Start-ups to be rising stars of 5G challenging incumbents

The rise of 5G is promising to shake up the status quo in the mobile equipment industry by presenting opportunities for startups to grab market share away from the incumbent vendors, according to ABI Research.

In a new report, the market research firm identified 15 startups exhibiting strong potential to play a role in wireless network operators’ transformation to 5G through innovative products and services.

“Traditionally operators have deployed a handful of infrastructure vendors in their networks, especially in the core network. Stagnating average revenue per user and increasing network traffic are driving operators to be more cost-effective and innovative in network performance and operations management and network upgrades. The end-to-end digital transformation toward virtualized and software defined networks is creating the opportunity for operators to open their highly proprietary networks and vendor ecosystem to include innovative start-ups. The 15 companies we have profiled illustrate a strong business sense and innovative solutions,” says Prayerna Raina, Senior Analyst at ABI Research.

Operators are facing the need to address key network performance and traffic management issues ahead of the standardization and launch of 5G in 2020, the report states.

Startups such as Athonet, CellWize, CellMining, AirHop Communications, Core Network Dynamics, Blue Danube and Vasona Networks are developing innovative solutions in these areas and may challenge the long-established telecom industry status quo.

“The telco start-ups we have profiled are challenging the incumbents in every way. From the flexibility of the solution to value-added services and a strong R&D focus, these companies are not just innovative, but also reflect an understanding of telco operators’ operational models as well as revenue and network performance challenges. With strong financial backing and active engagement with major partners in their ecosystem, these startups have proven their ability to meet operator requirements in tests and field deployments,” Ms. Raina said.


These findings are from ABI Research’s Mobile Network Hot Tech Innovators report. This report is part of the company’s Mobile Network Infrastructure research service, which includes research, data, and analyst insights.

Technology trends including SDN and NFV for mobile networks, the evolution of mobile edge computing, and self-organizing network solutions will also lay the groundwork for 5G (even though none of these will be included in the ITU-R IMT-2020 standards). Other enabling technologies include big data analytics (also not part of any 5G standard) to enhance and optimize network performance.

Question:  Do you really think start-ups can take market share away from Nokia, Ericsson, Huawei, Qualcomm, and other incumbent wireless technology companies?  Don’t forget Intel, which is making a major effort to be a 5G technology provider with its mobile terminal platform.

References:

https://www.abiresearch.com/press/startups-are-rising-stars-5g/

https://www.telecomasia.net/content/startups-challenge-telecoms-status-quo-5g-rises

https://www.abiresearch.com/staff/bio/prayerna-raina/

https://www.quora.com/5G-Communications-What-companies-are-leading-in-5G-technologies

http://theinstitute.ieee.org/technology-topics/communications/5g-the-future-of-communications-networks

IEEE ComSoc Webinar: 5G: Converging Towards IMT-2020 Submission

The IMT-2020 workshop page includes hyperlinks to download the presentations.
Note: Four organizations presented their proposed IMT-2020 RAN (AKA RIT/SRIT) schemes at this workshop:
3GPP 5G, ETSI DECT, Korea IMT-2020, and China IMT-2020.

AT&T: Latency sensitive, next-gen apps need Edge Computing & We’re All In!

AT&T strongly advocates the use of edge computing (EC) as a way to reinvent the telco network and cloud so as to make new services like augmented reality, virtual reality, and low latency “5G” applications practicable.

The company’s CTO wrote in a blog post that AT&T is adding intelligence to its cell towers, central offices, and small cells at the “edge” of the cloud by outfitting them with high-end graphics processing chips and other general-purpose computers. Doing so reduces the distance data must travel to be processed, thereby cutting latency and boosting overall network performance.

“Edge computing fulfills the promise of the cloud to transcend the physical constraints of our mobile devices,” said Andre Fuetsch, president of AT&T Labs and CTO in a statement. “The capabilities of tomorrow’s “5G” are the missing link that will make edge computing possible.”  That’s because many “5G” applications require low latency, especially for real time control of machinery and Internet connected devices (IoT).

AT&T said it will begin rolling out edge computing over the next few years, starting with urban areas and expanding coverage over time. The company also said that MEC is an important element of its network virtualization program. AT&T’s goal is to have 55 percent of its network virtualized by year-end, with a longer-term goal of 75 percent by 2020.

Part of AT&T’s network virtualization effort is the deployment of a centralized RAN (C-RAN) architecture, which will be virtualized to help speed the evolution to “5G” services. More on that from Gordon Mansfield, AT&T’s VP of RAN and Device Design here.

The above referenced AT&T blog post identified the challenge and solution for next-gen applications:

Here’s the challenge: Next-gen applications like autonomous cars and augmented reality/virtual reality (AR/VR) will demand massive amounts of near-real time computation.

For example, according to some third-party estimates, self-driving cars will generate as much as 3.6 terabytes of data per hour from the clusters of cameras and other sensors. Some functions like braking, turning and acceleration will likely always be managed by the computer systems in the cars themselves.
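To put that figure in perspective, a quick back-of-the-envelope conversion shows why this data cannot simply be shipped off to a distant cloud. This sketch uses only the article's cited 3.6 TB/hour estimate; the conclusion drawn in the comment is an illustration, not an AT&T figure.

```python
# Illustrative conversion of the cited per-car sensor data estimate.
TB_PER_HOUR = 3.6  # third-party estimate quoted in the blog post

bytes_per_hour = TB_PER_HOUR * 1e12
bytes_per_second = bytes_per_hour / 3600          # 3600 seconds per hour
gigabits_per_second = bytes_per_second * 8 / 1e9  # 8 bits per byte

print(f"{bytes_per_second / 1e9:.0f} GB/s, i.e. {gigabits_per_second:.0f} Gbps")
# prints "1 GB/s, i.e. 8 Gbps" -- a sustained rate far beyond what a mobile
# link to a remote data center could carry for real-time control, which is
# why safety-critical functions stay in the car and only secondary workloads
# are candidates for offload.
```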

But what if we could offload some of the secondary systems to the cloud? These include things like updating and accessing detailed maps these cars will use to navigate.

Or consider AR/VR. The industry is moving to a model where those applications will come through your smartphone. But creating entirely virtual worlds or overlaying digital images and graphics on top of the real world in a convincing way also requires a lot of processing power. Even when phones can deliver that horsepower, the tradeoff is extremely short battery life.

Edge computing addresses those obstacles by moving the computation into the cloud in a way that feels seamless. It’s like having a wireless supercomputer follow you wherever you go.
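The latency gain from proximity can be sketched with propagation delay alone. The distances below are illustrative assumptions (a regional data center ~1,500 km away versus an edge site ~15 km away), not AT&T deployment figures; queuing and processing delays would come on top of these best-case numbers.

```python
# Rough sketch: best-case round-trip propagation delay over fiber.
# Distances are hypothetical examples, not operator deployment data.
SPEED_IN_FIBER_KM_S = 200_000  # light in fiber travels at roughly 2/3 of c

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

regional_cloud = round_trip_ms(1500)  # distant regional data center
edge_node = round_trip_ms(15)         # cell tower / central office edge site

print(f"regional cloud: {regional_cloud:.1f} ms, edge: {edge_node:.2f} ms")
# prints "regional cloud: 15.0 ms, edge: 0.15 ms"
```

A two-orders-of-magnitude reduction in the propagation floor is what makes interactive AR/VR rendering in the cloud feel local to the device.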

………………………………………………………………………………………………………………………

AT&T said that it is already deploying EC-capable services to enterprise customers today through its AT&T FlexWare℠ service. Customers can currently manage powerful network services through a standard tablet device. We expect to see more applications for EC in areas like public safety, enabled by the FirstNet wireless broadband network.

The company claims to be committed to deploying mobile 5G as soon as possible and is equally committed to edge computing. As AT&T rolls out EC over the next few years, dense urban areas will be its first targets, with expansion from there over time.

In conclusion, AT&T stated “we’re all in now (for edge computing)” as per these strong closing remarks:

AT&T Labs and AT&T Foundry innovation centers are at the heart of designing and testing edge computing. In February, the AT&T Foundry in Palo Alto, CA, released a white paper on the computing and networking challenges around AR/VR. We’ll put out a second white paper in the coming weeks. It will discuss how we can apply edge computing to enable mobile augmented and virtual reality technology in the ecosystem.

There’s no time to lose. We think edge computing will drive a wave of innovation unlike anything seen since the dawn of the internet itself. Stay tuned.

…………………………………………………………………………………………………………………………..

Other network operators have been touting multi-access edge computing (MEC) in conjunction with “5G” networks. Late last year, 5G Americas, a trade group representing several operators in North and South America (including AT&T), released a white paper about the growing interest in MEC and said that standards bodies like the 3GPP and ETSI are considering including MEC in the 5G standards development.

ETSI has formed the Multi-access Edge Computing Industry Specification Group (MEC ISG).  Earlier this month, ETSI released its first package of standardized application programming interfaces (APIs) that will support MEC interoperability.

……………………………………………………………………….

References:

http://about.att.com/story/reinventing_the_cloud_through_edge_computing.html

https://www.sdxcentral.com/articles/news/att-touts-mec-tool-reduce-latency-boost-performance/2017/07/

https://www.wirelessweek.com/news/2017/07/t-turns-edge-computing-vr-other-5g-use-cases
