DISH Network’s Spectrum Strategy & Business Outlook, by David Dixon of FBR & Co.

Overview:

We continue to anticipate that DISH will opportunistically acquire spectrum in the incentive auction, taking advantage of VZ’s and T’s likely limited participation due to their greater emphasis on spectrum reuse and their weaker balance sheets (a result of the AWS-3 auction). We believe DISH’s spectrum portfolio is progressively losing value as the industry embraces low-cost, unlicensed and shared spectrum, namely the 3.5 GHz band, to solve metro density challenges.

3.5 GHz spectrum is moving more quickly than expected. An ecosystem is quickly building around 150 MHz of shared 3.5 GHz spectrum, which can be deployed in 2017 ahead of the priority access licenses to be auctioned in 2017. Contrary to consensus, 150 MHz of 3.5 GHz spectrum should be a significant “5G metro hotspot” game changer. We expect this trend to meaningfully shift the spectrum supply curve, leading to a devaluation of DISH’s spectrum.

What is the value creation potential of DISH’s wireless spectrum portfolio?

DISH has assembled a potentially solid spectrum position, but the market values this spectrum too highly today. First, it needs to be combined with the PCS, G, H, and AWS-4 bands to be optimal. Second, DISH is positioning its spectrum as downlink only; but, with the advent of the smartphone camera and enterprise mobility, uplink and downlink traffic will become more balanced. Third, in light of declining wireless revenues, the wireless industry is undergoing a major strategic rethink with respect to spectrum utilization (i.e., use of unlicensed spectrum for low-cost, small cell deployments, where 80% of traffic is occurring). The key to valuation is how soon the buyer of spectrum needs to move and its appetite for seeking regulatory approval. AT&T and Verizon appear to be prime candidates, with significant spectrum challenges in major markets, but they carry the highest degree of regulatory risk and are in the midst of a strategic shift regarding their spectrum utilization paths going forward. Furthermore, even if there were appetite for a deal, we do not think a deal would be successful for either of the two major wireless operators until Sprint and/or T-Mobile US become significantly stronger operators. While the AWS-3 auction provided important market direction in valuing DISH’s spectrum portfolio, DISH continues to face increased erosion of its pay TV customer base and needs to move quickly, in our view, despite extended buildout milestones. Post-SoftBank and post-Clearwire, Sprint is well positioned on capacity for four or five years and does not need to move quickly; T-Mobile is well positioned on spectrum to manage capacity needs for four to five years, according to Ericsson, so we believe DISH should move to acquire T-Mobile ahead of a Comcast MVNO launch. But we see more strategic alliance opportunities between T-Mobile and Comcast or Google.

What is the long-term outlook for DISH Network’s organic business?

DISH faces a continued, competitive ARPU and churn disadvantage versus cable operators and can only resell broadband. Network trials confirm major challenges with the fixed-broadband business model. We see a more challenging cash flow outlook as a result. Today, as the business slows, margin expansion comes from lower success-based installation costs; but fixed costs are spread over fewer subscribers as DISH loses customers. DISH has to do something soon on the strategic front, particularly as AT&T is positioning to deploy fiber deeper across its access networks in additional markets, potentially covering an incremental 10 million to 15 million homes.

We believe increased competitive challenges in the pay TV market are not adequately balanced by DISH’s options in the wireless arena, which may take longer than expected to come to fruition. While it would be a bearish signal, an acquisition of a wireless operator is a possible scenario, with T-Mobile being a potential target, in our view. However, a spectrum lease appears to be the more likely outcome, though it may take even longer to achieve.

Hyper Scale Mega Data Centers: Time is NOW for Fiber Optics to the Compute Server

Introduction:

Brad Booth, Principal Architect, Microsoft Azure Cloud Networking, presented an enlightening and informative keynote talk on Thursday, April 14th at the Open Server Summit in Santa Clara, CA. The discussion highlighted the tremendous growth in cloud computing/storage, which is driving the need for hyper network expansion within mega data centers (e.g. Microsoft Azure, Amazon, Google, Facebook, and other large scale service providers).

Abstract:

At 10 Gb/s and above, electrical signals cannot travel beyond the box unless designers use expensive, low-loss materials. Optical signaling behaves better but requires costly cables and connectors. For 25 Gb/s technology, datacenters have been able to stick with electrical signaling and copper cabling by keeping the servers and the first switch together in the rack. However, for the newly proposed 50 Gb/s technology, this solution is likely to be insufficient. The tradeoffs are complex here since most datacenters don’t want to use fiber optics to the servers. The winning strategy will depend on bandwidth demands, datacenter size, equipment availability, and user experience.

Overview of Key Requirements:

  • To satisfy cloud customer needs, the mega data center network should be built for high performance and resiliency.  Unnecessary network elements/equipment don’t provide value (so should be eliminated or at least minimized).  
  • It’s critically important to plan for future network expansion (scaling up) in terms of more users and higher speeds/throughput.
  • For a hyper scale cloud data center (CDC), the key elements are: bandwidth, power, security, and cost.  
  • “Crop rotation” was also mentioned, but that’s really an operational issue of replacing or upgrading CDC network elements/equipment approximately every 3-4 years (as Brad stated in a previous talk).

 

Channel Bandwidth & Copper vs Fiber Media:

Brad noted what many transmission experts have already observed: as bit rates continue to increase, we are approaching the Shannon capacity limit of a given channel.  To increase channel capacity, we need to either increase the bandwidth of the channel or improve the Signal to Noise (S/N) ratio.  However, as the bit rate increases over copper or fiber media, the S/N ratio degrades.  Therefore, the bandwidth of the channel needs to improve.
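
To make the S/N vs. bandwidth trade-off concrete, here is a minimal Python sketch of the Shannon-Hartley formula; the channel bandwidth and S/N numbers are illustrative assumptions by this author, not figures from Brad's talk.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers only: a 10 GHz channel at two different S/N ratios.
for snr_db in (30, 20):
    snr_linear = 10 ** (snr_db / 10)          # convert dB to a linear ratio
    capacity = shannon_capacity_bps(10e9, snr_linear)
    print(f"S/N = {snr_db} dB -> capacity ~ {capacity / 1e9:.1f} Gb/s")

# As the S/N ratio degrades (30 dB -> 20 dB), capacity drops, so the only
# remaining lever for higher bit rates is more channel bandwidth.
```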

“Don’t ever count out copper” is a common phrase.  Copper media bandwidth has continued to increase over short (intra-chassis/rack), medium (up to 100m), and longer distances (e.g. Vectored DSL/G.fast up to 5.5 km).   While copper cable has a much higher signal loss than optical fiber, it’s inexpensive and benefits from not having to do Electrical to Optical (server transmit), Optical to Electrical (server receive), and Optical to Electrical to Optical (OEO – for a switch) conversions, which are needed when fiber optic cable is used as the transmission medium.

Fiber optic cable is a much lower loss medium than copper, with significantly higher bandwidth.  However, as noted in a recent ComSoc Community blog post, Bell Labs said optical channel speeds are approaching the Shannon capacity limits.

Brad said there’s minimal difference in the electronics associated with fiber versus copper transmission, aside from the EO, OE, or OEO conversions needed.

A current industry push to realize a cost of under $1 per Gb/sec would enable on-board optics to compete strongly with copper-based transmission in mega data centers (cloud computing and/or large network service providers).

Optics to the Server:

For sure, fiber optics is a more reliable and speed-scalable medium than copper.  Even assuming the cost per Gb/sec decreases to under $1 as expected, there are operational challenges in running fiber optics (rather than twinax copper) to/from (compute) servers and (Ethernet) switches.   They include: installation, thermal/power management, and cleaning.

In the diagram below, each compute server rack has a Top of Rack (ToR) switch which connects to two Tier 1 switches for redundancy and possible load balancing.  Similarly, each Tier 1 switch interconnects with two Tier 2 switches:

Figure (courtesy of Microsoft) illustrates various network segments within a contemporary Mega Data Center which uses optical cables between a rack of compute servers and a Tier 1 switch, and longer distance optical interconnects (at higher speeds) for switch to switch communications.  


Brad suggested that a new switch/server topology would be in order when there’s a large scale move from twinax copper direct attach cables to fiber optics.  The server to Top of Rack (ToR) switch topology would be replaced by a distributed mesh topology where Tier 1 switches (first hop to/from server) connect to a rack of servers via fiber optic cable.  Brad said Single Mode Fiber (SMF), which had previously been used primarily in WANs, was a better choice than Multi Mode Fiber (MMF) for intra-data-center interconnects up to 500m.  Katharine Schmidtke of Facebook said the same thing at events that were summarized at this blog site.

Using SMF at those distances, the Tier 1 switch could be placed in a centralized location, rather than at the top of each rack as with today’s ToR switches.  A key consideration for this distributed optical interconnection scenario is to move the EO and OE conversion implementation closer to the ASIC used for data communications within each optically interconnected server and switch.
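
As a rough illustration of the topology shift Brad described, the toy Python sketch below models both designs as adjacency lists; the rack and switch counts are hypothetical and greatly simplified, not Microsoft's actual fabric.

```python
def tor_topology(num_racks: int, servers_per_rack: int) -> dict:
    """Today's design: each rack has its own Top of Rack (ToR) switch reached
    over short twinax copper; each ToR uplinks to two Tier 1 switches."""
    topo = {}
    for r in range(num_racks):
        tor = f"tor-{r}"
        topo[tor] = ["tier1-a", "tier1-b"]              # redundant uplinks
        for s in range(servers_per_rack):
            topo[f"rack{r}-server{s}"] = [tor]          # copper, a few meters
    return topo

def centralized_topology(num_racks: int, servers_per_rack: int) -> dict:
    """Alternative Brad described: servers connect over SMF (up to ~500m)
    directly to two centrally placed Tier 1 switches; no per-rack ToR."""
    topo = {}
    for r in range(num_racks):
        for s in range(servers_per_rack):
            topo[f"rack{r}-server{s}"] = ["tier1-a", "tier1-b"]
    return topo

# With 4 racks of 2 servers, the ToR design has 4 extra switch devices.
print(len(tor_topology(4, 2)), "nodes vs", len(centralized_topology(4, 2)))
```

The point is simply that removing the per-rack switch trades the short reach of copper for longer SMF runs into a more centralized Tier 1 layer.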

Another advantage of on-board optics is shorter PCB (printed circuit board) trace lengths, which increase reliability and circuit density while decreasing signal loss (which grows with trace length).

However, optics are very temperature sensitive; they operate best at temperatures of <70 degrees Fahrenheit. Enhanced thermal management techniques will be necessary to control the temperature and the heat generated by power consumption.

Industry Preparedness for Optical Interconnects:

  • The timeline will be driven by I/O technology: speed vs power vs cost.
  • At OFC 2016, 100 Gb/sec PAM-4 (pulse amplitude modulation with 4 discrete amplitude levels per signal element) technology was successfully demonstrated (see the short calculation after this list).
  • The growing cloud computing market should give rise to a huge volume increase in cloud resident compute servers and the speeds at which they’re accessed.
  • For operations, a common standardized optics design is greatly preferred to a one-of-a-kind “snowflake” type of implementation. Simplified installation (and connectivity) is also a key requirement.
  • Optics to the compute server enables a more centralized switching topology, which was the mainstay of local area networking prior to 10 Gb/sec Ethernet. It enables an architecture where the placement of the ToR or Tier 0 switch is not dictated by the physical medium (the Direct Attach Copper/twinax cabling).
  • The Consortium for On-Board Optics (COBO) is a member-driven standards-setting organization, currently with 45 members, developing specifications for interchangeable and interoperable optical modules that can be mounted onto printed circuit boards.
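
The PAM-4 arithmetic behind the OFC 2016 demo bullet works out as follows; this is a quick sanity check by this author (it ignores FEC and coding overhead), not a detail taken from the demo itself.

```python
import math

levels = 4                                # PAM-4: 4 discrete amplitude levels
bits_per_symbol = math.log2(levels)       # log2(4) = 2 bits per symbol
line_rate_gbps = 100                      # target serial line rate
baud_rate_gbaud = line_rate_gbps / bits_per_symbol

print(f"{bits_per_symbol:.0f} bits/symbol -> {baud_rate_gbaud:.0f} GBd "
      f"for {line_rate_gbps} Gb/s (vs {line_rate_gbps} GBd with 2-level NRZ)")
# PAM-4 halves the symbol rate relative to simple on/off (NRZ) signaling,
# which is why it is attractive for 50G and 100G electrical and optical lanes.
```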

COBO was founded to develop a set of industry standards to facilitate interchangeable and interoperable optical modules that can be mounted or socketed on a network switch or network controller motherboard. Modules based on the COBO standard are expected to reduce the size of the front-panel optical interfaces to facilitate higher port density and also to be more power efficient because they can be placed closer to the network switch chips.

Is the DC Market Ready for Optics to the Server?

That will depend on when all the factors itemized above are realized.  In this author’s opinion, cost per Gb/sec transmitted will be the driving factor.  If so, then within the next 12 to 18 months we’ll see deployments of SMF-based optical interconnects between servers and the Tier 1 switches at up to 500m distance in cloud and large service provider data centers (DCs).

Of course, as the server to Tier 1 switch distance extends to up to 500m (from very short distance twin ax connections), there’s a higher probability of a failure or defect.  While fault detection, isolation and repair are important issues in any network, they’re especially urgent for hyper-scale data centers.  Because such DCs are designed for redundancy of key network elements, a single failure is not as catastrophic.

This author believes that the availability of lower-cost SMF optics will open up a host of new options for building large DCs that can scale to support higher speeds (100G or even 400G) in the future without requiring additional cabling investments.  

Here’s a quote from Brad in a March 21, 2016 COBO update press release:  

“COBO’s work is important to make the leap to faster networks with the density needed by today’s data centers. COBO’s mission is a game-changer for the industry, and we’re thrilled with the growth and impact our member organizations have been able to achieve in just one year,” said Brad Booth, President of COBO. “We believe 2016 will be an important year as we drive to create standards for the data center market while investigating specifications for other market spaces.”

Addendum on SMF: 

The benefits of SMF DC infrastructures include: 

  • Flexible reach from 500m to 10km within a single DC
  • Investment protection by supporting speeds such as 40GbE, 100GbE and 400GbE on the same fiber plant
  • Cable is not cost prohibitive; SMF cable is less expensive than MMF
  • Easier to terminate in the field with LC connectors than with MTP connectors

References:

http://cobo.azurewebsites.net/

http://www.gazettabyte.com/home/2015/11/25/cobo-looks-inside-and-beyond-the-data-centre.html

http://www.ieee802.org/3/50G/index.html

Internet of Things (IoT) Use Cases & Verizon IoT Report

Mainstream adoption and integration across varied business sectors is unleashing an array of Internet of Things (IoT) use cases. These practical IoT applications are making a substantive, real-world impact and represent a boon for businesses as well. From 2014 to 2015 alone, IoT network connections experienced marked growth.

Verizon and AT&T have created proprietary developer platforms to spur IoT applications, and their extensive wired and wireless networks, accelerated by 5G, will help support the next wave of advances.

This week’s release of Verizon’s “State of the Market: Internet of Things 2016” denotes a sea change for IoT. The focus today, Verizon notes, has shifted from merely forecasting the potential of IoT technology to tracking its mainstream adoption and integration across varied business sectors. As Verizon’s data confirms, the IoT opportunity is substantial. From 2014 to 2015 alone, IoT network connections experienced marked growth: energy/utilities (58 percent); home monitoring (50 percent); transportation (49 percent); smart cities (43 percent); agriculture (33 percent); healthcare/pharma (26 percent).

– Read more at: https://www.ustelecom.org/blog/business-case-iot#sthash.7LP9AAcI.dpuf

“We are beginning to see a more even distribution of IoT spend across regions as the momentum of this market drives enterprises to adopt IoT solutions globally,” said Carrie MacGillivray, vice president, IoT and Mobile research. “Vendors must be mindful to having global IoT product strategies, but at the same time, they must have a regional strategy to ensure that local requirements and regulations are met.”

Worldwide Internet of Things Forecast Update, 2015-2019

FCC Wants to Revamp Oversight of Bulk Data Service Provided to Businesses

Details of the new FCC plan hadn’t yet been made public as of Thursday evening. The agency said it would vote on the plan at its April 28th meeting.  Yet even before the proposal emerged, industry groups began battling each other over how far it should go and which sectors should be swept up.

Smaller telecom companies reached an understanding with giant telco Verizon Communications Inc. to recommend replacing the current patchwork of regulation with “a permanent policy framework for all dedicated services,” one that would be “technology-neutral,” according to a joint statement. The current regulatory system has been criticized for focusing on older market players and technologies, leaving those providers feeling disproportionately targeted.

Verizon and INCOMPAS (which represents competitive carriers) urged the FCC to adopt a “permanent policy framework” on access to broadband business services.

http://www.multichannel.com/news/fcc/verizon-clecs-strike-deal-special-access/403939


The Wall Street Journal reported that the cable industry said such a move would target its companies. Some cable companies fear they might now face more regulation than in the past, since they represent a newer technology.

“The FCC should reject any call to impose new, onerous regulations on an industry that is stepping up to offer meaningful choices to business customers,” the National Cable & Telecommunications Association said in its own statement. “The FCC will not achieve competition if it burdens new…entrants with regulation.”

The so-called special-access market has proved to be a particularly difficult regulatory puzzle for the FCC to solve, at a time of rapid transformation in the telecom industry generally.

Some critics believe the FCC went too far in deregulating the market in 1999, the last time the agency made a major policy pronouncement.

For years, telecom companies such as Sprint Corp. and Level 3 Communications Inc. have griped that the big phone companies like AT&T Inc. and Verizon Communications Inc. have taken unfair advantage of their power in the market. AT&T and Verizon, along with CenturyLink Inc. and Frontier Communications Corp., dominate the special access market because they effectively control the wires that were built by the legacy AT&T monopoly, which was broken up by the government in 1984.

Some smaller companies, for example, accuse the carriers of requiring them to make large volume commitments or face big fees. Sprint, which uses the special access to connect its cell towers, says it has had to pay huge termination fees to the larger carriers when it switched several thousand cell towers to alternative providers.

AT&T and the other large carriers have denied the allegations and said the market is generally competitive.

In addition, companies of all sizes have complained that the FCC deregulatory scheme adopted in 1999 was both overly complicated and ineffective at determining areas where the market still needed stronger oversight. As a result, the FCC already has taken some steps toward a new system of stronger oversight.

Adding to the problems, the 1996 Telecommunications Act (which has been a complete flop after the dot com bust) gives the FCC authority to police competitive behavior in the telecom market, but the agency’s jurisdiction over these types of contracts primarily covers older technologies. AT&T, Verizon and other carriers have invested in newer network technologies that aren’t subject to FCC oversight in this way.

AT&T Looks to Public-Private Partnerships for GigaFiber Expansion, especially in North Carolina

Reversing its previous policy of trying to block municipal broadband projects, AT&T now says it is eager to work with more municipalities to support the deployment of gigabit broadband access.  That would fulfill its commitment to the Federal Communications Commission (FCC) to increase its fiber footprint, which was a condition of AT&T’s buying DirecTV (see below). For residential broadband, fiber to the home (AT&T’s GigaPower) is only in selected greenfield deployments and in cities where Google Fiber has been announced and/or is commercially available as per this map.

During a Light Reading Gigabit Cities Live conference Q&A, Vanessa Harrison, president of AT&T’s North Carolina operation, said the U.S. mega-carrier favors projects that build on existing company assets.  According to Light Reading’s Carol Wilson, Ms. Harrison said that AT&T likes to be invited to participate as a private company in municipal fiber-to-the-premises networks, either by a city or county government that is clear on what it needs and expects from a fiber network operator.

“We look for areas that demonstrate a demand, where there is infrastructure that is in our traditional service territory — where we can expand our facilities, enhance our facilities and deploy new facilities. And we also look for adoptability — areas where it is easy to adopt,”  Ms. Harrison said.


AT&T’s Fiber Build-Out Promise to FCC:

Under the FCC’s terms of approval for the DirecTV acquisition, AT&T agreed to build out fiber to 12.5 million homes nationwide and is doing that across its existing footprint. North Carolina is home to seven cities on the AT&T GigaPower roadmap, some of which pre-date the DirecTV deal, and that’s in part because the state has long had policies that encourage private investment, Harrison said.

More recently, the public-private partnership North Carolina Next-Generation Network (NCNGN) brought together six municipalities and four major universities that invited private operators to compete to provide a gigabit network, and chose AT&T as their network operator. That’s the kind of public-private partnership invitation that AT&T likes, because it clearly spells out what the local communities need. Communication between potential partners is one of the most important factors, according to Ms. Harrison.

“We look to come to the table and say, ‘Here are the facilities we have. How can we partner together to meet your need?’” she said. It’s important for communities to do their own needs assessment and know what they are looking for in a fiber network, and also what resources an operator is expected to bring to the effort.

“Deploying fiber is a big job and it takes a lot of time, and a lot of resources including a number of employees,” she noted.

It’s important that the municipality’s needs be clear upfront, so expectations can be correctly set, but the city must also recognize its responsibilities. That includes the ability to issue permits in a timely fashion, for example, which can put stress on municipal department staff.

Note that AT&T GigaPower has been available in Huntersville, NC to residential and small business customers since October 2015.


Addendum:

On March 15th, North Carolina voted overwhelmingly to borrow $2 billion to pay for a laundry list of infrastructure projects, collectively known as Connect NC.  Almost half of the bond money is intended for projects within the UNC System, and another $350 million is for the community college system.

In a co-authored editorial in the February 22nd News & Observer, AT&T’s Vanessa Harrison wrote:

“The bond package will support new research, technology and innovation across all of our universities and community colleges. … The investments in our state from Connect NC are critical to sustained economic growth and continued success in our global economy. Connect NC investments will benefit all North Carolinians. Whether you are an alum of North Carolina’s university or community college system, have or had a child in the system, or simply want our state to have the best-skilled workforce in the country, you will benefit from Connect NC.”

Read more here: http://www.newsobserver.com/opinion/op-ed/article61828722.html#storylink=cpy

Analysis: Frontier Completes Acquisition of Verizon Wireline Operations in 3 States

Introduction & Backgrounder:

Over the past decade, Verizon (VZ) acquired GTE’s and MCI Worldcom’s wireline networks and also built out its own fiber to residential customer premises for triple play services (FiOS for true broadband Internet, pay TV and voice).  In February 2014, VZ bought out Vodafone to become the 100% owner of Verizon Wireless.  That gave the huge telco a much larger stake in the U.S. wireless carrier market.  That’s certainly the case today and for the foreseeable future, with Verizon’s CEO Lowell McAdam talking about 5G trials while saying VZ will deploy 5G in a live network years before the ITU-R standard is completed!  Last September, VZ announced it would sell wireline assets to Frontier Communications.  That deal was completed today (April 1st).

The REALLY Big News:

NO APRIL FOOL’s JOKE:  Frontier Communications announced completion of its $10.54 billion acquisition of Verizon’s wireline operations, which provide services to residential, commercial and wholesale customers in California, Texas and Florida.  The acquisition includes approximately 3.3 million voice connections, 2.1 million broadband connections, and 1.2 million FiOS® video subscribers, as well as the related incumbent local exchange carrier businesses. New customers will begin receiving monthly bills starting in mid-April.

From Verizon’s website:

The sale does not include the services, offerings or assets of other Verizon businesses, such as Verizon Wireless and Verizon Enterprise Solutions.

The transaction concentrates Verizon’s landline operations in contiguous northeast markets – which will enhance the efficiency of the company’s marketing, sales and service operations across its remaining landline footprint. It also allows Verizon to further strengthen and extend its industry leadership position in the U.S. wireless market, while returning value to Verizon’s shareholders. As previously announced, Verizon is using the proceeds of the transaction to pay down debt.

Approximately 9,400 Verizon employees who served customers in California, Florida and Texas are continuing employment with Frontier.

From the Business Wire Press Release

“This is a transformative acquisition for Frontier that delivers first-rate assets and important new opportunities given our dramatically expanded scale,” said Daniel J. McCarthy, Frontier’s President and Chief Executive Officer. “It significantly expands our presence in three high-growth, high-density states, and improves our revenue mix by increasing the percentage of our revenues coming from segments with the most promising growth potential.”

Frontier will take on approximately 9,400 employees from Verizon. “Our new colleagues know their markets, their customers and their business extremely well,” McCarthy said. “As valued members of the Frontier team, they will ensure continuity of existing customer relationships.”


Analysis – Frontier:

Frontier has now just about doubled in size as the result of this Verizon acquisition. The shift by Frontier is also noteworthy because of the massive consolidation currently underway in the US broadband and pay-TV markets. Frontier is about to become one of the few significantly-sized triple play service providers that isn’t a cable company (Comcast, TW Cable, Charter, etc).

Arris Group Inc. (Nasdaq: ARRS) CEO Bob Stanzione expressed his opinion last fall that Frontier would invest in expanding the FiOS plant, and Frontier President and CEO Daniel McCarthy has been vocal about his plan to expand TV and broadband service to new customers. (See Video: Frontier’s Final Frontier and Frontier Gives Telco TV a Boost.)

Most recently, Frontier Director of Strategic Planning David Curran talked to Light Reading about Frontier’s evaluation of open networking technologies, and the company’s particular interest in CORD implementations — the idea of a Central Office Re-architected as a Data Center (CORD). With possible migrations to new open networking technologies, Frontier hopes to be able to better scale to higher broadband speeds and higher numbers of subscribers. (See Frontier Checks Out CORD, SDN  and also Project CORD for GPON LTE, G.FAST DSL Modems & Mobile Networks)


Analysis – Verizon:

Light Reading reports that Verizon’s grand FiOS experiment that began more than a decade ago has wilted in recent years as Verizon has transferred its focus to wireless projects, and, more critically, wireless profits.

Verizon hasn’t given up on wireline altogether. The company did just spend $1.8 billion to acquire XO Communications and further build up its fiber network. Plus, Verizon is reportedly close to wrapping up an RFP process that will decide the vendors for a planned fiber upgrade project using next-generation PON technologies. (See Verizon Bags XO for $1.8B and Verizon Preps Next Major Broadband Upgrade.)

However, Verizon’s wireline strategy appears to center more on enterprise services and cellular backhaul, rather than broadband residential services. The telco has said it remains committed to its FiOS footprint on the east coast, but with even the director of FiOS TV promoting Verizon’s Go90 OTT mobile video service over her own product, it’s hard to know how seriously to take the company’s official position. (See FiOS TV Director Cuts the Cord.)

In an FCC filing last year, Florida Power & Light argued against Verizon selling wireline assets to Frontier:

“Verizon has made it clear it intends to be out of the wireline business within the next ten years, conveying this clear intent to regulated utilities in negotiations over joint use issues and explaining that Verizon no longer wants to be a pole owner. Indeed, the current proposed [$10.54 billion sale of Verizon facilities in Florida, Texas and California] proves this point.”

and later in the same filing:

“All of the evidence shows that Verizon is abandoning its efforts to build out wireline broadband (especially in New Jersey).  There should be no doubt that Verizon’s strategy to abandon wireline service in favor of wireless service extends beyond New York and Florida and beyond storm damaged and rural areas.”

2016 OCP Summit: Highlights of Bell Labs Peter Winzer’s Talk & Conversation with FB’s Katharine Schmidtke

Backgrounder:

Fiber optic transmission – both short haul within a Data Center (DC) and long haul for WAN transport – has made great progress in recent years.  While it took 12 to 14 years (from 2000 to 2012-2014) to go from 1G to 10G Ethernet in the DC, it’s taken only a couple of years to go to 40G in mega DCs and in the core WAN.  Some DCs, Internet Exchanges, and WAN backbones now support 100G.

[For today’s DCs, the uplinks and switch-to-switch traffic use 10G/40G Ethernet, while WANs generally use OTN framing, channels and speeds.]

A Long History of Innovation at Bell Labs:

During a March 9th Open Compute Project (OCP) Summit keynote, Peter Winzer, PhD, Head of the Optical Transmission Systems & Networks Research Department at (now Nokia) Bell Labs, shared insights on how innovation has evolved along with the fiber optic foundation for telecom infrastructure and long haul transport.  After his 11 minute presentation, Winzer was interviewed by Katharine Schmidtke, PhD, Head of Optical Network Strategy at Facebook (see Q&A summary below).

Winzer shared key insights about how the “Transistor” was developed at Bell Labs (invented in 1947) and then given away for free to stimulate its uptake in the design of electronic devices.   That so-called “culture of openness” spread to the area now known as Silicon Valley after William Shockley set up his Mountain View, CA lab in 1956, followed by Bob Noyce & company leaving to found Fairchild in 1957.   “Silicon Valley is a spin-out of Bell Labs…” he said.  We agree!

Peter covered Bell Labs from its glory days to its accomplishments after divestiture by AT&T (to Lucent, then Alcatel-Lucent, and now Nokia), with a chart showing all the important discoveries.  Please refer to this chart of Innovation at Work, which is accompanied by a video window of Peter speaking.

Peter amplified the conference theme of “Open Compute” by saying that open, collaborative science is fundamental to innovation.  “It is important to extend general scientific knowledge,” Winzer said, “because that will lead to many things unexpected that will create other industries. That really captures what Bell Labs is all about.”

Bell Labs Recent Work in Fiber Optic Transmission:

Peter then described some of the more recent work he and his team conducted, including the 100G field trial with Verizon in 2007 on a live link in Florida, which proved that the capacity of optical networks could be drastically increased (from 10G or 40G to 100G) to help meet exponential growth in traffic.  Winzer maintains that 100G in the WAN is feasible today as an upgrade to 10G/40G long haul networks.

Please refer to the chart Making 100G a Reality – A Recent Example of Bell Labs Innovation.  A single Bell Labs-developed DSP chip does all the signal processing for 100G transmission.  Next was a Terabit per second per fiber optic wavelength/channel (which assumes DWDM transmission) lab demo in 2015.

Why was that important? Because data/video traffic is continuing to grow (exponentially).  Traffic growth vs fiber speed per wavelength was illustrated by a table showing how much capacity/speed is needed depending on traffic growth rates. For example, with a 40% traffic growth rate, a 10X fiber channel capacity increase will be needed in 7 years.  That in turn implies 1 terabit per second transmission will be needed in 2017, since 100 Gbit/sec was commercially available (but rarely deployed) in 2010.
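
The growth arithmetic behind that example can be checked in a couple of lines; the 40% growth rate and 7-year horizon come from the table Winzer showed, while the calculation itself is this author's.

```python
growth_rate = 0.40      # 40% per-year traffic growth
years = 7
factor = (1 + growth_rate) ** years
print(f"{growth_rate:.0%}/yr compounded over {years} years -> {factor:.1f}x")
# ~10.5x: a channel that was 100 Gb/s in 2010 needs to be ~1 Tb/s by 2017.
```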

Capacity Limits of Optical Networks:

It was always believed that fiber capacity was infinite, but capacity growth has been slowing since 2000.  Looking ahead, the Bell Labs team will focus on the challenges around fiber capacity limits, as we are now approaching the Shannon limits of a fiber optic channel for a given transmission distance.

Winzer stated: “We are at the Shannon limits.  Is there life beyond DWDM?  Bell Labs is looking at physical dimensions available and, with the help of frequency and space, we hope to find solutions that will help you scale your networks further (e.g. attain higher speeds per optical channel/wavelength/fiber).”  

This topic was picked up again during the Q&A conversation described below.

Facebook’s Katharine Schmidtke’s Remarks + Q &A with Peter Winzer:

Dr. Schmidtke was previously profiled in a 2015 Hot Interconnects article detailing Facebook’s (FB’s) intra-DC optical network strategy.  Katharine said that while most of FB’s DC internal interconnects are 40 Gbit/sec links, one FB DC would be upgraded to 100 Gbit/sec using Duplex Single Mode Fiber (SMF) this year.  SMF, while more expensive than Multi Mode Fiber (MMF), can be upgraded to support much higher speeds such as 400 Gbit/sec and even 1 Terabit/sec.

Katharine said that all FB DC internal interconnects would be upgraded to 100G by January of 2017.  Finally, Katharine said that it’s now possible to build efficient, green infrastructure for better designed DCs.

Note:  Katharine didn’t talk about the fiber speeds needed to interconnect FB’s DCs, e.g. their internal WAN backbone. (Google refused to answer that very same question from this author during several of their talks on Google’s inter-DC fiber optic based backbone WAN.)

In a follow up conversation with Peter Winzer, Katharine asked a number of questions.  Here’s a summary of the Q&A:

1.  What’s the capacity needed in the near future? Fiber optic capacity is very relative, dependent on operator demand and their traffic growth rates.

2.  What should the industry be doing to address the capacity challenges?  There are five physical dimensions that can be used to scale a network.  Only Frequency and Space haven’t been exploited yet, but those will need to be researched. Several approaches were suggested for Frequency improvements.  Space implies parallel transmission systems (TBD).

3.  How does coherent technology (e.g. the 100G Coherent DSP chip from Bell Labs) help the network scale?   With coherent DSPs you can electronically compensate for various types of impairments such that 100G becomes a “piece of cake” where 10G previously had trouble (due to uncompensated fiber impairments).   You can also monitor and open up the network to sense polarization rotation in the signal, which could be caused by a backhoe digging up the system.  That’s predictive fault detection and isolation.  Please also see the IEEE Spectrum article excerpt on coherence below.

4.  What about “Alien Wavelengths” (sending wavelengths generated by a different vendor over the fiber line system built by the incumbent vendor)?  Coherent technology can compensate for impairments much more easily providing for a lot more openness.

5. Business case?  Peter referenced Bell Labs giving away the transistor to stimulate electronic designs to use it.  The effect was that the transistor came to be used for switching (and other applications) besides the linear amplifier it was originally intended to replace.  It had a huge impact on the electronics industry.  The implication here is that openness and collaboration as per the OCP will lead to important innovations in fiber optic transmission. Time will tell.


In a March 9th afternoon OCP Summit session, Katharine described Optical Interconnects within the DC and Beyond using FB DC’s as examples.  That was then followed by a panel that included representatives from Microsoft, Google and Equinix. You can watch that video here.

Has Fiber Optic Transmission Speed Reached Its Limits?

In an email to this author related to Winzer’s comments about fiber optic transmission capacity limits, Katharine wrote:  “You could say that fiber optic transmission (speed) is predicted to plateau or is predicted to reach the Shannon limit.   I do agree that the capacity crunch challenge is important and research needs to be started right away.  It’s a great topic to start a discussion.”

Indeed, the discussion has already started as per an excellent February 2016 IEEE Spectrum article: After decades of exponential growth, fiber-optic capacity may be facing a plateau.  The graph below is extracted from that article:

[Figure: “Keck’s Law” chart from the IEEE Spectrum article. Data source: Donald Keck.] Fiber-optic capacity has made exponential gains over the years. The data in this chart, compiled by Donald Keck, tracks the record-breaking “hero experiments” that typically precede commercial adoption. It shows the improvement in fiber capacity before and after the introduction of wavelength-division multiplexing.

The article goes on to reference Peter Winzer’s work:

“Peter Winzer, a distinguished member of technical staff at Bell Labs and a leader in high-speed fiber systems, agrees that installing new cables with even more fibers is the simplest approach. But in a recent article, he warned that this approach, which will add to the cost of a cable, might not be popular among telecommunications companies. It wouldn’t reduce the cost per transmitted bit as much as they had come to expect from earlier technological improvements.”

The article also defines coherence and why it’s an important and intrinsic property of laser light.

“Coherence means that if you cut across the beam at any point, you’ll find that all its waves will have the same phase. The peaks and troughs all move in concert, like soldiers marching on parade.  Coherence can be used to drastically improve a receiver’s ability to extract information. The scheme works by combining an incoming fiber signal with light of the same frequency generated inside a receiver. With its clean phase, the locally generated light can be used to help determine the phase of the noisier incoming signal. The carrier wave can then be filtered out, leaving the signal that was added to it. The receiver converts that remaining signal into an electronic form carrying the 1s and 0s of the information that was sent.”
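
To make the quoted idea concrete, below is a toy numpy sketch of coherent (here, homodyne) detection: a noisy phase-modulated carrier is mixed with a clean local oscillator at the same frequency, and the recovered phase carries the data. This is a conceptual illustration by this author, vastly simpler than a real 100G coherent receiver with its DSP-based impairment compensation.

```python
import numpy as np

rng = np.random.default_rng(0)
fc = 50e3                                 # toy carrier frequency (Hz)
fs = 1e6                                  # sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)            # 10 ms of signal

# Data: BPSK phase modulation (0 or pi radians) on the carrier, plus noise.
bits = rng.integers(0, 2, size=10)
symbol_len = len(t) // len(bits)
phase = np.repeat(np.pi * bits, symbol_len)[: len(t)]
rx = np.cos(2 * np.pi * fc * t + phase) + 0.5 * rng.standard_normal(len(t))

# Coherent detection: mix with a locally generated carrier of the same
# frequency, then average over each symbol (a crude low-pass filter).
i_mix = rx * np.cos(2 * np.pi * fc * t)
recovered = []
for k in range(len(bits)):
    seg = slice(k * symbol_len, (k + 1) * symbol_len)
    i_avg = i_mix[seg].mean()             # ~ 0.5*cos(phase) after filtering
    recovered.append(0 if i_avg > 0 else 1)

print("sent:     ", bits.tolist())
print("recovered:", recovered)            # matches despite the added noise
```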


Space and time do not permit any more discussion here about the intriguing subject of how to increase fiber optic transmission capacity as the Shannon limit is reached.  The obvious answer is to just use more fibers or wavelengths.

We invite blog posts, referenced articles and comments (log in using your IEEE web account credentials to post a comment below this article).

Verizon, FCC Push mmWave 5G – Threat to Cable Broadband Service, Reinhardt Krause, INVESTOR’S BUSINESS DAILY

Article written by R. Krause of Investors Business Daily (investors.com) & edited by Alan J Weissberger.
Followed by a blog post from IEEE’s Alan Gatherer of Huawei and then a reference to a superb 5G presentation from IEEE’s Jonathan Wells of AJIC consulting

IBD Article:

Bottom Line:  Could high frequencies let AT&T or Verizon outdo cable broadband service? Wireless carriers could one day boast data-transfer speeds up to a gigabit per second with 5G — about 50 times faster than cellular networks around the U.S. have now. That opens up new markets for competition.

Federal regulators and Verizon Communications have zeroed in on airwaves that could make the U.S. the global leader in rolling out 5G wireless services. One market opportunity for 5G may be as challenger to the cable TV industry’s broadband dominance. Think Verizon Wireless, not Verizon’s (VZ) FiOS-branded landline service, vs. the likes of Comcast or Charter Communications.

First, though, airwaves need to be freed up for 5G. That’s where high-frequency radio spectrum, also called millimeter wave or mmWave, comes in. In particular, U.S. regulators are focused on the 28 gigahertz frequency band, analysts say. Most wireless phone services use radio frequencies below 3 GHz.

If 28 GHz or millimeter wave rings a bell, that’s because several fixed wireless startups (WinStar, Teligent, NextLink, Terabeam) tried and failed to commercialize products relying on high-frequency airwaves during the dot-com boom of the late 1990s. Business models were suspect, and their LMDS (local multipoint distribution service) offerings were susceptible to interference from rain and other environmental conditions.

When the tech bubble burst in 2000-01, the LMDS startups perished. Technology advances, however, could now make the high-frequency airwaves prime candidates for 5G.

 

“In the 1990s, with LMDS, mobile data wasn’t mature, and neither was the Internet, and neither was the electronics industry — it couldn’t make low-cost, mmWave devices,” said Ted Rappaport, founding director of NYU Wireless, New York University’s research center on millimeter-wave technologies. 

“Wi-Fi was really brand new then, and broadband backhaul (long-distance) was not even built out. LMDS was originally conceived to be like fiber, to serve as backhaul or point-to-multipoint, and was not for mobile services,” he said.

“Fast forward to today: backhaul is in place to accommodate demand, and electronics at mmWave frequencies are being mass-produced in cars,” Rappaport continued. “Demand for data is increasing more than 50% a year, and the only way to continue to supply capacity to users is to move up to (millimeter wave).”

The Federal Communications Commission in October opened a study looking at 28, 37, 39, and 60 GHz as the primary bands for 5G. While the FCC says that 28 GHz airwaves show promise, some countries have been focused on higher frequencies. FCC Chairman Tom Wheeler, speaking at a U.S. Senate committee hearing on March 2, said: “While international coordination is preferable, I believe we should move forward with exploration of the 28 GHz band.”

Wheeler said that the U.S. will lead the world in 5G and allocate spectrum “faster than any nation on the planet.”

Verizon Makes Deals

Verizon, meanwhile, on Feb. 22 agreed to buy privately held XO Communications’ fiber-optic network business for about $1.8 billion. In a side deal, Verizon will also lease XO’s wireless spectrum in the 28 GHz to 31 GHz bands, with an option to buy for $200 million by the end of 2018. XO’s spectrum covers some of the largest U.S. metro areas, including New York, Boston, Chicago, Minneapolis, Atlanta, Miami, Dallas, Denver, Phoenix, San Francisco and Los Angeles, as well as Tampa, Fla., and Austin, Texas. Verizon CFO Fran Shammo commented on the XO deal at a Morgan Stanley conference on March 1st.

“Right now we have licenses issued to us from the FCC for trial purposes at 28 GHz. The XO deal gave us additional 28 GHz,” he said. “The rental agreement enables us to include that (XO spectrum) in some of our R&D development with 28 GHz. So that just continues the path that we’re on in launching 5G as soon as the FCC clears spectrum.” He noted that Japan and South Korea plan to test 5G services using 28 GHz and 39 GHz airwaves.

Some analysts doubt that 28 GHz airwaves will be on a fast track. “We are skeptical not only on the timing of the availability of 28 GHz but also its ultimate viability in a mobile wireless network,” Walter Piecyk, analyst at BTIG Research, said in a report. Boosting signal strength at higher frequencies is a challenge for wireless firms. Low-frequency airwaves travel over long distances and also through walls, improving in-building services.

One approach to offset the limited propagation of millimeter wave bands, analysts say, is using more “small cell” radio antennas, which also increase network capacity. Wireless firms generally use large cell towers to connect mobile phone calls and whisk video and email to mobile phone users. They also install radio antennas on building rooftops, church steeples and billboards. Suitcase-sized antennas used in small-cell technology often go on lamp posts or utility poles. Verizon has been testing small cell technology in Boston, MA.

When Will 5G Happen?

Verizon says that it will begin rolling out 5G commercially in 2017, though its plans are still vague. While many wireless service providers touted 5G plans and tests at the Mobile World Congress (MWC) in February, makers of telecom network equipment are being cautious.

“General consensus (at MWC) seemed to indicate that the 2020 time-frame will mark full-scale 5G deployments,” Barclays analyst Mark Moskowitz said in a report. Verizon has said that it doesn’t expect 5G networks to replace existing 4G ones.

While 5G is expected to provide much faster data speeds, wireless firms also expect applications that require always-on, low-data-rate connections. These apps involve data gathering from industrial sensors, home appliances and other devices often referred to as part of the Internet of Things (IoT).

Both Verizon and AT&T   have recently touted 5G speeds up to one gigabit per second. That’s roughly 50 times faster than the average speeds of 4G wireless networks in good conditions. AT&T CEO Randall Stephenson recently said that 5G speeds could match fiber-optic broadband connections to homes.

5G Vs. Broadband 

At the Morgan Stanley conference, Verizon’s Shammo also said that 5G could be a “substitute product for broadband.” Regulators would like to create new competition for cable TV companies. But, Verizon says, it’s still early days. 

“With trials, we’ll figure out exactly what we can deliver, what the base cases are,” said Shammo. “5G has the capability to be a substitute for broadband into the home with a fixed wireless solution. The question is, can you deploy that technology and actually make money at a price that the consumer would pay?”

Sanyogita Shamsunder, Verizon’s director of network infrastructure planning, says that high frequencies can support 5G. “Radio frequency components today are able to support much wider bandwidth (think wide lanes on the highway) when compared to even 10 years ago. What it means is we are able to pump more bits at the same time,” Shamsunder said in an email to IBD. “Due to improvements in antenna and RF technology,” she added, “we are able to support 100s of small, tiny antennas on a small die the size of a quarter.”


Another point of view from Alan Gatherer of IEEE ComSoc:

Fresh from Mobile World Congress, my favourite “tell it like it is” curmudgeon-cum-analyst Richard Kramer has kindly agreed to share his thoughts on the state of the industry and on 5G in particular. While reading his article, I had two thoughts that align with his position:

1) How long will it take to really do VoLTE well? 

2) 3G’s lifespan was quite short and we should probably expect a lot more runway for 4G. 

Like Alice in Wonderland, the mobile world has been turned topsy-turvy with an accelerated push to 5G.  One would think the lessons on 3G, 3.5G (HSPA), 4G, and its many variants, were never (painfully) learned: that the ideal approach for operators and vendors is to leave time to “harvest” profits from investments, not race to the next node. This was true in the earliest discussions of LTE (stretching back, if one recalls, to 2006/7), and in the interim fending off the noisy interventions of WiMax (remember those embarrassing forecasts from some analysts, which we fondly recall dubbing “technical pornography” for the 802.xxx variants garnering oohs and aahs from radio engineers).  Bear in mind that 3G was commercially launched in UK on 03.03.03, and LTE was demo’ed at the 2008 Beijing Olympics. Isn’t there a lesson here about leaving the cake in the oven long enough to bake?

That 5G is theoretically using the same, or at least similar, air interfaces, is hardly a saving grace. For now, the thought of deploying a heap of non-standard equipment is highly unappealing to telco customers. Neither is sufficient attention paid to the lack of spectrum, or the potential perils of relying on unlicensed spectrum for commercial services. There seems to be a blind, marketing-led rush to be the first to announce milestones that are effectively rigged lab trials, and that convince few of the sceptical buyers to shift long-standing vendor allegiances. So what do we have to hang our hats on? A series of relatively disjointed and often proprietary innovations building on LTE, specifically many bands of carrier aggregation and millimetre wave, including unlicensed bands, to get support for (and make a smash and grab raid on) much wider blocks of spectrum and therefore better throughput and capacity; a further extension of decades of work on MIMO to further boost capacity; and a similar pendulum swing towards edge caching to reduce latency (while at the same time trying to centralise resource in baseband-in-the-cloud, to reduce processing overheads in networks). The astonishing leap of faith is that by providing gigabit wireless speed at low latency, one will enable “new business models,” for now largely unimagined.

This leaves us with the farcical purported “business cases” for 5G. First, we have the Ghost of 2G Past, in the form of telematics, rebranded M2M, and now rebranded once more as “IoT”. To be sure, there are many industries that have long had the aim of wirelessly connecting all sorts of devices without voice or high-speed data connectivity. Yet these applications tend to work just fine at 2G or even 3G speeds. The notion that we need vast infrastructure upgrades to send tiny amounts of data with lower latency smells of desperation. Then there are all the low-latency video-related services – which again can be made more than workable with a combination of cellular plus WiFi. Meanwhile, just to muddy the waters and prevent any smooth sailing towards the mythical 5G world, we have a slew of new variants: LTE-A, LTE-U, low-energy LTE, MulteFire, LTE-QED (sorry, I made that one up), etc. And the aims of gigabit wireless have to be to supplant wireline, though that is hardly acting in isolation, as cablecos adopt DOCSIS 3.1 and traditional telcos bring on G.fast and other next-generation copper or fibre technology. As always, these advances are not being made in isolation, even if the plans of individual vendors seem to have done so.

Desperation is not confined to equipment vendors; chipmakers such as Qualcomm, MediaTek and others are facing the first year of a declining TAM for smartphone silicon, partly due to weak demand from emerging markets, and also due to a rising influence of second-hand smartphones being sold after refurbishment. We also see a trend of leading smartphone vendors internalising their silicon requirements, be it with apps processors (Apple’s A-series, Samsung Exynos), or modems (HiSilicon). Our view is that smartphone unit demand will be flattish overall this year, with most of the growth coming from low-end vendors desperate to ramp volumes to stay relevant. This should drive Qualcomm and MediaTek to continue addressing more and more “adjacent” segments within smartphones, to prevent chip sales from shrinking. Qualcomm is looking to make LTE much more robust to overtake WiFi and get traction in end-markets it does not address today.

Thus we have another of the “inter-regnum” MWCs, in which we are mired in a chaotic economic climate where investment commitments will be slow in coming, while vendors pre-position themselves for the real action in two or three years when the technologies are actually closer to being standardised and then working. We have, like Alice, dropped into the Rabbit Hole, to wander amidst the psychedelic lab experiments of multiple hues of 5G, before reality sets in and everything fades to grey, or at least the black and white of firm roadmaps and real technical solutions.

Editor-in-Chief: Alan Gatherer ([email protected])

Comments are welcome!

http://www.comsoc.org/ctn/5g-down-rabbit-hole


Reference to presentation & slides:  5G and the Future of Wireless by Jonathan Wells, PhD

https://californiaconsultants.org/event/5g-and-the-future-of-wireless/


2016 OCP Summit Solidifies Mega Trend to Open Hardware & Software in Mega Data Centers

The highlight of the 2016 Open Compute Project (OCP) Summit, held March 9-10, 2016 at the San Jose Convention Center, was Google’s unexpected announcement that it had joined OCP and was contributing “a new rack specification that includes 48V power distribution and a new form factor to allow OCP racks to fit into our data centers.” With Facebook and Microsoft already contributing lots of open source software (e.g. MSFT SONiC – more below) and hardware (compute server and switch designs), Google’s presence puts a solid stamp of authenticity on OCP and ensures the trend toward open IT hardware and software will prevail in cloud resident mega data centers.

Google hopes it can go beyond the new power technology in working together with OCP, Urs Hölzle, Google’s senior vice president for technical infrastructure, said in a surprise Wednesday keynote talk at the OCP Summit.  Google published a paper last week calling on disk manufacturers “to think about alternate form factors and alternate functionality of disks in the data center,” Hölzle said. Big data center operators “don’t care about individual disks, they care about thousands of disks that are tied together through a software system into a storage system.” Alternative form factors can save costs and reduce complexity.

Hölzle noted the OCP had made great progress (in open hardware designs/schematics), but said the organization could do a lot more in open software.  He said there’s an opportunity for OCP to improve software for managing the servers, switch/routers, storage, and racks in a (large) Data Center.  That would replace the totally outdated SNMP with its set of managed objects per equipment type (MIBs).


Jason Taylor, PhD, chairman and president of the OCP Foundation and vice president of Infrastructure at Facebook, said that the success of the OCP concept depends upon its acceptance by the telecommunications industry.  Taylor said: “The acceptance of OCP from the telecommunications industry is a particularly important sign of momentum for the community. This is another industry where infrastructure is core to the business. Hopefully we’ll end up with a far more efficient infrastructure.”

This past January, the OCP launched the OCP Telco Project.  It’s specifically focused on open telecom data center technologies.  Members include AT&T, Deutsche Telekom (DT), EE (UK mobile network operator and Internet service provider), SK Telecom, Verizon, Equinix and Nexius.  The three main goals of the OCP Telco Project are:

  • Communicating telco technical requirements effectively to the OCP community.
  • Strengthening the OCP ecosystem to address the deployment and operational needs of telcos.
  • Bringing OCP innovations to telco data-center infrastructure for increased cost-savings and agility.

See OCP Telco Project,  Major Telcos Join Facebook’s Open Compute Project and Equinix Looks to Future-Proof Network Through Open Computing

In late February, Facebook started a parallel open Telecom Infra Project (TIP) for mobile networks which will use OCP principles.  Facebook’s Jay Parikh wrote in a blog post:

“TIP members will work together to contribute designs in three areas — access, backhaul, and core and management — applying the Open Compute Project models of openness and disaggregation as methods of spurring innovation. In what is a traditionally closed system, component pieces will be unbundled, affording operators more flexibility in building networks. This will result in significant gains in cost and operational efficiency for both rural and urban deployments. As the effort progresses, TIP members will work together to accelerate development of technologies like 5G that will pave the way for better connectivity and richer services.”

TIP was referenced by Mr. Parikh in his keynote speech, which was preceded by a panel session (see below) in which wireless carriers DT, SK Telecom, AT&T, and Verizon shared how they plan to use and deploy OCP-built network equipment.  Parikh noted that Facebook contributed Wedge 100 and 6-pack, designs for next-generation open networking switches, to OCP.  Facebook is also working with other companies on standardizing data center optics and inter-data center (WAN) transport solutions to help the industry move faster on networking. Microsoft, Verizon, and Equinix are all part of that effort.


At the beginning of his keynote speech, Microsoft Azure CTO Mark Russinovich asked the OCP Summit audience how many believed Microsoft was an “open source company.”  Very few hands were raised.   That was to change after Russinovich announced the release of SONiC (Software for Open Networking in the Cloud) to the OCP. SONiC is based on the idea that a fully open sourced switch platform should be able to run the same software stack across hardware from multiple switch vendors and ASIC switch silicon.  The new software extends and opens the Linux-based Azure Cloud Switch (ACS) that Microsoft has been using internally in its Azure cloud, and it will be offered for all to use through the OCP.   It also includes software implementations of the popular protocol stacks for a switch/router.

[Figure: Positioning SONiC within a three-layer stack. Source: Microsoft]

[Figure: SONiC in OCP. Source: Microsoft]

The SONiC platform builds on the Switch Abstraction Interface (SAI), a software layer launched last year by Microsoft that translates the APIs of multiple network ASICs so they can be driven by the same software instead of requiring proprietary code.  With SAI alone, cloud service providers still had to provide or find the code that carries out actual network jobs on top of the interface; some of those utilities were open source. SONiC combines those open source components (for jobs like BGP routing) with Microsoft’s own utilities, all of which have now been open sourced. 
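The real SAI is a C header specification standardized through OCP, so the short Python sketch below is purely illustrative of the layering idea: routing utilities program against one abstraction, and per-vendor ASIC drivers plug in underneath. The class names, method names, and vendor drivers here are invented for illustration and are not part of SAI or SONiC.

from abc import ABC, abstractmethod

# Hypothetical abstraction layer in the spirit of SAI: routing software
# programs against one interface; each ASIC vendor supplies its own driver.
class SwitchAsic(ABC):
    @abstractmethod
    def add_route(self, prefix: str, next_hop: str) -> None:
        """Install a forwarding entry into the ASIC's tables."""

class VendorAAsic(SwitchAsic):
    def add_route(self, prefix: str, next_hop: str) -> None:
        print(f"[vendor-A SDK] program {prefix} -> {next_hop}")

class VendorBAsic(SwitchAsic):
    def add_route(self, prefix: str, next_hop: str) -> None:
        print(f"[vendor-B SDK] write table entry {prefix} via {next_hop}")

def install_bgp_routes(asic: SwitchAsic, learned_routes: dict) -> None:
    """Stand-in for an open source routing stack (e.g., BGP) in the SONiC role:
    it never touches vendor SDKs directly, only the abstract interface."""
    for prefix, next_hop in learned_routes.items():
        asic.add_route(prefix, next_hop)

routes = {"10.0.0.0/24": "192.168.1.1", "10.0.1.0/24": "192.168.1.2"}
for asic in (VendorAAsic(), VendorBAsic()):
    install_bgp_routes(asic, routes)   # same software, different silicon

The design payoff is the one Russinovich described: the same software stack (the install_bgp_routes stand-in here) runs unchanged on different switch silicon, with only the thin driver layer changing per vendor.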

More than a simple proposal, SONiC is already receiving contributions from companies such as Arista, Broadcom, Dell, and Mellanox.  Russinovich closed by asking the audience how many NOW thought Microsoft was an “open source company.”   Hundreds of hands went up, affirming the audience’s recognition of SONiC as a key contribution to the open source networking software movement.


Rachael King, a reporter at the Wall Street Journal, moderated a panel of telecommunications executives, including Ken Duell from AT&T, Mahmoud El Assir from Verizon, Kang-Won Lee from SK Telecom, and Daniel Brower from Deutsche Telekom, to discuss some of the common infrastructure challenges in shifting to 5G cellular networks quickly and without disrupting service. The central theme of the session was “driving innovation at a much greater speed,” as Daniel Brower, VP and chief architect of infrastructure cloud for DT, put it.  The goal is improved service velocity, so carriers can deploy and realize revenues from new services much more quickly.

Most telco network operators are focused on shifting to “white box” switches and routers and on virtualizing their networks; taking an open approach to infrastructure will make the transition to 5G more efficient and will accelerate the speed of delivery and configuration of networks.

Ken Duell, AVP of new technology product development and engineering at AT&T, concisely summarized the carrier’s dilemma: “In our case, it’s a matter of survival. Our customers are expecting services on demand anywhere they may be. We’re finding that the open source platform … provides us a platform to build new services and deliver with much faster velocity.”

Duell said a major challenge facing AT&T and other telecom companies is network operating system software. “When we think of white boxes, the hardware eco-system is maturing very quickly. The challenge is the software, especially network OS software, to run on these systems with WAN networking features. One of the things we hoped … is to create enough of an ecosystem to create these network OS software platforms.”

There’s also a huge learning and retraining effort for network engineers and other employees, which AT&T is addressing with new on-line learning courses.

Verizon SVP and CIO Mahmoud El Assir hit on the ability of open source and virtualization of network functions (e.g. virtualized CPE) to create true network personalization for future wireless customers.  That was somewhat of a surprise to the WSJ moderator and to this author.  El Assir contrasted the new telco focus with the now-outdated historical concerns of providing increased speed/throughput and supporting various protocols on the same network.

“Now it’s exciting that the telecom industry, the networking industry, everything is becoming more software,” El Assir said. “Everything in the network becomes more like an app. This allows us to kind of unlock and de-aggregate our network components and accelerate the speed of innovation. … Getting compute everywhere in the network, all the way to the edge, is a key element for us.”

El Assir added OCP-based switches and routers will allow for “personalized networks on the edge. You can have your own network on the edge. Today that’s not possible. Today everybody is connected to the same cell. We can change that. Edge compute will create this differentiation.”

Kang-Won Lee, director of wireless access network solution management for SK Telecom, looked ahead to “5G” and the various high-capacity use cases that will usher in a new type of network that will require white box hardware due to cost models.

“It was more about the storage and the bandwidth and how you support people moving around to make sure their connections don’t drop,” Lee said. “That was the foremost goal of mobile service providers. In Korea, we have already achieved that.” With 5G the network “will be a lot of different types of traffic that need to be able to connect. In order to support those different types of traffic … it will require a lot of work. That’s why we are looking at data centers, white boxes, obviously, I mean, creating data centers with the brand name servers is not going to be cost efficient.”

Moderator Rachael King asked: “So what about Verizon and AT&T, fierce rivals in the U.S. mobile market, sharing research and collaborating – how does that work?”

“Our current focus is on the customer,” El Assir replied. “I think now with what OCP is bringing to the table is really unique. We’ve moved from using proprietary software to open source software and now we’re at a good place where we can transition from using proprietary hardware to open source hardware. We want the ecosystem to grow in order for the ecosystem to be successful.”

“There’s a lot of efficiencies in having many companies collaborate on open source hardware,” Duell added. “I think it will help drive the cost down and the efficiency up across the entire industry. AT&T will still compete with Verizon, but the differentiation will come with the software. The hardware will be common. We’ll compete on software features.”

You can watch the video of that panel session here.


We close with a fitting quote from Carl Weinschenk, who covers telecom for IT Business Edge:

“Reconfiguring how IT and telecom companies acquire equipment is a complex and long-term endeavor. OCP appears to be taking that long road, and is getting buy-in from companies that can help make it happen.”

IDC Directions 2016: IoT (Internet of Things) Outlook vs Current Market Assessment

The 11th annual IDC Directions conference was held in San Jose, CA, last week. The saga of the 3rd platform (Cloud, Mobile, Social, Big Data/Analytics) continues unabated. One of many IT predictions was that artificial intelligence (AI) and deep learning/machine learning will be a big part of new application development. IDC predicts that 50% of developer teams will build AI/cognitive technologies into apps by 2018, up from only 1% in 2015.

Vernon Turner, senior vice president of enterprise systems at IDC, presented a keynote speech on IoT. IDC forecasts that by 2025, approximately 80 billion devices will be connected to the Internet. To put that in perspective, approximately 11 billion devices connect to the Internet now. The figure is expected to nearly triple to 30 billion by 2020 and then nearly triple again to 80 billion five years later.

To illustrate that phenomenal IoT growth rate, consider that approximately 4,800 devices are currently being connected to the network every minute. Ten years from now, the figure will balloon to 152,000 a minute. Overall, IoT will be a $1.46 trillion market by 2020, according to IDC.
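A quick back-of-the-envelope check (this author’s arithmetic, not IDC’s) shows that a sustained 152,000 new devices per minute works out to roughly 80 billion device connections per year, the same order of magnitude as the 2025 forecast above:

# Back-of-the-envelope check of the per-minute connection figures.
MINUTES_PER_YEAR = 60 * 24 * 365          # 525,600

current_per_min = 4_800
future_per_min = 152_000                  # the 2025 figure

print(f"today: {current_per_min * MINUTES_PER_YEAR / 1e9:.1f} billion connections/year")
print(f"2025:  {future_per_min * MINUTES_PER_YEAR / 1e9:.1f} billion connections/year")
# today: 2.5 billion connections/year
# 2025:  79.9 billion connections/year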

“If you don’t have scalable networks for the IoTs, you won’t be able to connect,” Turner said. “New IoT networks are going to have to be able to handle various requirements of IoT (e.g. very low latency).”

Turner also provided a quick update to IDC’s predictions for the growth of (big) digital data. A few years ago, the market research firm made headlines by predicting that the total amount of digital data created worldwide would mushroom from 4.4 zettabytes in 2013 to 44 zettabytes by 2020. Currently, IDC believes that by 2025 the total worldwide digital data created will reach 180 zettabytes. The astounding growth comes from both the number of devices generating data as well as the number of sensors in each device.
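Those forecasts imply compound annual growth rates in the 30% to 40% range; the short calculation below (this author’s, not IDC’s) makes that explicit:

# Compound annual growth rate (CAGR) implied by the digital-data forecasts.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"2013-2020: 4.4 ZB -> 44 ZB  => {cagr(4.4, 44, 7):.0%} per year")
print(f"2020-2025: 44 ZB -> 180 ZB => {cagr(44, 180, 5):.0%} per year")
# 2013-2020: 4.4 ZB -> 44 ZB  => 39% per year
# 2020-2025: 44 ZB -> 180 ZB => 33% per year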

  • The Ford GT car, for instance, contains 50 sensors and 28 microprocessors and is capable of generating up to 100 GB of data per hour (see the quick rate conversion after this list). Granted, the GT is a finely tuned race car, but even pedestrian household items will contain arrays of sensors and embedded computing capabilities.
  • Smart thermometers will compile thousands of readings in a few seconds.
  • Cars, homes, and offices will likely be equipped with IoT gateways to manage security and connectivity for the expanding armada of devices.
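For scale, 100 GB per hour is a sustained data rate of roughly 28 MB/s, or about 222 Mbit/s. The conversion below is this author’s arithmetic, assuming decimal gigabytes:

# Rough conversion of the Ford GT's quoted 100 GB/hour into a sustained rate.
gb_per_hour = 100
bytes_per_sec = gb_per_hour * 1e9 / 3600      # decimal gigabytes
print(f"{bytes_per_sec / 1e6:.1f} MB/s  ~ {bytes_per_sec * 8 / 1e6:.0f} Mbit/s")
# 27.8 MB/s  ~ 222 Mbit/s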

How this huge amount of newly generated data gets used and where it is stored remains an open debate in the industry. A substantial portion of it will consist of status data from equipment or personal devices reporting on routine tasks: the current temperature inside a cold storage unit, the RPM of a wheel on a truck, etc. Some tech execs believe that a large segment of this status data can be summarized and then discarded.

Industrial customers (like GE, Siemens, etc.) will likely invest more heavily in IoT (sometimes referred to as “the Industrial Internet”) than other market segments/verticals over time, but at the moment retail customers are the most active in implementing new systems. In North America, a substantial amount of the interest in IoT revolves around “digital transformation,” i.e., developing new digital services on top of existing businesses like car repair or hotel reservations. In Europe and Asia, the focus trends toward improving energy consumption and efficiency.

Turner noted that the commercialization of IoT is still in the experimental phase. When examining the IoT projects underway at big companies, IDC found that most of the budgets are in the $5 million to $10 million range. The $100 million contracts aren’t here yet, he added. Retail and manufacturing are the two leading IoT industry verticals, based on IDC findings.

In a presentation titled “A Market Maturing: The Reality of IoT,” Carrie MacGillivray, Vice President, IoT & Mobile at IDC, made the following key points about the IoT market:

  • Early adopters are plentiful, but ROI cases are few and far between.
  • Vendors are refining their stories, making solutions more “real.”
  • Standards, regulation, scalability, and cost (!!!) are still inhibitors (as they have been for years).

IDC has created a model to measure IoT market maturity and placed various categories of users in buckets. Their survey findings are as follows:

  • 2% are IoT Experimenters/Ad Hoc
  • 31.9% are IoT Explorers/Opportunistic
  • 31.3% are IoT Connectors/Repeatable
  • 24.2% are IoT Transformers/Manageable
  • 10.7% are IoT Disruptors/Optimized solution

Carrie revealed several other important IoT findings:

  • Vision is still a struggle for organizations, but it’s moving in the right direction. Executive teams must set the pace for IoT innovation.
  • Still more technology maturity is needed. Investment extends beyond connecting the “thing” to ensuring that back-end technology is enabled.
  • IoT plans/processes are still not captured in the strategic plan.  They need to be integrated into production environments holistically.

 

Carrie’s Closing Comments for IoT Market Outlook:

 

  • Security, regulatory, standards…and cost (!!!) are still inhibitors to IoT market maturity. [IDC will be publishing a report next month on the status of IoT standards, and Carrie publicly offered to share it with this author.]
  • Vision still needs to be set at the executive level.
  • Thoughtful integration of process has to be driven by a vision with measurable objectives.
  • People “buy-in” will determine success or failure of these connected “things.” 


Above graphic courtesy of IDC:  http://www.idc.com/infographics/IoT

References:

http://www.idc.com/events/directions

http://www.forbes.com/sites/michaelkanellos/2016/03/03/152000-smart-devices-every-minute-in-2025-idc-outlines-the-future-of-smart-things/2/#19a7c92c66c2

http://www.indianweb2.com/2016/02/26/internet-of-things-iot-predictions-from-forrester-machina-research-wef-gartner-idc/

