Verizon, FCC Push mmWave 5G as Threat to Cable Broadband Service, by Reinhardt Krause, INVESTOR’S BUSINESS DAILY

Article written by R. Krause of Investor’s Business Daily (investors.com) and edited by Alan J Weissberger.
Followed by a blog post from IEEE’s Alan Gatherer of Huawei, and then a reference to a superb 5G presentation from IEEE’s Jonathan Wells of AJIC Consulting.

IBD Article:

Bottom Line: Could high frequencies let AT&T or Verizon outdo cable broadband service? Wireless carriers could one day boast data-transfer speeds up to a gigabit per second with 5G, about 50 times faster than today’s U.S. cellular networks. That opens up new markets for competition.

Federal regulators and Verizon Communications have zeroed in on airwaves that could make the U.S. the global leader in rolling out 5G wireless services. One market opportunity for 5G may be as a challenger to the cable TV industry’s broadband dominance. Think Verizon Wireless, not Verizon’s (VZ) FiOS-branded landline service, vs. the likes of Comcast or Charter Communications.

First, though, airwaves need to be freed up for 5G. That’s where high-frequency radio spectrum, also called millimeter wave or mmWave, comes in. In particular, U.S. regulators are focused on the 28 gigahertz frequency band, analysts say. Most wireless phone services use radio frequencies below 3 GHz.

If 28 GHz or millimeter wave rings a bell, that’s because several fixed wireless startups (WinStar, Teligent, NextLink, Terabeam) tried and failed to commercialize products relying on high-frequency airwaves during the dot-com boom of the late 1990s. Business models were suspect, and their LMDS (local multipoint distribution service) offerings were susceptible to interference from rain and other environmental conditions.

When the tech bubble burst in 2000-01, the LMDS startups perished. Technology advances, however, could now make the high-frequency airwaves prime candidates for 5G.

 

“In the 1990s, with LMDS, mobile data wasn’t mature, and neither was the Internet, and neither was the electronics industry — it couldn’t make low-cost, mmWave devices,” said Ted Rappaport, founding director of NYU Wireless, New York University’s research center on millimeter-wave technologies. 

“Wi-Fi was really brand new then, and broadband backhaul (long-distance) was not even built out. LMDS was originally conceived to be like fiber, to serve as backhaul or point-to-multipoint, and was not for mobile services,” he said.
“Fast forward to today: backhaul is in place to accommodate demand, and electronics at mmWave frequencies are being mass-produced in cars,” Rappaport continued. “Demand for data is increasing more than 50% a year, and the only way to continue to supply capacity to users is to move up to (millimeter wave).”
The Federal Communications Commission in October opened a study looking at 28, 37, 39, and 60 GHz as the primary bands for 5G. While the FCC says that 28 GHz airwaves show promise, some countries have been focused on higher frequencies. FCC Chairman Tom Wheeler, speaking at a U.S. Senate committee hearing on March 2, said: “While international coordination is preferable, I believe we should move forward with exploration of the 28 GHz band.”
Wheeler said that the U.S. will lead the world in 5G and allocate spectrum “faster than any nation on the planet.”
 

 

Verizon Makes Deals  
 Verizon, meanwhile, on Feb. 22 agreed to buy privately held XO Communications’ fiber-optic network business for about $1.8 billion. In a side deal, Verizon will also lease XO’s wireless spectrum in the 28 GHz to 31 GHz bands, with an option to buy for $200 million by the end of 2018. XO’s spectrum covers some of the largest U.S. metro areas, including New York, Boston, Chicago, Minneapolis, Atlanta, Miami, Dallas, Denver, Phoenix, San Francisco and Los Angeles, as well as Tampa, Fla., and Austin, Texas.  Verizon CFO Fran Shammo commented on the XO deal at a Morgan Stanley conference on March 1st. 
“Right now we have licenses issued to us from the FCC for trial purposes at 28 GHz. The XO deal gave us additional 28 GHz,” he said. “The rental agreement enables us to include that (XO spectrum) in some of our R&D development with 28 GHz. So that just continues the path that we’re on in launching 5G as soon as the FCC clears spectrum.” He noted that Japan and South Korea plan to test 5G services using 28 GHz and 39 GHz airwaves.

 

 Some analysts doubt that 28 GHz airwaves will be on a fast track.  “We are skeptical not only on the timing of the availability of 28 GHz but also its ultimate viability in a mobile wireless network,” Walter Piecyk, analyst at BTIG Research, said in a report.  Boosting signal strength at higher frequencies is a challenge for wireless firms. Low-frequency airwaves travel over long distances and also through walls, improving in-building services.  
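Editor’s note: a back-of-the-envelope free-space path loss (Friis) calculation shows why propagation worries analysts. The sketch below compares a low-band 700 MHz signal with a 28 GHz mmWave signal over the same distance; the distance and frequencies are illustrative assumptions, not figures from the article.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

d = 200  # assumed distance from the cell site, in meters
for f in (700e6, 28e9):
    print(f"{f/1e9:5.2f} GHz: {fspl_db(d, f):6.1f} dB")

# 28 GHz loses 20*log10(28/0.7) ~= 32 dB more than 700 MHz at any given
# distance -- hence the need for small cells and high-gain antenna arrays.
```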

One approach to increase propagation in millimeter wave bands, analysts say, is using more “small-cell” radio antennas, which increase network capacity. Wireless firms generally use large cell towers to connect mobile phone calls and whisk video and email to mobile phone users. They also install radio antennas on building rooftops, church steeples and billboards. Suitcase-sized antennas used in small-cell technology often go on lamp posts or utility poles. Verizon has been testing small-cell technology in Boston, MA.

When Will 5G Happen?

Verizon says that it will begin rolling out 5G commercially in 2017, though its plans are still vague. While many wireless service providers touted 5G plans and tests at the Mobile World Congress (MWC) in February, makers of telecom network equipment are being cautious.

 “General consensus (at MWC) seemed to indicate that the 2020 time-frame will mark full-scale 5G deployments,” Barclays analyst Mark Moskowitz said in a report.  Verizon has said that it doesn’t expect 5G networks to replace existing 4G ones. 

While 5G is expected to provide much faster data speeds, wireless firms also expect applications that require always-on, low-data-rate connections. These apps involve data gathering from industrial sensors, home appliances and other devices, often referred to as part of the Internet of Things (IoT).

Both Verizon and AT&T have recently touted 5G speeds up to one gigabit per second. That’s roughly 50 times faster than the average speeds of 4G wireless networks in good conditions (implying average 4G speeds on the order of 20 Mbps). AT&T CEO Randall Stephenson recently said that 5G speeds could match fiber-optic broadband connections to homes.

5G Vs. Broadband 

At the Morgan Stanley conference, Verizon’s Shammo also said that 5G could be a “substitute product for broadband.” Regulators would like to create new competition for cable TV companies. But, Verizon says, it’s still early days. 

   “With trials, we’ll figure out exactly what we can deliver, what the base cases are,” said Shammo. “5G has the capability to be a substitute for broadband into the home with a fixed wireless solution. The question is, can you deploy that technology and actually make money at a price that the consumer would pay?” 
Sanyogita Shamsunder, Verizon’s director of network infrastructure planning, says that high frequencies can support 5G. “Radio frequency components today are able to support much wider bandwidth (think wide lanes on the highway) when compared to even 10 years ago. What it means is we are able to pump more bits at the same time,” Shamsunder said in an email to IBD. “Due to improvements in antenna and RF technology,” she added, “we are able to support 100s of small, tiny antennas on a small die the size of a quarter.”
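Editor’s note: Shamsunder’s “wider lanes” analogy maps directly onto the Shannon capacity formula C = B log2(1 + SNR), where capacity grows linearly with bandwidth B. The sketch below uses illustrative bandwidth and SNR assumptions, not Verizon numbers.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon channel capacity: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (15 / 10)  # assume 15 dB SNR at the receiver
for label, bw in (("typical LTE carrier", 20e6), ("mmWave 5G carrier", 800e6)):
    gbps = shannon_capacity_bps(bw, snr) / 1e9
    print(f"{label:20s} {bw/1e6:6.0f} MHz -> {gbps:5.2f} Gb/s")

# Many small antennas also buy back link budget: an N-element array adds
# roughly 10*log10(N) dB of beamforming gain.
print(f"64-element array gain ~ {10 * math.log10(64):.1f} dB")
```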


Another point of view from Alan Gatherer of IEEE ComSoc:

Fresh from Mobile World Congress, my favourite “tell it like it is” curmudgeon-cum-analyst Richard Kramer has kindly agreed to share his thoughts on the state of the industry and on 5G in particular. While reading his article, I had two thoughts that align with his position:

1) How long will it take to really do VoLTE well? 

2) 3G’s lifespan was quite short and we should probably expect a lot more runway for 4G. 

Like Alice in Wonderland, the mobile world has been turned topsy-turvy with an accelerated push to 5G. One would think the lessons of 3G, 3.5G (HSPA), 4G, and its many variants were never (painfully) learned: that the ideal approach for operators and vendors is to leave time to “harvest” profits from investments, not race to the next node. This was true in the earliest discussions of LTE (stretching back, if one recalls, to 2006/7), and in the interim fending off the noisy interventions of WiMax (remember those embarrassing forecasts from some analysts, which we fondly recall dubbing “technical pornography” for the 802.xxx variants garnering oohs and aahs from radio engineers). Bear in mind that 3G was commercially launched in the UK on 03.03.03, and LTE was demoed at the 2008 Beijing Olympics. Isn’t there a lesson here about leaving the cake in the oven long enough to bake?

That 5G is theoretically using the same, or at least similar, air interfaces is hardly a saving grace. For now, the thought of deploying a heap of non-standard equipment is highly unappealing to telco customers. Neither is sufficient attention paid to the lack of spectrum, or the potential perils of relying on unlicensed spectrum for commercial services. There seems to be a blind, marketing-led rush to be the first to announce milestones that are effectively rigged lab trials, and that convince few of the sceptical buyers to shift long-standing vendor allegiances. So what do we have to hang our hats on? A series of relatively disjointed and often proprietary innovations building on LTE, specifically many bands of carrier aggregation and millimetre wave, including unlicensed bands, to get support for (and make a smash and grab raid on) much wider blocks of spectrum and therefore better throughput and capacity; a further extension of decades of work on MIMO to further boost capacity; and a similar pendulum swing towards edge caching to reduce latency (while at the same time trying to centralise resources in baseband-in-the-cloud, to reduce processing overheads in networks). The astonishing leap of faith is that by providing gigabit wireless speed at low latency, one will enable “new business models,” for now largely unimagined.

This leaves us with the farcical purported “business cases” for 5G. First, we have the Ghost of 2G Past, in the form of telematics, rebranded M2M, and now rebranded once more as “IoT.” To be sure, there are many industries that have long had the aim of wirelessly connecting all sorts of devices without voice or high-speed data connectivity. Yet these applications tend to work just fine at 2G or even 3G speeds. The notion that we need vast infrastructure upgrades to send tiny amounts of data with lower latency smells of desperation. Then there are all the low-latency video-related services, which again can be made more than workable with a combination of cellular plus WiFi. Meanwhile, just to muddy the waters and prevent any smooth sailing towards the mythical 5G world, we have a slew of new variants: LTE-A, LTE-U, low-energy LTE, MulteFire, LTE-QED (sorry, I made that one up), etc. And the aim of gigabit wireless has to be to supplant wireline, though wireline is hardly standing still, as cablecos adopt DOCSIS 3.1 and traditional telcos bring on G.fast and other next-generation copper or fibre technology. As always, these advances are not being made in isolation, even if the plans of individual vendors seem to assume they are.

Desperation is not confined to equipment vendors; chipmakers such as Qualcomm, MediaTek and others are facing the first year of a declining TAM for smartphone silicon, partly due to weak demand from emerging markets, and also due to the rising influence of second-hand smartphones being resold after refurbishment. We also see a trend of leading smartphone vendors internalising their silicon requirements, be it with apps processors (Apple’s A-series, Samsung Exynos) or modems (HiSilicon). Our view is that smartphone unit demand will be flattish overall this year, with most of the growth coming from low-end vendors desperate to ramp volumes to stay relevant. This should drive Qualcomm and MediaTek to continue addressing more and more “adjacent” segments within smartphones, to prevent chip sales from shrinking. Qualcomm is looking to make LTE much more robust to overtake WiFi and get traction in end-markets it does not address today.

Thus we have another of the “inter-regnum” MWCs, in which we are mired in a chaotic economic climate where investment commitments will be slow in coming, while vendors pre-position themselves for the real action in two or three years when the technologies are actually closer to being standardised and then working. We have, like Alice, dropped into the Rabbit Hole, to wander amidst the psychedelic lab experiments of multiple hues of 5G, before reality sets in and everything fades to grey, or at least the black and white of firm roadmaps and real technical solutions.

Editor-in-Chief: Alan Gatherer ([email protected])

Comments are welcome!

http://www.comsoc.org/ctn/5g-down-rabbit-hole


Reference to presentation & slides:  5G and the Future of Wireless by Jonathan Wells, PhD

https://californiaconsultants.org/event/5g-and-the-future-of-wireless/


2016 OCP Summit Solidifies Mega Trend to Open Hardware & Software in Mega Data Centers

The highlight of the 2016 Open Compute Project (OCP) Summit, held March 9-10, 2016 at the San Jose Convention Center, was Google’s unexpected announcement that it had joined OCP and was contributing “a new rack specification that includes 48V power distribution and a new form factor to allow OCP racks to fit into our data centers.” With Facebook and Microsoft already contributing lots of open source software (e.g., MSFT SONiC, more below) and hardware (compute server and switch designs), Google’s presence puts a solid stamp of authenticity on OCP and ensures that the trend toward open IT hardware and software will prevail in cloud-resident mega data centers.

Google hopes it can go beyond the new power technology in working with OCP, Urs Hölzle, Google’s senior vice president for technical infrastructure, said in a surprise Wednesday keynote talk at the OCP Summit. Google published a paper last week calling on disk manufacturers “to think about alternate form factors and alternate functionality of disks in the data center,” Hölzle said. Big data center operators “don’t care about individual disks, they care about thousands of disks that are tied together through a software system into a storage system.” Alternative form factors can save costs and reduce complexity.

Hölzle noted the OCP had made great progress in open hardware designs/schematics, but said the organization could do a lot more in open software. He said there’s an opportunity for OCP to improve software for managing the servers, switches/routers, storage, and racks in a (large) data center. That would replace the totally outdated SNMP, with its per-equipment-type sets of managed objects (MIBs).
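Editor’s note: the “outdated” model Hölzle refers to is per-box polling of MIB variables. Below is a minimal sketch of that legacy pattern using the pysnmp library; the device address and community string are placeholders.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Poll one managed object (sysDescr) from one box -- the per-device,
# per-MIB model that fleet-wide open management software would replace.
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),      # SNMPv2c
           UdpTransportTarget(('192.0.2.1', 161)),  # placeholder switch IP
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```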


Jason Taylor, PhD, chairman and president of the OCP Foundation and vice president of Infrastructure at Facebook, said that the success of the OCP concept depends upon its acceptance by the telecommunications industry. Taylor said: “The acceptance of OCP from the telecommunications industry is a particularly important sign of momentum for the community. This is another industry where infrastructure is core to the business. Hopefully we’ll end up with a far more efficient infrastructure.”

This past January, the OCP launched the OCP Telco Project. It’s specifically focused on open telecom data center technologies. Members include AT&T, Deutsche Telekom (DT), EE (UK mobile network operator and Internet service provider), SK Telecom, Verizon, Equinix and Nexius. The three main goals of the OCP Telco Project are:

  • Communicating telco technical requirements effectively to the OCP community.
  • Strengthening the OCP ecosystem to address the deployment and operational needs of telcos.
  • Bringing OCP innovations to telco data-center infrastructure for increased cost-savings and agility.

See OCP Telco Project,  Major Telcos Join Facebook’s Open Compute Project and Equinix Looks to Future-Proof Network Through Open Computing

In late February, Facebook started a parallel open Telecom Infra Project (TIP) for mobile networks which will use OCP principles.  Facebook’s Jay Parikh wrote in a blog post:

“TIP members will work together to contribute designs in three areas — access, backhaul, and core and management — applying the Open Compute Project models of openness and disaggregation as methods of spurring innovation. In what is a traditionally closed system, component pieces will be unbundled, affording operators more flexibility in building networks. This will result in significant gains in cost and operational efficiency for both rural and urban deployments. As the effort progresses, TIP members will work together to accelerate development of technologies like 5G that will pave the way for better connectivity and richer services.”

TIP was referenced by Mr. Parikh in his keynote speech, which was preceded by a panel session (see below) in which wireless carriers DT, SK Telecom, AT&T and Verizon shared how they plan to use and deploy OCP-built network equipment. Jay noted that Facebook contributed Wedge 100 and 6-pack, designs for next-generation open networking switches, to OCP. Facebook is also working with other companies on standardizing data center optics and inter-data center (WAN) transport solutions to help the industry move faster on networking. Microsoft, Verizon, and Equinix are all part of that effort.


At the beginning of his keynote speech, Microsoft Azure CTO Mark Russinovich asked the OCP Summit audience how many believed Microsoft was an “open source company.” Very few hands were raised. That was to change after Russinovich announced the release of SONiC (Software for Open Networking in the Cloud) to the OCP. SONiC is based on the idea that a fully open-sourced switch platform can be serviced by sharing the same software stack across hardware from multiple switch vendors and ASIC switch silicon suppliers. The new software extends and opens the Linux-based ACS (Azure Cloud Switch) that Microsoft has been using internally in its Azure cloud, and will be offered for all to use through the OCP. It also includes software implementations of all the popular protocol stacks for a switch-router.

[Graphics courtesy of Microsoft: SONiC positioned within a three-layer stack; SONiC in OCP]

The SONiC platform builds on the Switch Abstraction Interface (SAI), a software layer launched last year by Microsoft that translates the APIs of multiple network ASICs so they can be run by the same software instead of requiring proprietary code. With SAI alone, cloud service providers had to provide or find code to carry out actual network jobs on top of the interface; these utilities included some open source software. SONiC combines those open source components (for jobs like BGP routing) with Microsoft’s own utilities, all of which have been open sourced.
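Editor’s note: the real SAI is a set of C headers, but the layering idea is easy to sketch. Below is a hypothetical Python illustration of one vendor-neutral interface with ASIC-specific backends, so that SONiC-style utilities are written once against the interface; none of the class or method names are the actual SAI API.

```python
from abc import ABC, abstractmethod

class SwitchAbstraction(ABC):
    """Vendor-neutral switch interface, analogous in spirit to SAI."""
    @abstractmethod
    def create_route(self, prefix: str, next_hop: str) -> None: ...

class VendorAASIC(SwitchAbstraction):
    def create_route(self, prefix, next_hop):
        print(f"[vendor-A SDK] program {prefix} -> {next_hop}")

class VendorBASIC(SwitchAbstraction):
    def create_route(self, prefix, next_hop):
        print(f"[vendor-B SDK] program {prefix} -> {next_hop}")

def install_bgp_route(asic: SwitchAbstraction, prefix: str, next_hop: str):
    """A SONiC-style utility: written once, runs on any backend."""
    asic.create_route(prefix, next_hop)

# The same routing utility drives two different ASICs unchanged.
for asic in (VendorAASIC(), VendorBASIC()):
    install_bgp_route(asic, "203.0.113.0/24", "198.51.100.1")
```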

More than a simple proposal, SONiC is already receiving contributions from companies such as Arista, Broadcom, Dell, and Mellanox. Russinovich closed by asking the audience how many NOW thought Microsoft was an “open source company.” Hundreds of hands went up, affirming the audience’s recognition of SONiC as a key contribution to the open source networking software movement.


Rachael King, a reporter at the Wall Street Journal, moderated a panel of telecommunications executives, including Ken Duell from AT&T, Mahmoud El-Assir from Verizon, Kangwon Lee from SK Telecom, and Daniel Brower from Deutsche Telekom, to discuss some of the common infrastructure challenges related to shifting to 5G cellular networks quickly and without disrupting service. The central theme of the session was “driving innovation at a much greater speed,” as Daniel Brower, VP and chief architect of infrastructure cloud for DT, put it. The goal is improved service velocity, so carriers can deploy and realize revenues from new services much more quickly.

Most telco network operators are focused on shifting to “white box” switches and routers and on virtualizing their networks; taking an open approach to infrastructure will make the transition to 5G more efficient and will accelerate the speed of delivery and configuration of networks.

Ken Duell, AVP of new technology product development and engineering at AT&T, concisely summarized the carrier’s dilemma: “In our case, it’s a matter of survival. Our customers are expecting services on demand anywhere they may be. We’re finding that the open source platform … provides us a platform to build new services and deliver with much faster velocity.”

Duell said a major challenge facing AT&T and other telecom companies is network operating system software. “When we think of white boxes, the hardware eco-system is maturing very quickly. The challenge is the software, especially network OS software, to run on these systems with WAN networking features. One of the things we hoped … is to create enough of an ecosystem to create these network OS software platforms.”

There’s also a huge learning and retraining effort for network engineers and other employees, which AT&T is addressing with new on-line learning courses.

Verizon SVP and CIO Mahmoud El Assir hit on the ability of open source and virtualization of functions (e.g. virtualized CPE) to create true network personalization for future wireless customers.  That was somewhat of a surprise to the WSJ moderator and to this author.  El Assir compared the new telco focus to the now outdated historical concerns with providing increased speed/throughput and supporting various protocols on the same network.

“Now it’s exciting that the telecom industry, the networking industry, everything is becoming more software,” El Assir said. “Everything in the network becomes more like an app. This allows us to kind of unlock and de-aggregate our network components and accelerate the speed of innovation. … Getting compute everywhere in the network, all the way to the edge, is a key element for us.”

El Assir added OCP-based switches and routers will allow for “personalized networks on the edge. You can have your own network on the edge. Today that’s not possible. Today everybody is connected to the same cell. We can change that. Edge compute will create this differentiation.”

Kang-Won Lee, director of wireless access network solution management for SK Telecom, looked ahead to “5G” and the various high-capacity use cases that will usher in a new type of network that will require white box hardware due to cost models.

“It was more about the storage and the bandwidth and how you support people moving around to make sure their connections don’t drop,” Lee said. “That was the foremost goal of mobile service providers. In Korea, we have already achieved that.” With 5G the network “will be a lot of different types of traffic that need to be able to connect. In order to support those different types of traffic … it will require a lot of work. That’s why we are looking at data centers, white boxes, obviously, I mean, creating data centers with the brand name servers is not going to be cost efficient.”

Moderator Rachael King asked: “So what about Verizon and AT&T, fierce rivals in the U.S. mobile market, sharing research and collaborating – how does that work?”

“Our current focus is on the customer,” El Assir replied. “I think now with what OCP is bringing to the table is really unique. We’ve moved from using proprietary software to open source software and now we’re at a good place where we can transition from using proprietary hardware to open source hardware. We want the ecosystem to grow in order for the ecosystem to be successful.”

“There’s a lot of efficiencies in having many companies collaborate on open source hardware,” Duell added. “I think it will help drive the cost down and the efficiency up across the entire industry. AT&T will still compete with Verizon, but the differentiation will come with the software. The hardware will be common. We’ll compete on software features.”

You can watch the video of that panel session here.


We close with a resonating quote from Carl Weinschenk, who covers telecom for IT Business Edge:

“Reconfiguring how IT and telecom companies acquire equipment is a complex and long-term endeavor. OCP appears to be taking that long road, and is getting buy-in from companies that can help make it happen.”

IDC Directions 2016: IoT (Internet of Things) Outlook vs Current Market Assessment

The 11th annual IDC Directions conference was held in San Jose, CA last week. The saga of the 3rd platform (cloud, mobile, social, big data/analytics) continues unabated. One of many IT predictions was that artificial intelligence (AI) and deep learning/machine learning are a big part of new application development. IDC predicts 50% of developer teams will build AI/cognitive technologies into apps by 2018, up from only 1% in 2015.

Vernon Turner, senior vice president of enterprise systems at IDC, presented a keynote speech on IoT. IDC forecasts that by 2025, approximately 80 billion devices will be connected to the Internet. To put that in perspective, approximately 11 billion devices connect to the Internet now. The figure is expected to nearly triple to 30 billion by 2020 and then nearly triple again to 80 billion five years later.

To illustrate that phenomenal IoT growth rate, consider that currently approximately 4,800 devices are being connected to the network every minute. Ten years from now, the figure will balloon to 152,000 a minute. Overall, IoT will be a $1.46 trillion market by 2020, according to IDC.
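Editor’s note: the per-minute and installed-base figures are roughly self-consistent, as a quick check shows:

```python
minutes_per_year = 60 * 24 * 365  # 525,600

for rate_per_min in (4_800, 152_000):
    per_year = rate_per_min * minutes_per_year
    print(f"{rate_per_min:>7,} devices/min ~= {per_year/1e9:5.1f} billion/year")

# 152,000 devices/min works out to ~79.9 billion connections a year --
# the same order as IDC's 80-billion-device forecast for 2025.
```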

“If you don’t have scalable networks for the IoTs, you won’t be able to connect,” Turner said. “New IoT networks are going to have to be able to handle various requirements of IoT (e.g. very low latency).”

Turner also provided a quick update to IDC’s predictions for the growth of (big) digital data. A few years ago, the market research firm made headlines by predicting that the total amount of digital data created worldwide would mushroom from 4.4 zettabytes in 2013 to 44 zettabytes by 2020. Currently, IDC believes that by 2025 the total worldwide digital data created will reach 180 zettabytes. The astounding growth comes from both the number of devices generating data as well as the number of sensors in each device.
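Editor’s note: those zettabyte milestones imply steep but slightly decelerating compound growth; the implied CAGRs fall out directly:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

print(f"2013-2020: {cagr(4.4, 44.0, 7):.1%}")   # 4.4 ZB -> 44 ZB, ~38.9%/yr
print(f"2020-2025: {cagr(44.0, 180.0, 5):.1%}") # 44 ZB -> 180 ZB, ~32.6%/yr
```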

  • The Ford GT car, for instance, contains 50 sensors and 28 microprocessors and is capable of generating up to 100GB of data per hour. Granted, the GT is a finely-tuned race car, but even pedestrian household items will contain arrays of sensors and embedded computing capabilities.
  • Smart thermometers will compile thousands of readings in a few seconds.
  • Cars, homes and office will likely be equipped with IoT gateways to manage security and connectivity for the expanding armada of devices.

How this huge amount of newly generated data gets used and where it’s stored remains an open debate in the industry. A substantial portion of it will consist of status data from equipment or personal devices reporting on routine tasks: the current temperature inside a cold storage unit, the RPMs of a wheel on a truck, etc. Some tech execs believe that a large segment of this status data can be summarized and discarded.

Industrial customers (like GE, Siemens, etc.) will likely invest more heavily in IoT (sometimes referred to as “the Industrial Internet”) than other market segments/verticals over time, but at the moment retail customers are the most active in implementing new systems. In North America, a substantial amount of interest in IoT revolves around “digital transformation,” i.e., developing new digital services on top of existing businesses like car repair or hotel reservations. In Europe and Asia, the focus trends toward improving energy consumption and efficiency.

Turner noted that the commercialization of IoT is still in the experimental phase. When examining the IoT projects underway at big companies, IDC found that most of the budgets are in the $5 million to $10 million range. The $100 million contracts aren’t here yet, he added. Retail and manufacturing are the two leading IoT industry verticals, based on IDC findings.

In a presentation titled “A Market Maturing: The Reality of IoT,” Carrie MacGillivray, Vice President, IoT & Mobile, made the following key points related to the IoT market:

  • Early adopters are plentiful, but ROI cases are few and far between
  • Vendors are refining their story, making solutions more “real”
  • Standards, regulation, scalability and cost (!!!) are still inhibitors (as they have been for years)

IDC has created a model to measure IoT market maturity and placed various categories of users in buckets. Their survey findings are as follows:

  • 2% are IoT Experimenters/ Ad Hoc 
  • 31.9% are IoT Explorers/Opportunistic
  • 31.3% are IoT Connectors/Repeatable
  • 24.2% are IoT Transformers/Manageable
  • 10.7% are IoT Disruptors/Optimized solution

Carrie revealed several other important IoT findings:

  • Vision is still a struggle for organizations, but it’s moving in the right direction. Executive teams must set the pace for IoT innovation.
  • Still more technology maturity is needed. Investment extends beyond connecting the “thing” to ensuring the back-end technology is enabled.
  • IoT plans/processes are still not captured in the strategic plan. They need to be integrated into production environments holistically.

 

Carrie’s Closing Comments for IoT Market Outlook:

 

  • Security, regulatory, standards…and cost (!!!) are still inhibitors to IoT market maturity [IDC will be publishing a report next month on the status of IoT standards, and Carrie publicly offered to share it with this author.]
  • Vision still needs to be set at the executive level.
  • Thoughtful integration of process has to be driven by a vision with measurable objectives.
  • People “buy-in” will determine success or failure of these connected “things.” 


Above graphic courtesy of IDC:  http://www.idc.com/infographics/IoT

References:

http://www.idc.com/events/directions

http://www.forbes.com/sites/michaelkanellos/2016/03/03/152000-smart-devices-every-minute-in-2025-idc-outlines-the-future-of-smart-things/2/#19a7c92c66c2

http://www.indianweb2.com/2016/02/26/internet-of-things-iot-predictions-from-forrester-machina-research-wef-gartner-idc/


MWC-2016: CDNs for Mobile Networks?

One of the more interesting trends from MWC-2016 in Barcelona last month is content delivery networks (CDNs) for mobile operators.  [A CDN is an interconnected system of cache servers that deliver web content based on geographical proximity. The CDN concept originated in the wire-line Internet world for traditional video services over “best effort” Internet transport.] 

A mobile CDN is a network of servers – systems, computers or devices – that cooperate transparently to optimize the delivery of content to end users on any type of wireless or mobile network. Mobile CDNs offer an edge-based path to maintaining consumers’ Quality of Experience (QoE) by optimizing content delivery on the last-mile link from cell to device.
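Editor’s note: the core CDN idea, serving each request from the nearest cache, can be sketched in a few lines; the node names and coordinates below are invented for illustration.

```python
import math

# Hypothetical edge cache nodes: name -> (latitude, longitude)
EDGE_CACHES = {
    "cell-site-A": (40.71, -74.00),
    "metro-pop-B": (41.88, -87.63),
    "metro-pop-C": (34.05, -118.24),
}

def nearest_cache(user_lat: float, user_lon: float) -> str:
    """Return the cache node with the smallest great-circle distance."""
    def haversine_km(lat1, lon1, lat2, lon2):
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = p2 - p1, math.radians(lon2 - lon1)
        a = math.sin(dp/2)**2 + math.cos(p1)*math.cos(p2)*math.sin(dl/2)**2
        return 6371 * 2 * math.asin(math.sqrt(a))
    return min(EDGE_CACHES,
               key=lambda n: haversine_km(user_lat, user_lon, *EDGE_CACHES[n]))

print(nearest_cache(40.44, -79.99))  # a Pittsburgh user -> "cell-site-A"
```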

“Sustaining a good user experience is extremely costly for operators and, despite their efforts to increase network capacity, video quality degrades as content gets more and more popular,” said Expway co-founder and CMO Claude Seyrat. “15 percent of videos never successfully start, and 25 percent of users give up when facing buffering. The mobile video market is at a crossroads: video traffic is accelerating and mobile operators have the utmost difficulty taming it.”

A number of vendors were showing CDN technologies at MWC-2016, including Expway, Quickplay and Ericsson. The latter is also working to create a sort of global “super CDN” ecosystem among content providers (Brightcove, DailyMotion, EchoStar, Deluxe, LeTV and QuickPlay) and telcos (Hutchison, Telstra, AIS, and Vodafone). There’s also CDNetworks, which claims to have a mobile CDN solution.

LTE Broadcast, a 3GPP feature available on some commercial LTE networks, is a one-to-many approach that allows operators to deliver content once to a thousand users or more. Expway says it leads to ‘enormous’ bandwidth savings for the delivery of popular content. The company added that with its new offering, called FastLane, mobile network operators can enhance their own CDNs or offer additional services with a guaranteed quality of service.
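Editor’s note: the bandwidth arithmetic behind the one-to-many claim is simple; the stream bitrate and audience size below are assumptions.

```python
viewers = 1_000
stream_mbps = 3  # assume a 3 Mb/s video stream

unicast_total = viewers * stream_mbps  # one copy of the stream per viewer
broadcast_total = stream_mbps          # one copy, shared by every viewer
print(f"unicast:   {unicast_total:,} Mb/s")
print(f"broadcast: {broadcast_total} Mb/s "
      f"({unicast_total / broadcast_total:,.0f}x saving)")
```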

Related video tech at MWC-2016 included open caching solutions from such vendors as PeerApp and Qwilt.  Those vendors are trying to lighten the CDN load on the core network by moving popular content closer to end-users.

Akamai, the world’s leading CDN provider, which claims to deliver between 15% and 30% of all Web traffic, demonstrated a CDN for mobile video delivery at MWC-2015. At this year’s MWC, Akamai announced the commercial availability of its predictive content delivery solutions, intended to help solve mobile video quality challenges. Akamai’s predictive content delivery (PCD) solutions were designed specifically to address the requirements of content providers, video platform providers and mobile network operators, and are available in two configurations: an SDK that can be integrated into existing or new media apps, or the turnkey, white-labelled Akamai WatchNow application. Both allow for pre-positioning, or caching, of new videos on the end-user device based on user preferences and viewing behavior. This is said to make searching for content easier and to allow for offline viewing.
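Editor’s note: a toy sketch of the pre-positioning idea, prefetching only the videos a user is likely to watch and only on an unmetered connection; the threshold and scores are invented, and this is not Akamai’s actual algorithm.

```python
# Hypothetical watch-probability scores derived from viewing history.
watch_probability = {
    "ep-101": 0.92, "ep-102": 0.85, "trailer-x": 0.30, "news-clip": 0.10,
}

def prefetch_queue(on_wifi: bool, threshold: float = 0.5) -> list[str]:
    """Pre-position likely watches, but only on an unmetered connection."""
    if not on_wifi:
        return []
    return sorted((v for v, p in watch_probability.items() if p >= threshold),
                  key=watch_probability.get, reverse=True)

print(prefetch_queue(on_wifi=True))  # ['ep-101', 'ep-102']
```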

Other top rating CDN providers can be viewed here and here.


Exactly how many mobile operators will actually buy this new CDN/video technology is not clear. Getting into the video delivery business entails a pretty steep learning curve, plus major capital and operational expense, and not every mobile operator will be able to afford to participate. Most likely it’ll be restricted to the largest wireless telcos for the foreseeable future, i.e., AT&T, VZW, Vodafone, etc. Barriers to entry will be much higher for 2nd and 3rd tier wireless telcos like Sprint and T-Mobile.

Verizon CFO on 5G for landline services to homes? VZ to sell Data Centers?

More 5G Talking the Talk:

5G networks will provide speeds that could compete with landline service inside homes, but it’s not yet known whether wireless telcos could profit from offering the service, Verizon Communications Chief Financial Officer Fran Shammo told an investor conference Tuesday. Mr. Shammo spoke at the Morgan Stanley Technology, Media and Telecom Conference, which was also webcast.

Verizon has been experimenting with 5G technology from multiple vendors in five different markets where the FCC gave the company permission to use 28 GHz spectrum for trial purposes. Based on what Verizon has learned, Shammo said the company believes 5G service could be launched in 2017 if spectrum were available.  

Author’s Note:  Brilliant!  Deploy 5G service in 2017 when the standards won’t be completed till the end of 2020!

“We’re trying to accelerate the FCC to clear spectrum,” Shammo said. He added that FCC Chairman Tom Wheeler recently visited Verizon’s location in Basking Ridge, N.J. – one of the places where the 5G technology has been deployed.

Shammo noted that when wireless carriers upgraded from 3G to 4G technology, costs decreased four- to five-fold (presumably on a per-bit basis). 5G has the potential to provide a similar cost advantage over 4G technology – at least with regard to video delivery, he said.


Separately, Shammo said:

  • Verizon might sell data centers to raise cash to “do something else to increase shareholder value.” 
  • The take rate for Verizon Custom TV has increased to about 40% since the company rebundled the offering, up from a previous level of about one-third.
  • There’ll be more conflicts between content and linear video providers, and more channels being dropped from video service provider lineups. He believes Verizon’s Go90 offering gives the company negotiating power with content providers, though, because Go90 offers an additional way to monetize content.

Reference:

http://www.verizon.com/about/investors/morgan-stanley-technology-media-and-telecom-conference-2016

Analysis of Cogent Communications Group Results, by David Dixon of FBR & Co.

Editor’s Notes:  

1. This post was written by FBR’s David Dixon; edited for clarity and content by Alan J Weissberger.

2. Cogent Communications was a pioneer in offering 100 Mb/sec Ethernet services, especially for Internet access, from 1999 to 2001. It is one of the few “new age” carriers that survived the dot-com bust/telecom crash. Cogent was founded in 1999, is headquartered in Washington, D.C. and is traded on the NASDAQ Stock Market under the ticker symbol CCOI.

3. Today, Cogent Communications is a multinational, Tier 1, facilities-based ISP. Cogent specializes in providing businesses with high-speed Internet access, point-to-point transport and colocation services. Cogent’s reliable Tier 1, MPLS-enabled optical IP network connects to 2,200 office buildings and data centers. Cogent owns and operates 51 data centers in North America and Europe, used primarily for colocation services.


Analysis of Earnings Report:

Cogent reported another mixed quarter. Revenues for 4Q15 grew 8.7% YOY to $105.2M, in line with consensus. Corporate revenues maintained a double-digit growth rate at 17.1% YOY. Over the last five years, corporate revenues as a percentage of the total have steadily grown to 58.1% (versus 48.9% in 2011), compared to 41.9% for the higher-margin, but pressured, net-centric business.

Despite continued declines in the average price per megabit ($1.52 versus $1.81 in 4Q14 and $1.57 in 3Q15), CCOI has so far managed to offset this through increased customer count, driven by higher rep productivity (see the arithmetic sketch following the list below). Rep sales productivity of 6.3 was the highest on record for the company, driven by a combination of:

(1) strong product demand,

(2) better training, and

(3) matching seasoned reps to correct accounts.
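Editor’s note: the price/volume offset can be made concrete. Revenue growth of 8.7% against a 16% YOY price decline implies traffic growth on the order of 29%; this rough calculation treats all revenue as usage-based, which overstates its precision.

```python
rev_growth = 1.087                 # 4Q15 revenue vs. 4Q14
price_now, price_ago = 1.52, 1.81  # $ per megabit, 4Q15 vs. 4Q14

# revenue = price * volume, so implied volume growth is:
implied_volume_growth = rev_growth / (price_now / price_ago) - 1
print(f"implied traffic growth ~ {implied_volume_growth:.1%}")  # ~29.4%
```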

Excluding capex-related notes payable, capex of $4.9M was materially lower than the consensus estimate of $10.6M and our estimate of $11.2M, buoying free cash flow. We continue to see downside risk to the net-centric business due to our nonconsensus view that the FCC supports paid settlement peering and carrier network architecture shifts to decentralized lower latency compute platforms.


Key Points

 ■ 4Q15 results recap. Revenues for 4Q15 of $105.2M were in line with consensus and above our estimate of $101.6M. Adjusted EBITDA of $34.7M compared to consensus of $35.7M and our estimate of $34.8M. Diluted EPS was $0.06, versus our Street-comparable estimate of $0.08, due to higher income tax expense.

■ Downside risk to net-centric revenues. We think the FY16 revenue guidance range of 10% to 20% YOY, combined with likely lower capex and principal payments on capitalized leases, will be challenging. We expect the net-centric business to remain under pressure due to uncertainty associated with settlement-free peering, offsetting growth in the corporate business. MWC 2016 signaled strong carrier momentum in network architecture shifts to decentralized, lower-latency compute platforms.

■ Return of shareholder capital. The board of directors increased the dividend to $0.36 per share, up from $0.35 per share. For full-year 2015, CCOI paid $66.3M in dividends and $39.4M in share repurchases. At year-end 2015, $47.8M of share repurchase authorization remains, which is set to expire at the end of 2016. Management noted that it expects gross leverage to fall below 4.25:1 in 2016, which will allow access to $115M currently in the builder basket.


Will revenue growth slow longer term?  Timeframe: 6 to 18 months

While higher capex and capitalized leases can help drive revenue growth, Cogent’s on-net growth prospects are more uncertain going forward as Web 2.0 providers migrate to direct-carrier relationships over time (GOOG, FB, NFLX, etc.), particularly as content distribution becomes bundled with other services. In our view, the company has done well to avoid relationships with high-bandwidth CDNs, which pressure competitors to a greater extent. We see multiple sellers of high-bandwidth pricing below $1/Mbps (below Cogent) and note that the market rate for peering is $0.50 to $1/Mbps; contrary to Cogent’s view, we believe the FCC supports this where traffic imbalances exist. Looking further ahead, a slowing of network footprint growth, coupled with continued per-Mbps pricing declines, implies a challenging revenue and FCF margin trajectory unless Cogent succeeds in further penetrating its existing footprint or raises capex guidance. We expect Cogent to show modest momentum from the lower-margin corporate and off-net segments.

Can Cogent drive margins higher in the longer term?  Timeframe: 12 to 24 months

The key question, in our view, is the relevancy of Cogent’s network to the Internet backbone amid a major shift in network architectures. The impact of changes to the settlement-free peering model on Cogent’s cost structure is negative. Furthermore, we see more pressure on Cogent from incumbent operators and policy makers in Europe. From an M&A perspective, it is important to note that many on-net customers are multi-homed to Cogent and other providers, such that, if combined, the net (high-margin) revenue loss on leased network facilities could be significant. FCF results to date have been underwhelming, and we believe caution is warranted.

Will capex management and incremental margin improvement lead to accelerated Free Cash Flow (FCF) generation?  Timeframe: 12 to 24 months

We believe the revenue guidance range, coupled with lower capex and capitalized leases, will prove challenging. Management has not generated adequate free cash flow growth for equity holders, in our opinion. The high-FCF-margin net-centric business is under pressure from an uncertain outlook for settlement-free peering in Europe and the U.S. The low-FCF-margin corporate segment is doing reasonably well. Under the new settlement-free peering agreement with Verizon, Cogent will not generate revenue from the increasing number of content customers behind Edgecast’s differentiated CDN platform, which is scaling up nicely, and this may increase pressure on traffic ratios at Cogent’s other interconnection points.

Conclusions:

We believe that current industry dynamics are challenging and that content relationships are changing amid a major shift in network architectures that will pressure the settlement-free peering model. Cogent appears increasingly challenged to reach an inflection point in free cash flow generation.


March 8th & 9th Presentations:

Dave Schaeffer, Cogent’s Chief Executive Officer, will present at the two upcoming conferences:

The Deutsche Bank 2016 Media, Internet & Telecom Conference is being held at The Breakers Hotel in Palm Beach, FL. Dave Schaeffer will be presenting on Tuesday, March 8th at 2:45 p.m. EST.

The Raymond James 37th Annual Institutional Investors Conference is being held at the JW Marriott Grande Lakes in Orlando, FL. Dave Schaeffer will be presenting on Wednesday, March 9th at 8:05 a.m. EST.

Investors and other interested parties may access a live audio webcast of the conference presentations by going to the “Events” section of Cogent’s website at www.cogentco.com/events. A replay of the Deutsche Bank webcast will be available for 90 days following the presentation and a replay of the Raymond James webcast will be available for 7 days following the presentation.

 

Reference:

Cogent Communications Reports Fourth Quarter 2015 and Full Year 2015 Results 

http://www.cogentco.com/files/docs/news/press_releases/Earnings_Release_…

IHS: Huawei Tops Optical Network Equipment Vendors for 2015; TMR: PON Equipment Mkt To Grow 20.7%

The global optical network equipment market totaled $12.5 billion in 2015, growing 3% from the prior year, reports IHS Inc in its Optical Network Hardware Market Tracker report.

“After a subdued 3Q15, this quarter’s results represent a much-needed boost to the optical hardware market, with revenues demonstrating the fastest-growing fourth quarter for five years. Huawei and Alcatel-Lucent, in particular, performed well in the quarter, both gaining significantly in share compared to 3Q15,” said Alex Green, senior research director for IT and networking at IHS.

OPTICAL HARDWARE MARKET HIGHLIGHTS:

·    In the 4th quarter of 2015 (4Q15), worldwide optical spending was $3.5 billion, up 17 percent sequentially, and up 10 percent from the year-ago quarter (4Q14)

·    Spending on wavelength-division multiplexing (WDM) equipment in 4Q15 totaled $3.1 billion, up 18 percent from 4Q14

·    EMEA (Europe, Middle East, Africa) has not yet come out of its era of slow optical spending, showing a rolling quarter decline of 3 percent in 4Q15

·    And in North America there was somewhat of a bounce back in Q4 after a flat Q3, with the region seeing 5 percent rolling 4-quarter growth

·    For the full-year 2015, the top 5 optical hardware market share leaders are, in rank order, Huawei, Ciena, Alcatel-Lucent, ZTE and Infinera.  

Editor’s note: it’s somewhat surprising that neither Cisco Systems nor ADVA Optical Networking is among the top 5 optical network equipment vendors.

OPTICAL REPORT SYNOPSIS:

The quarterly optical network hardware report provides worldwide and regional market size, vendor market share, forecasts through 2020, analysis and trends for metro and long haul WDM and SONET/SDH equipment, Ethernet optical ports, SONET/SDH/POS ports and WDM ports. Companies tracked: Adtran, ADVA, Alcatel-Lucent, Ciena, Cisco, Coriant, ECI, Fujitsu, Huawei, Infinera, NEC, Padtec, Transmode, TE Connectivity, ZTE, others.

For more information about the report, contact the sales department at IHS in the Americas at +1 844 301 7334 or [email protected]; in Europe, Middle East and Africa (EMEA) at +44 1344 328 300 or [email protected]; or Asia Pacific (APAC) at +604 291 3600 or [email protected].

RELATED NEWS


Separately, the global passive optical network (PON) equipment market was valued at US$24.81 bn in 2014 and is expected to expand at a CAGR of 20.7% from 2015 to 2023 to reach US$163.5 bn in 2023, according to the Transparency Market Research report “Passive Optical Network (PON) Equipment Market – Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2015 – 2023.”

High investment in research infrastructure, along with technological advancements in Asia Pacific, serves as an excellent opportunity for the market and is anticipated to augment the growth of the PON equipment market in the coming years.

By component, the global PON equipment market is subdivided into optical cables, optical power splitters, optical filters and wavelength division multiplexers/de-multiplexers. The wavelength division multiplexer/de-multiplexer is a flexible, low-cost solution; moreover, these components enable operators to make full use of the available bandwidth. Thus, wavelength division multiplexers/de-multiplexers were the largest contributor to the PON equipment market in 2014, accounting for a market share of approximately 50%.

The global PON equipment market, by structure, is classified into two segments: Ethernet passive optical network (EPON) equipment and gigabit passive optical network (GPON) equipment. GPON was the largest contributor to the PON equipment market in 2014, accounting for a market share of more than 65%, particularly due to wide implementation of GPON equipment in corporate and government projects. Moreover, GPON provides stronger security, higher bandwidth and a larger downstream rate compared to EPON. Thus, the usage of GPON equipment is increasing at a rapid rate.
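Editor’s note: to put the GPON bandwidth advantage in perspective, standard GPON (ITU-T G.984) runs 2.488 Gb/s downstream versus EPON’s symmetric 1 Gb/s (IEEE 802.3ah), shared among the subscribers on a splitter. The 1:64 split ratio below is chosen for illustration.

```python
splits = 64  # one OLT port shared by 64 ONTs (illustrative split ratio)

for tech, downstream_gbps in (("GPON (G.984)", 2.488), ("EPON (802.3ah)", 1.0)):
    per_sub_mbps = downstream_gbps * 1000 / splits
    print(f"{tech:15s} {downstream_gbps:5.3f} Gb/s down "
          f"-> {per_sub_mbps:5.1f} Mb/s per subscriber at 1:{splits}")
```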

Get Sample Report Copy OR for further inquiries, click here: http://www.transparencymarketresearch.com/sample/sample.php?flag=S&rep_id=2024

The gigabit passive optical network equipment market, by component, is classified into two segments: optical line terminal (OLT) and optical network terminal (ONT). ONT occupied the largest market share in 2014, accounting for approximately 64%, owing to the low cost of optical network terminals and increasing demand among end users.

The Ethernet passive optical network equipment market, by component, is bifurcated into two segments: optical line terminal (OLT) and optical network terminal (ONT). ONT acquired the largest market share in 2014, accounting for approximately 62%, owing to increasing usage of optical network terminals among customers.

Facebook’s Telecom Infra Project May Be Equivalent of Open Compute Project

The goal of Facebook’s Telecom Infra Project (TIP) is to make it easier and less expensive for telecommunications companies to connect people in places that don’t have cellular service, from urban basements to rural villages. By launching the TIP initiative, Facebook is also trying to send the message that it wants to work with telecom firms rather than replace them.

Jay Parikh, Facebook’s Global Head of Engineering and Infrastructure, wrote in a blog post announcing the project:

We know from our experience with the Open Compute Project that the best way to accelerate the pace of innovation is for companies to collaborate and to work in the open. To kick-start this work, TIP members such as Facebook, Intel, and Nokia have pledged to contribute an initial suite of reference designs, while other members such as operators Deutsche Telekom and SK Telecom will help define and deploy the technology as it fits their needs.

TIP members will work together to contribute designs in three areas — access, backhaul, and core and management — applying the Open Compute Project models of openness and disaggregation as methods of spurring innovation. In what is a traditionally closed system, component pieces will be unbundled, affording operators more flexibility in building networks. This will result in significant gains in cost and operational efficiency for both rural and urban deployments. As the effort progresses, TIP members will work together to accelerate development of technologies like 5G that will pave the way for better connectivity and richer services.

Facebook has joined with Intel Corp. and Nokia Corp. and carriers including Deutsche Telekom AG to share information about designing cellular networks, and to make these blueprints available for anyone to use and improve upon.  

The initial members of TIP comprise about 30 companies, including big and small carriers and equipment makers. The initiative is open-source, with companies soliciting ideas for design improvements from anyone who wants to contribute. Facebook’s initiative is similar to the Open Compute project it launched in 2011 to try to improve computer-server hardware.

The social networking company’s move to spearhead progress in mobile networks comes amid ongoing tensions between Silicon Valley and telecom firms. Some of the world’s biggest tech and telecom firms are gathering in Barcelona this week for the mobile industry’s biggest annual conference.

Telecom executives (like AT&T and Verizon) say Internet giants such as Facebook and Google Inc. are profiting at their expense. For example, telecom operators complain they do the grunt work of building towers and other infrastructure in the far reaches of the world, the backbone necessary to deliver Internet services.   Online companies are mostly spared that expense. Recently, they have started to offer messenger services such as Facebook’s WhatsApp, which has eaten away at the telecom industry’s old cash cows—text messages and phone calls. The Internet services also make money off ads.  Google Voice also cuts into telecom revenue for long distance calls and texts.

AT&T Inc. is in active discussions to join the Facebook effort but hasn’t yet joined, according to a person familiar with the matter.   Other operators remain wary of Facebook’s new initiative, worrying that it could eventually lead to a direct challenge to their core business of building and running networks.

“For the time being, Facebook needs operators to test these new systems. But tomorrow, maybe they won’t need us anymore,” a European telecommunications executive told the WSJ.

Tech giants have already dipped into the arena of telecom companies. Facebook is exploring ways to connect remote areas to the Internet, most notably by using drones to beam data. Google has taken a more aggressive stance, building out fiber networks and offering wireless service in the U.S.

To illustrate the advantages of testing new approaches to connectivity, Facebook, in collaboration with Globe, recently launched a pilot deployment based on TIP principles to connect a small village in the Philippines that previously did not have cellular coverage. In addition, EE is planning to work as part of TIP to pilot a community-run 4G coverage solution that can withstand the challenges presented by the remote environment of the Scottish Highlands to connect unconnected communities. Testing new technologies and approaches and sharing what we learn with the rest of the industry will enable operators to adopt new models with full confidence that they will be sustainable.

Working to enable operators and the broader telecom industry to be more flexible, innovative, and efficient is important for expanding connectivity. For Facebook, TIP is a new investment that ties into our other connectivity efforts already under way through Internet.org.

According to a blog post by Cade Metz:

Facebook plans on building everything from new wireless radios—the hardware that shuttles wireless signals to and from our phones—to new optical fiber equipment that can shuttle data between those radios. Then, the company says, it will “open source” the designs, so that any wireless carrier can use them.

The hope is that this will lead to better wireless networks—wireless networks that can keep up with all the stuff we’re doing on our cell phones, from listening to music and watching videos to, yes, diving into virtual reality. “These really immersive experiences are all looming,” says Facebook’s Jay Parikh. They’re looming not only for the telcos, but for, well, Facebook itself. That’s why the company is launching this new project. Facebook wants to ensure that the telcos can deliver all the video—and all the virtual reality—it will stream across its social network, all over the world, in the years to come.

Late last month, Facebook launched a new effort inside the Open Compute Project that seeks to help telecoms improve the hardware inside their data centers. Now, it also aims to help them improve the hardware across the rest of their networks—to help them expand and enhance their networks at a much faster rate. “The only path that I know that works is to basically take a couple of pages of our playbook for open source software and the hardware and data center work we’ve done, and try to approach the telecom infrastructure problem in a similar vein,” Parikh says.

For Axel Clauberg, a vice president of architecture at Deutsche Telekom, the project makes good sense—not just for Facebook but for the telecoms. “We believe that the exponential growth of Internet traffic requires new approaches,” he says. “The Open Compute Project has proven that open specifications for hardware, combined with an active community, can have a drastic impact on efficiency and cost. TIP will trigger the same for all areas of the network.”

Erik Ekudden—a tech strategist at Ericsson, which, like Nokia, builds much of the gear that telcos use outside the data center—also sees potential in this fundamental idea. Lessons that companies like Facebook have learned in the data center, he says, could help telecoms improve their mobile networks. But he also says that ideas can move the other way, from the telecoms to Internet giants like Facebook.

Executives from a handful of top carriers said they will evaluate Facebook’s new Telecom Infra Project (TIP), but they generally offered a lukewarm view of Facebook’s stated effort to develop new technologies “and reimagine traditional approaches to building and deploying” networks.

“We’ll take a look at TIP when it’s a little more mature,” said Verizon’s Adam Koeppe, VP of the carrier’s access technology planning, during a press event here at the Mobile World Congress trade show. “TIP is being looked at.”

“We’ll make use of everything we can,” said Matt Beal, Vodafone’s technology strategy and architecture director, explaining that the carrier would use technologies including open source software to improve its network and services. But Beal stopped short of saying Vodafone would participate in TIP.

 

[Image: TIP focus areas]

IHS: Network Functions Virtualization (NFV) Orchestration Software Vendors Analyzed

IHS-Infonetics released excerpts from its NFV Orchestration Software Vendor Leadership Analysis, which profiles and analyzes 10 leading network functions virtualization (NFV) orchestration software vendors: Brocade, Ciena Blue Planet, Cisco, Dell, Ericsson, Hewlett Packard Enterprise, Huawei, Juniper, NEC/Netcracker and Nokia. 

The report examines vendors’ approaches to and overall activities in the NFV orchestration software market to understand how suppliers are approaching this emerging opportunity and gauge the most likely market winners as the market matures. 

“Each of the vendors profiled in our network functions virtualization (NFV) leadership report brings a unique vision to the market, is providing innovation and thought leadership to support that vision, and will play an important role in shaping NFV orchestration with its products, partnerships and contributions to open source initiatives,” said Michael Howard, senior research director and advisor for carrier networks at IHS. 

“The big revenue opportunities in NFV are with big service providers who want a prime vendor or two that can put together all the multi-vendor software, hardware, partners and services to develop and deploy virtualization to help them meet their fairly urgent needs for automation, agility and services differentiation,” Howard said. 

NFV ORCHESTRATION VENDOR HIGHLIGHTS (in alphabetical order):

  • Brocade has become a strong contender in the NFV market via acquisitions and has created a portfolio including OpenDaylight software distribution, virtual routers (vRouters) and other virtual network functions (VNFs)
  • Thanks to multi-vendor/domain functionality, a standards-based NFV platform and professional services, Blue Planet, a division of Ciena, is gaining visibility with large service providers who’ve traditionally only worked with incumbent suppliers
  • Possessing many attributes the new world of NFV requires — including existing customer relationships, data center IT experience, NFV orchestration and enough employees to address many NFV opportunities — is telecom giant Cisco
  • Dell brings its NFV hardware and software portfolio in combination with a number of partnerships with well-known NFV software suppliers to showcase an open, standards-based NFV platform
  • Ericsson is collaborating with other industry players to bring NFV to an industrial scale, providing a full suite of virtualized network applications, network and cloud managers, analytics, consulting and system integration services
  • Hewlett Packard Enterprise was one of the earliest large vendors to invest in a major strategic effort to become a significant NFV player and has all the ingredients to be a prime supplier for NFV projects
  • A serious contender in the service provider NFV and software-defined networking (SDN) markets, Huawei continues to invest for success and has all the major components to serve as a main vendor for large operators
  • Juniper’s use of standard protocols allows it to create third-party partnerships and increase the variety of NFV use cases, boding well as large service providers develop vendor-agnostic networks to avoid vendor lock-in
  • With deep operations and business support systems (OSS/BSS) expertise, NEC/Netcracker’s NFV software has the operational functionality needed for hybrid networks containing both physical and virtual network functions
  • Nokia is one of the main players for NFV management and orchestration (MANO) and is well suited to be a prime vendor for not just NFV MANO endeavors, but the full range of NFV and SDN projects

 

For more information about the report, contact the sales department at IHS in the Americas at +1 844 301 7334 or [email protected]; in Europe, Middle East and Africa (EMEA) at +44 1344 328 300 or [email protected]; or Asia Pacific (APAC) at +604 291 3600 or [email protected].


Please note that the content manager for this blog website (yours truly) has a different opinion of the NFV market. As we’ve outlined in previous posts, the main issues are: no standard Management & Orchestration (MANO) functional block, a lack of APIs, and no implementable standards or backward compatibility with the installed base of real/physical network appliances.  

We’ve also stated that OPNFV could potentially address those shortcomings, but we’ve not followed that open source consortium’s progress.  Failing to address these shortcomings will result in a fractured market in which different vendors sell various “virtual appliances” whose APIs tie into their own Management & Orchestration entity, or into one created by a partner NFV company.
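To make that fragmentation concern concrete, here is a purely illustrative Python sketch of what happens when each vendor ships its own orchestration interface instead of a standard MANO API: the operator ends up writing and maintaining one adapter per vendor. All class and method names here are hypothetical, not real product APIs.

# Purely illustrative: two "vendors" expose incompatible calls for the same
# operation (instantiating a virtual network function), so the operator must
# write per-vendor glue that a standard MANO interface would make unnecessary.

class VendorAOrchestrator:
    def deploy_vnf(self, descriptor: dict) -> str:
        return f"vendor-a-instance-{descriptor['name']}"

class VendorBOrchestrator:
    def instantiate(self, vnf_name: str, flavor: str) -> str:
        return f"vendor-b/{vnf_name}/{flavor}"

class OperatorMANOAdapter:
    """One adapter per vendor: the cost of having no standard MANO API."""
    def __init__(self, backend):
        self.backend = backend

    def launch_firewall(self) -> str:
        if isinstance(self.backend, VendorAOrchestrator):
            return self.backend.deploy_vnf({"name": "vfirewall", "cpu": 4})
        if isinstance(self.backend, VendorBOrchestrator):
            return self.backend.instantiate("vfirewall", flavor="medium")
        raise TypeError("no adapter for this orchestrator")

for backend in (VendorAOrchestrator(), VendorBOrchestrator()):
    print(OperatorMANOAdapter(backend).launch_firewall())

Every new vendor added to the network means another branch in the adapter, which is exactly the lock-in and integration burden a standard MANO functional block is supposed to eliminate.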


In an email reply on Feb 18, 2015, Michael Howard of IHS wrote:

“I see many directions for “industry standard” NFV orchestration: a split in the OPNFV group on the most difficult part, the MANO; the HP OpenNFV and other vendors’ versions; at least 1 initiative in Asia; Telefonica’s OpenMANO.

The issue for operators – and vendors – is that most large operators want standards – and NFV MANO in particular — but don’t want to wait until they are available. This leaves vendors and operators to develop viable “platforms” into which pieces and parts can be mixed and matched—and these “platforms” are available or becoming available from many of the telecom vendors, the OSS companies, and some smaller specialists. Few of the operator contributors to OPNFV are sitting around awaiting the results—they can’t wait to get into the game, to find out how to bring automation to their services and networks.”

FBR’s David Dixon on Pacific DataVision Wireless (PDVW) & UPDATE on Akamai (CDN global leader now facing challenges)

Written by David Dixon of FBR Inc; edited by Alan J Weissberger, IEEE ComSoc Content Manager.  

NOTE: Akamai March 8th update in II. below.

I. Summary for PDVW:

Pacific DataVision Wireless (PDVW)’s pending 900 MHz rebanding application is progressing in line with company expectations. Management has requested a meeting with the FCC next week to get an updated read on the application’s status and to provide details of additional industry support. We believe the timing is fortuitous, as it likely coincides with the FCC completing its work on the application. We view the upcoming meeting as a positive step toward a Notice of Proposed Rulemaking (NPRM).

We are not privy to the details of current discussions with industry incumbents, due to non-disclosure agreements in place, but we believe significant progress has been made over the past six months. A consensus industry position would be a positive, but with so many incumbent users we do not think the FCC sees one as necessary to move forward with an NPRM. In cases such as PDVW’s, where many parties are involved and consensus is difficult to achieve, we believe the FCC is more likely to weigh the petition’s benefits and make a determination. We have greater confidence in a positive outcome in the short term.

Key Points:

■ Upcoming FCC meeting. PDVW’s management will be meeting with the FCC next week. We believe this is an opportunity for management to showcase positive progress being made with incumbents, as well as to seek an updated view of the FCC’s current thinking, which, we believe, will be positive.

■ New spectrum acquisition. PDVW has acquired additional spectrum (~100 channels) for an average of $0.17/MHz/PoP. The licenses are in markets where PDVW holds fewer channels than its average elsewhere. While the price was higher than the $0.06/MHz/PoP paid to Sprint, these licenses are in 10 of the top markets, which warrants a higher price; in our view, they were still acquired below market value. Management has been moving cautiously with regard to how much it is willing to pay for spectrum.

■ Slower PTT buildout. Management has paused further market launches due to a slower-than-expected ramp-up in its initial eight markets, similar to what Nextel experienced early on.

(1.)  Certain site developments are taking longer than planned (zoning, rent negotiations, etc.), but the Chicago/NYC/Philadelphia/DC/Baltimore markets should be fully operational by the end of April.

(2.)  Despite positive customer feedback, third-party dealer sales have lagged; some dealers prefer to wait until the network build is completed and tested. In response, management has encouraged the hiring of dedicated sales reps. While it will take time to work out distribution issues, regulatory developments are the driver of PDVW shares, in our view.


Can Pacific DataVision fast-track the FCC approval process to further enhance spectrum value?

Answer: If FCC approval to convert Pacific DataVision Wireless’ narrowband spectrum to 3 MHz x 3 MHz LTE occurs faster than expected (before June 2016), it could be a big positive for the company. Furthermore, if Pacific DataVision is successful in acquiring 80%-90% of the existing spectrum band from incumbent operators, it should provide additional flexibility, which should be accretive to valuation.
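For readers unfamiliar with the rebanding arithmetic, the back-of-the-envelope Python sketch below shows the scale of the conversion. It assumes the 900 MHz band is a 5 MHz x 5 MHz paired allocation channelized into 12.5 kHz narrowband channels, with a 3 MHz x 3 MHz block carved out for broadband LTE; the figures are illustrative assumptions, not taken from the FCC filing.

# Back-of-the-envelope rebanding arithmetic (all inputs are assumptions).

BAND_MHZ = 5.0                 # paired uplink/downlink allocation, MHz each way
NARROWBAND_CHANNEL_KHZ = 12.5  # assumed legacy SMR channel width
LTE_BLOCK_MHZ = 3.0            # proposed broadband carve-out, MHz each way

channels_per_band = BAND_MHZ * 1000 / NARROWBAND_CHANNEL_KHZ
channels_displaced = LTE_BLOCK_MHZ * 1000 / NARROWBAND_CHANNEL_KHZ
remaining = channels_per_band - channels_displaced

print(f"Narrowband channels in the band:  {channels_per_band:.0f}")
print(f"Channels inside the 3 MHz block:  {channels_displaced:.0f}")
print(f"Channels left for incumbents:     {remaining:.0f}")

Under these assumptions, the broadband carve-out displaces a majority of the band’s narrowband channels, which is why incumbent relocation (and hence FCC rulemaking) is the gating factor for PDVW.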

We do not believe Pacific DataVision is at risk of changes in the spectrum supply curve for capacity spectrum (>2 GHz), given that its spectrum is in the low band and this remains a scarce asset. We believe extra spectrum capacity can be leased in location-specific pockets in each region to serve corporate demand for private LTE networks. Moreover, if Pacific DataVision raises capital to acquire additional spectrum, this could provide increased synergies, revenue upside, and time-to-market advantages for the existing business and should be accretive to our valuation and price target. Management’s past success at Nextel, industry knowledge, reputation, and experience with 800 MHz SMR spectrum rebanding are key.

In contrast to capacity layer spectrum (>2 GHz), we forecast increasing value of coverage layer spectrum (<1 GHz) due to:

(1) the strategic nature of this spectrum as the lowest-cost source of spectrum for wide coverage areas,

(2) the relative scarcity of this spectrum asset, and

(3) attractive comparables.

The recent H block auction won by DISH valued higher-frequency spectrum at $0.61/MHz/PoP for 5 MHz x 5 MHz of 1.9 GHz spectrum in the top 20 markets. AT&T Inc.’s spectrum acquisition from QUALCOMM Incorporated valued low-frequency spectrum at $0.91/MHz/PoP.
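Since $/MHz/PoP pricing comes up repeatedly in this section, here is a minimal Python sketch of the metric itself. The inputs below are hypothetical, chosen only to show how a headline number like $0.61/MHz/PoP falls out of a deal’s total price, bandwidth, and population covered; they are not the actual H block figures.

# Minimal sketch of the $/MHz/PoP spectrum valuation metric:
# price per MHz-PoP = total price / (MHz of spectrum x population covered).

def price_per_mhz_pop(total_price_usd: float, bandwidth_mhz: float,
                      pops: float) -> float:
    """Dollars paid per MHz of bandwidth per person covered (PoP)."""
    return total_price_usd / (bandwidth_mhz * pops)

# Hypothetical example: $1.564B for 10 MHz (5 MHz x 5 MHz paired) covering
# 256.5M PoPs implies roughly $0.61/MHz/PoP.
print(round(price_per_mhz_pop(1.564e9, 10.0, 256.5e6), 2))  # -> 0.61

The metric normalizes deals of very different sizes onto one scale, which is what allows the comparison above between a nationwide auction result and a private low-band transaction.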


II. Akamai Technologies Inc.: Solid 4Q15 Results; Weak 1Q Outlook; Secular Challenges Are Growing:

NOTE: Akamai March 8th update below the Feb 10th earnings report analysis.

On Feb 10, 2016, Akamai Technologies Inc. (AKAM) announced solid 4Q15 earnings/revenue results and a new $1B share repurchase program. Revenue for 4Q was modestly above Wall Street estimates, driven by double-digit growth in the performance and security solutions and service and support solutions segments. The media delivery solutions revenue decline of 1.8% YoY was better than feared. Weak 1Q16 guidance is driven by aggressive pricing and revenue declines from its top two customers (13% of revenue, heading to 6% by mid-2016) as they migrate to “do it yourself” (DIY) platforms.

We see greater DIY (and repricing) risks in the CDN business, as foundational data center and fiber assets are established for more players today than in 2011, providing low incremental cost opportunity. An intense sales focus has the performance and security solutions business ramping nicely, but we see secular challenges as the enterprise segment bifurcates. Specifically, we see more migration to cloud platforms, which is likely to confine AKAM to a reduced (partnership-based) role for companies’ CDN, Web security, and enterprise security needs. A major enterprise security acquisition is necessary to mitigate the risk of a value trap, but this appears unlikely with management favoring the benefits of superior cash flow.


Will sales force investments and international expansion pay off?

Akamai continues to accelerate investment in its sales force. Most of the company’s hiring will be focused internationally, where the company believes the revenue opportunity could one day equal North America’s. We think the growth seen in international revenue supports the company’s decision to aggressively expand sales capacity and that the move could ultimately pay off.

Can newer products contribute enough to offset maturing core markets and drive sustained mid-teens or better growth?

Akamai’s focus over the past few years has been to increasingly diversify its business beyond media delivery and Web performance. Through acquisitions and investments, the company entered new end markets and doubled its addressable market. Akamai’s newer product groups (Web security, carrier products, and hybrid cloud optimization) are growing well, but overall growth is still determined by performance in Akamai’s slowing core markets. These newer businesses are achieving scale, but the slowdown in the core CDN business is occurring faster than expected, and the magnitude and timing of OTT opportunities are unclear.

Will Akamai’s business model be pressured over time by the irreversible mix shift of Internet traffic toward “two-way” content increasingly distributed on cloud-based architectures that provide compute and storage?

While the amount of Internet traffic is growing, there is an increase in DIY CDN business, and the amount of static, Akamai-cacheable data on the Web is falling as a percentage of the total amount of data with which customers interact. In 1999, the Web was a read-only medium with very little user-generated content, customization, etc. Today, the flow is much more bidirectional (and therefore uncacheable). We do not see that Akamai has a play here; it may resist this architecture shift, as moving into these growth areas would likely cannibalize the CDN revenue base. More acquisitions to enhance the enterprise security portfolio in the interim are likely as the company continues to diversify away from the commodity CDN business segment. Yet the market has responded by moving toward distributed, layered IaaS/PaaS systems (e.g., Amazon Web Services, aka AWS) that unify storage access behind HTTPS APIs (versus FTP) and provide compute and storage (versus caching of object storage). Improved performance, reliability, and scale are arriving fast, though we expect many cloud customers that are not yet scaled up will still require a CDN for performance enhancement.
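To make the cacheability point concrete, here is a toy Python sketch of the split between static (edge-cacheable) and personalized (origin-only) responses. The header values follow standard HTTP Cache-Control semantics, but the cache and origin below are deliberately simplified stand-ins, not Akamai’s actual architecture.

# Toy model: a CDN edge can serve responses the origin marked cacheable
# without another origin trip, while personalized "two-way" responses
# must go back to the origin on every request.

edge_cache = {}  # path -> cached body (toy stand-in for an edge server)

def origin(path):
    """Pretend origin server: static assets are cacheable, feeds are not."""
    if path.endswith((".js", ".css", ".png")):
        return b"static asset", {"Cache-Control": "public, max-age=86400"}
    return b"personalized feed", {"Cache-Control": "private, no-store"}

def serve(path):
    """Serve from the edge when allowed; otherwise always hit the origin."""
    if path in edge_cache:
        return edge_cache[path]            # cache hit: no origin round trip
    body, headers = origin(path)
    cc = headers.get("Cache-Control", "")
    if "no-store" not in cc and "private" not in cc:
        edge_cache[path] = body            # static object: keep at the edge
    return body

serve("/app.js")   # fetched once from the origin, then served at the edge
serve("/feed")     # never cached; every request goes back to the origin

As the share of traffic looking like "/feed" grows relative to "/app.js", the portion of the workload a classic CDN can accelerate by caching shrinks, which is the secular pressure described above.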

AKAM Conclusions:

We believe Akamai Technologies is in transition as its core media delivery business matures. The company has stepped up its diversification efforts, including (1) broadening the product set, (2) ramping sales hiring, and (3) expanding internationally. The long-term impact of these efforts could be a positive, but we see increased pressure on Akamai’s CDN-based business model over time, driven by the irreversible mix shift of Internet traffic toward “two-way” content increasingly distributed on cloud-based architectures that provide compute and storage. We view the risk/reward at current levels to be negative, as near-term positive momentum is more than offset by fundamental challenges in the CDN segment.

March 8, 2016 Update after David Dixon attended Akamai’s Annual Investor Conference:

“We saw nothing to allay our concerns regarding greater do it yourself (and repricing) risks in the CDN business as foundational datacenter and fiber assets are established for more players today than in 2011, providing low incremental cost opportunities to deploy distributed compute platforms via technology partnerships with key vendors.

An intense sales focus has the performance and security solutions business ramping nicely, but we see secular challenges with the enterprise segment bifurcating. Specifically, we see more migration to cloud platforms, which is likely to confine AKAM to a reduced (partnership-based) role for these companies’ CDN, Web security, and enterprise security needs.

A major enterprise security acquisition is necessary to mitigate the risk of a value trap, but this appears unlikely with management favoring the benefits of superior cash flow.”

References:

https://www.akamai.com/us/en/about/

https://en.wikipedia.org/wiki/Akamai_Technologies

http://www.ir.akamai.com/phoenix.zhtml?c=75943&p=quarterlyearnings


III. CDN Competitor Limelight Networks, Inc. (LLNW):

LLNW’s improvement in profitability highlights management’s focus on improving the company’s cost structure, including head-count reductions (down 47 heads sequentially), efficient infrastructure (fewer servers and racks), and software changes to improve server capacity. While 4Q traffic declined sequentially, the average selling price increased due to increased streaming traffic from higher-paying customers who demand higher quality and reliability of service. Holiday season traffic hit another record during the quarter. Management believes revenues will increase in 2016, driven by traffic increases, but partially offset by the expected continued decline in average selling prices. In the face of further commoditization and low barriers to entry, it becomes imperative for LLNW to find ways to grow the top line, as cost cutting may not be sufficient to sustain profitability, in our view. LLNW’s ability to maintain a positive revenue growth trajectory is still unclear.

Downward pricing pressure and competition will continue to be the primary variables in the commoditized high-volume content delivery market. Pricing continues to be an issue in the industry, although there has been more stability in recent quarters.


Will CDN competition and pricing pressure worsen?

Limelight’s decision not to renew some uneconomic contracts has been a headwind for CDN revenue. Should pricing worsen, there could be even more pressure on the CDN business. In general, we believe competitors will maintain price discipline due to rising peering costs and that predatory pricing for market share gains is abating; this should allow pricing to stabilize at a better level over the long run. Over the next 6 to 12 months, we think solid volumes should be able to offset at least some of the headwinds from pricing pressure.

How long will it take Limelight’s new management team to return the company to growth?

While the company has established a strategy for its turnaround, we believe this will be a multi-quarter process and that shares could be range-bound until signs of improved execution or growth appear. The transition has faced some bumps already, and this could continue. We believe the company’s goal of stabilizing the CDN business and achieving growth through the value-added services (VAS) business could work, but it will take time before we start to see the impact in fundamental results. Management needs to post consistently positive results to combat concerns that pure CDN players are at risk from the combination of:

(1) increased competition for a commoditized service,

(2) higher customer churn,

(3) technology risk from HTTP 2.0 and SPDY, which significantly improve website latency (see the toy model after this list),

(4) higher peering interconnection costs, and

(5) CDN functions increasingly being deployed by content and end-user networks.
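As promised in item (3), here is a toy round-trip model in Python of why HTTP 2.0/SPDY multiplexing cuts page latency. It counts only round trips and ignores bandwidth, server think time, and connection setup, so treat it as an illustration of the mechanism rather than a measurement; all numeric inputs are assumptions.

# Toy latency model: HTTP/1.1 browsers typically open ~6 connections per
# host, so requests queue in waves, while HTTP/2 multiplexes all requests
# over a single connection.

import math

RTT_MS = 80          # assumed round-trip time to the server, in ms
OBJECTS = 60         # assumed number of small objects on a page
H1_CONNECTIONS = 6   # typical per-host HTTP/1.1 connection limit

# HTTP/1.1: objects fetched in waves of 6, roughly one RTT per wave.
h1_ms = math.ceil(OBJECTS / H1_CONNECTIONS) * RTT_MS

# HTTP/2: all requests issued at once over one connection -> about one RTT.
h2_ms = RTT_MS

print(f"HTTP/1.1 (modeled page fetch): {h1_ms} ms")
print(f"HTTP/2   (modeled page fetch): {h2_ms} ms")

If the protocol itself removes most of the round-trip queuing, the latency gains a pure-play CDN can add shrink, which is why this counts as a technology risk for Limelight.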

Will Limelight be acquired?

While the likelihood of a near-term takeout is lower, in our opinion, due to the recent management changes, we still consider Limelight to be a valuable strategic asset for a number of potential suitors. Specifically, we think a large telecom services company, content provider, or peripheral communications company could potentially make a bid for Limelight. At LLNW’s current valuation, we think the option of buying versus building has become more attractive for a strategic buyer.

With new management on board and a plan to invest for growth in the out years, we believe the near term is setting up to be a transition period for the company. It remains to be seen what will unfold.
