Huawei to Invest More in Europe; Wants to Be Perceived as a European Company
Huawei Technologies Co.’s chief executive Ren Zhengfei told reporters in London that the Chinese telecommunications-equipment maker plans to increase investment and hiring in Europe as part of an effort to change perceptions of a company he acknowledged has been seen as “mysterious.” Huawei wants to be perceived as more of a “European” company. As part of that directive, Huawei will increase investment in European research and development, and will extend an employee incentive plan to all key non-Chinese employees this year in order to attract and keep top talent, Mr. Ren said.
Ren said: “Right now we should not be expending too much effort in the United States as it might take 10 or 20 years for them to know that Huawei is a company with integrity. We will accelerate efforts in countries that have accepted us.”
Ren started Huawei in 1987 after retiring from the Chinese military in 1983. Since then, he has built the company into the world’s largest maker of equipment for telecom networks, overtaking Ericsson AB (ERICB), even without access to the U.S. telecommunications market, where Huawei has battled claims that the company’s gear may give Chinese intelligence services an opportunity to tamper with networks for spying. The closely held company has repeatedly denied those allegations, and it doesn’t see a public listing as a way of building trust, Ren said.
Huawei has faced allegations in the U.S. that it is a security risk, as well as the threat of a European Union investigation into allegations that China is dumping or subsidizing mobile-telecommunications network products.
“My reluctance to meet with the media has been used as a reason to label Huawei as a mysterious company,” Mr. Ren said through a translator during an interview with Reuters. In a few years, “our idea is to make people perceive Huawei as a European company,” he said.
“By increasing our level of transparency, we still may not be able to address the U.S. government’s concerns,” Ren said. “The reality is that shareholders are greedy and want to squeeze every bit out of the company. Not listing on the stock market is one of the reasons we have overtaken our peers.” Huawei’s employee-owned structure, he said, is “part of the reason Huawei could catch up and overtake some of our peers in our industry.”
Huawei grabbed nearly 22% of mobile-network infrastructure spending in Europe, the Middle East and Africa last year, up from just 12% in 2010, according to market-research firm Infonetics. By contrast, in North America, Huawei had only a 2.8% share of that market in 2013—prompting the company to pull back from investments there.
Even as Huawei fights cybersecurity concerns that have restricted access for its network equipment in the U.S. and Australia, the company said last month its sales will rise 77 percent to $70 billion by 2018, from $39.5 billion last year. Ren said the company would continue to increase its research spending, which rose last year to roughly $5 billion, at average 2013 exchange rates, compared with Ericsson’s $4.9 billion.
To reach its revenue target, Huawei is broadening its portfolio with smartphones, tablets and business-computing products, and cloud services. Ericsson, which divested its mobile-device business to focus on network equipment, has reported stalling revenue for two consecutive years. Ren acknowledges that Huawei faces a challenge to convince customers to buy its smartphones amid intense competition.
“There is no way to imitate the growth of Apple and Samsung,” he said. “We will have to work out the way for ourselves. We need to take a step-by-step approach.”
The approach has begun to pay off. Worldwide, the company sold more smartphones last year than any company aside from Samsung Electronics Co. and Apple Inc., boosting its share of the global market to 4.9 percent from 4 percent in 2012, researcher International Data Corp. reported in January. The success has been possible because the company has faced fewer headwinds with consumer products than with the equipment that runs mobile networks.
References:
http://www.bloomberg.com/news/2014-05-02/huawei-s-ren-plots-future-outsi…
http://online.wsj.com/news/articles/SB1000142405270230367840457953760327…?
Carrier WiFi Gaining Market Traction as Comcast Expands WiFi Coverage
Infonetics Research released excerpts from its 2014 Carrier WiFi Strategies and Vendor Leadership: Global Service Provider Survey, which explores the drivers, strategies, models, and technology choices that are shaping service provider WiFi deployments.
CARRIER WIFI SURVEY HIGHLIGHTS:
- Respondents have an average of around 32,000 access points currently, growing to just over 44,000 by 2015, representing 33% growth over the next year.
- 40% of Infonetics’ operator respondents expect to integrate Hotspot 2.0 into more than half their access points by the end of 2015.
- Among those surveyed, the top 3 monetization models for WiFi services are pre-pay, bundled with mobile broadband subscription, and tiered hotspots.
- WiFi as a separate overlay network currently leads the list of technologies and architectures for offloading data traffic; meanwhile, more sophisticated carrier WiFi architectures gain gradual traction as respondents look to bring WiFi into the mobile RAN via SIM-based service models or by deploying dual-mode WiFi/small cells.
- Respondents perceive Cisco and Ruckus Wireless as the top carrier WiFi manufacturers for the second consecutive year
CARRIER WIFI ANALYST NOTE:
“Carrier WiFi deployments are evolving to deliver the same quality of experience as mobile and fixed-line broadband service environments, and this is driving WiFi networks to become more closely integrated. Hotspot 2.0, a key tool developed by the industry to aid this drive, shows rapid adoption by carriers participating in our latest carrier WiFi survey,” notes Richard Webb, directing analyst for mobile backhaul and small cells at Infonetics Research.
Webb adds: “Operators are betting pretty big on carrier WiFi, but they’re also keen to develop ways of monetizing services so that WiFi starts to pay for itself over the coming years. WiFi roaming and location-based services are examples of customer plans that are growing fast.”
CARRIER WIFI SURVEY SYNOPSIS:
For its 43-page WiFi strategies survey, Infonetics interviewed independent wireless, incumbent, competitive, and cable operators in Europe, Asia Pacific, the Middle East and Africa, North America, and Latin America that have deployed WiFi in the public domain (or will soon). The report provides insights into carrier WiFi deployment drivers and locations; access point standards, form factors, features, ranges, and backhaul connections; hotspots; mobile data offload; service delivery models and challenges; customer plans; and opinions of WiFi equipment manufacturers. Vendors named in the survey include Alcatel-Lucent, Aruba Networks, Cisco, Ericsson/BelAir, Guoren, HP, Huawei, Motorola, NSN, Ruckus Wireless, Zhidakang, Xirrus, ZTE, and others.
To buy the report, contact Infonetics: http://www.infonetics.com/contact.asp
INDOOR WIRELESS SOLUTIONS WEBINAR AND FREE REPORT:
Join analyst Richard Webb May 6 at 11:00 EDT for Indoor Wireless Solutions: Technologies, Challenges and Backhaul, which compares picocells, metrocells, WiFi, DAS, and repeaters. Registrants receive a special Small Cells Market Report. Attend live or access the replay:
http://w.on24.com/r.htm?e=775881&s=1&k=4FBF544ACB6D1CB5462A6279E79E7A75
Separately, Comcast aims to expand its WiFi footprint to a whopping 8 million hotspots by the end of the year, far more than any other wireless provider. The largest MSO in the U.S. will focus on three different types of locations — outdoor public areas, business facilities, and neighborhood hotspots in private homes. The vast majority of hotspots will be added in customer homes, where the cabler is now installing access points with a second, public SSID signal on the new WiFi-enabled data gateways that it’s deploying throughout the nation.
http://www.lightreading.com/cable-video/cable-wi-fi/comcast-whips-up-mor…?
Infonetics: Small cell market on track to increase 65% in 2014; ABI Research, Research and Markets, and Mobile Experts predict healthy growth
Infonetics Research released excerpts from its latest Small Cell Equipment market size and forecast report, which tracks 3G microcells, picocells, and metrocells and LTE mini eNodeBs and metrocells.
SMALL CELL MARKET HIGHLIGHTS:
- As mobile operators have to approach a “critical mass” of data traffic before small cells even become a consideration, it is the developed countries (Japan, South Korea, the UK and US) that are driving early adoption
- 642,000 small cell units shipped in 2013, a 143% spike from 2012; over half of these units are of the 3G variety
- However, starting this year, 4G metrocells will close the gap with 3G, becoming the main growth engine
- Backhaul is no longer a major inhibitor to small cell deployment, but it will remain an issue for some mobile operators due to the locations in which they operate
- 5G is coming, fully loaded with small cells: NTT DOCOMO in Japan plans to have 5G commercially available in 2020, in time for the Tokyo Olympics
ANALYST NOTES:
“As we anticipated, the great small cell ramp did not happen in 2013 as many in the industry had hoped. Testing activity remained solid, but actual deployments were modest. Small cell revenue was just $771 million last year, a sharp contrast to the $24 billion 2G/3G RAN market,” reports Stéphane Téral, principal analyst for mobile infrastructure and carrier economics at Infonetics Research.
Richard Webb, directing analyst for mobile backhaul and small cells at Infonetics and co-author of the report, adds: “Nevertheless, the need to enhance existing saturated macrocellular networks that are struggling to maintain a decent mobile broadband experience, as well as to add capacity to existing LTE networks, is bringing some fuel to our forecast and, consequently, we expect the small cell market to grow 65% by year’s end, when it will reach $1.3 billion.”
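As a quick back-of-the-envelope check on those figures (an illustration only, not Infonetics' methodology), 65% growth on 2013's $771 million lands at roughly $1.27 billion, consistent with the $1.3 billion year-end figure:

```python
def project_revenue(base_millions: float, growth_pct: float) -> float:
    """Apply a simple year-over-year percentage growth rate."""
    return base_millions * (1 + growth_pct / 100)

# 2013 small cell revenue of $771M grown 65% is about $1,272M (~$1.3B).
assert round(project_revenue(771, 65)) == 1272
```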
ABOUT INFONETICS’ SMALL CELL REPORT:
Infonetics’ biannual small cell report provides worldwide and regional market size, forecasts through 2018, analysis, and trends for 3G microcells, picocells, and metrocells and LTE mini eNodeB and metrocells. The report also includes a small cell strategies tracker. Vendors tracked: Airspan, Airvana, Alcatel-Lucent, Argela, Contela, Ericsson, Huawei, ip.access, Juni, NEC, NSN, Samsung, SK Telesys, SpiderCloud, Ubiquisys, ZTE, and others.
To buy the report, contact Infonetics at:
http://www.infonetics.com/contact.asp
In late March 2014, ABI Research’s report on Outdoor Small Cells forecast healthy 2014 growth in equipment revenue, up 33% year-on-year to $1.8 billion. This growth comes as operators such as AT&T, Verizon, Vodafone, Telefonica, SoftBank, SK Telecom, and Sprint drive both outdoor small cell and metrocell deployments.
“As mobile network operators implement small cell outdoor networks, several success factors emerge as critical for a successful deployment,” says Nick Marshall, principal analyst at ABI Research. “We see multiple solutions for backhaul, power, permitting, and siting being employed by the operator community throughout the rest of 2014, which will increase small cell momentum in 2015.”
In 2014, 4G small cells are the fastest-growing small cell type in the market, driven by venue and dense urban deployments. ABI Research forecasts the number of LTE small cells to double in 2014 and to grow by a similar factor each year through 2019, by which point LTE small cells will account for more than half of a $10 billion equipment market.
https://www.abiresearch.com/press/small-cells-market-healthy-as-2014-dep…
Meanwhile, Research and Markets forecasts the Global Wi-Fi-Enabled Small Cell market to grow at a CAGR of 185 percent over the period 2013-2018. One of the key factors contributing to this growth is the increasing densification of the subscriber base. The market has also been witnessing increasing adoption of the Hotspot 2.0 standard. However, interference issues could pose a challenge to its growth.
The report, the Global Wi-Fi-Enabled Small Cell Market 2014-2018, has been prepared based on an in-depth market analysis with inputs from industry experts. The report covers the Americas, and the APAC and EMEA regions; it also covers the Global Wi-Fi-Enabled Small Cell market landscape and its growth prospects in the coming years. The report also includes a discussion of the key vendors operating in this market.
The key purpose of all non-macrocell solutions in a telecom network is to address the inexorable demand for coverage and capacity. Such solutions include small cells, Wi-Fi, DAS, and the emerging CRAN. While the first three solutions have been in the market for a while, the CRAN is a relatively new architecture which, like DAS, concentrates the processing of the RAN of a mobile network in one or more centralized network nodes.
http://www.researchandmarkets.com/research/drnpw3/global
On April 1, 2014, Mobile Experts released a 94-page market study providing detailed analysis of small cells and low-power base stations. The study examines how the mobile infrastructure market is changing to address concentrated mobile traffic in multiple scenarios.
Joe Madden, Principal Analyst at Mobile Experts explained, “During 2013, the deployment of 200,000 small cells in Asia has validated the accuracy of our forecasting over the past five years. This year, we’ve added revenue analysis and more quantitative ‘trigger points’ into our forecast based on real-world examples in Korea.”
The study covers multiple types of small cells, as well as small base stations (often called microcells or picocells) that use a traditional RAN architecture. In addition, it includes detailed analysis of low-power remote radio head applications, which are expected to enable Cloud RAN (C-RAN) functionality in the indoor environment.
http://mobile-experts.net/manuals/mexp-smallcell-14%20toc.pdf
AT&T Earnings Unchanged, Revenues Grow; Gigabit Internet Access May Be Offered in Many New Metro Areas
On April 22nd, AT&T reported that its first-quarter earnings were unchanged from the first three months of last year, but revenue grew as the wireless business added 1,062,000 wireless subscribers in the quarter. That includes 625,000 smartphones and tablets in “post-paid” plans. Wireless service revenue grew 2 percent to $15.4 billion. Total wireless revenue, including phones and tablets sales, grew 7 percent to $17.9 billion.
First-quarter net income was $3.7 billion, or 70 cents per share, compared with $3.7 billion, or 67 cents, a year earlier, when AT&T had more shares outstanding. Adjusting for one-time items, including costs related to its March acquisition of Leap Wireless, income was 71 cents per share, compared with 64 cents in the same period last year. Analysts expected 70 cents. Revenue grew 4 percent to $32.5 billion, better than the $32.4 billion analysts expected, according to FactSet.
“Customers really like the new mobility value proposition and are choosing to move off device subsidies to simpler pricing while at the same time, they are continuing to move to smartphones with larger data plans,” AT&T CEO Randall Stephenson said in a statement.
In the landline business, revenue fell 0.4 percent to $14.6 billion. But its newer U-verse phone, TV and Internet service experienced solid growth. AT&T had 634,000 additional high-speed Internet subscribers and 201,000 TV subscribers.
http://about.att.com/story/att_first_quarter_earnings_2014.html
The company plans a major expansion of its newly announced gigabit Internet service.
AT&T says it is ready to offer its fiber-based GigaPower in many more cities, and has opened negotiations with more than 20 municipalities to discuss the viability of fiber to the home for its high-speed U-verse triple play service platform. The proposed expansion, a component of the carrier’s Project Velocity IP initiative, could see the delivery of 1 Gbps fiber-to-the-home service in a number of major metropolitan areas including Chicago, Los Angeles and San Francisco. Most of those new markets are currently served by Comcast and Time Warner Cable (which have agreed to merge). AT&T will only build out U-verse fiber to the home where it sees sufficient demand and a decent ROI.
AT&T executives have told investors recently that the Comcast-Time Warner Cable deal has led them to recalibrate their priorities, prompting a more aggressive upgrade of their network. Lori Lee, senior executive vice president of AT&T Home Solutions, said in a WSJ interview that the move would make AT&T a tougher competitor for the cable industry.
Earlier this month, AT&T said it was in advanced talks to bring speeds of up to one gigabit per second to six North Carolina cities, in addition to its current upgrade of Austin and plans for a similar service in Dallas.
The rollout is happening as Google builds up its own network of fiber in cities like Austin and Kansas City, Kans. In February, Google said it was eyeing dozens of municipalities where it wanted to expand its fiber network.
AT&T said Monday (April 21st) that it is looking to bring its higher speed service to Kansas City and the surrounding area, which was Google’s first location for its fiber service.
AT&T said the upgrades will fall under its planned spending to upgrade the bulk of its network to run on Internet technologies and won’t affect its 2014 budget.
The new cities are mostly ones where AT&T already offers its U-verse Internet and television services. AT&T has built a fiber-based network for the U-verse service, but typically uses copper wire to make the final connections to buildings.
http://online.wsj.com/news/articles/SB10001424052702304049904579515790508491128?
AT&T has already begun to roll out GigaPower in pockets of Austin, Texas, starting with symmetrical speeds of 300 Mbps and expecting to upgrade those customers to 1 Gbps by mid-2014. AT&T has previously announced plans to roll out GigaPower in Dallas this summer, and it is in “advanced discussions” with the North Carolina Next Generation Network (NCNGN) to bring GigaPower to six cities in North Carolina, including Raleigh-Durham and Winston-Salem.
Here’s an updated list of current and potential AT&T U-verse GigaPower markets, and the incumbent cable operator that serves each market:
- Atlanta, Ga. (Alpharetta, Atlanta, Decatur, Duluth, Lawrenceville, Lithonia, McDonough, Marietta, Newnan, Norcross, and Woodstock). Incumbent MSO: Comcast
- Augusta, Ga. Incumbent MSO: Comcast
- Austin, Texas. Incumbent MSO: Time Warner Cable
- Charlotte, N.C. (Charlotte, Gastonia, and Huntersville). Incumbent MSO: TWC.
- Chicago, Ill. (Chicago, Des Plaines, Glenview, Lombard, Mount Prospect, Naperville, Park Ridge, Skokie, and Wheaton). Incumbent MSO: Comcast
- Cleveland, Ohio (Akron, Barberton, Bedford, Canton, Cleveland, and Massillon). Incumbent MSOs: TWC and MCTV.
- Dallas, Texas (Dallas, Farmer’s Branch, Frisco, Grand Prairie, Highland Park, Irving, Mesquite, Plano, Richardson, and University Park). Incumbent MSO: TWC.
- Fort Lauderdale, Fla. Incumbent MSO: Comcast.
- Fort Worth, Texas (Arlington, Euless, Fort Worth, and Haltom City). Incumbent MSOs: TWC and Charter Communications.
- Greensboro, N.C. Incumbent MSO: TWC.
- Jacksonville/St. Augustine, Fla. Incumbent MSO: Comcast.
- Houston, Texas (Galveston, Houston, Katy, Pasadena, Pearland, and Spring). Incumbent MSO: Comcast.
- Kansas City, Mo./Kan. (Independence, Kansas City, Leawood, Overland Park, and Shawnee). Incumbent providers: TWC, Comcast, SureWest, and Google Fiber (portions).
- Los Angeles, Calif. Incumbent MSO: TWC.
- Miami, Fla. (Hialeah, Hollywood, Homestead, Miami, Opa-Locka and Pompano Beach). Incumbent MSO: Comcast.
- Nashville, Tenn. (Clarksville, Franklin, Murfreesboro, Nashville, Smyrna and Spring Hill). Incumbent MSO: Comcast.
- Oakland, Calif. Incumbent MSO: Comcast.
- Orlando, Fla. (Melbourne, Oviedo, Orlando, Palm Coast, Rockledge, and Sanford). Incumbent MSO: Bright House Networks.
- Raleigh, Durham, N.C. (Apex, Garner, Morrisville, Carrboro, Chapel Hill, Durham, Raleigh). Incumbent MSO: TWC.
- St. Louis, Mo. (Chesterfield, Edwardsville, Florissant, Granite City, and St. Louis). Incumbent MSO: Charter.
- San Antonio, Texas. Incumbent MSO: TWC.
- San Diego, Calif. Incumbent MSO: Cox Communications.
- San Francisco, Calif. Incumbent MSO: Comcast.
- San Jose, Calif. (Campbell, Cupertino, Mountain View, and San Jose). Incumbent MSO: Comcast.
- Winston-Salem, N.C. Incumbent MSO: TWC.
Craig Moffett, senior research analyst at MoffettNathanson Research, said in a recent research note that AT&T could be on the cusp of a grander GigaPower buildout plan that would help Google achieve its ambition of nudging ISPs to beef up broadband capacity, but “on AT&T’s dime.”
See more at:
http://www.multichannel.com/blog/bauminator/google-fiber-fever/373793
Verizon brings 100G to U.S. Metro & Regional Areas
Verizon Communications (VZ) is rolling out 100G technology on select high-traffic metropolitan and regional networks in the U.S. The telco is implementing Fujitsu’s FLASHWAVE 9500 platform and the Tellabs+ 7100 system in its metro networks. Verizon will target metro areas where “traffic demand is highest,” the company said. It did not identify which markets will see the deployments.
“Metro deployment of 100G technology is the natural progression of Verizon’s aggressive deployment of 100G technology in its long-haul network,” said Lee Hicks, vice president of Verizon Network Planning. “It’s time to gain the same efficiencies in the metro network that we have in the long-haul network. By taking the long view, we’re staying ahead of network needs and customer demands as well as preparing for next-generation services.”
Verizon says the scalability benefits of 100G are especially relevant for signal performance, which is improved by using a single 100G wavelength rather than aggregating ten 10G wavelengths. 100G technology also requires less space and power than traditional 10G technology, so fewer pieces of equipment are needed to carry the same amount of traffic.
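The equipment-count argument is simple arithmetic. The sketch below (an illustration with a made-up 1 Tbps demand figure, ignoring real-world factors like FEC overhead and spectral planning) shows why a 100G line rate cuts the wavelength count, and hence the transponder count, by a factor of ten:

```python
import math

def wavelengths_needed(demand_gbps: float, line_rate_gbps: float) -> int:
    """Number of wavelengths required to carry a given traffic demand."""
    return math.ceil(demand_gbps / line_rate_gbps)

# Carrying 1 Tbps takes a hundred 10G wavelengths but only ten 100G ones,
# i.e. far fewer transponders, shelf slots, and watts for the same traffic.
assert wavelengths_needed(1000, 10) == 100
assert wavelengths_needed(1000, 100) == 10
```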
Verizon claims it’s been a leader in 100G technology, and we tend to agree. In November 2007, the company completed the industry’s first field trial of 100G optical traffic on a live system. Verizon currently has 39,000 miles of 100G technology deployed on its global IP network.
+ In Dec 2013, Tellabs was acquired by Marlin Equity Partners for $891 million in cash (compare that to Facebook paying $19B for WhatsApp). http://www.tellabs.com/news/2013/marlin-completes-acquisition.pdf
References:
http://newscenter.verizon.com/corporate/news-articles/2014/04-15-100g-te…
http://www.channelpartnersonline.com/news/2014/04/verizon-unleashes-100g…
Infonetics: VoIP and Unified Communications to grow to $88 billion market by 2018
Infonetics Research released excerpts from its 2014 VoIP and UC Services and Subscribers report, which tracks service providers and their voice over IP (VoIP) and unified communications (UC) services revenue and subscribers.
VOIP AND UC SERVICES MARKET HIGHLIGHTS:
- The global business and residential VoIP services market grew 8% in 2013 from 2012, to $68 billion
- SIP trunking shot up 50% in 2013 from the prior year, driven predominantly by activity in North America; EMEA is expected to be a strong contributor in 2014
- Sales of hosted PBX and unified communication (UC) services rose 13% in 2013 over 2012, and seats grew 35% due to continued demand for enterprise cloud-based services
- Global residential VoIP subscribers totaled 212 million in 2013, up 8% year-over-year
- Managed services are benefiting from the continued adoption of IP PBXs: roughly 10%-20% of new IP PBX lines sold are part of a managed service or outsourced contract
- Infonetics expects continued strong worldwide growth in VoIP services revenue through 2018, when it will reach $88 billion
RELATED REPORT EXCERPTS:
- Infonetics’ April Voice, Video, and UC research brief: http://bit.ly/1iDYtXO
- Videoconferencing and collaboration show strongest growth among UC apps
- Carrier VoIP and IMS market gains 30%; Huawei, ALU, Ericsson, NSN ride the VoLTE wave
- Enterprise SBC market grew 42% in 2013
- Mergers and buyouts stir enterprise telephony market; UC revenue climbs 31% in 2013
- Exploding mobile device traffic, acquisitions heat up Diameter signaling controller market
“Business VoIP services have moved well beyond early stages to mainstream, strengthened by the growing adoption of SIP trunking and cloud services worldwide. Hosted unified communications are seeing strong interest up market as mid-market and larger enterprises evaluate and move more applications to the cloud, and this is positively impacting the market,” notes Diane Myers, principal analyst for VoIP, UC, and IMS at Infonetics Research.
VoIP AND UC REPORT SYNOPSIS:
Infonetics’ annual VoIP and unified communications report provides worldwide and regional market share, market size, forecasts through 2018, analysis, and trends for residential and business VoIP and UC services and subscribers. The report also includes a Hosted PBX/UC Tracker highlighting deployments by service provider, region, and vendor platform. Residential VoIP providers in the report include AT&T, Cablevision, Charter, Comcast, Cox, Embratel, Iliad, J:Com, Kabel Deutschland, KDDI, KPN, KT, LG Uplus, Liberty Global, NTT, ONO, Orange, Rogers, SFR, Shaw Communications, SK Broadband, Sky, SoftBank, TalkTalk, Telecom Italia, Time Warner Cable, Verizon, Vonage, and others.
To buy the report, contact Infonetics: http://www.infonetics.com/contact.asp
Related Article:
VoIP, The PSTN Killer, Won’t Kill Local Loops
The article points to four advances that keep copper local loops viable for VoIP-era broadband:
- Many carrier frequencies on a pair. Orthogonal Frequency Division Multiplexing (OFDM) put hundreds of virtual modems in parallel, each on its own carrier frequency, over one wire pair.
- More efficient coding. How a bit appears on the wire changed from simple on/off signals (T-1) to 4-level signals (ISDN), to adding a phase change (quadrature coding in modems). The number of bits per baud (how many bits each digital symbol conveys) went from 1 to 64 and may go higher.
- Improved signal-to-noise (SNR) ratio. Echo canceling, first applied to voice, does wonders for data too.
- Interference canceling. The latest is highly adaptive “vectoring” among all the pairs in a cable.
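The "bits per baud" progression and the SNR improvements in the list above are tied together by Shannon's capacity formula: bigger constellations carry more bits per symbol, and canceling echo and crosstalk raises the SNR ceiling that makes those constellations usable. A small generic sketch (standard formulas, not specific to any DSL standard):

```python
import math

def bits_per_symbol(constellation_points: int) -> int:
    """Bits carried per baud by an M-point constellation (e.g. 64 points -> 6)."""
    return int(math.log2(constellation_points))

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR). Better SNR (echo, interference,
    and crosstalk cancellation) raises the achievable rate on the same pair."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

assert bits_per_symbol(2) == 1    # simple on/off signaling (T-1 style)
assert bits_per_symbol(64) == 6   # 64-point constellation
# Cleaning up the line pays off: the same 1 MHz of spectrum carries more
# bits at 30 dB SNR than at 20 dB.
assert shannon_capacity_bps(1e6, 30) > shannon_capacity_bps(1e6, 20)
```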
Related Webinar: IMS IN THE CLOUD WITH NFV:
Join analyst Diane Myers April 16 for Deploying IMS in the Cloud with NFV, a live event that investigates the benefits of deploying IMS in the cloud to achieve network scalability, cost savings, and flexibility, as well as how network functions virtualization (NFV) enables innovation:
http://w.on24.com/r.htm?e=763407&s=1&k=6DD532531DA28F11FF4CCCE20A63DBDE
Packet Optical & Long Haul Transport Market Experiencing Modest Growth: Market Research Firm Findings
Dell’Oro Group’s Key Optical Market Trends, 2013-2018:
- Continued demand for capacity driving the need for DWDM equipment and specifically 100 Gbps wavelengths. Dell’Oro Group expects the DWDM market to grow at an average annual rate of eight percent through 2018 and for 100 Gbps wavelengths to contribute the largest share of DWDM capacity shipments, approaching 80 percent by 2018.
- Movement towards OTN and packet transport driving the demand for optical packet platforms with OTN switching features. Dell’Oro Group projects optical packet platform revenue to grow at a 15 percent compounded annual growth rate.
- Ratio of equipment sales in metro optical versus core optical applications to drift over the next five years, with the majority of spending in metro applications.
See more at: http://www.delloro.com/news/optical-transport-equipment-market-to-reach-15-billion-by-2018
Martin Casado: How the Hypervisor Can Become a Horizontal Security Layer in the Data Center
Introduction:
“Security will never be the same again. It’s a losing battle,” said Martin Casado, PhD during his Cloud Innovation Summit keynote on March 27th. Currently, security spend is outpacing IT spend, and the only thing outpacing security spend is security losses. Clearly this isn’t an issue of investment, innovation, or priorities as huge industries are built around security. Mr. Casado believes there is a fundamental architectural issue: that we must trade off between context and isolation when implementing security controls. With today’s huge data centers, there is a very large potential “attack surface” for malware and other cyber threats.
Astonishingly, Martin said that approximately “40% of actual SDN adopters paying money for SDN network virtualization are doing it as a security use case.” The concept is to use network virtualization as a primitive, as building blocks for micro-segments. If you put something within one of those virtual networks, or within one of those segments, the only things it can see are the other things in that same segment. For example, every application running on a virtual network can have its own security services, i.e., its own L4 through L7 services. And if an application gets compromised, the attack is localized to just the segment affected. As a result, this use case is driving a lot of the adoption of network virtualization, according to Mr. Casado.
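The micro-segmentation idea can be sketched in a few lines of code (a toy model with hypothetical names, not any vendor's API): a workload is visible to another only if the two share a segment, so a compromise stays contained within that segment.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A micro-segment: workloads inside it can only see each other."""
    name: str
    workloads: set = field(default_factory=set)

def can_see(segments: list, src: str, dst: str) -> bool:
    """Traffic is permitted only when both workloads share some segment."""
    return any(src in s.workloads and dst in s.workloads for s in segments)

web = Segment("web-tier", {"web1", "web2"})
db = Segment("db-tier", {"db1"})

assert can_see([web, db], "web1", "web2")      # same segment: visible
assert not can_see([web, db], "web1", "db1")   # cross-segment: isolated
```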
Martin said, “This has become, I think, the driving use case (for the data center) going forward. And as things like SDN and network virtualization cross the chasm (and become a significant revenue generating business), I think it’s security that’s going to do it.”
A Horizontal Security Layer:
Security in the data center involves a basic trade-off between context and isolation. If a security control, such as a firewall or a monitoring/tracking agent, is implemented within the application, it has great context. It knows the users, the data, and the files. But there’s no isolation. As a result, the security control can’t trust the application or the endpoint. “So putting a security control there is kind of like taking the on-off switch to an alarm system and putting it on the outside of a house. It doesn’t make any sense.”
“Maybe I’ll put the security control in the infrastructure. Let’s put ACLs or whatever on servers, switches and routers, which provides very good isolation between the two. If I’m able to break into a server, I haven’t broken into the router, necessarily. But while the attack surface is much smaller (with isolation between the separate boxes), there isn’t any context. The resident security control doesn’t know the users or applications. It doesn’t have access to local file systems.”
So there’s a fundamental trade-off between:
a] Great context (know everything about the operational environment) without any real security/isolation, OR….
b] Terrible context (know nothing about the operational environment), but have great security through isolation.
Can the industry build a “Goldilocks zone” that extends ubiquitously throughout the (virtualized) data center and provides both context and isolation? The Goldilocks zone would be a place where both visibility and security are possible: a location that is neither too exposed nor too inaccessible, but just right. Martin proposed a horizontal security layer providing both context and isolation as that “Goldilocks layer.”
Casado said that since the majority of workloads are virtualized, a (horizontal) security control could be placed in the hypervisor, which is a separate trust domain. That security entity could then peer into the application to pull out meaningful context (such as users and applications) and observe the state of the network. It could also protect that visibility and provide enforcement. The hypervisor therefore seems to be an optimal place to implement security: one with visibility, context and isolation.
“And so this is kind of a major area that I’m looking into, because again, given the state of the security industry and if things go the way we are, we’re going to be spending all our time and money on it, we do need something that will change the architecture (of the data center) and the way we view it. What we’re missing today is a horizontal layer that we can provide meaningful security.”
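The hypervisor placement Casado describes can be sketched as a control that runs outside the guest (isolation) while acting on state introspected from inside it (context). This is a conceptual sketch under assumed names; real hypervisor introspection APIs differ.

```python
# Hypothetical sketch of a hypervisor-resident security check: the control
# runs outside the guest VM (isolation) but consumes introspected guest
# state (context). All field and function names are illustrative.

from dataclasses import dataclass, field

@dataclass
class GuestContext:
    """Context introspected from a guest VM by the hypervisor."""
    user: str
    process: str
    open_files: list = field(default_factory=list)

def inspect(vm_name: str) -> GuestContext:
    # Stand-in for hypervisor introspection of a guest's runtime state.
    return GuestContext(user="www-data", process="nginx",
                        open_files=["/var/www/index.html"])

def policy_ok(ctx: GuestContext) -> bool:
    # Example context-aware policy: a web server process must never
    # touch the password file.
    return "/etc/shadow" not in ctx.open_files

ctx = inspect("web-vm-1")
if not policy_ok(ctx):
    # Enforcement happens in the hypervisor's trust domain, which a
    # compromised guest cannot reach to disable.
    print("quarantine web-vm-1")
```

Because the guest cannot reach the hypervisor, malware inside the VM cannot switch this control off, unlike an in-guest agent.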
If this horizontal security layer is built out as a software platform (residing within a hypervisor), new security features can be included. Martin cited two examples:
- Next-generation firewalling with deep visibility in the end host.
- Network access control that understands objects and people, meaningful policies, and vulnerability assessment.
Martin claims that every aspect of data center security, whether end host security, network access control, vulnerability assessment, IDS, or IPS, would be enhanced by such a horizontal security layer. All of them need better isolation and all of them need more context.
“So if we can build out this horizontal layer in this “Goldilocks zone,” I think we can actually move security in very much the same way that we have moved networking over the past seven years. I mean, I dedicated my life to SDN, and I think that we have the same type of opportunity here.”
Author’s Note: When malware invades a (physical) server it immediately tries to block the operations of any anti-malware software. Since any process running on a virtualized server has no way to reach the hypervisor, a security layer that’s operating within the hypervisor can take action to mitigate the malware or security threat. However, there is currently no security layer in VMware’s or anyone else’s hypervisor.
Martin’s Summary:
The IT industry needs to develop a horizontal layer for security controls and to use micro-segmentation to limit the attack surface within data centers. That will protect the data center and the assets within it from malicious attacks.
“This is a once in a wave opportunity, as we’re redefining these new architectures, to actually build security in as a primitive, as a fundamental primitive. So we have a root of trust. So you have a horizontal security layer that you can build rich systems on top of.”
Martin in Conversation with Michael Howard:
In the interview with Infonetics’ Michael Howard, Martin called attention to the problem of detecting the imminent arrival of a large flow of data (an elephant) that would trample smaller data flows (the mice). “Nobody knows how to detect elephants, and we can’t do it from within the network,” he said. According to Casado, the hypervisor can actually see the future, in that it can detect the amount of data that is queued to be transmitted. The hypervisor can therefore sniff out elephant flows, mark them, and thereby solve this long-standing performance issue (between elephant and mice data flows) in networking.
In a subsequent email exchange, Martin wrote: “The hypervisor, with the aid of a guest presence, can look directly into the TCP send buffer to detect an “elephant” (large packet queued to be sent). This is likely to be a far more accurate approach than anything stochastic, such as flow tracking in the network.”
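Casado's detection idea can be sketched in a few lines: the hypervisor (with a guest agent) reads how many bytes are queued in a TCP send buffer before they ever reach the wire, and classifies the flow accordingly. The threshold and flow fields below are illustrative assumptions, not values from the talk.

```python
# Rough sketch of elephant detection from the hypervisor's vantage point:
# queued-but-unsent bytes in the TCP send buffer reveal an elephant flow
# before it hits the network. Threshold and field names are assumptions.

ELEPHANT_BYTES = 1_000_000  # assumed threshold: >= 1 MB queued = elephant

def classify_flow(send_buffer_bytes: int) -> str:
    """Classify a flow from the bytes queued in its TCP send buffer."""
    return "elephant" if send_buffer_bytes >= ELEPHANT_BYTES else "mouse"

def mark_flow(flow: dict, cls: str) -> dict:
    # In a real system the hypervisor's virtual switch would tag the
    # packets (e.g., via DSCP) so the physical network can route or
    # schedule elephants differently from latency-sensitive mice.
    flow["class"] = cls
    return flow

flow = {"src": "10.0.0.5", "dst": "10.0.0.9", "queued": 8_000_000}
print(mark_flow(flow, classify_flow(flow["queued"]))["class"])  # elephant
```

As Martin's email notes, reading the send buffer directly is deterministic, unlike stochastic flow tracking inside the network.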
Q & A with this Author:
Alan: “Michael (Howard) asked you to explain the situation of SDN, NFV and telco service provider networks, and you mentioned what the problem is, but not the challenge telcos face. The problem being that web hosting (provided by telcos) is a low-margin business, the telcos’ customers are building overlay networks to deliver cloud services, and carriers want a part of that cloud business. What’s your opinion of whether or not they’ll succeed, and what really are the obstacles in building a carrier cloud?”
Martin Casado: “That’s a good question. NFV, I think at the most basic level, is just disaggregating the service from the box, and people have different ideas about what that service is. I see basically two camps. One camp is: for the big, heavy carrier gear that’s sold by the likes of Ericsson and Nokia Siemens, I want to decouple that software and that hardware. I think that’s going to be a very difficult journey.
I think the incentives aren’t aligned correctly. I’m not sure there’s a technical rationale for doing that. So when it comes to actually doing NFV for core carrier equipment, I don’t buy this is going to actually happen. I could be wrong, but just from an industry standpoint, I just don’t see the incentives aligned correctly.
Another way that you can view NFV is providing L4 through L7 services, things that are already virtualized and running in (Intel) x86 processors. So I’m going to offer security services, I’m going to offer load-balancing services. For that, I think that, A, this is already happening. I think the telcos are in a great position because they own the infrastructure to provide this. You hear about virtualization of VPN using top solutions. I think all of that will happen.
I’m actually suggesting something even a little bit more radical. So, again, the NFV where you’re trying to disaggregate the big hardware boxes. I’m not sure there’s a technical justification. There’s a market justification. I don’t think there’s a technical justification. I think it’s going to be too difficult.
When it comes to kind of L4 through L7 services, these things are already on x86, virtualisation will happen. The carriers know how to provide these as a service. I think they’ll be successful with that. I’m suggesting something even more radical, which is why don’t you build an API and a platform that the guys that you typically have host have to use?
So instead of hosting BitTorrent or Netflix or whatever, have them program to your APIs. And so I’m not sure if anybody’s talking about that but me, but I do think that’s what NFV should become.”
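Casado's "program to your APIs" suggestion can be illustrated with a toy platform interface: instead of merely renting VMs, the carrier exposes an API that hosted tenants call to request L4-L7 services. Everything below, class, method, and service names, is a speculative sketch of the idea, not an existing product.

```python
# Speculative sketch of a carrier platform API: hosted tenants program the
# carrier's network through an API rather than building their own overlays
# on top of it. All names are hypothetical.

class CarrierPlatform:
    def __init__(self):
        self.services = []

    def request_service(self, tenant: str, kind: str, **params) -> dict:
        """A tenant asks the carrier for an L4-L7 service, e.g. a load
        balancer or firewall, instantiated inside the carrier network."""
        svc = {"tenant": tenant, "kind": kind, **params}
        self.services.append(svc)
        return svc

platform = CarrierPlatform()
# A CDN-like tenant provisions services via the carrier's API instead of
# deploying its own middleboxes in rented VMs.
platform.request_service("video-cdn", "load-balancer", vip="203.0.113.10")
platform.request_service("video-cdn", "firewall", allow_ports=[80, 443])
print(len(platform.services))  # 2
```

The design point is that the carrier, which owns the infrastructure, captures the value of the service layer rather than just the hosting margin.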
Conclusions:
Lack of effective security remains the number one obstacle to cloud adoption for enterprise customers. Malware is getting worse, and the evil people who create it are getting better at finding ways to insert malware/spyware into servers, switches/routers, and virtual machines.
A solution like the one Casado proposes (a horizontal security layer within the hypervisor) seems quite workable, but it hasn’t been implemented yet by any vendor we know of. Instead, there is a raft of add-on security appliances and agents that don’t provide a holistic and effective security solution. Let’s hope that security becomes a competitive issue in the world of virtualized systems, especially within cloud-resident data centers.
Addendum:
VMware: How the Hypervisor can be Security’s Savior
http://www.computerworld.in/news/vmware%3A-how-the-hypervisor-can-be-security’s-savior
Acknowledgement:
The author sincerely thanks Martin Casado, PhD Stanford, for his diligent review of this article and his helpful comments and corrections that made it more accurate.
Postscript:
A Sept 27, 2014 Barron’s article hints that VMware may sell hypervisor security software for commodity servers and bare metal switches:
“In an age of break-ins at major retailers like Target and Home Depot, he notes, more and more network attacks can’t be stopped by conventional network firewall devices sold by Cisco and Check Point Software Technologies. To Martin Casado of VMWare, the virtual machine will assume a new role of protecting all the precious containers running on each server.
“So, call it a security visor, call it whatever you want,” he says. “The nature of a hypervisor changes to one of providing isolation for those applications,” he says. Casado’s ambition is even broader. Some of the traditional network switching business of Cisco can be disrupted, he says. VMware hypervisor software can be sold as a program to manage inexpensive switches from Dell and others that undercut Cisco’s premium. It is, to Casado, a grand transformation of the networking business, one that clearly excites him as he draws various diagrams on a white board of the shifting architecture of networks. “We haven’t even seen yet what will happen with this fundamental change” in IT, he says.
The business he oversees, called NSX, is running at over $100 million annually, still small, but Casado has 3,000 VMware salespeople to help sell it, and 50 million VMware-enabled virtual machines running in data centers, “enormous” resources, he says.”
If he can transition VMware to the next era of data centers and networking, Casado may both save the company from obsolescence and open up a new frontier on Cisco’s turf.
Alcatel Lucent introduces SDN Switch while Nuage Networks gets contract with Cloud Services Provider
Alcatel-Lucent has broadened its software-defined networking (SDN) portfolio with the introduction of a programmable access switch that features embedded analytics and can scale to deliver up to 32 10G uplinks. The OmniSwitch 6860 supports the OpenFlow and OpenStack protocols and will be commercially available next month.
It features 24 or 48 Gigabit Ethernet ports, four fixed 1G/10G SFP+ ports and two 20G Virtual Chassis link ports for stacking into a virtual chassis. Up to eight switches can be connected into a virtual chassis with up to 32 10G uplinks and 384 Gigabit Ethernet ports.
The enhanced OmniSwitch 6860 “E” models also support four unique 1G PoE ports that offer up to 60 watts of power to support devices that require high power, such as small cells that combine cellular and Wi-Fi, and high definition video surveillance cameras. The 6860 also includes embedded analytics and programmability, Alcatel-Lucent says. It features an ASIC and coprocessor for deep packet inspection and policy enforcement. This is intended to give IT more visibility into applications passing through it, bandwidth consumption, and enforcement of prioritization, QoS and security policies.
More at: http://www.businesscloudnews.com/2014/04/02/numergy-selects-nuage-networks-for-sdn-to-support-cloud-datacentres/
Numergy selects Nuage Networks for SDN to support cloud data centers
One-year-old Nuage Networks provides a software defined networking (SDN) platform for provisioning, orchestration, and control of virtual network resources/end points. The wholly owned subsidiary of Alcatel-Lucent provides software that facilitates connectivity between virtual resources in data centers, inter-connection of federated data centers, and tying together of virtual private networks used by branch offices to access cloud services.
French cloud service provider and IT specialist Numergy recently announced that the company will deploy Alcatel-Lucent routers and Nuage Network’s SDN platform to support its cloud computing infrastructure. Numergy is owned by a French consortium that includes the government, SFR and Bull. It said the move is an essential stepping stone towards virtualising more of its datacentre resources. Their mission: to build a “sovereign cloud” that serves consumers and businesses, first in France and then in Europe, and to ensure the location and privacy of sensitive data in compliance with the laws of France and the European Union (EU).
Numergy implemented Nuage Networks’ virtualized services platform (VSP) and virtualized services gateway (VSG) as well as Alcatel-Lucent routers to manage and automate its datacentre networks. The company said the upgrades will make its internal networks more efficient, and that this is a key stepping stone in its broader strategy to virtualise more of its datacentre resources.
“We are pleased to implement the Nuage Networks product suite in our cloud infrastructure. The Nuage Networks SDN technology allows us to address key performance and compatibility requirements for an open environment,” said Erik Beauvalot, chief operating officer of Numergy. “This will allow us to virtualize our infrastructure and to offer our customers cloud services in a more dynamic way,” Beauvalot added.
During the Netevents Cloud Innovation Summit, CEO Sunil Khandekar said, “Look at application delivery as the product of the network. Because if we orient ourselves in making us think of networks and compute and storage in terms of allowing applications to be deployed very, very rapidly the whole model in how we build and automate these networks completely changes.” Sunil added that SDN was the technology that made rapid and robust application delivery happen. He said that the key attributes of SDN are abstraction, automation, control and visibility. Abstraction of the underlying network was defined as having the applications not be concerned about VLANs, IP addressing, what protocols they’re running, etc. in the network. They just specify what they need, what the application requirements are in an abstract format and the SDN tools facilitate the virtual connections.
Nuage Networks won the Enterprise award at the 2014 NetEvents Cloud Innovation Summit held in Saratoga, California on March 27, 2014. All the award winners are listed in this article:
https://techblog.comsoc.org/2014/03/29/winners-of-the-netevents-cloud…
The Nuage Networks Portfolio includes:
- Nuage Networks Virtualization Services Platform (VSP) – lays the foundation for an open and dynamically controlled datacenter network fabric to accelerate application programmability, facilitate unconstrained mobility, and maximize compute efficiency for cloud service providers.
- Virtual Services Directory (VSD) – serves as a policy, business logic & analytics engine for the abstract definition of network services. Through RESTful APIs to the VSD, administrators can define and refine service designs and instantiate enterprise policies.
- Virtualized Services Controller (VSC) – serves as the robust control plane of the datacenter network, maintaining a full per-tenant view of network and service topologies. It is an SDN controller with advanced federation capabilities that ensure scaling and graceful interconnection to existing IP networks. Through interfaces such as Openflow, the VSC programs the datacenter network independent of networking hardware.
- Virtual Routing & Switching (VRS) – serves as a virtual endpoint for network services. Through the VRS, changes in the compute environment are immediately detected, triggering instantaneous policy-based responses in network connectivity to ensure that the needs of applications are met.
- Nuage Networks 7850 Virtualized Services Gateway (VSG) – extends the benefits of SDN automation seamlessly between virtualized and non-virtualized assets in the datacenter. The 7850 VSG is a high-performance gateway platform, offering up to a terabit of capacity in a single rack unit with full layer 2 to layer 4 capabilities for multi-tenant datacenter environments.
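The VSD is described above as exposing RESTful APIs through which administrators define service designs and policies. The following sketch shows the general shape of such a call; the endpoint URL, path, and payload fields are illustrative guesses, not Nuage's actual API.

```python
# Illustrative sketch of driving a policy/analytics engine like the VSD
# through a RESTful API. The host, path, token scheme, and payload fields
# are hypothetical, not Nuage's documented interface.

import json
import urllib.request

def build_policy_request(base_url: str, token: str, policy: dict):
    """Build (but do not send) a POST that would define a policy."""
    return urllib.request.Request(
        base_url + "/policies",               # hypothetical endpoint
        data=json.dumps(policy).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
        method="POST",
    )

req = build_policy_request("https://vsd.example.net/api", "TOKEN",
                           {"name": "web-tier", "allow": ["tcp/80", "tcp/443"]})
# urllib.request.urlopen(req) would submit it to a live directory server.
print(req.get_method())  # POST
```

The appeal of a REST interface here is that enterprise policy changes become ordinary API calls that orchestration tools can automate.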
More at: http://www.alcatel-lucent.com/news/2014/numergy-and-secure-sovereign-cloud
http://www.businesscloudnews.com/2014/04/02/numergy-selects-nuage-networ…
http://www.nuagenetworks.net/resource-center/
Nuage Networks Launch Event April 2, 2013
http://www.youtube.com/watch?v=Y2WXJeOg5Ko
Stay tuned for a feature article on Nuage Networks, based on a visit to their Mt View, CA facility today (April 3, 2014) and their comments at last week’s Cloud Innovation Summit (March 27-28, 2014) in Saratoga, CA.
Regulatory Barriers to >95 GHz Wireless Technology
While the ITU has spectrum allocations as high as 275 GHz and claims jurisdiction to 3000 GHz, the FCC – and probably all national spectrum regulators (“administrations” in ITU jargon) – have no specific rules, licensed or unlicensed, for frequencies greater than 95 GHz, with the minor exception of provisions for radio amateurs and ISM (e.g. microwave ovens) in a few small segments. This lack of rules and of quick market access probably inhibits capital formation for innovative wireless products because it raises unusual and unquantifiable “regulatory risks.”
The commentators of Fox News repeatedly comment on the “war on coal” and the “war on religion”. Well, the “war on millimeter wave (mmW) technology” at the FCC is just as real and easier to document, although it is no doubt unintentional. There are 3 proceedings at the FCC that document the agency’s present disinterest/apathy toward commercial use of cutting-edge microwave technology, even as other national competitors advance in this area thanks to better collaboration between industrial policy and spectrum policy.
The current situation of US regulation above 95 GHz needs the urgent attention of communications technologists, especially researchers and firms dealing with millimeter wave technology. The lack of “service rules” beyond 95 GHz makes regular commercial licensed or unlicensed mmW use impossible. This in turn greatly complicates capital formation for such technology: VCs can easily find other technologies to invest in that do not involve effectively making a prominent communications lawyer a member of your family for several years, and paying his children’s college tuition, while the entrepreneur has no access to market and bleeds red ink.
Sadly, with the exception of the IEEE 802 LAN/MAN Standards Committee (the techies behind Wi-Fi standards), Boeing, Battelle Memorial Institute, and the rather obscure (at least in FCC circles) Radio Physics Solutions, Inc., no commercial interests have filed comments with the FCC on 3 key issues blocking capital formation for technology above 95 GHz and, by extension, hindering US competitiveness in advanced radio technology.
The 3 dockets involved are:
- Docket 10-236, which was supposed to encourage experimentation, had the apparently unintended effect of complicating millimeter wave research by forbidding, for the first time and without explanation, all experimental licenses in bands with only passive allocations – independent of whether there was any adverse impact on passive systems. Many mmW bands have only passive allocations, and it is difficult and expensive to avoid them in initial experiments with new technology; it should not matter if there is no passive use near the experiment that could receive interference. Since the text of the Report and Order contradicts itself on this issue, the simplest explanation is that a sentence was put in the wrong section. Your blogger filed a timely reconsideration petition when he noticed this 2 days before the deadline; it has been supported by Battelle and Boeing and opposed by none. But the FCC doesn’t necessarily react in a timely way, especially when incentive auctions are very distracting and staffing is low, unless there are multiple expressions of concern, preferably from corporate America.
- Docket 13-259 deals with the IEEE-USA petition seeking timely treatment of new technology proposals for this greenfield spectrum >95 GHz under the terms of 47 USC 157, although any clear statement from the FCC on how to get timely decisions on such spectrum would be useful.
- Docket 13-84 has proposed updating the Commission’s RF safety rules. The rules currently have numeric limits only up to 100 GHz – the upper limit of the standard they were based on when last updated almost 2 decades ago – but the new proposals are silent on numeric limits above 100 GHz, even though the standard that now underlies the regulations goes to 300 GHz! This lack of a specific safety standard above 100 GHz adds even more regulatory uncertainty for those interested in mmW technology. With today’s mmW technology, the specific numeric standard doesn’t really matter much because exposures will be low. But leaving ambiguity for mmW systems can be very damaging. Battelle has proposed one way to arrive at a specific standard. Others interested in mmW technology should either support it or propose an alternative.
- RM-11713 – a specific proposal from Battelle for rules to allow a licensed point-to-point service at 102-109.5 GHz (between 2 bands allocated for passive use only).
Technology above 95 GHz may not be available at retailers like Walmart and Radio Shack today, but it is not “blue sky” either. (Wi-Fi was not a household word – not even yet named – when the FCC created the rules for it in 1985.) The pictures below show a 120 GHz system used 6 years ago and a recent German 237 GHz experiment.
Spectrum policy need not be a “spectator sport.” Wireless innovators should realize that access to capital for R&D depends on real business plans, and for wireless technologies that includes timely spectrum access. Listed above are 4 FCC proceedings that deal with technologies >95 GHz. The technical community has been oddly silent on all 4. You may not agree with all of them, or even some of them, but the proper way to deal with that is to make your voice heard and tell the FCC and/or your national spectrum regulator what you think about policies at the upper end of the spectrum.
vox populi, vox dei
Japanese 120 GHz system used at 2008 Beijing Olympics
German 237 GHz System exceeding 100 Gbits/s
( An experiment probably not permitted in USA under the terms of FCC’s recently revised experimental license rules)