No Surprise: Clearwire to shift from WiMAX to LTE – But Who Will Fund It?

LTE is now being deployed in the U.S. by Verizon Wireless and MetroPCS, with AT&T and LightSquared to follow (the latter’s LTE deployment depends on resolving the GPS interference issue with the FAA and other U.S. government regulators).

Clearwire announced last year that it had begun testing LTE technology in Phoenix, AZ.  Those Clearwire LTE tests achieved data speeds of 120 megabits per second – 10 times faster than the fastest networks currently in operation. So we predicted at that time that Clearwire would opt for LTE rather than IEEE 802.16m (AKA WiMAX 2.0).  Now it's certain.

Today, Clearwire’s CEO John Stanton said that the company’s new LTE network would initially target densely populated urban areas in its existing 4G markets where current 4G usage is highest. The company said it will be able to use its existing WiMAX infrastructure in these markets to serve its LTE needs, delivering substantial capital cost savings compared with similar rollouts by rival operators.

“Our leadership in launching 4G services forced a major change in the competitive mobile data landscape,” Mr. Stanton said. “Now we plan to bring our considerable spectrum portfolio to bear to deliver an LTE network capable of meeting the future demands of the market.”

John Saw, Clearwire’s chief technology officer, said: “Our extensive trial has clearly shown that our ‘LTE Advanced-ready’ network design, which leverages our deep spectrum with wide channels, can achieve far greater speeds and capacity than any other network that exists today.

“In addition, the 2.5GHz spectrum band in which we operate is widely allocated worldwide for 4G deployments, enabling a potentially robust, cost-effective and global ecosystem that could serve billions of devices.”

In a sideswipe seemingly aimed at rival LightSquared, he added: “Since we currently support millions of customers in the 2.5GHz band, we know that our LTE network won’t present harmful interference issues with GPS or other sensitive spectrum bands.”

Clearwire said its LTE implementation will use Time Division Duplex (TDD) LTE technology. The LTE deployment will take advantage of the company’s all-IP network architecture and will involve upgrading base station radios and some core network elements. Clearwire said it will use multicarrier, or multichannel, wideband radios that will be carrier-aggregation capable.
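To make the carrier-aggregation point concrete, here is a minimal back-of-the-envelope sketch in Python. The spectral-efficiency figure and the TDD downlink fraction are illustrative assumptions of this author, not Clearwire-published parameters:

    # Rough peak-throughput estimate for an aggregated set of TDD-LTE carriers.
    # Both constants below are assumptions for illustration only.
    SPECTRAL_EFFICIENCY_BPS_PER_HZ = 6.0   # assumed peak (e.g., 64-QAM + MIMO)
    TDD_DL_FRACTION = 0.75                 # assumed downlink share of the TDD frame

    def peak_downlink_mbps(channel_mhz_list):
        """Sum the peak downlink rate over carrier-aggregated TDD channels."""
        total_hz = sum(mhz * 1e6 for mhz in channel_mhz_list)
        return total_hz * SPECTRAL_EFFICIENCY_BPS_PER_HZ * TDD_DL_FRACTION / 1e6

    # Two aggregated 20 MHz channels -> 180.0 Mbps under these assumptions:
    print(peak_downlink_mbps([20, 20]))

The point of the sketch: with deep spectrum holdings, aggregate bandwidth (and thus peak rate) scales by simply adding channels to the list.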

A key question is: where will Clearwire get the capital needed to build the planned LTE-TDD network?  The company said that its plans to build the new LTE network “are subject to raising additional capital,” which has been a problem for Clearwire for the past three years. Furthermore, Clearwire states that it will need “substantial additional capital” to continue running its WiMAX network “over the intermediate and long-term,” although the company says it currently has enough capital to maintain and operate the network “for at least the next 12 months.”

http://www.ft.com/intl/cms/s/0/eebc4628-be22-11e0-bee9-00144feabdc0.html#axzz1U2GQTZHb

http://www.fiercewireless.com/story/clearwire-deploy-lte-if-it-can-get-additional-funding/2011-08-03


Here is what CEO Stanton said about LTE during today’s earnings call:

Based on the success and insights from our now completed Phoenix trial, we plan to add LTE services to our present network in areas with high usage concentration where we can meet the needs of our current partners and other major carriers. Our carrier customers would use LTE capacity to supplement their offerings.

LTE will be implemented by overlaying most of our existing 4G network. We will not use Sprint’s Network Vision in our existing markets because it is substantially more expensive compared to the cost of overlaying our own network. We are in discussions with Sprint about using Network Vision in new build markets in the future.

We plan to maintain the WiMAX network for a significant period of time to serve our present customers. We believe WiMAX will continue to represent an appealing product for certain market segments.

There are two key reasons we can implement this strategy, our spectrum and our network. We have the largest, deepest spectrum position in the industry on the best and only globally coordinated band, differentiating ourselves from any other carrier or want-to-be 4G operator.

With an average of 160 megahertz of spectrum nationwide, we have more spectrum than even AT&T and T-Mobile combined. With all of our spectrum in one contiguous band, our spectrum depth enables us to deploy wider channels or fatter pipes to enhance the throughput speed and capacity.

Spectrum in the 2.5 gigahertz band is ideally suited for high-volume wireless data. High-frequency spectrum is much more conducive than low- or mid-band spectrum to meeting the usage and speed requirements of heavy tonnage users in densely populated markets.

The 2.5 gigahertz band is also the sweet spot of global TDD LTE evolution. Earlier this year, Clearwire cofounded the GTI consortium with China Mobile, Vodafone, SoftBank and Bharti. Clearwire was the only American carrier included in the consortium. The members of this consortium serve more than 1.3 billion customers, representing 4x the population of the U.S. This means that this group will be driving the lowest possible cost and greatest variety of devices.

http://seekingalpha.com/article/284461-clearwire-s-ceo-discusses-q2-2011-results-earnings-call-transcript

Opinion:  Clearwire’s announced plans for LTE, along with Sprint’s overt hints that it will also deploy that technology, sound the death knell for mobile WiMAX.  This almost guarantees that IEEE 802.16m (WiMAX 2.0) will be DoA.

Who is to blame for this market failure?   I’ll give you three guesses, but the 1st two don’t count!

Smart Energy Home Area Network (HAN) Consortium: HomePlug Alliance, Wi-Fi Alliance, HomeGrid Forum and ZigBee Alliance

Background:  Smart Energy Profile 2 (SEP 2) was selected in 2009 by the U.S. National Institute of Standards and Technology (NIST) as a standard profile for smart energy management in home devices. The profile is suitable for operation on a variety of IP-based technologies. This consortium establishes a communications technology-agnostic forum to unify and accelerate the realization of interoperable SEP 2 products through a joint test and certification program. The consortium intends to utilize the processes and best practices recommended by the Smart Grid Interoperability Panel (SGIP) for smart grid testing and certification programs.  The Consortium for SEP 2 Interoperability invites participation from other trade associations in communications technology that have an interest in developing an interoperable smart grid.

The Main Message:  The HomePlug Alliance, Wi-Fi Alliance, HomeGrid Forum and ZigBee Alliance have agreed to create a Consortium for SEP 2 Interoperability. The new consortium will enable organizations whose technologies support communications over Internet Protocol (IP) to certify SEP 2 according to a consistent test plan. Recognizing that the vision of interoperable SEP 2 devices across the network will only be realized with consistent certification and interoperability testing, the Consortium is being structured as an open organization. This cooperation among alliances builds on the work of many industries to bring smart grid benefits to consumers.    
      
The joint certification and test program will be used to certify wireless and wired devices that support IP- based smart energy applications and end-user devices such as thermostats, appliances and gateways. It will address devices operating on one or more of a variety of underlying connectivity technologies and provide the smart energy ecosystem – including utilities, product vendors and consumers – assurances of application and device interoperability.

“As the hybrid wireless and wired home of the future takes shape, the need for easy interoperability becomes key,” said Rob Ranck, president of HomePlug Alliance. “We are excited to bring HomePlug Alliance’s strong expertise to this collaboration and help provide a robust certification program.”

“The smart grid will be comprised of all types of devices connecting in many different ways, and we must ensure those devices interoperate and communicate seamlessly, regardless of how they connect,” said Edgar Figueroa, CEO of Wi-Fi Alliance. “This collaboration represents a groundbreaking step in the industry. Through this collaboration, the smart grid ecosystem will benefit from interoperable smart energy products that use some of today’s most popular connectivity technologies.”

“HomeGrid Forum, which is responsible for certifying and promoting G.hn technology, is excited to be working with other leading industry organizations to help accelerate the adoption of the Smart Grid throughout the world,” said Matt Theall, president of HomeGrid Forum. “We believe SEP 2 will be an important factor in ensuring that wired and wireless technologies combine together to deliver Smart Grid and other services inside and outside the home and we are committed to using our expertise to help drive industry adoption.”

“As the organization that initiated the home Smart Energy standards activity, the ZigBee Alliance is committed to ensuring that the years of work invested by a broad stakeholder community in developing it translates into success in the marketplace,” said Bob Heile, chairman of the ZigBee Alliance. “The ZigBee Alliance is pleased to contribute its considerable experience and expertise certifying Smart Energy products today to this new independent certification and testing consortium to ensure that consumers get smart products that are easy to use, independent of communications technology.”
  
Quick take: Will this end the standards conundrum for residential smart energy management systems?  Let’s see!

Related article: New Telco Services Enable the Connected Home

http://viodi.com/2011/07/25/new-telco-services-part-1/


About HomePlug Powerline Alliance (see Comment below)
The HomePlug Powerline Alliance, Inc is the leading industry-led initiative for powerline networking, creating specifications, marketing and certification programs to accelerate worldwide adoption of powerline networking. With HomePlug technology, the electrical wires in the home can now distribute broadband Internet, HD video, digital music and smart energy applications.

The Alliance works with key stakeholders to ensure HomePlug specifications are designed to meet the requirements of IPTV service providers, power utilities, equipment and appliance manufacturers, consumer electronics and other constituents. The HomePlug Certified Logo program is the powerline networking industry’s largest Compliance and Interoperability Certification Program and the program has certified over 240 devices. For more information, visit www.homeplug.org.

About the Wi-Fi Alliance
The Wi-Fi Alliance is a global non-profit industry association of hundreds of leading companies devoted to seamless connectivity. With technology development, market building, and regulatory programs, the Wi-Fi Alliance has enabled widespread adoption of Wi-Fi worldwide.

The Wi-Fi Alliance launched the Wi-Fi CERTIFIED™ program in March 2000. It provides a widely-recognized designation of interoperability and quality, and it helps to ensure that Wi-Fi enabled products deliver the best user experience. The Wi-Fi Alliance has completed more than 10,000 product certifications to date, encouraging the expanded use of Wi-Fi products and services in new and established markets. For more information, visit www.wi-fi.org.

About HomeGrid Forum
HomeGrid Forum is a global, non-profit trade group promoting the International Telecommunication Union’s G.hn and G.hnem standardization efforts for next-generation home networking and SmartGrid Applications. HomeGrid Forum promotes adoption of G.hn and G.hnem through technical and marketing efforts, addresses certification and interoperability of G.hn and G.hnem-compliant products, and cooperates with complementary industry alliances. For more information on HomeGrid Forum, please visit www.homegridforum.org or follow us on http://twitter.com/homegrid_forum.

About the ZigBee Alliance
ZigBee offers green and global wireless standards connecting the widest range of devices to work together intelligently and help you control your world. The ZigBee Alliance is an open, nonprofit association of approximately 400 organizations driving development of innovative, reliable and easy-to-use ZigBee standards. The Alliance promotes worldwide adoption of ZigBee as the leading wirelessly networked, sensing and control standard for use in consumer, commercial and industrial areas. For more information, visit www.zigbee.org.

Contacts:
Megan Shockney, The Ardell Group for HomePlug
[email protected]

+1 858-442-3492

Karl Stetson, Edelman for Wi-Fi Alliance
[email protected]

+1 206-268-2215

Brian Dolby, for HomeGrid Forum
+44 7899 914168

Sheila Lashford, for HomeGrid Forum
+44 7986 514240
[email protected]

Kevin Schader
ZigBee Alliance
[email protected]

+1 925-275-6672

Is it Lights Out for LightSquared? FAA Says: Revised Plan to Mitigate GPS Interference Not Good Enough!

LightSquared is the upstart carrier building a “wholesale” LTE network in the U.S., which is to be sold to other carriers that want to offer “4G” mobile data services.  Its proposed broadband wireless network would be of great value to wireline-only carriers like CenturyLink and XO Communications, which currently have no offering for “the mobile workforce.”  MSOs like Cox are also interested in cutting a deal with LightSquared to resell LTE.  Traditional carriers that aren’t building their own LTE networks might be enticed to pursue a wholesale relationship with LightSquared.  Last month, Sprint agreed to pursue a 15-year deal focusing on the sharing of network expansion and equipment costs.

Sprint plans to use LightSquared to help bring its network to 4G LTE, an improvement over its current, and slower, mobile WiMAX network (built by Clearwire). The company has promised to spend $5 billion to upgrade its network over the next three to five years after losing contract customers in 14 of the past 15 quarters. An upgraded network may give subscribers an incentive to stay with Sprint, rather than looking elsewhere for fast wireless speeds.

However, the LightSquared concept is feared to be a danger to the global positioning system (GPS). In June, according to NextGov, a Federal Aviation Administration advisory report said that the upper band allocated for use by LightSquared will result in “complete loss of GPS receiver functionality.” On June 30, LightSquared filed a new plan based on use of frequencies allocated to it that are not in question.  Many thought that plan would resolve the GPS interference complaint.

But LightSquared suffered a potential knockout blow this week when the Federal Aviation Administration (FAA) said that its proposal for a high-speed wireless network would “severely impact” the nation’s evolving aviation-navigation system, despite the company’s revised plan to quell concerns about interference.   The FAA estimated LightSquared’s interference would cost the aviation community $70 billion over the next 10 years, in part because of the loss of existing GPS safety and efficiency benefits, and the need to retrofit aircraft.

“Billions of dollars in existing FAA and GPS user investments would be lost,” the agency said in the report. The agency report was examining questions about the LightSquared proposal presented by the national coordination office director for the Space-Based Positioning, Navigation and Timing Executive Committee, which is part of the Executive Office of The President.

LightSquared’s Executive Vice President of Regulatory Affairs Jeff Carlisle disagrees with the FAA assessment, saying it doesn’t accurately reflect its proposed changes and seems to be evaluating a plan that is no longer on the table.

“Simply put, the vast majority of the interference issues raised by this report are no longer an issue. We look forward to discussing this with the FAA,” Mr. Carlisle said.

The seven-page FAA assessment said LightSquared’s plan could also hurt U.S. leadership in international aviation by eroding confidence in the U.S.-owned global positioning system. That would be despite “presidential commitments” to the International Civil Aviation Organization about the continued safety and availability of GPS technology, the FAA said.

On June 20, LightSquared offered a new plan that it said wouldn’t interfere with the vast majority of GPS systems. It would use just the portion of its frequencies that are farthest away from GPS signals and would transmit weaker signals. Even with those changes, LightSquared’s network could still affect some precision GPS systems, which are generally used by farmers, the aviation industry and others.

At a June 23 congressional hearing about LightSquared’s broadband-spectrum proposal, several lawmakers cited concern about the company’s plans. The company now says the hearing was addressing its original plan, not its proposed changes. One key House lawmaker said at the hearing that the Federal Communications Commission shouldn’t approve a service that disrupts or burdens GPS devices in the aviation industry.

In the FAA’s recent assessment, it said LightSquared’s most recent proposal would “severely impact” NextGen, an FAA initiative to build a new national air-traffic control system that calls for satellite technology to replace ground-based facilities. NextGen, officially called the Next Generation Air Transportation System, relies heavily on GPS-based technologies. LightSquared’s interference would not only erode existing GPS safety and efficiency benefits, but would also force the FAA to replan NextGen investments, the FAA said, resulting in additional development costs and delays.

The FAA would have to return to dependency on ground-based aviation aids and billions of dollars in existing agency and GPS-user investments would be lost, the agency said.

Read more: http://online.wsj.com/article/SB10001424053111904800304576472361793662904.html#ixzz1TVdNhlME

8 Telcos to test "cognitive" wireless technology from xG Technology (Comment on IEEE 802.22 WRAN standard)

A group of eight telecom providers will begin field testing a mobile technology that relies on unlicensed spectrum and frequency hopping to optimize broadband connectivity. The carriers, which are located across the country — in states ranging from California to Florida — will build on an earlier VoIP trial of xG Technology’s “cognitive” wireless platform, which uses spectrum in the 900 MHz and 5.8 GHz bands and avoids line interference by jumping between bands. xG Technology is expected to release a chip in September that supports data access at speeds of 3 Mbps.

xG Technology is pioneering what the company calls “cognitive” wireless technology that can sense interference from other devices using the same spectrum and hop away from those frequencies, said Chris Whiteley, vice president of business development for xG Technology, in an interview. Initially the company is targeting spectrum between 902 and 928 MHz—a band used in the U.S. for garage door openers, baby monitors, cordless phones and some video surveillance. A new version of a chip that uses technology developed by xG is scheduled for availability in September; it will also support communications in 100 MHz of unlicensed spectrum in the 5.8 GHz range and will be able to shift between the two spectrum bands within 30 milliseconds.
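xG has not published its algorithm, but the sense-and-hop behavior described above can be sketched roughly as follows. This is a Python illustration under stated assumptions; the channel lists, the threshold, and the measure_interference() hook are hypothetical placeholders, not xG's actual design:

    # Minimal sense-and-hop sketch across two unlicensed bands.
    # measure_interference(band, ch) is a hypothetical hook into the radio's
    # sensing hardware, returning measured interference power in dBm.
    CHANNELS = {
        "900MHz": [902.5, 905.0, 910.0, 915.0, 920.0, 927.5],   # MHz, illustrative
        "5.8GHz": [5745.0, 5765.0, 5785.0, 5805.0, 5825.0],
    }
    INTERFERENCE_THRESHOLD_DBM = -85.0   # assumed sensing threshold

    def pick_channel(measure_interference):
        """Prefer 900 MHz (better building penetration); fall back to 5.8 GHz."""
        for band in ("900MHz", "5.8GHz"):
            for ch in CHANNELS[band]:
                if measure_interference(band, ch) < INTERFERENCE_THRESHOLD_DBM:
                    return band, ch
        raise RuntimeError("no usable channel found in either band")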

Radio signals in the 900 MHz range penetrate buildings very well, Whiteley noted. “But as soon as you step outside, 5.8 GHz is a great line of sight [option] and you can offload capacity for outdoor use.”
In the future, the technology could be used in other spectrum bands, such as the TV white spaces band, Whiteley said.

xG has field tested its technology in a 32-square-mile network in Ft. Lauderdale, Fla. supporting mobile VoIP services and also has a trial of a voice network underway with Texas-based independent telco Townes Telecommunications. Whiteley said xG focused on supporting voice service initially because in comparison with data transmission “getting VoIP to work correctly on an IP mobile network is the tougher challenge.”

The new xG chip coming out in September will also support data services, and companies such as Townes Telecommunications that have already been working with xG will be the first to deploy devices with the new chips. xG does not manufacture chips but develops technology which will be implemented on a chip.

Telcos that have signed agreements to evaluate the xG Technology include Redi-Call Communications of Delaware, TelAtlantic Communications of Virginia, Cook Telecom of California, Silver Star Telephone Company of Wyoming, Venture Communications Cooperative of South Dakota, Smart City Telecom of Florida, and Public Service Cellular of Georgia, as well as Townes Telecommunications.

http://connectedplanetonline.com/independent/news/Eight-small-telcos-ink-deals-to-evaluate-new-broadband-wireless-technology-0725/


Comment:  Cognitive radio research has been ongoing for many years.  The IEEE 802.22 Wireless Regional Area Network (WRAN) standard was based on it.  Yet that recently ratified standard is apparently Dead on Arrival (DoA) as no networks based on it have been deployed or even announced. 

Not only must the cognitive radios detect interference and defer use of those bands, they must also re-negotiate use of the same channel on a time-shared basis, or else hop to a different channel.  That hasn’t happened yet.  Good luck to these eight small telcos that are trialing xG Technology’s cognitive radios.

Here’s the IEEE Standards Association press release on the IEEE 802.22 standard:

IEEE 802.22™-2011 Standard for Wireless Regional Area Networks in TV Whitespaces Completed

PISCATAWAY, N.J.–(BUSINESS WIRE)–IEEE, the world’s largest professional association advancing technology for humanity, today announced that it has published the IEEE 802.22™ standard. IEEE 802.22 systems will provide broadband access to wide regional areas around the world and bring reliable and secure high-speed communications to under-served and un-served communities.

This new standard for Wireless Regional Area Networks (WRANs) takes advantage of the favorable transmission characteristics of the VHF and UHF TV bands to provide broadband wireless access over a large area up to 100 km from the transmitter. Each WRAN will deliver up to 22 Mbps per channel without interfering with reception of existing TV broadcast stations, using the so-called white spaces between the occupied TV channels. This technology is especially useful for serving less densely populated areas, such as rural areas, and developing countries where most vacant TV channels can be found.

IEEE 802.22 incorporates advanced cognitive radio capabilities including dynamic spectrum access, incumbent database access, accurate geolocation techniques, spectrum sensing, regulatory domain dependent policies, spectrum etiquette, and coexistence for optimal use of the available spectrum.
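As a rough illustration of how those capabilities combine, a base station might clear a TV channel only when both the incumbent geolocation database and local spectrum sensing agree the channel is vacant. The Python sketch below is this author's simplification; the database and sensing interfaces are hypothetical, not taken from the standard's text:

    # Simplified 802.22-style channel clearance: database check plus sensing.
    # incumbent_db and sense_channel are hypothetical placeholders.
    def channel_is_usable(tv_channel, lat, lon, incumbent_db, sense_channel):
        """Usable only if the geolocation database shows no protected incumbent
        at this location AND local spectrum sensing finds the channel idle."""
        if incumbent_db.has_incumbent(tv_channel, lat, lon):
            return False                     # protected TV broadcaster present
        return sense_channel(tv_channel)     # True if sensing sees an idle channel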

The IEEE 802.22 Working Group started its work following the Notice of Inquiry issued by the United States Federal Communications Commission on unlicensed operation in the TV broadcast bands.

Additional information on the standard can be found at the IEEE 802.22 WG page. To purchase the standard, visit the IEEE Standards Store.

http://www.businesswire.com/news/home/20110726007223/en/IEEE-802.22TM-2011-Standard-Wireless-Regional-Area-Networks

AT&T adds 202,000 U-verse TV subscribers in 2nd Quarter – It’s now coming to S.F.!

U-verse Subscribers and Revenues Jump in 2nd Quarter

AT&T is now the eighth-largest pay-TV provider in the U.S. after netting 202,000 U-verse subscribers in the second quarter for a total of 3.4 million.  The venerable U.S. carrier gained 439,000 U-verse broadband subs from a year ago.  That’s an increase of 36% from one year ago! Impressively, U-verse revenue jumped 57% from a year earlier.

 “U-verse has transformed our consumer business,” said Chief Financial Officer John Stephens. 

Author’s Note:  AT&T’s total video subscribers, which include U-verse TV and bundled satellite customers (AT&T resells Dish Network), reached 5.26 million at the end of the reported quarter (representing 21.5% of households served). By contrast, Verizon had 3.7 million FiOS TV customers at the end of March, 2011, according to its first-quarter earnings release.  This will be updated soon when VZ releases their 2nd Quarter earnings report.


AT&T said it lost 451,000 traditional DSL customers (mostly with ADSL + POTS service, like this author).

Author’s Comment: It appears AT&T has no plans to retain old DSL subs, but instead to convert them to U-verse based high speed Internet when it is available in their area.

http://www.multichannel.com/article/471330-AT_T_Reels_In_202_000_U_verse_TV_Customers.php


Sanford C. Bernstein analyst Craig Moffett said AT&T’s quarterly U-verse net additions were “not far from consensus [expectations] of 199,000 in what is traditionally a seasonally soft quarter.” Barclays Capital analyst James Ratcliffe also said the gains were in line with expectations.

Miller Tabak analyst David Joyce had predicted 220,000 U-verse TV user additions, but the actual result didn’t make him change his predictions for overall pay TV subscriber growth in the latest quarter.

Joyce projects industry-wide net adds of 97,000 in the second quarter driven by gains for AT&T and Verizon (+170,000), as well as satellite TV firms DirecTV (+100,000) and Dish Network (+50,000). But he once again expects cable operators to post subscriber losses, which he estimates at 317,000 for publicly traded and 443,000 for privately held companies.

The user growth he expects would make for the third consecutive quarter of pay TV subscriber increases after two quarters of declines last year kicked off a debate over whether some consumers may be dropping their cable packages to substitute them with online video options.

http://www.hollywoodreporter.com/news/att-adds-202000-u-verse-213941

San Francisco Residents to get U-verse

AT&T can offer U-verse in areas where the city/municipality permits it, the copper lines are good enough for high-speed DSL transmission, and the video server can be placed close enough to the homes being served.

Three years after its initial proposal, the San Francisco Board of Supervisors voted 6 to 5 on July 19th to let AT&T deploy its U-verse TV and broadband Internet service.  AT&T got clearance to install hundreds of utility boxes (AKA cabinets) on city sidewalks and alleyways without first having to undergo a lengthy and costly environmental analysis. The metal cabinets, which measure 4 feet tall, 4 feet wide and 2 feet deep, will house telecommunications equipment for the U-verse triple play service bundle, which can include Internet access at speeds up to 26 Mbps along with digital TV and VoIP (voice over IP).  The cabinets that AT&T wants to install are much larger than its existing boxes in the city. Neighborhood activists had complained that the cabinets would block sidewalks, attract graffiti and clash with the dense scale and historic character of some of San Francisco’s communities.

While AT&T now has environmental clearance to install up to 726 boxes, the company said it would put in no more than 495 without going back to the Board of Supervisors for permission to install more of those boxes in the future.

The cabinets are used to interconnect AT&T’s fiber network with copper wires that go the rest of the way to individual homes. The carrier will still have to get approval for each box, but won’t have to undergo a study of the total impact of the equipment on the city’s environment.

“This decision means we’re finally going to be able to bring competition and choice to San Francisco,” said Marc Blakeman, AT&T regional vice president.

 Let’s see how quickly AT&T moves to deploy it in the city.

http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2011/07/20/BA2T1KCHSC.DTL

Observation and Comment on U-verse

For quite some time, we’ve been waiting for U-verse to come to Santa Clara County.  It would break the monopoly Comcast now enjoys on pay TV and higher-speed Internet service.  In many condos in Santa Clara (including the one I own), we can’t put satellite dishes on the roof.  In other cases, heavily tree-covered homes don’t have a direct line of sight for a satellite dish.  So anyone who wants to watch live sports must buy digital cable from Comcast, since there are very few games broadcast on free over-the-air digital TV these days.  Even the MLB final league championship playoff games were on TBS and not free TV!

One of our very active IEEE ComSocSCV Discussion list members had U-verse installed in Los Altos, CA last year.  He says he is very happy with the service, despite some initial outages after installation.  A close companion of this author recently had U-verse installed in her Santa Clara apartment and it worked great right out of the box!  Her TV reception was crystal clear with a larger choice of channels than available with Comcast Digital Cable for the same price. 

This author is quite anxious to try U-verse, but we’re a bit worried about AT&T customer service, which has been less than stellar for DSL and even POTS.  Initial calls for service are outsourced to India and it generally takes a long time to resolve any technical problem.  We wonder if U-verse customer service will be handled differently than traditional DSL and POTS.  We certainly hope so!

U-verse References (by this author)

1. Increased Video traffic necessitates AT&T to cap DSL Internet + U-Verse

http://viodi.com/2011/03/13/increased-video-traffic-necessitates-att-to-…

2. A Perspective of Triple Play Services: AT&T U-Verse vs Verizon FiOS vs Comcast Xfinity

https://techblog.comsoc.org/2010/10/10/a-perspective-of-triple-play-service…

3. AT&T’s U-verse Build-Out Over by Year end (Source: DSL Reports)

https://techblog.comsoc.org/2011/05/20/ats-u-verse-build-out-over-by-year-end-source-dsl-reports 


Addendum:  In its July 22, 2011 earnings announcement, Verizon reported it had added 189,000 FiOS Internet and 184,000 FiOS TV customers.

ITU-T FG Cloud 6th Meeting: Progress on 7 Output Documents

The sixth ITU-T Focus Group on Cloud Computing (FG Cloud) meeting took place in Geneva, Switzerland, from June 27 to July 1, 2011. Forty participants, representing 18 organizations, submitted 92 contributions, input liaisons and presentations to this meeting.  Meeting documents are only available to ITU-T members with a TIES account.

General information about the FG Cloud is available on its web site: http://www.itu.int/ITU-T/focusgroups/cloud/

The main results of the sixth FG Cloud meeting were the progression of seven output documents:
1. Introduction to the cloud ecosystem: definitions, taxonomies, use cases, high level requirements and capabilities.
2. Functional requirements and reference architecture
3. Infrastructure and network enabled cloud
4. Cloud security, threat & requirements
5. Benefits of cloud computing from telecom/ICT perspectives
6. Overview of SDOs involved in cloud computing
7. Cloud resources management gap analysis (initial draft for review)

These documents are in various stages of development.  Some of them are fairly stable, others are not.


This author believes that two of the above output documents will be especially important for cloud network providers and hardware/software vendors.

Here is a very brief high level overview of each of those two documents (they are works in progress):

Functional Requirements and Reference Architecture

Cloud architecture must meet several requirements to enable sustained innovation and development of cloud services. With multiple stakeholders involved, the architecture must be flexible enough to fit the needs of infrastructure providers, service providers and service resellers. It must enable multiple models and use cases, some currently known and others yet to be envisioned. Currently known models include IaaS, PaaS and SaaS, possibly used in combination; a cloud provider must be able to offer all or some of these services using the same architecture. For private and hybrid cloud operations, cloud services must appear like intranet services: a user must be able to access resources using the same domain names as on the intranet, and hosts and resources that have been migrated from private to public clouds should be accessible transparently, regardless of where they are currently hosted. Finally, cloud architecture must enable early detection, diagnosis and repair of infrastructure or service-related problems. Consumers may have little to no control over ensuring things run correctly, so the service rides on the provider’s ability to fix issues quickly.

Telecom cloud computing reference architecture should consider four entities (a minimal code sketch follows this list):

1. Clients: users, Internet applications or software clients; each has corresponding functions to interwork with cloud services.
2. Network: also called the “pipeline,” the network will become more intelligent in cloud computing. Because computing and storage are aggregated at network data centers, the network architecture will likely have to change; more study is needed here. All cloud interworking activity happens over the network.
3. The cloud itself: it generally includes three layers: physical data center (DC), cloud OS, and service capabilities and portal. It provides APIs to clients or to other clouds. The cloud is complicated because of the variety of its technologies and service types. Virtualization, distributed computing and multi-tenancy are all methods of organizing computing and storage, and can be thought of as cloud core functions (here called the “OS”). All types of services run above the cloud “OS”: IaaS, PaaS and SaaS can be mapped to instances running on that “OS,” each with a different service form.
4. External interworking entities (e.g., management platforms, other clouds): the cloud services platform must consider how to integrate with legacy operating platforms and how to interconnect with other clouds (of the same operator or of different operators).
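Here is a minimal Python sketch of those four entities and their relationships; the class and method names are this author's illustration, not FG Cloud terminology:

    # Illustrative model of the four reference-architecture entities.
    class Client:                       # 1. users, Internet apps, software clients
        def call_api(self, cloud, request): ...

    class Network:                      # 2. the "pipeline" carrying all interworking
        def carry(self, src, dst, traffic): ...

    class Cloud:                        # 3. physical DC + cloud "OS" + service layers
        def __init__(self):
            self.layers = ["physical DC", "cloud OS", "service capabilities/portal"]
        def expose_api(self): ...       # APIs offered to clients and to other clouds

    class ExternalEntity:               # 4. management platforms, other clouds
        def interconnect(self, cloud): ...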

Infrastructure and Network Enabled Cloud

The ITU FG Cloud participants believe that network service providers have a unique opportunity to bundle or combine network and IT resources to provide cloud computing and/or storage services. Network service providers can also leverage their network assets to ensure excellent network availability and performance for secure end-to-end cloud services.  Another opportunity for service providers is to evolve network resource allocation and control to be more dynamic, in order to meet the need to provision on-demand cloud services.

The activity of this work area will be focused on:
-a) the ability to link existing network services, Internet connectivity, and L2/L3 VPNs efficiently to public or private cloud services.
-b) the ability to link flexible L2 and L3 network management with cloud technology, forming an integrated cloud infrastructure enabling cloud services.

The infrastructure and network enabled cloud can deliver IT infrastructure (especially virtualized IT resources) as a service. Virtualization allows the splitting of a single physical piece of hardware into independent, self-governed environments, which can be extended in terms of CPU, RAM, disk, I/O and other elements. The infrastructure includes servers, storage, networks, and other hardware appliances.
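As a hedged illustration of that resource-splitting idea, the following Python sketch partitions one physical host into isolated VM slices (all names and numbers are illustrative):

    # Illustrative partitioning of one physical host into isolated VM slices.
    from dataclasses import dataclass

    @dataclass
    class Slice:
        vcpus: int
        ram_gb: int
        disk_gb: int

    def partition(host_vcpus, host_ram_gb, host_disk_gb, slices):
        """Admit the requested VM slices only if they fit the physical host."""
        if (sum(s.vcpus for s in slices) > host_vcpus or
                sum(s.ram_gb for s in slices) > host_ram_gb or
                sum(s.disk_gb for s in slices) > host_disk_gb):
            raise ValueError("requested slices exceed physical capacity")
        return slices

    # A 16-vCPU / 64 GB / 2 TB host split into two independent environments:
    partition(16, 64, 2000, [Slice(4, 16, 500), Slice(8, 32, 1000)])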

The common characteristics of infrastructure and network enabled cloud include:
-Network centric: The framework consists of many computing resources, storage resources, and other hardware devices that connect with each other through the network.
-Service provisioning: Infrastructure & network enabled cloud provides a multi-level, on-demand service mode according to the individualized demands of different customers.
-High scalability/reliability: Infrastructure & network enabled cloud can adapt to changing customer requirements quickly and flexibly, and achieves high scalability and high reliability through various mechanisms.
-Resource pooling/transparency: The underlying resources (computing, storage, network, etc.) of infrastructure and network enabled cloud are transparent to the customer, who does not need to know how and where resources are deployed.


Next meeting:  The seventh FG-Cloud meeting is scheduled for  September 26-30, 2011 in Seoul, Korea.   September 26th will be a Joint Meeting with ISO/IEC JTC1 and NIST.


References:  This author has written many articles about the Cloud Computing standards (or the lack thereof).  Here are links to a few of them:

http://viodi.com/2011/06/23/cloud-leadership-forum-opportunities-obstacles-to-cloud-adoption/

https://techblog.comsoc.org/2010/12/10/whats-the-uni-nni-and-network-infrastructure-needed-for-cloud-computing

IEEE P2302 Inter-Cloud Working Group Kickoff Meeting: July 15, 2011

Disclaimer: This is not an official meeting report.  The author is not an officer of this committee.  He attended this meeting (on his own time and expense) as an observer representing IEEE ComSoc, where he is a full-time volunteer.

Executive Summary

The IEEE P2302 WG held its first meeting on Friday afternoon, July 15th in Santa Clara, CA.  Approximately 16 people, including two IEEE Standards Association employees, attended the meeting.  There were two presentations and some discussion (much of it precipitated by this author).  The Chairman of the IEEE Cloud Initiatives also spoke.

1.  The scope, terms of reference, and problems to be solved were addressed in a presentation by WG Chair David Bernstein. 

2.  The goals, objectives and output whitepaper of the Japan based Global Inter-Cloud Technology Forum (GICTF) was presented by Kenji Motohashi of NTT Data.

3.  Steve Diamond, IEEE Cloud Standards chairman, hosted this meeting at the EMC Santa Clara facility. Steve welcomed the attendees and made a few concluding remarks about the near-term work plan.

Background article:  https://techblog.comsoc.org/2011/04/07/ieee-cloud-computing-initiative-will-it-have-legs


Abstract

The proposed P2302 standard will define topology, functions, and governance for cloud-to-cloud interoperability and federation. Topological elements include clouds, roots, exchanges (which mediate governance between clouds), and gateways (which mediate data exchange between clouds). Functional elements include name spaces, presence, messaging, resource ontologies (including standardized units of measurement), and trust infrastructure. Governance elements include registration, geo-independence, trust anchor, and potentially compliance and audit. The standard does not address intra-cloud (within cloud) operation, as this is cloud implementation-specific, nor does it address proprietary hybrid-cloud implementations.

Scope:  The working group will develop the Standard for Intercloud Interoperability and Federation (SIIF), covering the topological, functional, and governance elements described in the Abstract above.

Purpose: This standard creates an economy amongst cloud providers that is transparent to users and applications, which provides for a dynamic infrastructure that can support evolving business models. In addition to the technical issues, appropriate infrastructure for economic audit and settlement must exist.
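To make the topology concrete, here is a small Python sketch of the elements named in the scope (clouds/gateways, exchanges, roots); the structure is this author's reading of the PAR, not normative P2302 content:

    # Illustrative model of the P2302 topology elements.
    class Gateway:
        """Mediates data exchange between its own cloud and other clouds."""
        def __init__(self, cloud_name):
            self.cloud_name = cloud_name

    class Exchange:
        """Mediates governance (registration, trust) between member clouds."""
        def __init__(self):
            self.members = []
        def register(self, gateway):
            self.members.append(gateway)

    class Root:
        """Trust anchor / naming authority from which exchanges hang."""
        def __init__(self):
            self.exchanges = []

    root = Root()
    ex = Exchange()
    root.exchanges.append(ex)
    ex.register(Gateway("cloud-A"))   # two clouds federate via one exchange
    ex.register(Gateway("cloud-B"))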

P2302 WG web site:   http://grouper.ieee.org/groups/2302/


David Bernstein’s Inter-Cloud Introduction presentation

David indicated an emerging view on inter-cloud would come from three types of organizations:

1. Standards organizations and industry associations/forums.

2. Research institutes and open source software organizations.

3. Public test beds.

Mr. Bernstein cited an inter-cloud use case for storage roaming, where a client could gain access to “federated cloud” storage with the cloud storage provider synchronizing the data stored in the cloud(s) to the mobile access device.

A proposed Inter-Cloud Reference Network Topology was presented, which focused on two cloud network elements: an inter-cloud root and an inter-cloud exchange.  David said there was a lot of research work going on in this area.  In response to a question of how meeting attendees could gain access to those related research papers, he said they are now on IEEE Xplore, but would eventually be uploaded to the “IEEE P2302 Collaboration web site.”  The timing for that was not specified, but a user ID and password will be required for access to those and other WG documents.  It was noted that copyright agreements with the authors would be needed prior to uploading.

David noted that a “Registration and Trust Authority” for inter-cloud was urgently needed.  It would interact with other similar authorities, e.g. an IEEE or GICTF Registration Authority.  Trust architecture and functional elements also must be defined.

A standardized conversational protocol between cloud gateway entities is also needed.  David suggested that might be XMPP or perhaps SIP.  No details were given for why those might be a good choice.
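If XMPP were chosen, the gateway-to-gateway conversation might ride on ordinary message stanzas. The Python sketch below builds one with the standard library; the gateway addresses and the resource-request payload element are hypothetical illustrations, since P2302 has defined nothing here yet:

    # Hypothetical XMPP-style message stanza for gateway-to-gateway negotiation.
    import xml.etree.ElementTree as ET

    msg = ET.Element("message", {
        "from": "gateway@cloud-a.example",    # hypothetical gateway JIDs
        "to": "gateway@cloud-b.example",
        "type": "normal",
    })
    payload = ET.SubElement(msg, "resource-request")  # hypothetical payload element
    payload.set("vcpus", "8")
    payload.set("storage-gb", "500")
    print(ET.tostring(msg, encoding="unicode"))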

A high level overview of the P2302 deliverable outputs was presented by David.  He identified three Inter-cloud work items for this small WG:

1. Functional overview and functional description of each inter-cloud network element

2. Specification of protocols and formats

3. Coordination of test beds and open source software activities

Discussion:

It was noted that there had not been much, if any, work done in these areas by other cloud computing SDOs.  This author suggested compiling a list of relevant cloud SDOs and the inter-cloud work they are doing; after evaluating that list, the WG could request formal liaisons with those SDOs.  A first cut at such a cloud computing SDO list is at:

https://techblog.comsoc.org/2011/07/15/cloud-computing-standards-dev…


Kenji Motohashi’s presentation on Global Inter-Cloud Technology Forum (GICTF)

Motohashi-san stated that the goal of the GICTF was to promote global standardization of the “inter-cloud system.”  It is expected that more workloads (and storage) will move from one cloud to another, yet be accessed by the same entity. Therefore, solid standards are needed for inter-cloud interfaces.

Note: The GICTF web site states: “We aim to promote standardization of network protocols and the interfaces through which cloud systems interwork with each other, and to enable the provision of more reliable cloud services than those available today.”   http://www.gictf.jp/index_e.html

The first output of the GICTF was a white paper: Use Cases and Functional Requirements for Inter-Cloud Computing, August 9, 2010.  It is available for free download at: 

http://www.gictf.jp/doc/GICTF_Whitepaper_20100809.pdf

Kenji noted that provisioning, control, monitoring and auditing (for SLAs and billing) across multiple clouds are all urgently needed.  Inter-cloud architecture and standardized interfaces must be defined/specified.  This will require a non-trivial set of problems to be solved by GICTF and related SDOs; two efforts that were mentioned were the OMG Telecom Cloud and the NIST Cloud Computing Standards Roadmap.  Kenji thought that the ITU-T FG Cloud might be a good organization to collaborate with on inter-cloud, but he wasn’t up to date on the work they were doing (Mr. Hiroshi Sakai of NTT attended the last FG Cloud meeting).


P2302 Workplan:

-Have a conference call in 2 weeks. 

Author’s Note: Hopefully, the P2302 web site will be updated with the two presentations made at this meeting, the relevant inter-cloud research documents alluded to, and the Chair’s July 15th meeting report (and/or meeting minutes from the Secretary) by then.  And WG members will each receive a user name/password to access that content.

-Next f2f WG meeting will be in about 6 weeks.  Dial in access (via the web) will be possible for those who can’t be physically present at the meeting.  However, a meeting host is needed and no one in the room volunteered.


Observation and Closing Comment:

This meeting was scheduled for four hours, but it lasted less than two (it adjourned at 3:35 pm, but there was a 40-minute break).  The agenda was not circulated in advance, there was no call for contributions, and Motohashi-san told us he only created his presentation the same day!  There was a reference made to all the inter-cloud research papers presented at conferences, but no list of such papers was presented.

So there seems to be a huge mismatch between the amount of work that needs to be done and the very little that was accomplished at this first P2302 meeting.  It seemed almost like an attempt to identify the problem set, but not seriously undertake the solution (which would involve a tremendous amount of work and collaboration with other SDOs).

It appeared that the majority of attendees at this meeting were curiosity seekers rather than folks with a desire to contribute to the huge standards project sketched out.  In particular, the major cloud vendors (Amazon, Rackspace, Microsoft Azure, etc.) were either silent or not present (the attendance list was not available to the attendees).  I don’t believe the P2302 Chairman or the IEEE Cloud Initiative Chairman recognize the “heavy lifting” type of work that has to be done, or the time-consuming process of liaising with other like-minded standards organizations.  Unless there is a huge increase in dedicated delegates, concurrent with more aggressive leadership and organization, this standards initiative will fall by the wayside.

Cloud Computing Standards Development Organizations (SDOs) and their output documents

At the recently concluded IEEE P2302 Inter-Cloud Interoperability Working Group meeting, it was noted that there are many SDOs working on cloud computing whitepapers, standards and specifications. The P2302 WG is interested in those that are addressing inter-cloud aspects including communications, policy, protocols, or security for potential collaboration.  Inter-cloud scenarios include: public to public, public to private (and vice-versa), private to private cloud interconnections for both computing and storage.

Here is an incomplete list of Cloud Computing SDOs along with their output documents and work in progress:

NIST National Institute of Standards and Technology

Cloud Computing Project: NIST’s role in cloud computing is to promote the effective and secure use of the technology within government and industry by providing technical guidance and promoting standards.

Outputs:

-NIST definition of Cloud Computing   v15  2009-10
 
-NIST Cloud Computing Standards Roadmap  Working draft 12  2011-05-24


ISO/IEC JTC1 SC38

Distributed Application Platforms and Services: Study group on Cloud Computing is addressing:

-Terms of Reference of the Study Group on Cloud Computing:

-Provide a taxonomy, terminology and value proposition for Cloud Computing

-Assess the current state of standardization in Cloud Computing within JTC 1 and in other SDOs and consortia, beginning with document JTC 1 N 9687.

-Document standardization market/business/user requirements and the challenges to be addressed.

-Liaise and collaborate with relevant SDOs and consortia related to Cloud Computing

-Hold open meetings to gather requirements as needed from a wide range of interested organizations.

-Provide a report of activities and recommendations to SC 38 including: reviewing current concepts, characteristics, definitions, use cases, reference architecture, types and components used in Cloud Computing; a comparison of Cloud Computing to related technologies; analysing standardization activities for Cloud Computing in other standards organizations.

Output:  Draft Study Group on Cloud Computing report  V.2  2011-05


Cloud Computing Use Case Discussion Group
 
This open discussion group exists to define use cases for cloud computing. They are considering: definitions and taxonomy, use case scenarios, customer scenarios, developer requirements, security scenarios & use cases, and recommendations for SLAs.

Output:  Cloud Computing Use Case whitepaper v4 July 2010  


Global Inter-Cloud Technology Forum (GICTF)

This Japan-based forum is trying to promote standardization of network protocols and the interfaces through which cloud systems inter-work with each other, and to enable the provision of more reliable cloud services.

Output:   Use cases and Functional Requirements for Inter-Cloud Computing  White paper  v1  2010-08


ETSI Cloud

In June 2006, ETSI technical committee GRID was created; it held its first meeting in September.
TC GRID’s task is to address issues associated with the convergence of Information Technology (IT) and telecommunications, paying particular attention initially to the lack of interoperable Grid solutions in situations which involve contributions from both the IT and telecommunications industries.

In 2008, TC GRID undertook a survey of existing stakeholders in the Grid domain, for which the European Commission (EC) provided financial support. A test frame for Grid standards is being developed in collaboration with ETSI’s Centre for Testing & Interoperability (CTI).

There is also an increasing interest in addressing the convergence between ETSI technical committees GRID and TISPAN (Telecommunication and Internet converged Services and Protocols for Advanced Networking).

Outputs: 

-Use Cases for Cloud Service Scenarios  Technical Report  (TR) v1  2010-2011

-Standardization requirements for cloud services (ETSI TR102 997)  TR  v1  2010-2011


Distributed Management Task Force (DMTF)

DMTF is a not-for-profit association of industry members dedicated to promoting enterprise and systems management and interoperability. One of the key standards it maintains is the Common Information Model (CIM).

Outputs:
-Use Cases and Interactions for Managing Clouds (DSP-IS0103)  White paper  2010-06-18

-Interoperable Clouds (DSP-IS0101)  White paper  v1.0.0  2009-11-11

-Architecture for Managing Clouds (DSP-IS0102)  White paper  v1.0.0  2010-06-18

-Cloud Management Interface Requirements on Protocol, Operations, Security & Message Specification  v1.0.0

-Cloud Service Management Models  Specification  v1.0.0  expected by 2011-12-31

-Open Virtualization Format (DSP0243)  Standard  v1.0  2009-02

-In August 2010, the Deltacloud API specification for Apache Deltacloud was submitted to the DMTF as a candidate standard for inter-cloud operations.


Cloud Security Alliance (CSA)

The Cloud Security Alliance (CSA) is a not-for-profit organization with a mission to promote the use of best practices for providing security assurance within Cloud Computing, and to provide education on the uses of Cloud Computing to help secure all other forms of computing. The Cloud Security Alliance is led by a broad coalition of industry practitioners, corporations, associations and other key stakeholders.

Outputs:

-Top Threats to Cloud Computing  White paper  v1.0  2010-03
 
-Security Guidance for Critical Areas of Focus in Cloud Computing  White paper  v3  Q4 2011
 
-CSA Cloud Control Matrix Trusted Cloud Initiative  Controls framework  v1.1  2011-2
 
-Trusted Cloud Initiative  Certification  v1  Q4 2010
 
-Cloud audit / cloud trust protocols  white paper  v1  Q4 2011


TeleManagement Forum (TM Forum)

The primary objective of TM Forum’s Managing Cloud Services Initiative is to help the industry overcome the barriers to cloud adoption and assist the growth of a vibrant commercial marketplace for cloud based services. In May 2010, TMF released TMF523, Single Sign-On and Single Sign-Off for the OSS World Release 1.0, including SSO business scenarios and use cases in a Cloud Computing Environment.

TMF is currently working on the cloud services management to cover:

-Cloud Business Process Framework

-Cloud Service Definitions

-Cloud Billing Interest Group

Outputs:  

-Managing Cloud Services SLA  v1  2010-2011
 
-Single Sign-On Business Agreement  V0.10  2010-05


Open Grid Forum (OGF)

The Open Cloud Computing Interface working group (OCCI-WG) of the OGF was established in 2009. The purpose of this group is the creation of a practical solution to interface with Cloud infrastructures exposed as a service (IaaS). It focuses on the creation of an API for interfacing with “IaaS” Cloud computing facilities that is sufficiently complete to allow the creation of interoperable implementations.

Output:  Open Cloud Computing Interface Specification  2009-09


Storage Networking Industry Association (SNIA)

The common goal of the SNIA is to promote acceptance, deployment, and confidence in storage-related architectures, systems, services, and technologies, across IT and business communities.

Outputs:

-Cloud Data Management Interface CDMI standard  v1.0  2010-04

This specification defines an interface for interoperable transfer and management of data in a cloud storage environment. This interface provides the means to access cloud storage and to manage the data stored there.

-Storage Management Technical Specification  Standard  V.1.5  2010-12
 
-Cloud Storage for Cloud Computing  White paper  V 1.0  2009-09
 
-Managing Data Storage in the Public Cloud  White paper  V 1.0  2009-10


OASIS: Identity in the Cloud Technical Committee (TC)


The OASIS IDCloud (Identity in the Cloud) TC works to address the serious security challenges posed by identity management in cloud computing and gaps in existing standards. The purpose of the TC is to harmonize definitions/terminologies/vocabulary of Identity in the context of Cloud Computing; to identify and define use cases and profiles; and to identify gaps in existing Identity Management standards as they apply in the cloud.



IETF Cloud Draft Status (as of IETF-80 meeting)
The following list includes some Cloud-related IETF drafts (some do not yet have official status). This list will be updated at the next meeting based on the official status of all relevant drafts.

-Requirements and Framework for VPN‐Oriented Cloud Services

Scope: to address the service providers’ requirements to support VPN-oriented Cloud services.

-Network Abstraction for Enterprise and SP Class Cloud

Scope: to introduce a network related Cloud abstraction called the Seamless Cloud, which facilitates secure and seamless extension of an enterprise (Intranet) into an enterprise and SP grade Cloud.

-Protocol Considerations for Workload Mobility in Clouds

Scope: to consider the migration of application, OS, compute, storage and policy (workloads) within and between service provider, enterprise or 3rd party data centers.

-Virtual Network Management Information Model

Scope: to provide an example of the XML-based data model that is implemented according to the proposed information model.

-Syslog Extension for Cloud

Scope: to provide an open and extensible log format to be used by any cloud entity or cloud application to log and trace activities that occur in the cloud. It is equally applicable for cloud infrastructure (IaaS), platform (PaaS), and application (SaaS) services.

-Cloud Service Broker.

Scope: to introduce a Cloud Service Broker (CSB) entity to provide brokering service between different Cloud Service Providers which can be based on private cloud, community cloud, public cloud and hybrid cloud.

-Cloud Reference Framework.

Scope: to present an intra-cloud and inter-cloud reference framework for Cloud services, based on a survey of Cloud-based systems and services.

-Service Management for Virtualized Networks.

Scope:  to provide the reference model for service mobility in a virtual environment and defines the control protocol between the virtualized platform and the managing controller to realize service mobility.


Virtual Network Research Group (VNRG)

In the network community, “virtual networks” is a very broad term, covering everything from running multiple wavelengths over a fiber to MPLS, virtual routers, and overlay systems. VN technologies are widely used in parts of the Internet and other IP-based networks, but the community lacks a common understanding of the impact of virtualized networks on IP networking, or of how VNs are best utilized. As a result, virtualization has been difficult to integrate across the various stakeholders, such as network operators, vendors, service providers and testbed providers (e.g., GENI, FEDERICA).

One current challenge with existing VN systems is the development of incompatible or competing networking techniques in the Internet, causing deployment issues now or in the future. For instance, there are numerous ways to virtualize routers and their internal resources (e.g., multiple, isolated routing and forwarding tables) and to virtualize core networks (e.g., MPLS, LISP), but end-host virtualization has not been addressed (beyond the need for virtual interfaces). Few virtual network systems allow a particular virtual machine in an end host to control its attachment to a specific private network. The end-host virtualization architecture also determines whether virtualization is per virtual machine, per process, or per connection, and this difference can determine exactly how the end host can participate in VNs. Similar issues arise for virtual services, virtual links, etc.

The VNRG builds on the efforts of a number of IETF WGs, including encapsulated subnets (LISP at layer 3, TRILL at layer 2), subnet virtualization (PPVPN, L3VPN, L2VPN), and aspects of managing virtual components (VRRP), as well as some work in more general areas, notably on tunnels (INTAREA). A side effect of the VNRG is to help place these contributions in a broader context.

The Virtual Networks Research Group (VNRG) will consider the whole system of a VN, not only single components or a limited set of components; it will identify architectural challenges resulting from VNs, address network management of VNs, and explore emerging technological and implementation issues.

Initial set of work items:

  • concepts/background/terminology
  • common parts of VN architectures
  • common problems/challenges in VN
  • descriptions of appropriate uses
  • some solutions (per-problem perhaps)

The VNRG will initially focus on VNs, but at a later stage it will also be open to related topics, such as system virtualization.


Alliance for Telecommunications Industry Solutions (ATIS)

ATIS has created a Cloud Services Forum (CSF) focusing on cloud, cloud peering and the “inter-cloud.” The Forum was established to deliver a data model supporting five key service enablers (UNI, NNI, security, and provider and customer management) across the use cases that drive “inter-cloud” demand.


IEEE
The IEEE has started two working groups on cloud portability and intercloud interoperability. There are two new standards projects:

-IEEE P2301™, Draft Guide for Cloud Portability and Interoperability Profiles, and

-IEEE P2302™, Draft Standard for Intercloud Interoperability and Federation.

An unofficial report of the July 15, 2011 P2302 meeting is at:

https://techblog.comsoc.org/2011/07/16/ieee-p2302-inter-cloud-working-group-kickoff-meeting-july-15-2011


Open Data Center Alliance (ODCA)

The ODCA is an independent IT consortium composed of global IT leaders who have come together to provide a unified customer vision for long-term data center requirements. One of the ODCA's missions is to collaborate with industry standards bodies to define the industry standards development required to align with Alliance priorities.

The ODCA has delivered the first customer requirements for cloud computing, documented in eight Open Data Center Usage Models that identify member-prioritized requirements for resolving the most pressing challenges facing cloud adoption.


The Green Grid (TGG)

TGG is a global consortium of IT companies and professionals seeking to improve energy efficiency in data centers and business computing ecosystems around the globe. The organization seeks to unite global industry efforts to standardize on a common set of metrics, processes, methods and new technologies to further its common goals. Its PUE (power usage effectiveness) and DCiE (data center infrastructure efficiency) metrics are the best known and most widely accepted. TGG is currently developing CUE (carbon usage effectiveness) and WUE (water usage effectiveness) metrics to promote eco-friendly data centers.

 

Output:  Impact of Virtualization on Data Center Physical Infrastructure  White paper  2010-01-27
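As a worked example of the two established metrics mentioned above, the sketch below computes PUE (total facility energy divided by IT equipment energy) and DCiE (the reciprocal, expressed as a percentage) from illustrative figures:

    # Illustrative annual energy figures (kWh); not real measurements.
    total_facility_kwh = 1800000.0  # everything the facility draws
    it_equipment_kwh = 1200000.0    # servers, storage and network gear only

    pue = total_facility_kwh / it_equipment_kwh           # PUE  = total / IT
    dcie = 100.0 * it_equipment_kwh / total_facility_kwh  # DCiE = IT / total, in %

    print("PUE  = %.2f" % pue)     # 1.50 -> 0.5 kWh of overhead per IT kWh
    print("DCiE = %.1f%%" % dcie)  # 66.7%

A PUE of 1.0 would mean every watt entering the facility reaches IT equipment; real data centers sit above that, and the gap is the cooling and power-distribution overhead these metrics are designed to expose.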


The Open Group

The Open Group Cloud Work Group exists to create a common understanding among buyers and suppliers of how enterprises of all sizes and scales of operation can include Cloud Computing technology in a safe and secure way in their architectures to realize its significant cost, scalability and agility benefits.

 

Outputs:

-Building Return on Investment from Cloud Computing White paper V1.0 2010-04

-Cloud Computing Business Scenario Workshop Workshop report V1.0 2009-08


ITU-T Focus Group on Cloud Computing (FG Cloud)

 

The ITU-T FG Cloud will contribute the telecommunication aspects, i.e., transport via telecommunication networks, security aspects of telecommunications, service requirements, etc., in order to support services/applications of “cloud computing” that make use of telecommunication networks. Specifically, it will:

  • identify potential impacts on standards development and priorities for standards needed to promote and facilitate telecommunication/ICT support for cloud computing
  • investigate the need for future study items for fixed and mobile networks in the scope of ITU-T
  • analyze which components would benefit most from interoperability and standardization
  • familiarize ITU-T and standardization communities with emerging attributes and challenges of telecommunication/ICT support for cloud computing
  • analyze the rate of change for cloud computing attributes, functions and features for the purpose of assessing the appropriate timing of standardization of telecommunication/ICT in support of cloud computing

The Focus Group will collaborate with worldwide cloud computing communities (e.g., research institutes, forums, academia) including other SDOs and consortia.

ITU-T TSAG is the parent group of this Focus Group.

Editor's Note:  This author has written reports on all the ITU-T FG Cloud meetings for ComSoc Community readers.  Scroll down the blogs or do a Google search on: “ITU FG Cloud Alan J Weissberger”


I’m sure there are other SDOs and research institutes working on cloud computing specifications.  Please be good enough to either comment below or email this author ([email protected]) to update the above list.

Pyramid Research: Asia-Pacific to Be Global Leader in LTE by 2014 (following Wireless Intelligence Forecast)

Pyramid Research (www.pyr.com) says that LTE subscriptions in Asia-Pacific will top all other regions by 2014, with Japan and China as regional leaders; however, LTE will take longer to gain scale in emerging Asia-Pacific markets.

Long-term evolution (LTE) has become the worldwide standard for next-generation mobile technology, and a handful of operators in Asia-Pacific will be in the forefront of its development and implementation. Through the combined pressure of demand-side and supply-side factors, LTE deployment is likely to eventually occur in all major markets in the region.

Demand-side drivers include the rapid growth in the base of mobile subscribers, the uptake of next-generation devices by consumers and the growing popularity of bandwidth-heavy applications. On the supply side, the need for more efficient use of limited spectrum assets in order to keep pace with growth, the need to lower operational costs and the desire to introduce new bandwidth-rich applications to differentiate themselves from the competition have been driving operators toward LTE deployment.

LTE will be deployed in developed markets first, where more customers are willing to pay for better service, with wide-scale deployments in emerging markets expected after costs for equipment, devices and handsets begin to decrease. Although the market share potential for LTE in emerging markets in the next five years is limited, due to their huge populations we expect emerging markets to capture more than half of the LTE market share in Asia-Pacific by year-end 2016. Overall, we expect LTE to reach 238.1m subscriptions by year-end 2016, comprising 5.8% of total Asia-Pacific mobile subscriptions. In developed markets the figure will be 29.4%, and in emerging markets it will be 4.0%.
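As a quick sanity check on those figures, dividing the projected 238.1m LTE subscriptions by the 5.8% share implies an Asia-Pacific mobile base of roughly 4.1 billion subscriptions by year-end 2016:

    # Figures from the Pyramid Research forecast quoted above
    lte_subs_m = 238.1   # projected LTE subscriptions, year-end 2016 (millions)
    lte_share = 0.058    # LTE as a share of all Asia-Pacific mobile subscriptions

    implied_total_m = lte_subs_m / lte_share
    print("Implied Asia-Pacific mobile base: %.0f million" % implied_total_m)
    # -> roughly 4105 million, i.e. about 4.1 billion total subscriptions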

The Pyramid Research report, “Asia-Pacific to Be Global Leader in LTE by 2014”  analyzes the market opportunity for LTE by looking at factors affecting operators’ decisions to deploy LTE in the next several years across developed and emerging markets in Asia-Pacific, the motivations behind the trendsetters as opposed to the early majority, and the factors that might cause an operator not to deploy LTE in the next five years. This report provides subscription forecasts and projected LTE capital expenditure amounts from selected operators.

“LTE networks will exhibit strong growth in Asia-Pacific in both developed markets and specific emerging markets, due to a combination of cost and competition considerations by operators and exponentially increasing data usage by consumers,” says Pyramid Research analyst Emily Smith. LTE will be deployed first in developed markets, where more customers are willing to pay for better service; wide-scale deployments in emerging markets are expected to ramp up soon after costs for equipment, devices and handsets begin to decrease. “Although the market share potential for LTE in emerging markets in the next five years is limited, due to their huge population Pyramid expects emerging markets to capture more than half of the LTE market share in Asia-Pacific by year-end 2016,” Smith indicates.

http://www.pyramidresearch.com/store/ins_ap_110708.htm?sc=GL071111_INSAP32


An earlier study by Wireless Intelligence concluded that Asia-Pacific would surpass 120 million LTE connections in 2015. That firm forecast that LTE would account for around 3 percent of all connections in the region by that point, driven by key regional markets such as China, Japan, Indonesia and South Korea.

The figures were the first LTE forecasts published by Wireless Intelligence and form part of a global LTE study that was due to be published later in 2010. The Asia-Pacific study includes LTE forecasts for 35 mobile operators across 11 regional markets: China, Japan, South Korea, Taiwan, Australia, New Zealand, Malaysia, Hong Kong, Singapore, Indonesia and the Philippines.

The pioneering LTE operator in the Asia-Pacific region is Japanese market leader NTT Docomo, which launched its ‘Xi’-branded LTE service in December 2010. The service was initially switched on in Tokyo, Nagoya and Osaka, with plans to gradually expand coverage to additional cities.

Docomo had been testing LTE since June 2010 and is deploying new WCDMA base stations equipped with newly developed remote radio equipment (RRE) units to support both existing WCDMA (3G) and forthcoming LTE services. The deployment is part of the operator’s plan to layer a 2GHz LTE network over its existing 3G network to provide dual WCDMA/LTE services. Meanwhile, Docomo’s domestic rivals SoftBank Mobile and EMOBILE (eAccess) are planning LTE launches in 2011 and 2012, respectively, while supporting high-bandwidth mobile services via their HSPA+ networks in the meantime. This market scenario means that Docomo will benefit from a first-mover advantage, which is expected to boost its LTE market share in Japan to approximately 60 percent by 2015 (it currently has an overall mobile market share of just under 50 percent).

Wireless Intelligence estimates that 20 percent of the Japanese mobile market will have migrated to LTE networks within five years, closely followed by South Korea on 17 percent. Both markets have already migrated a significant majority of their customers (70 percent and 60 percent, respectively) onto WCDMA and HSPA networks, which will support a rapid migration to LTE.

Japanese and South Korean mobile users will account for almost 30 percent of total LTE connections in the Asia-Pacific region by 2015 (see the table in the Wireless Intelligence article linked below). However, almost half (47 percent) of LTE connections by that point will be based in China, the world’s largest mobile market.

http://dev.wirelessintelligence.com/analysis/2010/11/asia-pacific-to-surpass-120-million-lte-connections-in-2015/


Question & Comment:  Where does this leave Mobile WiMAX in Asia-Pacific?  South Korea, Japan and Taiwan were big proponents of that technology, which seems to have faded fast from market researchers’ radar screens!  Has Korea switched from WiBro to LTE?  Can WiMAX from UQ Communications (with a $43M investment from Intel) compete with LTE from NTT Docomo in Japan?  And what about all the Taiwanese WiMAX operators, who have been silent for over a year?
 

IEEE Global Communication Newsletter: A World View of Communications and IEEE ComSoc Activities

Introduction:

Want a quick overview of what’s going on within the communications industry in different geographical areas of the world?  Check out the IEEE Global Communications Newsletter (GCN).  It appears monthly within IEEE Communications Magazine+. The GCN presents news, events, and activities related to global communications.  It also highlights significant IEEE Communications Society (ComSoc) regional and chapter activities.  There are three or four articles in each GCN issue.  They are concise, informative and fun to read.  In a relatively short amount of time, you can get an overview of communications technology status, applications and policies from all over the world.


+ Note: We have previously called attention to the improvements in IEEE Communications Magazine due to Editor-in-Chief Steve Gorshe.  Please refer to:
https://techblog.comsoc.org/2011/06/04/ieee-communications-magazine-shows-great-improvement-in-past-year-thanks-to-editor-in-chief-steve-gorshe


Since mid-2007, the GCN's scope, breadth and content have all greatly improved due to the diligent work of Editor-in-Chief Stefano Bregni, Associate Professor of Telecommunications Networks at Politecnico di Milano. The newsletter is interesting, informative and very well organized.  The diversity of articles and high-quality content are the direct result of Stefano's work ethic.  He reads all papers submitted and recommends changes to authors whenever appropriate. Because the authors are responsible for the actual papers, Professor Bregni does not rewrite articles or mandate a specific writing format or style. He will occasionally make a few edits to correct evident flaws, but without significantly impacting the writing style or content provided by the contributing author.

Under Stefano’s supervision, 148 papers have been published in the GCN, and six are in the queue for future publication. The number of rejected papers is minimal (23 since 2007).  Several submissions have been provisionally accepted with major revisions recommended by Stefano; once those revisions were deemed acceptable, the papers were accepted for publication in the GCN.

Professor Bregni says that geographical diversity is one of his major objectives. He has consistently asked the ComSoc Regional Directors to appoint Regional Delegates responsible for selecting content and inviting ComSoc Chapters to submit articles from their respective regions.  This author is the North American (NA) Regional Delegate for the GCN and has solicited papers several times from NA ComSoc Chapter Chairs, with an offer to help organize and edit the articles.  We hope to receive papers on NA communications activities and offer help to prospective authors who are reading this article.

Stefano says that “much work is needed in the Latin America Region,” and he is waiting for a new regional coordinator there to help recruit articles from Latin America.  Many GCN papers have been published from Asia and Europe.

Stefano remembers some GCN articles that were quite different from the other 99%.  For example, a couple of reports highlighted how computers could be used for teaching.  Others described the design and development of IT infrastructures in small, remote, rural villages in Colombia, Cambodia and Malaysia.  One example of the latter was a September 2009 article, “The eBario Project: A Rural ICT Internet Access Initiative in Malaysia,” by Alvin Yeo, Poline Bala, Peter Songan and Khairuddin Ab Hamid, Universiti Malaysia Sarawak, Malaysia.  Another was an October 2010 article, “Project iREACH: Informatics for Rural Empowerment and Community Health in Cambodia,” by Brian Unger, University of Calgary, Canada; Chea Sok Huor, iREACH Project Manager, Cambodia; and Helena Grunfeld, Victoria University, Australia.  Please check out the enthusiastic faces of the little girl watching and the older woman using a PC to access the Internet, probably for the very first time.

This author has published GCN articles (September 2010 and January 2011) on various ComSocSCV (Santa Clara Valley, USA) chapter activities and technical meetings.

Current and previous GCN issues can be accessed at  http://dl.comsoc.org/gcn/


As a volunteer-run, non-profit organization, IEEE ComSoc (like other IEEE Societies) is very much dependent on the commitment, effort and energy of its volunteer leaders.  Stefano is to be commended for the outstanding job he has done in several volunteer roles, especially for the GCN.  His IEEE ComSoc and other accomplishments are listed in his abbreviated biography below.

We wish IEEE had more leaders who are as knowledgeable, dedicated, and passionate about their volunteer work as Professor Bregni.  We should all be very appreciative and supportive of Stefano’s contributions to IEEE ComSoc.  I’d personally like to congratulate him on a superb job as the GCN Editor-in-Chief!

About Stefano Bregni (IEEE M’93-SM’99)

Since 2004, he has been a Distinguished Lecturer of the IEEE Communications Society, where he holds or has held the following official positions: Member at Large on the Board of Governors (2010-12), Director of Education (2008-11), Chair of the Transmission, Access and Optical Systems (TAOS) Technical Committee (2008-2010; Vice-Chair 2002-2003 and 2006-2007; Secretary 2004-2005) and Member at Large of the GLOBECOM/ICC Technical Content (GITC) committee (2007-2010). He is or has been Technical Program Vice-Chair of IEEE GLOBECOM 2012, Symposia Chair of GLOBECOM 2009 and a Symposium Chair at eight other ICC and GLOBECOM conferences. He is Editor of the IEEE ComSoc Global Communications Newsletter, Associate Editor of IEEE Communications Surveys and Tutorials and Associate Editor of the HTE Infocommunications Journal. He has been a tutorial lecturer at four IEEE ICC and GLOBECOM conferences, and he served on ETSI and ITU-T committees on digital network synchronization.

He is the author of about 80 technical papers, mostly in IEEE conferences and journals, and of two books: Synchronization of Digital Telecommunications Networks (Chichester, UK: John Wiley & Sons, 2002; translated into Russian and published by MIR Publishers, Moscow, 2003) and Sistemi di trasmissione PDH e SDH – Multiplazione (PDH and SDH Transmission Systems – Multiplexing; Milano, Italy: McGraw-Hill, 2004). Stefano likes traveling, spending summer holidays in Greece, listening to music, playing sports, and photography.  More about his activities and accomplishments can be found on his personal web site: http://home.dei.polimi.it/bregni/
