Google’s Acquisition of Motorola Mobility Ups the Stakes in Cell Phone Patent Mania

Viodi View Managing Editor Ken Pyle writes:
“Alan Weissberger may have said it best when he suggested that the Google purchase of Motorola Mobility is representative of what has become a sort of patent mania these days. At approximately $734k per existing patent, and $500k if pending patents are included, this is at least comparable to the $750k per patent that the Apple/EMC/Ericsson/Microsoft/RIM/Sony consortium paid for the intellectual property assets of Nortel. That figure ignores Motorola’s business ($3.3B revenue in the latest quarter) and the well-reported synergies of having in its fold a hardware manufacturer with deep ties to service providers.”

Clearly, Google had to play “patent catch-up” after being shut out of the $4.5B winning bid for Nortel’s patents, which went to a consortium of unlikely partners (Apple, Microsoft, RIM, EMC, Ericsson and Sony). Was that consortium collusion or an anti-trust violation? (See the Huffington Post article below.)

The search engine giant needs Moto’s patents for both offensive and defensive purposes. It may also share the patents with Android device makers. Analysts at Jefferies estimated that $9.5B of the $12.5B purchase price was for Moto’s patent portfolio!

Jefferies & Co analyst Youssef Squali said:  “We believe that Google is paying approximately $9.5B for MMI’s patents, assuming $3B in value for MMI’s home and devices businesses. This implies $560K per MMI patent vs. $700K that Apple/Microsoft consortium paid per Nortel patent.”
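The per-patent figures quoted above can be reproduced with quick back-of-the-envelope arithmetic. The patent counts below (roughly 17,000 granted and 7,500 pending for Motorola Mobility, roughly 6,000 for Nortel) are the approximate numbers reported at the time, so the results are estimates, not exact deal terms:

```python
# Rough per-patent prices implied by the two deals (approximate 2011 figures).
moto_deal = 12.5e9           # Google's total offer for Motorola Mobility
moto_patent_value = 9.5e9    # Jefferies' estimate after backing out ~$3B for the home/devices businesses
granted, pending = 17_000, 7_500

per_granted = moto_deal / granted                   # ~$735k per existing patent
per_incl_pending = moto_deal / (granted + pending)  # ~$510k if pending applications are counted
per_jefferies = moto_patent_value / granted         # ~$559k, matching Jefferies' ~$560k figure

nortel_per_patent = 4.5e9 / 6_000                   # $750k per Nortel patent
print(f"${per_granted:,.0f}  ${per_incl_pending:,.0f}  "
      f"${per_jefferies:,.0f}  ${nortel_per_patent:,.0f}")
```

Even under these rough assumptions, the Motorola numbers bracket the Nortel benchmark, which is the point both Pyle and Squali are making.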

 

Here is what some respected news sources wrote today about the patent-based motivation for this huge deal:

Google Primes Patent Pump

Google Inc.’s $12.5 billion deal for Motorola Mobility Holdings Inc. provides the latest evidence that patents have become the hottest currency in high technology.  Tech companies, particularly in the market for mobile devices, have been furiously snapping up patents to use as weapons in lawsuits and bargaining chips in settlement negotiations. That was a key reason Google cited for buying Motorola Mobility, though some experts disagree about the value of that company’s intellectual property.

In some such cases, companies buy patents to go on the offensive against rivals, seeking hefty royalties for patent licenses or injunctions that could bar sales of competing products. Google, by contrast, expressed defensive motivations; the company, which has relatively few patents on mobile technologies, could theoretically use Motorola Mobility’s patents to countersue companies that sue Google or companies that use its Android software.

http://online.wsj.com/article/SB1000142405311190348090457651061220870619…

In the World of Wireless, It’s All About Patents

That intellectual property portfolio is a treasure trove for Google because the battle in wireless is one that is increasingly being fought in court.

Corporate warfare over patents is not new. Companies historically preferred to reach truces, choosing to cross-license their intellectual property rather than risking bigger losses in court.

But patent battles are no longer waged between just two competitors, like Intel and Advanced Micro Devices. Platforms like Android and Windows Phone 7 are built upon a handful of device makers, adding more players with different stakes at risk.

That has changed the calculus of settling, as product makers have become increasingly willing to sue rather than reach peaceful settlements.

“Now you’re seeing more suits being brought by product companies willing to step up and say we will defend our patents,” said Colleen Chien, an assistant professor at the Santa Clara University School of Law.

Apple has sued important Android phone makers like HTC and Samsung, while Oracle has taken Google to court. The fighting has been likened to a “patent arms race.”

“The best way to fight a big portfolio of patents is to have your own big portfolio of patents,” said Herbert Hovenkamp, a law professor at the University of Iowa. “That appears to be what Google is doing here, arming itself with patents to be able to defend itself in this fast-growing market.”

Large sums hang in the balance, especially if phone makers are forced to pay out royalties for each handset they make. Microsoft has already persuaded HTC to pay a fee for every Android phone manufactured, and is seeking to extract similar royalties from Samsung.

If left unchecked, such payments could make creating new devices for Android prohibitively expensive for manufacturers, forcing them to turn to alternative platforms like Windows Phone 7.

“With a slim patent portfolio, Google is especially vulnerable to lawsuits against its Android licensees, if not itself,” Charlie Wolf, an analyst with Needham, wrote.

By acquiring Motorola Mobility, Google is seeking to ensure that growth in the Android market will not be choked by the burden of royalties.

The importance of bulging patent portfolios became clear this summer after a consortium led by Apple, Microsoft and Research in Motion, the maker of the BlackBerry, paid $4.5 billion for some 6,000 patents held by Nortel Networks, the Canadian telecommunications maker that filed for bankruptcy.

Google, which initially offered $900 million for the collection, fell short after several bids. Shortly afterward, Google executives complained that the company’s rivals had banded together to smother its Android system with patents.

“We’re determined to preserve Android as a competitive choice for consumers, by stopping those who are trying to strangle it,” David Drummond, Google’s chief legal officer, wrote in a blog post earlier this month.

Mobility’s Benefits for Google Not Patently Obvious

Google is hoping to secure the long term future of its business—by turning that business on its head. With its proposed $12.5 billion acquisition of Motorola Mobility, Google is jumping into a lower-margin, cut-throat hardware business. Even for a company with as varied ambitions as Google, this is a risky deal.

Google’s willingness to buy Mobility highlights how much it needs to protect its Android mobile operating system, now caught up in a raging patent fight. Along with other Android-powered handset makers such as Samsung and HTC, Mobility has been sued for patent infringement by Apple and Microsoft.

Google’s purchase of Motorola Mobility was cheered by the Street, as Google looks to gain more control over the hardware for its Android phones. The deal also gives Google access to a library of patents, which can be used to protect the Android operating system.

Even settling the patent lawsuits could be harder, argues patent expert Florian Mueller. Google may want to use Mobility’s patents to negotiate a settlement that covers all Android handset makers. For Apple and Microsoft, agreeing to an Android-wide settlement may be unpalatable.

In a sign of how badly it appeared to want Motorola Mobility’s patents, Google offered $40 a share, a rich 63 percent premium to Motorola’s closing price on Friday. Analysts at Jefferies calculated that, of the $12.5 billion offer price, Google was essentially paying $9.5 billion for the patents.

http://dealbook.nytimes.com/2011/08/15/in-the-world-of-wireless-its-all-about-patents/

Google, until now largely on the sidelines of that fight, has good reason to get directly involved. As an increasing portion of people’s Web surfing shifts to mobile devices, Android gives Google a vital position in mobile advertising. Having snared 47.7% of global smartphone shipments by operating system in the second quarter, according to Strategy Analytics, it is clear why Android threatens rivals like Apple and Microsoft.

http://online.wsj.com/article/SB10001424053111903392904576510570512191428.html?mod=markets_newsreel&mg=com-wsj

Google’s Motorola deal seen as Cold War arms race

Open warfare between technology giants is nothing new, but when Google this week announced it was acquiring Motorola’s mobile division, the conflict over mobile phones went nuclear.

Behind the headlines of the $12.5 billion deal, say analysts, is a Cold War-style arms race, with leading firms racing to stockpile the patents that will serve as weapons of mutually-assured destruction.

But as Google squares off against Apple, Microsoft and the creators of BlackBerry, the question is: will anyone benefit from this escalation in potential hostilities or, like the standoff between America and the Soviet Union, will it ultimately prove futile?

Industry observers say Google’s latest deal, which saw it pay a 63% premium on shares, is primarily aimed at laying its hands on Motorola’s arsenal of patents — legally protected innovations built up over years at the frontline of cell phone development.

Most of these estimated 24,000 patents have little intrinsic value, says Lee Simpson, a London-based analyst at Jefferies International, but a core 500 or so represent the mother lode, giving Google ownership of key cellular communication technology.

And it is these patents that Google will turn to should it be accused of stealing Apple’s own legally protected iPhone innovations to enhance Google’s Android operating system — software now used on many popular handsets.

http://edition.cnn.com/2011/TECH/mobile/08/16/google.motorola.patents/

Verizon Says Google Deal May Stabilize Patent Fights

Verizon Communications Inc. said Google Inc.’s $12.5 billion bid for Motorola Mobility Holdings Inc. was a welcome development because it may bring “stability” to a recent slate of smartphone patent disputes, though it stopped short of totally endorsing the proposed acquisition.

http://online.wsj.com/article/SB10001424053111903392904576512360865045134.html#ixzz1VEndvWVj

Patent Wars and Blackmail in Silicon Valley

The U.S. Justice Department’s Antitrust Division is investigating another possible conspiracy among Silicon Valley companies. This one arises out of the collective bid in the late spring of nearly every wireless phone operating system manufacturer, except Google, for a portfolio of 6,000 cell phone patents formerly held by bankrupt Canadian company Nortel. Simply put, Google started the bidding at about $1 billion, but the others joined forces to lift the price to an astounding $4.5 billion and win the prize.

That’s the legal background to Google’s just-announced Motorola Mobility acquisition, and it’s one that could have serious anticompetitive consequences. If the curiously named “Rockstar Bidco” consortium — which includes Microsoft, Apple, RIM, EMC, Ericsson and Sony — refuses to license the erstwhile Nortel patents to Google for its Android wireless operating system, they will be agreeing as “horizontal” competitors not to deal with a rival. Classically such group boycotts are treated as a serious antitrust no-no, and a criminal offense. If the group licenses the patents, on the other hand, they could be guilty of price fixing (also a possible criminal offense), since a common royalty price was not essential to the joint bid and would eliminate competition among the members for licensing fees.

http://www.huffingtonpost.com/glenn-b-manishin/microsoft-motorola-patent-wars_b_928728.html

2Q-2011 VC Investment Survey: Internet 2.0 Bubble while Telecom & Networking Start-ups Struggle for Funding!

Introduction

The just-released PricewaterhouseCoopers / National Venture Capital Association MoneyTree™ Report for 2Q-2011 contains some very revealing information about the amounts and types of companies venture capitalists (VCs) are investing in.

https://www.pwcmoneytree.com/MTPublic/ns/moneytree/filesource/exhibits/M…

VCs opened their wallets and invested $7.5 billion in 966 deals in 2Q-2011. That was an increase of 19% in terms of both dollars and the number of deals compared to the first quarter of 2011 when $6.3 billion was invested in 814 deals. The quarterly investment level represents the highest total in a single quarter since the second quarter of 2008.

Internet Companies are Hot (or in Bubble 2.0)

Here’s an Eye Opener: Investment in Internet-specific companies surged in the second quarter with $2.3 billion going into 275 companies. That’s about one third of all VC money invested this quarter! It represents a 72% increase in dollars and a 46% increase in deals from the first quarter when $1.4 billion went into 189 deals. The second quarter marks the most dollars going into Internet-specific companies in a decade, since the second quarter of 2001!

Five of the top 10 deals this quarter, including the top two deals, were classified as Internet-specific investments, which is a discrete classification assigned to a company with a business model that is fundamentally Internet-based, e.g. e-commerce, online games or daily coupons, social networking, etc. These are generally software companies which have nothing at all to do with the underlying Internet infrastructure that they use to generate revenues (and hopefully profits).

Telecom and Networking Start-ups Continue to Suffer

In sharp contrast, there were only 29 deals totaling $169M invested in Telecom start-ups of all types (wireless, wireline, metro, WAN, etc.). That was down from 35 deals worth $188M in 1Q-2011 and basically flat from one year ago.

Networking and equipment companies fared even worse. They received only $115M in 21 deals in 2Q, which was flat from 1Q but DOWN from $303M in 24 deals one year ago!

Telecom combined with networking & equipment accounted for only 3.7% of all 2Q-2011 investments, an insignificant percentage, especially when compared to Internet-related companies. As we have pointed out in several other articles, this does not augur well for future Internet infrastructure or for technology innovation in general.
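Readers can check that share directly from the dollar figures reported above; the quoted totals are rounded, so the result lands near, not exactly on, the cited percentage:

```python
# Combined telecom + networking share of total 2Q-2011 VC investment,
# using the (rounded) dollar figures quoted in this article, all in $M.
telecom = 169        # 29 telecom deals
networking = 115     # 21 networking/equipment deals
total_2q = 7_500     # $7.5B invested across all sectors in 2Q-2011

share_pct = (telecom + networking) / total_2q * 100
print(f"{share_pct:.1f}%")  # roughly 3.8% from the rounded inputs, i.e. the ~3.7% cited above
```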

The San Jose Mercury News had two recent articles about the MoneyTree VC survey, but didn’t accurately report the sorry state of telecom and networking start-ups. In particular, this Sunday’s SJ Mercury VC report didn’t explicitly mention telecom start-ups: http://www.mercurynews.com/business/ci_18664212?nclick_check=1

Saturday’s SJ Mercury article was more interesting: Silicon Valley in another tech-stock bubble? (This really means a private-equity bubble, since the companies receiving huge investments and high valuations are NOT publicly traded.)

“New figures from the National Venture Capital Association show more venture money poured into Internet startups last quarter — $2.3 billion — than in any period since the dot-com bubble, driven largely by investments in social media companies.”

While VCs are throwing lots of money at Internet start-ups, telecom and network equipment companies are struggling to get funding from VCs (or even angel investors)!


This article is continued at: 

http://viodi.com/2011/08/14/vc-falling-over-internet-start-ups-telecom-tie-angels-wins-for-smaller-deals/

More info on early stage company funding and the role of TiE Angels

AT&T to Throttle Heavy Mobile Data Users with "Unlimited" Data Plans

On July 29, 2011, AT&T quietly announced it would reduce throughput for the top 5 percent of its heaviest mobile data users in a billing period. Customers in this highest-use group consume, on average, 12 times more data than other smartphone data customers. The move does not apply to the 15 million AT&T smartphone customers on tiered data plans, nor to most smartphone customers who still have unlimited data plans.

AT&T wrote: “The amount of data usage of our top 5 percent of heaviest users varies from month to month, based on the usage of others and the ever-increasing demand for mobile broadband services.  To rank among the top 5 percent, you have to use an extraordinary amount of data in a single billing period.”

In announcing this new policy, AT&T said that “nothing short” of wrapping up its T-Mobile merger “will provide additional spectrum capacity to address these near term challenges.”

http://www.att.com/gen/press-room?pid=20535&cdvn=news&newsarticleid=32318&mapcode=corporate

Streaming video apps, remote Web camera apps, uploading large data files like video, and some online gaming, as well as streaming music daily over the wireless network, can ratchet up data use and may push a customer into the top 5 percent category. The company pointed out that users of its Wi-Fi network do not contribute to the wireless network congestion. That’s because Wi-Fi backhaul normally uses a wireline broadband Internet connection.

The new data throttling will begin Oct. 1st for AT&T customers with unlimited data plans. Customers will experience reduced data transfer speeds once they reach a level that pushes them into the top 5 percent of heaviest data users. Unlimited transfers will continue to be available, although at a reduced speed, and speeds will be restored with the beginning of the next billing cycle.
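As a rough illustration of the ranking AT&T describes, the sketch below flags the top 5 percent of subscribers by usage in a billing cycle. The function name and the usage figures are hypothetical; AT&T’s actual ranking criteria are not public:

```python
def throttle_candidates(usage_mb, top_fraction=0.05):
    """Return subscriber IDs whose billing-cycle usage falls in the top `top_fraction`.

    Hypothetical sketch, not AT&T's actual system.
    """
    ranked = sorted(usage_mb, key=usage_mb.get, reverse=True)  # heaviest users first
    cutoff = max(1, int(len(ranked) * top_fraction))           # size of the top slice
    return set(ranked[:cutoff])

# 20 illustrative subscribers: most use a few hundred MB; one streams heavily,
# roughly matching AT&T's "12 times more data than the average" description.
usage = {f"sub{i:02d}": 300 + 10 * i for i in range(19)}
usage["sub99"] = 12_000
print(throttle_candidates(usage))  # → {'sub99'}
```

Note that because the cut is relative (a percentile, not a fixed byte cap), the threshold in megabytes shifts from month to month with everyone else’s usage, exactly as AT&T’s statement warns.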

Points to Ponder: We wonder whether heavy mobile data users will balk when they notice a significant slowdown or buffer underruns in streaming video and other real-time applications. Will they then switch to tiered data plans and pay substantially more in overage charges?

No Surprise: Clearwire to Shift from WiMAX to LTE – But Who Will Fund It?

LTE is now being deployed in the US by Verizon Wireless and Metro PCS, with AT&T and LightSquared to follow (the latter’s LTE deployment depends on resolving the GPS interference issue with the FAA and other U.S. government regulators).

Clearwire had announced last year that it had begun testing LTE technology in Phoenix, AZ. Those Clearwire LTE tests achieved data speeds of 120 megabits per second – 10 times faster than the fastest networks currently in operation. So we predicted at that time that Clearwire would opt for LTE rather than IEEE 802.16m (AKA WiMAX 2.0). Now it’s certain.

Today, Clearwire’s CEO John Stanton said that the company’s new LTE network would initially target densely populated urban areas in its existing 4G markets where current 4G usage is highest. It said it will be able to use its existing WiMax infrastructure in these markets to serve the company’s LTE needs, delivering substantial capital cost savings compared with similar rollouts by rival operators.

“Our leadership in launching 4G services forced a major change in the competitive mobile data landscape,” Mr. Stanton said. “Now we plan to bring our considerable spectrum portfolio to bear to deliver an LTE network capable of meeting the future demands of the market.”

John Saw, Clearwire’s chief technology officer, said: “Our extensive trial has clearly shown that our ‘LTE Advanced-ready’ network design, which leverages our deep spectrum with wide channels, can achieve far greater speeds and capacity than any other network that exists today.

“In addition, the 2.5GHz spectrum band in which we operate is widely allocated worldwide for 4G deployments, enabling a potentially robust, cost-effective and global ecosystem that could serve billions of devices.”

In a sideswipe seemingly aimed at rival LightSquared, he added: “Since we currently support millions of customers in the 2.5GHz band, we know that our LTE network won’t present harmful interference issues with GPS or other sensitive spectrum bands.”

Clearwire said its LTE implementation will use Time Division Duplex (TDD) LTE technology. The LTE deployment will take advantage of the company’s all-IP network architecture and will involve upgrading base station radios and some core network elements. Clearwire said it will use multicarrier, or multichannel, wideband radios that will be carrier-aggregation capable.

A key question is where Clearwire will get the capital needed to build the planned LTE-TDD network. The company said that plans to build the new LTE network “are subject to raising additional capital,” which has been a problem for Clearwire for the past three years. Furthermore, Clearwire states that it will need “substantial additional capital” to continue running its WiMax network “over the intermediate and long-term,” although the company says it currently has enough capital to maintain and operate the network “for at least the next 12 months.”

http://www.ft.com/intl/cms/s/0/eebc4628-be22-11e0-bee9-00144feabdc0.html#axzz1U2GQTZHb

http://www.fiercewireless.com/story/clearwire-deploy-lte-if-it-can-get-additional-funding/2011-08-03


Here is what CEO Stanton said about LTE during today’s earnings call:

Based on the success and insights from our now completed Phoenix trial, we plan to add LTE services to our present network in areas with high usage concentration where we can meet the needs of our current partners and other major carriers. Our carrier customers would use LTE capacity to supplement their offerings.

LTE will be implemented by overlaying most of our existing 4G network. We will not use Sprint’s Network Vision in our existing markets because it is substantially more expensive compared to the cost of overlaying our own network. We are in discussions with Sprint about using Network Vision in new-build markets in the future.

We plan to maintain the WiMAX network for a significant period of time to serve our present customers. We believe WiMAX will continue to represent an appealing product for certain market segments.

There are two key reasons we can implement this strategy, our spectrum and our network. We have the largest, deepest spectrum position in the industry on the best and only globally coordinated band, differentiating ourselves from any other carrier or want-to-be 4G operator.

With an average of 160 megahertz of spectrum nationwide, we have more spectrum than even AT&T and T-Mobile combined. With all of our spectrum in one contiguous band, our spectrum depth enables us to deploy wider channels or fatter pipes to enhance the throughput speed and capacity.

Spectrum in the 2.5 gigahertz band is ideally suited for high-volume wireless data. High-frequency spectrum is much more conducive than low- or mid-band spectrum to meeting the usage and speed requirements of heavy tonnage users in densely populated markets.

The 2.5 gigahertz band is also the sweet spot of global TDD LTE evolution. Earlier this year, Clearwire cofounded the GTI consortium with China Mobile, Vodafone, SoftBank and Bharti. Clearwire was the only American carrier included in the consortium. The members of this consortium serve more than 1.3 billion customers, representing 4x the population of the U.S. This means that this group will be driving the lowest possible cost and greatest variety of devices.

http://seekingalpha.com/article/284461-clearwire-s-ceo-discusses-q2-2011-results-earnings-call-transcript

Opinion: Clearwire’s announced plans for LTE, along with Sprint’s overt hints that it will also deploy that technology, sound the death knell for mobile WiMAX. It almost guarantees that IEEE 802.16m (AKA WiMAX 2.0) will be DoA.

Who is to blame for this market failure?   I’ll give you three guesses, but the 1st two don’t count!

Smart Energy Home Area Network (HAN) Consortium=HomePlug Alliance, Wi-Fi Alliance, HomeGrid Forum and ZigBee Alliance

Background:  SEP 2 was selected in 2009 by the U.S. National Institute of Standards and Technology (NIST) as a standard profile for smart energy management in home devices. The profile is suitable for operation on a variety of IP-based technologies. This consortium establishes a communications technology-agnostic forum to unify and accelerate the realization of interoperable SEP 2 products through a joint test and certification program. The consortium intends to utilize the processes and best practices recommended by the Smart Grid Interoperability Panel (SGIP) for smart grid testing and certification programs.  The Consortium for SEP 2 Interoperability invites participation from other trade associations in communications technology that have an interest in developing an interoperable smart grid.

The Main Message:  The HomePlug Alliance, Wi-Fi Alliance, HomeGrid Forum and ZigBee Alliance have agreed to create a Consortium for SEP 2 Interoperability. The new consortium will enable organizations whose technologies support communications over Internet Protocol (IP) to certify SEP 2 according to a consistent test plan. Recognizing that the vision of interoperable SEP 2 devices across the network will only be realized with consistent certification and interoperability testing, the Consortium is being structured as an open organization. This cooperation among alliances builds on the work of many industries to bring smart grid benefits to consumers.    
      
The joint certification and test program will be used to certify wireless and wired devices that support IP- based smart energy applications and end-user devices such as thermostats, appliances and gateways. It will address devices operating on one or more of a variety of underlying connectivity technologies and provide the smart energy ecosystem – including utilities, product vendors and consumers – assurances of application and device interoperability.

“As the hybrid wireless and wired home of the future takes shape, the need for easy interoperability becomes key,” said Rob Ranck, president of HomePlug Alliance. “We are excited to bring HomePlug Alliance’s strong expertise to this collaboration and help provide a robust certification program.”

“The smart grid will be comprised of all types of devices connecting in many different ways, and we must ensure those devices interoperate and communicate seamlessly, regardless of how they connect,” said Edgar Figueroa, CEO of Wi-Fi Alliance. “This collaboration represents a groundbreaking step in the industry. Through this collaboration, the smart grid ecosystem will benefit from interoperable smart energy products that use some of today’s most popular connectivity technologies.”

“HomeGrid Forum, which is responsible for certifying and promoting G.hn technology, is excited to be working with other leading industry organizations to help accelerate the adoption of the Smart Grid throughout the world,” said Matt Theall, president of HomeGrid Forum. “We believe SEP 2 will be an important factor in ensuring that wired and wireless technologies combine together to deliver Smart Grid and other services inside and outside the home and we are committed to using our expertise to help drive industry adoption.”

“As the organization that initiated the home Smart Energy standards activity, the ZigBee Alliance is committed to ensuring that the years of work invested by a broad stakeholder community in developing it translates into success in the marketplace,” said Bob Heile, chairman of the ZigBee Alliance. “The ZigBee Alliance is pleased to contribute its considerable experience and expertise certifying Smart Energy products today to this new independent certification and testing consortium to ensure that consumers get smart products that are easy to use, independent of communications technology.”
Quick take: Will this end the standards conundrum for residential smart energy management systems? Let’s see!

Related article: New Telco Services Enable the Connected Home

http://viodi.com/2011/07/25/new-telco-services-part-1/


About HomePlug Powerline Alliance (see Comment below)
The HomePlug Powerline Alliance, Inc. is the leading industry-led initiative for powerline networking, creating specifications, marketing and certification programs to accelerate worldwide adoption of powerline networking. With HomePlug technology, the electrical wires in the home can now distribute broadband Internet, HD video, digital music and smart energy applications.

The Alliance works with key stakeholders to ensure HomePlug specifications are designed to meet the requirements of IPTV service providers, power utilities, equipment and appliance manufacturers, consumer electronics and other constituents. The HomePlug Certified Logo program is the powerline networking industry’s largest Compliance and Interoperability Certification Program and the program has certified over 240 devices. For more information, visit www.homeplug.org.

About the Wi-Fi Alliance
The Wi-Fi Alliance is a global non-profit industry association of hundreds of leading companies devoted to seamless connectivity. With technology development, market building, and regulatory programs, the Wi-Fi Alliance has enabled widespread adoption of Wi-Fi worldwide.

The Wi-Fi Alliance launched the Wi-Fi CERTIFIED™ program in March 2000. It provides a widely recognized designation of interoperability and quality, and it helps to ensure that Wi-Fi enabled products deliver the best user experience. The Wi-Fi Alliance has completed more than 10,000 product certifications to date, encouraging the expanded use of Wi-Fi products and services in new and established markets. For more information, visit www.wi-fi.org.

About HomeGrid Forum
HomeGrid Forum is a global, non-profit trade group promoting the International Telecommunication Union’s G.hn and G.hnem standardization efforts for next-generation home networking and SmartGrid Applications. HomeGrid Forum promotes adoption of G.hn and G.hnem through technical and marketing efforts, addresses certification and interoperability of G.hn and G.hnem-compliant products, and cooperates with complementary industry alliances. For more information on HomeGrid Forum, please visit www.homegridforum.org or follow us on http://twitter.com/homegrid_forum.

About the ZigBee Alliance
ZigBee offers green and global wireless standards connecting the widest range of devices to work together intelligently and help you control your world. The ZigBee Alliance is an open, nonprofit association of approximately 400 organizations driving development of innovative, reliable and easy-to-use ZigBee standards. The Alliance promotes worldwide adoption of ZigBee as the leading wirelessly networked, sensing and control standard for use in consumer, commercial and industrial areas. For more information, visit www.zigbee.org.

Contacts:

Megan Shockney, The Ardell Group for HomePlug
[email protected]
+1 858-442-3492

Karl Stetson, Edelman for Wi-Fi Alliance
[email protected]
+1 206-268-2215

Brian Dolby, for HomeGrid Forum
+44 7899 914168

Sheila Lashford, for HomeGrid Forum
+44 7986 514240
[email protected]

Kevin Schader, ZigBee Alliance
[email protected]
+1 925-275-6672

Is it Lights Out for LightSquared? FAA Says Revised Plan to Mitigate GPS Interference Not Good Enough!

LightSquared is the upstart carrier building a “wholesale” LTE network in the U.S., which is to be sold to other carriers that want to offer “4G” mobile data services. Its proposed broadband wireless network would be of great value to wireline-only carriers like CenturyLink and XO Communications, which currently have no offering for “the mobile workforce.” MSOs like Cox are also interested in cutting a deal with LightSquared to resell LTE. Traditional carriers that aren’t building their own LTE networks might be enticed to pursue a wholesale relationship with LightSquared. Last month, Sprint agreed to pursue a 15-year deal focusing on the sharing of network expansion and equipment costs.

Sprint plans to use LightSquared to help bring its network to 4G LTE, an improvement from its current, and slower, mobile WiMax network (built by Clearwire). The company has promised to spend $5 billion to upgrade its network over the next three to five years after losing contract customers in 14 of the past 15 quarters. An upgraded network may give subscribers an incentive to stay with Sprint, rather than looking elsewhere for fast wireless speeds.

However, the LightSquared concept is feared to be a danger to the global positioning system (GPS). In June, according to NextGov, a Federal Aviation Administration advisory report said that the upper band allocated for use by LightSquared will result in “complete loss of GPS receiver functionality.” On June 30, LightSquared filed a new plan based on use of frequencies allocated to it that are not in question.  Many thought that plan would resolve the GPS interference complaint.

But LightSquared suffered a potential knockout blow this week when the Federal Aviation Administration (FAA) said that its proposal for a high-speed wireless network would “severely impact” the nation’s evolving aviation-navigation system, despite the company’s revised plan to quell concerns about interference.   The FAA estimated LightSquared’s interference would cost the aviation community $70 billion over the next 10 years, in part because of the loss of existing GPS safety and efficiency benefits, and the need to retrofit aircraft.

“Billions of dollars in existing FAA and GPS user investments would be lost,” the agency said in the report. The agency report was examining questions about the LightSquared proposal presented by the national coordination office director for the Space-Based Positioning, Navigation and Timing Executive Committee, which is part of the Executive Office of The President.

LightSquared’s Executive Vice President of Regulatory Affairs Jeff Carlisle disagrees with the FAA assessment, saying it doesn’t accurately reflect its proposed changes and seems to be evaluating a plan that is no longer on the table.

“Simply put, the vast majority of the interference issues raised by this report are no longer an issue. We look forward to discussing this with the FAA,” Mr. Carlisle said.

The seven-page FAA assessment said LightSquared’s plan could also hurt U.S. leadership in international aviation by eroding confidence in the U.S.-owned global positioning system. That would be despite “presidential commitments” to the International Civil Aviation Organization about the continued safety and availability of GPS technology, the FAA said.

On June 20, LightSquared offered a new plan that it said wouldn’t interfere with the vast majority of GPS systems. It would use just the portion of its frequencies that are farthest away from GPS signals and would transmit weaker signals. Even with those changes, LightSquared’s network could still affect some precision GPS systems, which are generally used by farmers, the aviation industry and others.

At a June 23 congressional hearing about LightSquared’s broadband-spectrum proposal, several lawmakers cited concern about the company’s plans. The company now says the hearing was addressing its original plan, not its proposed changes. One key House lawmaker said at the hearing that the Federal Communications Commission shouldn’t approve a service that disrupts or burdens GPS devices in the aviation industry.

In the FAA’s recent assessment, it said LightSquared’s most recent proposal would “severely impact” NextGen, an FAA initiative to build a new national air-traffic control system that calls for satellite technology to replace ground-based facilities. NextGen, officially called the Next Generation Air Transportation System, relies heavily on GPS-based technologies. LightSquared’s interference would not only erode existing GPS safety and efficiency benefits, but would also force the FAA to replan NextGen investments, the FAA said, resulting in additional development costs and delays.

The FAA would have to return to dependency on ground-based aviation aids and billions of dollars in existing agency and GPS-user investments would be lost, the agency said.

Read more: http://online.wsj.com/article/SB10001424053111904800304576472361793662904.html#ixzz1TVdNhlME

8 Telcos to test "cognitive" wireless technology from xG Technology (Comment on IEEE 802.22 WRAN standard)

A group of eight telecom providers will begin field testing a mobile technology that relies on unlicensed spectrum and frequency hopping to optimize broadband connectivity. The carriers, which are located across the country — in states ranging from California to Florida — will build on an earlier VoIP trial of xG Technology’s “cognitive” wireless platform, which uses spectrum in the 900 MHz and 5.8 GHz bands and avoids line interference by jumping between bands. xG Technology is expected to release a chip in September that supports data access at speeds of 3 Mbps.

xG Technology is pioneering what the company calls “cognitive” wireless technology that can sense interference from other devices using the same spectrum and hop away from those frequencies, said Chris Whiteley, vice president of business development for xG Technology, in an interview. Initially the company is targeting spectrum between 902 and 928 MHz—a band used in the U.S. for garage door openers, baby monitors, cordless phones and some video surveillance. But a new version of a chip that uses technology developed by xG is scheduled for availability in September, and will also support communications in 100 MHz of unlicensed spectrum in the 5.8 GHz range and will be able to shift between the two spectrum bands within 30 milliseconds.

Radio signals in the 900 MHz range penetrate buildings very well, Whiteley noted. “But as soon as you step outside, 5.8 GHz is a great line of sight [option] and you can offload capacity for outdoor use.”
In the future, the technology could be used in other spectrum bands, such as the TV white spaces band, Whiteley said.

xG has field tested its technology in a 32-square-mile network in Fort Lauderdale, Fla. supporting mobile VoIP services and also has a trial of a voice network underway with Texas-based independent telco Townes Telecommunications. Whiteley said xG focused on supporting voice service initially because, in comparison with data transmission, “getting VoIP to work correctly on an IP mobile network is the tougher challenge.”

The new xG chip coming out in September will also support data services, and companies such as Townes Telecommunications that have already been working with xG will be the first to deploy devices with the new chips. xG does not manufacture chips but develops technology which will be implemented on a chip. 

Telcos that have signed agreements to evaluate the xG Technology include Redi-Call Communications of Delaware, TelAtlantic Communications of Virginia, Cook Telecom of California, Silver Star Telephone Company of Wyoming, Venture Communications Cooperative of South Dakota, Smart City Telecom of Florida, and Public Service Cellular of Georgia, as well as Townes Telecommunications.

http://connectedplanetonline.com/independent/news/Eight-small-telcos-ink-deals-to-evaluate-new-broadband-wireless-technology-0725/


Comment:  Cognitive radio research has been ongoing for many years.  The IEEE 802.22 Wireless Regional Area Network (WRAN) standard was based on it.  Yet that recently ratified standard is apparently Dead on Arrival (DoA) as no networks based on it have been deployed or even announced. 

Not only must cognitive radios detect interference and defer use of those bands, they must also renegotiate use of the same channel on a time-shared basis, or else hop to a different channel.  That hasn’t happened commercially yet.  Good luck to the eight small telcos that are trialing xG Technology’s cognitive radios.
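The sense-then-hop behavior described in this comment can be sketched as a simple control loop.  This is purely illustrative — the channel plan, threshold, and function names below are invented for the sketch, not taken from xG Technology’s design:

```python
import random

# Illustrative channel plan: a few 900 MHz ISM channels plus one 5.8 GHz channel.
CHANNELS_MHZ = [902.0, 910.0, 918.0, 926.0, 5800.0]
INTERFERENCE_THRESHOLD_DBM = -85.0  # hypothetical detection threshold


def sense(channel_mhz):
    """Stand-in for an RF energy measurement on the given channel."""
    return random.uniform(-110.0, -60.0)  # simulated noise/interference power


def pick_clear_channel(current_mhz):
    """Sense all channels; stay put if the current one is clear, else hop
    to the channel with the least measured interference."""
    readings = {ch: sense(ch) for ch in CHANNELS_MHZ}
    if readings[current_mhz] < INTERFERENCE_THRESHOLD_DBM:
        return current_mhz  # current channel still usable; no hop needed
    return min(readings, key=readings.get)


channel = 902.0
for _ in range(5):  # a few sensing cycles (xG quotes ~30 ms per hop decision)
    channel = pick_clear_channel(channel)
```

A real cognitive radio is much harder than this loop: as noted above, it must also coordinate time-shared reuse of a channel with other radios, which is where deployments have stalled.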

 Here’s the IEEE Standards Association press release on IEEE 802.22 standard:

IEEE 802.22TM-2011 Standard for Wireless Regional Area Networks in TV Whitespaces Completed

PISCATAWAY, N.J.–(BUSINESS WIRE)–IEEE, the world’s largest professional association advancing technology for humanity, today announced that it has published the IEEE 802.22TM standard. IEEE 802.22 systems will provide broadband access to wide regional areas around the world and bring reliable and secure high-speed communications to under-served and un-served communities.

This new standard for Wireless Regional Area Networks (WRANs) takes advantage of the favorable transmission characteristics of the VHF and UHF TV bands to provide broadband wireless access over a large area up to 100 km from the transmitter. Each WRAN will deliver up to 22 Mbps per channel without interfering with reception of existing TV broadcast stations, using the so-called white spaces between the occupied TV channels. This technology is especially useful for serving less densely populated areas, such as rural areas, and developing countries where most vacant TV channels can be found.

IEEE 802.22 incorporates advanced cognitive radio capabilities including dynamic spectrum access, incumbent database access, accurate geolocation techniques, spectrum sensing, regulatory domain dependent policies, spectrum etiquette, and coexistence for optimal use of the available spectrum.

The IEEE 802.22 Working Group started its work following the Notice of Inquiry issued by the United States Federal Communications Commission on unlicensed operation in the TV broadcast bands.

Additional information on the standard can be found at the IEEE 802.22 WG page. To purchase the standard, visit the IEEE Standards Store.

http://www.businesswire.com/news/home/20110726007223/en/IEEE-802.22TM-2011-Standard-Wireless-Regional-Area-Networks

AT&T adds 202,000 U-verse TV subscribers in 2nd Quarter- It’s now coming to S.F!

U-verse Subscribers and Revenues Jump in 2nd Quarter

AT&T is now the eighth-largest pay-TV provider in the U.S. after netting 202,000 U-verse subscribers in the second quarter for a total of 3.4 million.  The venerable U.S. carrier gained 439,000 U-verse broadband subs, a 36% increase from one year ago! Impressively, U-verse revenue jumped 57% from a year earlier.

 “U-verse has transformed our consumer business,” said Chief Financial Officer John Stephens. 

Author’s Note:  AT&T’s total video subscribers, which include U-verse TV and bundled satellite customers (AT&T resells Dish Network), reached 5.26 million at the end of the reported quarter (representing 21.5% of households served). By contrast, Verizon had 3.7 million FiOS TV customers at the end of March, 2011, according to its first-quarter earnings release.  This will be updated soon when VZ releases their 2nd Quarter earnings report.


AT&T said it lost 451,000 traditional DSL customers (that mostly have ADSL + POTS service- like this author). 

Author’s Comment: It appears AT&T has no plans to retain old DSL subs, but instead to convert them to U-verse based high speed Internet when it is available in their area.

http://www.multichannel.com/article/471330-AT_T_Reels_In_202_000_U_verse_TV_Customers.php


Sanford C. Bernstein analyst Craig Moffett said AT&T’s quarterly U-verse net additions were “not far from consensus [expectations] of 199,000 in what is traditionally a seasonally soft quarter.” Barclays Capital analyst James Ratcliffe also said the gains were in line with expectations.

Miller Tabak analyst David Joyce had predicted 220,000 U-verse TV user additions, but the actual result didn’t make him change his predictions for overall pay TV subscriber growth in the latest quarter.

Joyce projects industry-wide net adds of 97,000 in the second quarter driven by gains for AT&T and Verizon (+170,000), as well as satellite TV firms DirecTV (+100,000) and Dish Network (+50,000). But he once again expects cable operators to post subscriber losses, which he estimates at 317,000 for publicly traded and 443,000 for privately held companies.

The user growth he expects would make for the third consecutive quarter of pay TV subscriber increases after two quarters of declines last year kicked off a debate over whether some consumers may be dropping their cable packages to substitute them with online video options.

http://www.hollywoodreporter.com/news/att-adds-202000-u-verse-213941

San Francisco Residents to get U-verse

AT&T can offer U-verse in areas where the city/ municipality permits it, the copper lines are good enough for high speed DSL transmission, and the video server can be placed close enough to the homes being served.

Three years after its initial proposal, the San Francisco Board of Supervisors voted 6 to 5 on July 19th to let AT&T deploy its U-Verse TV and broadband Internet service.  AT&T got clearance to install hundreds of utility boxes (AKA cabinets) on city sidewalks and alleyways without first having to undergo a lengthy and costly environmental analysis. The metal cabinets, which measure 4 feet tall, 4 feet wide and 2 feet deep, will house telecommunications equipment for the U-Verse triple play service bundle, which can include Internet access at speeds up to 26 Mbps along with digital TV and VoIP (voice over IP).  The cabinets that AT&T wants to install are much larger than its existing boxes in the city. Neighborhood activists had complained that the cabinets would block sidewalks, attract graffiti and clash with the dense scale and historic character of some of San Francisco’s communities.

While AT&T now has environmental clearance to install up to 726 boxes, the company said it would put in no more than 495 without going back to the Board of Supervisors for permission to install more of those boxes in the future.

The cabinets are used to interconnect AT&T’s fiber network with copper wires that go the rest of the way to individual homes. The carrier will still have to get approval for each box, but won’t have to undergo a study of the total impact of the equipment on the city’s environment.

“This decision means we’re finally going to be able to bring competition and choice to San Francisco,” said Marc Blakeman, AT&T regional vice president.

 Let’s see how quickly AT&T moves to deploy it in the city.

http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2011/07/20/BA2T1KCHSC.DTL

Observation and Comment on U-verse

For quite some time, we’ve been waiting for U-verse to come to Santa Clara County.  It would break the monopoly Comcast now enjoys on pay TV and higher-speed Internet service.  In many condos in Santa Clara (including the one I own), we can’t put satellite dishes on the roof.  In other cases, heavily tree-covered homes don’t have a direct line of sight for a satellite dish.  So anyone who wants to watch live sports must buy digital cable from Comcast, since very few games are broadcast on free over-the-air digital TV these days.  Even the MLB final league championship playoff games were on TBS and not free TV!

One of our very active IEEE ComSocSCV Discussion list members had U-verse installed in Los Altos, CA last year.  He says he is very happy with the service, despite some initial outages after installation.  A close companion of this author recently had U-verse installed in her Santa Clara apartment and it worked great right out of the box!  Her TV reception was crystal clear with a larger choice of channels than available with Comcast Digital Cable for the same price. 

This author is quite anxious to try U-verse, but we’re a bit worried about AT&T customer service, which has been less than stellar for DSL and even POTS.  Initial calls for service are outsourced to India and it generally takes a long time to resolve any technical problem.  We wonder if U-verse customer service will be handled differently than traditional DSL and POTS.  We certainly hope so!

U-verse References (by this author)

1. Increased Video traffic necessitates AT&T to cap DSL Internet + U-Verse

http://viodi.com/2011/03/13/increased-video-traffic-necessitates-att-to-…

2. A Perspective of Triple Play Services: AT&T U-Verse vs Verizon FiOS vs Comcast Xfinity

https://techblog.comsoc.org/2010/10/10/a-perspective-of-triple-play-service…

3. AT&T’s U-verse Build-Out Over by Year end (Source: DSL Reports)

https://techblog.comsoc.org/2011/05/20/ats-u-verse-build-out-over-by-year-end-source-dsl-reports 


Addendum:  In its July 22, 2011 earnings announcement, Verizon reported it had added 189,000 FiOS Internet and 184,000 FiOS TV customers.

ITU-T FG Cloud 6th Meeting: Progress on 7 Output Documents

The sixth ITU-T Focus Group on Cloud Computing (FG Cloud) meeting took place in Geneva, Switzerland, from June 27 to July 1, 2011.  Forty participants, representing 18 organizations, submitted 92 contributions, input liaisons and presentations to this meeting.  Meeting documents are only available to ITU-T members with a TIES account.

General information about the FG Cloud is available on its web site: http://www.itu.int/ITU-T/focusgroups/cloud/

The main results of the sixth FG Cloud meeting were the progression of seven output documents:
1. Introduction to the cloud ecosystem: definitions, taxonomies, use cases, high level requirements and capabilities.
2. Functional requirements and reference architecture
3. Infrastructure and network enabled cloud
4. Cloud security, threat & requirements
5. Benefits of cloud computing from telecom/ICT perspectives
6. Overview of SDOs involved in cloud computing
7. Cloud resources management gap analysis (initial draft for review)

These documents are in various stages of development.  Some of them are fairly stable, others are not.


This author believes that two of the above list of output documents will be especially important for cloud network providers and hardware/ software vendors. 

Here is a very brief high level overview of each of those two documents (they are works in progress):

Functional Requirements and Reference Architecture

Cloud architecture must meet several requirements to enable sustained innovation and development of cloud services. With multiple stakeholders involved, the cloud architecture must be flexible enough to fit the needs of infrastructure providers, service providers and service resellers. Cloud architecture must enable multiple models and use cases, some currently known and others to be envisioned in the future. Currently known models include IaaS, PaaS and SaaS, and these may well be used in combination. A cloud provider must be able to provide all or some of these services using the same architecture. For private and hybrid cloud operations, cloud services must appear like intranet services: a user must be able to access resources using the same domain names as on the intranet. Hosts and resources that have been migrated from private to public clouds should be accessible transparently, regardless of where they are currently hosted.  Cloud architecture must enable early detection, diagnosis and fixing of infrastructure or service-related problems. Consumers may have little to no control over ensuring that a service is running correctly, so the service depends on the provider’s ability to fix issues quickly.

Telecom Cloud Computing reference architecture should consider four entities:

1. Clients: users, internet applications, or software clients; all have corresponding functions to interwork with cloud services.
2. Network: also called the “pipeline,” the network will become more intelligent in cloud computing. Because computing and storage are aggregated in network data centers, the network architecture will likely have to change; this needs further study. All cloud interworking activity happens over the network.
3. The cloud itself: it generally includes three layers: the physical data center, the cloud OS, and service capabilities/portal. It provides APIs to clients or to other clouds. The cloud is complicated because of its range of technologies and service types. However, virtualization, distributed computing and multi-tenancy are all methods of organizing computing (and storage), and can be thought of as cloud core functions (here called the “OS”). All types of services run above the cloud “OS”: IaaS, PaaS and SaaS can be mapped onto instances running on the “OS,” each with a different service form. 
4. External interworking entities (e.g. management platforms, other clouds): a cloud services platform must consider how to integrate with existing operating platforms and how to interconnect with other clouds (of the same or different operators).
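As a rough illustration only (the class and method names below are invented, not from the FG Cloud draft), the four entities might be modeled like this:

```python
from dataclasses import dataclass, field


@dataclass
class Client:
    """Entity 1: a user, internet application, or software client."""
    name: str


@dataclass
class Cloud:
    """Entity 3: physical DC + cloud 'OS' + service capabilities/portal,
    exposed to clients and to other clouds through APIs."""
    name: str
    services: dict = field(default_factory=dict)  # e.g. "IaaS" -> API endpoint

    def api(self, service):
        return self.services[service]


@dataclass
class Network:
    """Entity 2: the 'pipeline' over which all interworking happens."""

    def route(self, client: Client, cloud: Cloud, service: str):
        return f"{client.name} -> {cloud.name}:{cloud.api(service)}"


# Entity 4, external interworking: a peer cloud reachable over the same network.
cloud = Cloud("telecom-cloud", {"IaaS": "/compute/v1"})
peer = Cloud("partner-cloud", {"SaaS": "/apps/v1"})
net = Network()
print(net.route(Client("alice"), cloud, "IaaS"))  # alice -> telecom-cloud:/compute/v1
print(net.route(Client("alice"), peer, "SaaS"))   # alice -> partner-cloud:/apps/v1
```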

Infrastructure and Network Enabled Cloud

The ITU FG Cloud participants believe that network service providers have a unique opportunity to bundle or combine Network and IT resources to provide cloud computing and/or storage services. Network service providers can also leverage their network assets to ensure excellent network availability and performance for secure end to end cloud services.  Another opportunity for service providers is to evolve network resource allocation and control to more dynamic in order to meet the needs to provision on-demand cloud services.

The activity of this work area will be focused on:
a) the ability to link existing network services, Internet connectivity, and L2/L3 VPNs efficiently to public or private cloud services.
b) the ability to link flexible L2 and L3 network management with cloud technology, forming an integrated cloud infrastructure enabling cloud services.

The infrastructure and network enabled cloud can deliver IT infrastructure (especially virtualized IT resources) as a service. Virtualization allows the splitting of a single physical piece of hardware into independent, self-governed environments, which can be extended in terms of CPU, RAM, Disk, I/O and other elements. The infrastructure includes servers, storages, networks, and other hardware appliances.

The common characteristics of infrastructure and network enabled cloud include:
-Network centric: The framework of infrastructure and network enabled cloud consists of a large pool of computing resources, storage resources, and other hardware devices that connect with each other through the network.
-Service provisioning: Infrastructure & network enabled cloud provides a multi-level, on-demand service model according to the individual demands of different customers.
-High scalability/reliability: Infrastructure & network enabled cloud can adapt to changing customer requirements quickly and flexibly, and achieves high scalability and high reliability through various mechanisms.
-Resource pooling/transparency: The underlying resources (computing, storage, network, etc.) of infrastructure and network enabled cloud are transparent to the customer, who does not need to know how and where resources are deployed.
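The virtualization idea described above — splitting one physical host into independent, self-governed environments that can be extended in CPU, RAM, disk, etc. — can be illustrated with a toy allocator.  Everything here is hypothetical, a sketch of the concept rather than any vendor’s implementation:

```python
class Host:
    """A physical server whose CPU and RAM are carved into self-governed VMs."""

    def __init__(self, cpus, ram_gb):
        self.free_cpus, self.free_ram_gb = cpus, ram_gb
        self.vms = {}

    def allocate(self, name, cpus, ram_gb):
        """Split off one independent environment from the shared pool."""
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError("insufficient capacity on this host")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

    def extend(self, name, cpus=0, ram_gb=0):
        """Grow an existing VM in place -- the 'extensible' property above."""
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError("insufficient capacity on this host")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.vms[name]["cpus"] += cpus
        self.vms[name]["ram_gb"] += ram_gb


host = Host(cpus=32, ram_gb=256)
host.allocate("vm-a", cpus=8, ram_gb=64)   # carve out one environment
host.extend("vm-a", cpus=4, ram_gb=32)     # grow it in place
print(host.vms["vm-a"], host.free_cpus, host.free_ram_gb)
```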


Next meeting:  The seventh FG-Cloud meeting is scheduled for  September 26-30, 2011 in Seoul, Korea.   September 26th will be a Joint Meeting with ISO/IEC JTC1 and NIST.


References:  This author has written many articles about the Cloud Computing standards (or the lack thereof).  Here are links to a few of them:

http://viodi.com/2011/06/23/cloud-leadership-forum-opportunities-obstacles-to-cloud-adoption/

https://techblog.comsoc.org/2010/12/10/whats-the-uni-nni-and-network-infrastructure-needed-for-cloud-computing

IEEE P2302 Inter-Cloud Working Group Kickoff Meeting: July 15, 2011

Disclaimer: This is not an official meeting report.  The author is not an officer of this committee.  He attended this meeting (on his own time and expense) as an observer representing IEEE ComSoc, where he is a full-time volunteer.

Executive Summary

The IEEE P2302 WG held its first meeting on Friday afternoon, July 15th in Santa Clara, CA.  Approximately 16 people, including two IEEE Standards Association employees, attended the meeting.  There were two presentations and some discussion (much of it precipitated by this author).  The Chairman of the IEEE Cloud Initiatives also spoke.

1.  The scope, terms of reference, and problems to be solved were addressed in a presentation by WG Chair David Bernstein. 

2.  The goals, objectives and output whitepaper of the Japan-based Global Inter-Cloud Technology Forum (GICTF) were presented by Kenji Motohashi of NTT Data.

3.  Steve Diamond, IEEE Cloud Standards chairman, hosted this meeting at the EMC Santa Clara facility. Steve welcomed the attendees and made a few concluding remarks about the near-term work plan.

Background article:  https://techblog.comsoc.org/2011/04/07/ieee-cloud-computing-initiative-will-it-have-legs


Abstract

The proposed P2302 standard will define topology, functions, and governance for cloud-to-cloud interoperability and federation. Topological elements include clouds, roots, exchanges (which mediate governance between clouds), and gateways (which mediate data exchange between clouds). Functional elements include name spaces, presence, messaging, resource ontologies (including standardized units of measurement), and trust infrastructure. Governance elements include registration, geo-independence, trust anchor, and potentially compliance and audit. The standard does not address intra-cloud (within cloud) operation, as this is cloud implementation-specific, nor does it address proprietary hybrid-cloud implementations.

Scope:  The working group will develop the Standard for Intercloud Interoperability and Federation (SIIF), covering the topological, functional, and governance elements described in the abstract above.

Purpose: This standard creates an economy amongst cloud providers that is transparent to users and applications, which provides for a dynamic infrastructure that can support evolving business models. In addition to the technical issues, appropriate infrastructure for economic audit and settlement must exist.
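As a purely illustrative sketch (all names are invented; P2302 defines no code), the topological elements named in the abstract — clouds, roots, exchanges that mediate governance, and gateways that mediate data exchange — might be modeled as follows, with cloud-to-cloud data flowing gateway → exchange → gateway:

```python
# Topological elements from the P2302 scope statement (names invented here).
topology = {
    "roots": ["root-1"],                                  # naming/trust anchors
    "exchanges": {"exch-A": ["cloud-1", "cloud-2"]},      # mediate governance
    "gateways": {"cloud-1": "gw-1", "cloud-2": "gw-2"},   # mediate data exchange
}


def federate(src, dst, topo):
    """Return the path a data exchange between two federated clouds takes:
    source gateway -> mediating exchange -> destination gateway."""
    exch = next(e for e, members in topo["exchanges"].items()
                if src in members and dst in members)
    return [topo["gateways"][src], exch, topo["gateways"][dst]]


assert federate("cloud-1", "cloud-2", topology) == ["gw-1", "exch-A", "gw-2"]
```

Intra-cloud operation is deliberately absent from the model, mirroring the standard’s stated exclusion of implementation-specific internals.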

P2302 WG web site:   http://grouper.ieee.org/groups/2302/


David Bernstein’s Inter-Cloud Introduction presentation

David indicated an emerging view on inter-cloud would come from three types of organizations:

1. Standards organizations and industry associations/forums.

2. Research institute work and open source software organizations.

3. Public test beds.

Mr Bernstein cited an inter-cloud use case for storage roaming, where a client could gain access to “federated cloud” storage with the cloud storage provider synchronizing the data stored in the cloud(s) to the mobile access device.

A proposed Inter-Cloud Reference Network Topology was presented, which focused on two cloud network elements: an inter-cloud root and an inter-cloud exchange.  David said there was a lot of research work going on in this area.  In response to a question about how meeting attendees could gain access to the related research papers, he said they are now on IEEE Xplore, but would eventually be uploaded to the “IEEE P2302 Collaboration web site.”  The timing for that was not specified, but a user ID and password will be required for access to those and other WG documents.  It was noted that copyright agreements with the authors would be needed prior to uploading.

David noted that a “Registration and Trust Authority” for inter-cloud was urgently needed.  It would interact with other similar authorities, e.g. an IEEE or GICTF Registration Authority.  Trust architecture and functional elements also must be defined.

A standardized conversational protocol between cloud gateway entities is also needed.  David suggested that might be XMPP or perhaps SIP.  No details were given for why those might be a good choice.
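No protocol has been chosen, so purely as a hypothetical illustration of the XMPP option, a gateway’s resource “presence” advertisement might look like the following (the element and attribute names here are invented for the sketch, not from any draft):

```python
import xml.etree.ElementTree as ET

# Hypothetical XMPP-style presence stanza a cloud gateway might publish
# to advertise its available resources to an inter-cloud exchange.
presence = ET.Element("presence", attrib={"from": "gw.cloud-1.example"})
resources = ET.SubElement(presence, "resources")
ET.SubElement(resources, "compute", attrib={"vcpus": "128"})
ET.SubElement(resources, "storage", attrib={"tb": "40"})

stanza = ET.tostring(presence, encoding="unicode")
print(stanza)
```

XMPP’s existing presence, messaging, and federation machinery is presumably what makes it attractive here; SIP would bring comparable session negotiation and registration facilities.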

A high level overview of the P2302 deliverable outputs was presented by David.  He identified three Inter-cloud work items for this small WG:

1. Functional overview and functional description of each inter-cloud network element

2. Specification of protocols and formats

3. Coordination of test beds and open source software activities

Discussion: 

It was noted that there had not been much, if any, work done on these areas by other cloud computing SDOs.  This author suggested compiling a list of relevant Cloud SDOs and the inter-cloud work they were doing.  After evaluating that, it was suggested to request formal liaisons with said SDOs.  A first cut at such a Cloud Computing SDO list is at:

https://techblog.comsoc.org/2011/07/15/cloud-computing-standards-dev…


Kenji Motohashi’s presentation on Global Inter-Cloud Technology Forum (GICTF)

Motohashi-san stated that the goal of the GICTF was to promote global standardization of the “inter-cloud system.”  It is expected that more workloads (and storage) will move from one cloud to another, yet be accessed by the same entity. Therefore, solid standards are needed for inter-cloud interfaces.

Note: The GICTF web site states: “We aim to promote standardization of network protocols and the interfaces through which cloud systems interwork with each other, and to enable the provision of more reliable cloud services than those available today.”   http://www.gictf.jp/index_e.html

The first output of the GICTF was a white paper: Use Cases and Functional Requirements for Inter-Cloud Computing, August 9, 2010.  It is available for free download at: 

http://www.gictf.jp/doc/GICTF_Whitepaper_20100809.pdf

Kenji noted that provisioning, control, monitoring and auditing (for SLAs and billing) across multiple clouds are all urgently needed.  Inter-cloud architecture and standardized interfaces must be defined and specified.  This presents a non-trivial set of problems to be solved by GICTF and related SDOs.  Two efforts that were mentioned were the OMG Telecom Cloud and the NIST Cloud Computing Standards Roadmap.  Kenji thought that the ITU-T FG Cloud might be a good organization to collaborate with on inter-cloud, but he wasn’t up to date on the work it is doing (Mr. Hiroshi Sakai of NTT attended the last FG Cloud meeting).


P2302 Workplan:

-Have a conference call in 2 weeks. 

Author’s Note: Hopefully, the P2302 web site will be uploaded with: the 2 presentations made at this meeting, the relevant inter-cloud research documents alluded to, and the Chair’s July 15th meeting report (and/or meeting minutes from the Secretary) by then.  And WG members will each receive a user name/password to access that content.

-Next f2f WG meeting will be in about 6 weeks.  Dial in access (via the web) will be possible for those who can’t be physically present at the meeting.  However, a meeting host is needed and no one in the room volunteered.


Observation and Closing Comment:

This meeting was scheduled for four hours, but it lasted less than two hours (it adjourned at 3:35pm but there was a 40 minute break).  The agenda was not circulated in advance, there was no call for contributions, and Motohashi-san told us he only created his presentation the same day!  There was a reference made to all the inter-cloud research/ papers presented at conferences, but no list of such papers was presented. 

So there seems to be a huge mismatch between the amount of work that needs to be done and the very little that was accomplished at this first P2302 meeting.  It seemed almost like an attempt to identify the problem set, but not seriously undertake the solution (which would involve a tremendous amount of work and collaboration with other SDOs).

It appeared that the majority of attendees at this meeting were curiosity seekers rather than folks who wanted to contribute to the huge standards project sketched out.  In particular, the major cloud vendors (Amazon, Rackspace, Microsoft Azure, etc.) were either silent or not present (the attendance list was not available to the attendees).  I don’t believe the P2302 Chairman or the IEEE Cloud Initiative Chairman recognize the “heavy lifting” type of work that has to be done, or the time-consuming process of liaising with other like-minded standards organizations.  Unless there is a huge uptick in dedicated delegates, concurrent with more aggressive leadership and organization, this standards initiative will fall by the wayside.
