Success! Aug 24 ComSocSCV Social with tech discussions, networking and inter-personal communications

Aug 24 6pm-9pm  China Stix,  Santa Clara, CA  www.comsocscv.org

During our 6pm -7:15pm networking session we had breakout groups to discuss:

  • Sprint’s M2M platform and initiatives (Led by Sprint’s M2M product mgr)
  • High speed transmission on twisted pair:  10G BaseT (LANs) and xDSL  (Sept 8 meeting topic)
  • Smart Grid, Smart Devices, Internet of Things (Sept 25 workshop topic)
  • 40G/100G Ethernet (Oct 13 meeting topic)
16 people (2 from Sprint) attended our gala dinner at China Stix in Santa Clara.  In addition, two came for the free networking session that preceded the dinner.  We had attendees driving to Santa Clara from all over the SF Bay Area, Monterey, Sacramento/Folsom and Orinda.  Lots of great conversation, camaraderie, fine food and wine.  Two lucky individuals took home a bottle of premium wine as a gift.

Here are a few of the comments/testimonials received via email, with my acknowledgement at the end of the chain:

Alan —

Thanks again for the wonderful time last Tuesday–I will make every one of these events from now on.  I hope to make the 25 Sep event, though I think I may be on a plane coming in to San Jose at the time.  You run a great meeting and social event, my friend–many thanks!

Cheers,

Karl

— KARL D. PFEIFFER, PhD, Lt Col, USAF
Assistant Professor


Thanks for a very well-organized Aug 24th social!

Alan, thanks a million from my side as well. It was well-organized and everyone enjoyed the evening. I finally got to meet with Alan Earman and within minutes found out that we know several people in common. The world of connectivity does these wonders and ComSoc is a big part of it!

MP Divakar


Hello Alan and everyone,

Thank you for a great event last night.  Thank you for having us there.  We had a great time meeting everyone, the food was delicious, and the wines were tasteful.  I hope we can meet again soon.  Have a great day everyone!


Yes Alan, I agree with Sameer – this was great! I wish more people came and enjoyed the social.
Thanks,
Prasanta


Hi Alan,
Just wanted to say thank you once again for a very well-organized social yesterday at China Stix. My wife Sumi and I enjoyed meeting with everyone, the stimulating conversations and the very good food & drink.
Great job and thanks again!
Sincerely,
Sameer


Hi Alan,

Thank you so much for a great evening. I enjoyed meeting with everyone there, as well as your good choice of food. I appreciate all your good words about me. I look forward to future collaborations.

Alice


Hi Alan

I had a wonderful time at the ComSoc social. The food, wine, and conversation were fantastic. Thank you for inviting me to attend.

Thank you for sharing the panel questions with me. Perhaps we should discuss security and/or privacy aspects as they relate to regulatory issues.

I’m sure there will be a few folks interested in IPv4 addressing; connecting billions of machines means at some point we will run out of IPv4 addresses.

Kind regards,

Michael Finegan

Manager, M2M Solutions Engineering

Emerging Solutions Group at

Sprint Nextel


Chairman’s Response:

Dear All

Thanks for your compliments on the social.  Alice had earlier acknowledged she had a great time and I hope everyone else did too!
We need more of these events to make people feel good, improve their networking/inter-personal communications skills, and exchange information and opinions.  It gets your mind off all the financial/economic/political problems of the day.  And there sure are a lot of those!
I encourage all of the attendees (see To: list) to communicate with one another and build your personal network of contacts.  You can continue the lively dinner conversation via email and phone.  Just reach out to those people you’d like to know better or exchange ideas/proposals with.  Let the round two communications begin upon receipt of this email!

Thanks again for coming last night.  I especially appreciate the dedication of those who had very long commutes: Olu from Sacramento/Folsom, Karl from Monterey, Michael from Orinda, Erin from?  We really appreciate your attendance at our gala social!

Warmest regards and best wishes
Alan J Weissberger, ScD, retired Prof SCU EE Dept
IEEE ComSoc SCV Chairman www.comsocscv.org

Sept 8 ComSocSCV meeting backgrounder: High Speed Transmission on Twisted Pair in LANs and xDSL

 
Our Sept 8th meeting features talks on high speed transmission in both LANs/data centers and DSL access networks.  Details at
 
Several of your ComSocSCV officers spent yesterday afternoon and early evening at AT&T Labs in San Ramon.  We were surprised to learn of AT&T's extensive use of VDSL2 in FTTN deployments of U-Verse, their triple play bundled service that includes TV/VoD via IPTV, high speed Internet access, and voice (either POTS or VoIP).  They are also using VDSL to reach subscribers in Multi Dwelling Units (MDUs).  Separate from U-Verse, ADSL2 is being used for single point Internet access.
 
Our other talk will cover the status of 10G BaseT for LANs and data centers.  It's amazing that in 1993, "high speed" networking meant only 100M bits/sec.  Now 1G BaseT Ethernet is widely deployed and 10G BaseT is coming along fast (the standard has been completed).

 
Here is a brief history of twisted pair based Ethernet and xDSL, based on personal observations in the 1980s and 1990s:

 
Twisted Pair based LANs
 
In the mid 1980s, AT&T had a 1M b/s twisted pair transmission system named “STAR-LAN.”  It never went anywhere, as the cheaper version of coax based Ethernet (10Base2) was more popular.  Then in the late 1980s, Manchester coded 10BaseT became very popular.
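As an aside, Manchester coding is simple enough to sketch: each bit is transmitted as two half-bit signal levels with a guaranteed mid-bit transition, which is what gives the receiver easy clock recovery. A minimal illustration in Python, using the IEEE 802.3 convention (a '1' is a low-to-high transition, a '0' is high-to-low); the function name is mine, purely for illustration:

```python
def manchester_encode(bits):
    """Manchester encoding (IEEE 802.3 convention) as used in 10BaseT:
    each bit becomes two half-bit levels with a mid-bit transition.
    A '1' is low-to-high, a '0' is high-to-low."""
    out = []
    for b in bits:
        out.extend([-1, +1] if b else [+1, -1])
    return out

print(manchester_encode([1, 0, 1, 1]))  # [-1, 1, 1, -1, -1, 1, -1, 1]
```

The guaranteed transition in every bit cell is also why Manchester doubles the signaling rate (10 Mb/s needs 20 Mbaud), one reason later standards moved to more bandwidth-efficient line codes like MLT-3.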

Notes on nomenclature: 
 
“BASE” is short for baseband, meaning that there is no frequency-division multiplexing (FDM) or other frequency-shifting modulation in use; each signal has full control of the wire, on a single frequency. 
 
“T” designates twisted pair cable, where the pair of wires for each signal is twisted together to reduce radio frequency interference and crosstalk between pairs (FEXT and NEXT).
 
“UTP” = Unshielded twisted pair, as in UTP-3 (voice grade) and UTP-5 (data grade) twisted pair.
 
“PMD” is the lowest sublayer in the IEEE 802.3 PHY layer.  It stands for Physical Medium Dependent.  Any coax, twisted pair, or fiber optic transmission system is, in essence, a PMD sublayer.
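Putting the nomenclature together, a PHY name can be decoded mechanically. A rough sketch (the parser is my own simplification; real 802.3 names have more prefix/suffix variants than this handles):

```python
import re

def decode_ieee_phy_name(name):
    """Illustrative decoder for IEEE 802.3 PHY names like '10BASE-T' or
    '100BASE-TX', following the nomenclature notes above.
    Simplified: rate, 'BASE' (baseband), then the medium designator."""
    m = re.match(r'(\d+)(G?)BASE-?(\w+)', name.upper().replace(' ', ''))
    if not m:
        raise ValueError(f"unrecognized PHY name: {name}")
    # 'G' multiplies the leading number into gigabits (expressed here in Mb/s)
    rate = int(m.group(1)) * (1000 if m.group(2) == 'G' else 1)
    return {'rate_mbps': rate, 'signaling': 'baseband', 'medium': m.group(3)}

print(decode_ieee_phy_name('100Base-TX'))
print(decode_ieee_phy_name('10GBase-T'))
```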

Continuing the story…
 
Sometime in 1992, ANSI X3T9.5 began developing a standard for 100M b/sec FDDI on Twisted Pair.  It was called “TP-PMD.”  Discussion Group member (and former IEEE 802.3 Chair/ Vice-Chair) Geoff Thompson and I participated in that committee.  It was chaired by a guy from DEC.  In 1993 there was a performance test between the two competing twisted pair transmission technologies that were candidates for the TP-PMD standard.  It was conducted at an independent test lab in New Hampshire.  Crescendo’s technology (based on a 3 state Pulse Amplitude Modulation code called “MLT-3”) beat out National Semiconductor’s and was chosen as the TP-PMD standard.
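For the curious, MLT-3 is easy to sketch: the line cycles through three levels in the order 0, +1, 0, -1; a '1' bit advances to the next level in the cycle, and a '0' bit holds the current level. A minimal illustration (the function name is mine):

```python
def mlt3_encode(bits):
    """Sketch of the MLT-3 line code chosen for TP-PMD (and later 100BaseTX):
    output cycles through 0, +1, 0, -1; a '1' advances the cycle, a '0' holds."""
    cycle = [0, +1, 0, -1]
    idx, out = 0, []
    for b in bits:
        if b:
            idx = (idx + 1) % 4  # advance to the next level on a '1'
        out.append(cycle[idx])   # a '0' repeats the current level
    return out

print(mlt3_encode([1, 1, 1, 1, 0, 1]))  # [1, 0, -1, 0, 0, 1]
```

Because a full cycle takes four '1' bits, the maximum fundamental frequency is a quarter of the symbol rate, which is what let the 125 Mbaud 100BaseTX stream fit within the bandwidth of UTP-5.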
 
Also during 1993, there was a “Fast Ethernet” standards war, with HP’s 100 VG AnyLAN (new MAC and PHY) battling 100BaseT (where the Ethernet MAC was not changed).  100BaseT had one version for UTP-3 and another for UTP-5.  It was the latter version, known as 100BaseTX, that dominated the market.  Grand Junction seemed to be the ringleader of that camp, although Intel was a staunch supporter.  100BaseTX used the PMD from TP-PMD without any changes.  Ironically, Crescendo and Grand Junction (as well as Kalpana) were all acquired by Cisco, and that’s how Cisco came to dominate the LAN switching market.
 
Years later, 1G Base T and now 10G Base T became IEEE 802.3 PMD standards.  I have not followed the market acceptance of those, but I’m sure our Sept 8 speakers from Teranetics will fill us in.

Digital Subscriber Loop  (xDSL)
 
The first version of DSL was for the Basic Rate ISDN U interface (between the Network Terminating Unit and the voice grade twisted pair access network).  In North America, it was based on the 2B1Q line code (Pulse Amplitude Modulation), which was selected by the T1E1.4 committee in August 1986 as a compromise, because the committee couldn’t decide between 3 completely different transmission systems.  (I was actually at that meeting in Monterey, CA and heard the “dark horse” presentation by Andrew Siroka of Mitel Semiconductor.  He claimed BT had done extensive tests showing that 2B1Q outperformed the other systems, and that Mitel could make a transceiver with a significantly smaller die size (lower cost and power dissipation) than the other proposed systems.)
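2B1Q ("two binary, one quaternary") simply maps each pair of bits to one of four PAM levels, halving the symbol rate. A sketch of the mapping as I recall it from T1.601 (first bit gives the sign, second selects the outer or inner level); the helper name is mine:

```python
# 2B1Q mapping per ANSI T1.601: each dibit becomes one quaternary symbol.
#   10 -> +3,  11 -> +1,  01 -> -1,  00 -> -3
QUAT = {(1, 0): +3, (1, 1): +1, (0, 1): -1, (0, 0): -3}

def two_b_one_q(bits):
    """Encode an even-length bit sequence into 2B1Q PAM symbols."""
    assert len(bits) % 2 == 0, "2B1Q consumes bits two at a time"
    return [QUAT[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(two_b_one_q([1, 0, 0, 0, 1, 1]))  # [3, -3, 1]
```

Two bits per symbol is what turned Basic Rate ISDN's 160 kb/s aggregate into an 80 kbaud line signal, easing the bandwidth demands on the copper loop.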
 
Bellcore’s Joe Lechleider – a member of the T1E1.4 committee (as was I), had suggested asymmetry would allow higher speeds than ISDN’s 160 kbps, perhaps as high as 1.5 Mbps.  The theoretical results wound up being a lot more than 1.5 Mbps, depending on line length, bridge taps, condition of copper, etc.  However, there was no standards group that seemed to be interested.
 
In the early 1990s, there was a new vision of telco TV: initially on fiber optic cable to the home, but some thought it might be feasible to transmit 1 video stream in 1 direction over a 1.5M bit/sec twisted pair, using the higher frequencies above 100 kHz.  In 1992 the T1E1.4 standards committee took on the Asymmetric Digital Subscriber Loop (ADSL) project.  There were several entries in the official T1E1.4 standards competition:
 
  • Stanford/Amati DMT led by John Cioffi and his grad students
  • Bellcore/UCLA/Broadcom QAM
  • AT&T’s CAP (Carrierless Amplitude/Phase modulation, actually a DSP based version of QAM)
The adaptive multicarrier system known as Discrete Multi-Tone or DMT (with bit swapping between bins to track noise changes) won the competition by having much better noise margins.  The closest test was an 11 dB advantage for DMT; some of the tests showed 30 dB improvements.  T1E1.4 picked DMT on March 10, 1993.  That standard did not become popular.  Instead, AT&T spin-offs (Globespan, Lucent and Paradyne) started making CAP based DSL chips and equipment, which AT&T and other telcos started to deploy.  In 1996, T1E1.4 took on an ADSL version 2 standard (T1.413v2).  Due to a lot of controversy, there was a primary standard based on DMT (which I contributed to and wrote several sections of) and an Appendix on CAP.  DMT quickly prevailed as Alcatel started designing and deploying DSLAMs based on it.  The ADSL Forum only recognized DMT and not CAP.  Carriers all over the world (like Pac Bell, Singapore Tel and many others) signed exclusive deals with Alcatel, and DMT based DSL became dominant.  CAP was then dead. 
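The essence of DMT bit loading can be sketched with the standard SNR-gap approximation: each tone (bin) carries roughly floor(log2(1 + SNR/Γ)) bits, where the gap Γ folds in the target error rate, coding gain and noise margin. The 9.8 dB gap below is only an illustrative value, and the function is my own sketch, not any standard's exact algorithm:

```python
import math

def dmt_bit_loading(snr_db_per_tone, gap_db=9.8, max_bits=15):
    """Per-tone bit allocation via the SNR-gap approximation:
    bits_i = floor(log2(1 + SNR_i / Gamma)).
    gap_db of 9.8 is illustrative; real modems derive it from the target
    BER, coding gain, and configured noise margin."""
    gap = 10 ** (gap_db / 10)
    bits = []
    for snr_db in snr_db_per_tone:
        snr = 10 ** (snr_db / 10)
        bits.append(min(max_bits, int(math.log2(1 + snr / gap))))
    return bits

# Tones with high SNR carry many bits; noisy tones carry few or none.
print(dmt_bit_loading([40, 30, 20, 10, 3]))  # → [10, 6, 3, 1, 0]
```

Bit swapping then moves bits at runtime from tones whose SNR has degraded to tones with spare margin, without retraining; this per-tone adaptivity is what gave DMT its noise-margin advantage over the single-carrier CAP/QAM systems.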
Aware led a consortium of groups who did not want to license Stanford/Amati patents by introducing “G.lite,” which tried to remove the very essential bit-swapping of DMT and reduced the number of tones from 256 to 128 to “reduce cost.”  While G.lite became an ITU-T standard (the editor was from Intel, which later exited the xDSL business), it failed in the marketplace.  Instead, G.dmt (and then ADSL2+) went the other direction of higher speeds, actually increasing the number of tones.  DMT emerged as the worldwide transmission system for ADSL.
 
VDSL was moving on a parallel track, but was perceived to be a smaller market due to distance limitations.  There was an equally fierce CAP vs DMT standards war, such that T1E1.4 could not select a clear winner.  VDSL was envisioned to use ATM exclusively for layer 2 transport, not Ethernet.  But the IEEE 802.3ah Ethernet in the First Mile (EFM) committee selected DMT-VDSL as the short range copper interface and defined a convergence sublayer to make the Ethernet MAC ride over VDSL (and SHDSL-2).  Despite much fanfare, I don’t believe that standard was ever implemented.  But now, AT&T has deployed VDSL2 as part of its U-Verse triple play transmission system from the fiber cabinet in the node to the customer premises.  Unlike the earlier versions of the VDSL and EFM standards, there is a POTS band reserved in VDSL2 for analog telephony.
Note:  I exited the DSL space over 10 years ago, so have not kept up with its progress.  However, I got the following information from a very credible source that must remain anonymous:
 
“There was a second VDSL Olympics in 2003, also hosted by Bellcore and also BT.   DMT VDSL systems were submitted by Alcatel and by Ikanos, while QAM/CAP systems were submitted by Lucent and by Metalink/Infineon.   The results were similar to the first VDSL Olympics in that the advantage was roughly 10 dB.   That did it, CAP/QAM died at a June 2003 T1E1.4 meeting where DMT was selected for VDSL2.”
 
Yesterday we learned that AT&T is using VDSL2 for its U-Verse FTTN transmission system, between the Optical Network Unit (ONU) and the subscriber’s Network Termination unit (NT), over a copper twisted pair.  They are also using it as a U-Verse distribution system within Multi-Dwelling Units.  I was astonished that AT&T claimed they got very good performance at 5K feet line length and often deployed longer VDSL loops.  They are now testing 4 HD video streams + high speed Internet + POTS over a single VDSL2 loop!  We expect test results to be announced later this year.
 
We will get filled in on all the gaps by our ASSIA speaker on Sept 8th.  I invited Prof. Cioffi to the meeting, but he wrote that he might have to travel to Asia.  Cioffi and I taught DSL classes at IEEE Infocom, and we co-authored several T1E1.4 standards contributions.  Later, I taught many ADSL architecture classes at private companies that were implementing or deploying ADSL/SDSL.  My partner was Amati’s chief engineer John Bingham (with whom I had worked at Fairchild in the spring and summer of 1970).  Bingham and I got so good at teaching the ADSL class that we joked we could swap sections (he did the modulation and transmission, while I did the architecture and OAM section).  He asked me to write the introduction and the second chapter of his book on ADSL network architecture.  You can read it online (isn’t this a copyright violation?):
 
 
You can buy the book here:
 
 
Book Review from IEEE Communications Mag, Sept 2001:
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=948376&userType=inst  (IEEE Explore account required to access full text)
End of Story…

Cloud Computing: Impact on IT Architecture, Data Centers and the Network: July 14th IEEE ComSocSCV meeting

Three keynote talks from VMware, Microsoft and Ericsson will be followed by a lively panel/Q&A session with Juniper Networks also participating. We are all very excited about this comprehensive and well balanced look at cloud computing from both a computing and communications perspective. It should be one of the best technical meetings of the year!

www.comsocscv.org

Presentation Titles and Abstracts

 

Building Many Bridges to the Cloud, Robin Ren, Director of R&D, Cloud Applications and Services, VMware

Cloud computing is on every CIO’s top priority list nowadays. However, like any “game-changer” technology in history, today’s Cloud Computing field can appear to be both exciting and chaotic. Most large technology companies claim to have at least one cloud product or service. Many start-up companies are also trying different ideas. In the introduction, we will offer answers to some basic questions:

-What is Cloud Computing?
-Why does Cloud matter?
-How will Cloud change the IT industry?

We’ll look at the major Cloud Computing players, trying to analyze the big trend and compare different approaches. In the end, there are several valid ways to move from traditional IT to the Cloud, targeted at different audiences and workloads. It is important to understand how you can participate in and benefit from this new “IT gold rush.”

Cloud Data Centers and Networking Trends, Alan G. Hakimi, Senior Cloud Architect, Microsoft Services Enterprise Strategy and Architecture

The data center is at the heart of cloud computing. It brings dynamic virtualized server and storage environments to users via networks that provide cloud connectivity. The networks used to access cloud services will need more intelligence in several areas. They will have to quickly react to changes in the computing/storage environment, recovering from faults, and increasing or decreasing scale. This session will describe some architectural patterns in IaaS with respect to designing around resiliency and bandwidth. We will discuss the differences between traditional data centers and cloud data centers including intra-data center and inter data center communications. This session will also address networking trends with respect to federating clouds and providing secure, high quality network access to the data center.

Cloud Connectivity – offensive or defensive play? Arpit Joshipura, VP of Strategy and Market Development, Ericsson Silicon Valley
Cloud services and advanced devices are worthless without connectivity. At the same time, cloud services are increasing in value with the addition of mobility. This talk focuses on the value of connectivity to the cloud and discusses the mobile aspects that an operator can leverage. With the asset of connectivity, an operator can use the Cloud as both an offensive and a defensive strategy. This talk outlines the details of this strategy and identifies requirements on connectivity, including type of access, SLA, QoS, interoperability and standardization.

Additional Panelist: Colin Constable, Chief Enterprise Architect within the office of the CTO, Juniper Networks

Bios:

Robin Ren is a Director, R&D at VMware in Palo Alto, California. He manages an engineering team in the new Cloud Applications and Services BU. He is involved in many of VMware’s cloud initiatives at the Infrastructure-, Platform-, and Application-as-a-Service layers. He is also the headquarters ambassador for the VMware R&D Center in Beijing, China.

Alan Hakimi joined Microsoft in 1996 as a member of the Microsoft Consulting Services group. Alan is an IEEE member and has MCA and CITA-P architect certifications. He is currently working in Microsoft Services leading efforts on Enterprise Strategy and Cloud Architecture. Alan enjoys cycling, hiking, making music, cooking, and studying philosophy. His blog on Zen and the Art of Enterprise Architecture is located at http://blogs.msdn.com/zen.

Arpit Joshipura heads up Strategy & Market Development for Ericsson in Silicon Valley. In this role, he is responsible for network operator architecture strategies including IP, convergence, and cloud. He is a valley veteran and has worked at several startups and established companies in leadership roles, both business and engineering. Arpit is a veteran speaker and panelist at ComSocSCV meetings. He gives Indian classical music performances and plays the harmonium.

Colin Constable joined Juniper Networks in September 2008. He previously spent twelve years at Credit Suisse, most recently as the Chief Network Architect & EMEA Infrastructure CTO. In this role he created and published the “Credit Suisse Network Vision 2020” focused on seven sub domains of networking. He built a governance framework leveraging the strategies structure to ensure cross technology tower engagement and decision making, both technical and financial. He also led numerous programs to increase cross-technology, technical knowledge.

July 14th (6pm-9pm) at National Semiconductor, Santa Clara, CA  

Timeline:

6pm-6:30pm Refreshments and Networking

6:30pm-6:40pm Opening Remarks

6:40pm-8pm Presentations (3)

8pm-8:45pm Panel Session + Audience Q and A

8:45pm-9pm Informal Q and A with panelists

ITU Cloud Computing Focus group and IEEE Cloud Computing Standards Study Group- will they fill the standards void?

Introduction- The need for Cloud Computing Standards
 
Cloud computing deployments are being announced on an almost daily basis.  Cloud computing speeds and streamlines application deployment without upfront capital costs for servers and storage. For this reason, many enterprises, governments and network/service providers are now considering adopting cloud computing to provide more efficient and cost effective network services.  The venture capital firm Sand Hill Group has concluded that cloud computing represents one of the largest new investment opportunities on the horizon.  The cloud computing market is forecast to be very big by IDC, Gartner Group, and other market research firms.  But there seems to be a lot of confusion regarding the service delivery method, and a lack of interoperability.  And there are no solid standards for Infrastructure as a Service, Platform as a Service or Software as a Service. This results in difficulties in exchanging information between cloud service providers and for users that change providers.  It may also present a problem when bursting between a private cloud and different public clouds.  Interoperability facilitates secure information exchange across platforms.

Camille Mendler, Vice President of Research at Yankee Group: “Cloud computing is the future of ICTs. It’s urgent to address interoperability issues which could stall global diffusion of new services. Collaboration between private and public sectors is required.”  The fact that lack of interoperability is a huge problem was highlighted at the recent Cloud Connect Conference (Santa Clara, CA, in March).  It was a very sobering experience for this author.  At the conference, it was revealed that there is no umbrella set of standards for cloud computing and no single standards body claims ownership of comprehensive cloud computing specifications.  IBM’s VP of Cloud Services Ric Telford was asked what he thought about the huge growth forecast for cloud computing.  Mr. Telford said: “I have no problem with those numbers (40% in three years; 70% in five) as long as you include the caveat, it could be any one of five delivery models.”  So the industry needs to define and standardize the methods of delivering cloud services and applications to users, he said.

ITU-T Establishes Cloud Computing Focus Group

A new ITU-T Focus Group on Cloud Computing has been formed to enable a global cloud computing ecosystem where interoperability facilitates secure information exchange across platforms. The group will take a global view of standards activity in the field and will define a future path for greatest efficiency, creating new standards where necessary while also taking into account the work of others and proposing them for international standardization.

Malcolm Johnson, Director of ITU’s Telecommunication Standardization Bureau, said: “Cloud is an exciting area of ICTs where there are a lot of protocols to be designed and standards to be adopted that will allow people to best manage their digital assets. Our new Focus Group aims to provide some much needed clarity in the area.”

ITU-T study groups were invited to accelerate their work on cloud at the fourth World Telecommunication Policy Forum (Lisbon, 2009) and at an ITU-hosted meeting of CTOs in October 2009. The CTOs highlighted network capabilities as a particular area of concern, where increased services and applications using cloud computing may result in the need for new levels of flexibility in networks to accommodate unforeseen and elastic demands.

Vladimir Belenkovich, Chairman of the ITU Focus Group on Cloud Computing: “The Focus Group will investigate requirements for standardization in cloud computing and suggest future study paths for ITU. Specifically, we will identify potential impacts in standards development in other fields such as NGN, transport layer technologies, ICTs and climate change, and media coding.”

A first brief exploratory phase will determine standardization requirements and suggest how these may be addressed within ITU study groups. Work will then quickly begin on developing the standards necessary to support the global rollout of fully interoperable cloud computing solutions. The Focus Group will, from the standardization viewpoints and within the competences of ITU-T, contribute the telecommunication aspects, i.e., transport via telecommunications networks, security aspects of telecommunications, service requirements, etc., in order to support services/applications of “cloud computing” making use of telecommunication networks; specifically:

  • identify potential impacts on standards development and priorities for standards needed to promote and facilitate telecommunication/ICT support for cloud computing
  • investigate the need for future study items for fixed and mobile networks in the scope of ITU-T
  • analyze which components would benefit most from interoperability and standardization
  • familiarize ITU-T and standardization communities with emerging attributes and challenges of telecommunication/ICT support for cloud computing
  • analyze the rate of change for cloud computing attributes, functions and features for the purpose of assessing the appropriate timing of standardization of telecommunication/ICT in support of cloud computing

The Focus Group will collaborate with worldwide cloud computing communities (e.g., research institutes, forums, academia) including other SDOs and consortia.  First meeting of the FG Cloud is  14-16 June 2010 in Geneva, Switzerland.  ITU-T TSAG is the parent group of this Focus Group.  More information on that meeting:

1st Meeting of Cloud Focus Group (ITU-T Members only):

The official combined announcement of ITU-T FG Cloud establishment and first meeting is contained in TSB Circular 114: http://www.itu.int/md/T09-TSB-CIR-0114/en
If you wish to participate in the first meeting, note that there is an online registration form posted at: http://www.itu.int/cgi-bin/htsh/edrs/ITU-T/studygroup/edrs.registration.form?_eventid=3000151
The Focus Group web page (http://www.itu.int/ITU-T/focusgroups/cloud/) will be updated as required; I recommend checking it regularly for new information.
From the Focus Group web page, you may subscribe to the mailing list ([email protected]) and access the meeting documentation for the June meeting (http://ifa.itu.int/t/fg/cloud/docs/1006-gva/).
The deadline for contributions is 7 June 2010.

ITU-T Distributed Computing Backgrounder:

A recently published ITU-T Technology Watch Report titled  ‘Distributed Computing: Utilities, Grids and Clouds’ describes the advent of clouds and grids, the applications they enable, and their potential impact on future standardization.

For further information, please refer to the ITU-T web site:

http://www.itu.int/ITU-T/newslog/ITU+Group+To+Offer+Global+View+Of+Cloud+Standardization.aspx

http://www.itu.int/ITU-T/focusgroups/cloud/

ITU-T contacts:

Sarah Parkes                                                                          
Senior Media Relations Officer
ITU
Tel: +41 22 730 6135
Mobile: +41 79 599 1439
E-mail: [email protected]                                                                              

Toby Johnson
Senior Communications Officer
ITU
Tel: +41 22 730 5877
Mobile: +41 79 249 4868
E-mail: [email protected]

IEEE Cloud Computing Standards Study Group

Call for Participation

This is a call for participation in the IEEE Cloud Computing Standards Study Group, sponsored by the IEEE Computer Society Standards Activities Board (SAB). An IEEE Standards Study Group is the initial step in the process of developing an IEEE standard and is open to all interested individuals.

Cloud computing is a new, rapidly growing model of computing which, according to the U.S. National Institute of Standards and Technology, “is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

While there is significant effort on specific cloud computing-related standards on the part of multiple entities, a major impediment to the growth of cloud computing is the lack of comprehensive high-level portability (how applications use clouds) and interoperability (how clouds work with each other) standards.

The mission of the IEEE Cloud Computing Standards Study Group is to determine the feasibility of developing an open standards profile which defines options for portability and interoperability of cloud computing resources. These profiles should address issues such as interfaces to computing, storage, network, and content resources, as well as workload (program and data) interoperability and migration, security, fault-tolerance, agency, legal and regulatory, intra-cloud policy negotiation, and financial relationships. It is expected that there will be multiple architectural approaches from which to choose.

The profiles should also support the Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS) service models, and the private cloud, community cloud, hybrid cloud, and public cloud deployment models.

Existing standards and those under development by Standards Developing Organizations (SDOs), and appropriate industry alliances, community collaboration efforts, and other groups will be used whenever practical. The Study Group will proactively reach out to such groups to facilitate their early involvement.

For further information and/or to be added to the IEEE CCSSG mailing list, please contact Steve Diamond, Chair, IEEE Cloud Computing Standards Study Group, at ieee-ccssg-chair [at] intercloud [dot] org

IEEE to Participate in White House Led Cloud Computing Strategy Discussion Focus is on Strategy to Accelerate the Adoption of Cloud Computing 

WASHINGTON, May 20 /PRNewswire/ — Dr. Alexander Pasik, Chief Information Officer (CIO) of IEEE, the world’s largest technical professional association, has been selected to join industry leaders and key administration officials to discuss the creation and adoption of national standards for cloud computing at a leadership meeting taking place today.

Dr. Pasik will join United States Deputy Secretary of Commerce Dennis Hightower, CIO Vivek Kundra and the Administration’s Cyber Security Coordinator, Howard Schmidt, along with other prominent industry thought leaders to discuss the challenges and opportunities of cloud computing adoption. Dr. Pasik brings to the meeting deep expertise in emerging technologies and their impact on business models, service-oriented architecture (SOA) and the technical and security characteristics of cloud computing models.

“As an IEEE member and CIO of the organization, I am very honored and excited to be a part of this initiative and to collaborate with the highest caliber of technology thought leaders on this critical issue,” Pasik said. “Our discussion is a significant step toward furthering technology standards to advance the implementation of cloud computing operations and establish national standards. It is a historic event in the legacy of U.S. technology.”

In March 2010, IEEE, in partnership with the Cloud Security Alliance, released findings from a survey of IT professionals that revealed overwhelming agreement on the importance and urgency of cloud computing security.

 

Conclusions:

While I knew cloud computing was way overhyped, I thought that there was one or more standards organizations that claimed ownership.  I also thought that all the functional requirements and specifications done for grids, web services, and SOA (e.g. distributed management, federation, SLA requests and validation, etc.) would not have to be re-invented and redone for clouds.  Wow, that’ll be a huge undertaking.  Which standards organization might step in to fill this void?  ITU, IEEE, other?

Without a set of unified cloud computing standards, it’s my belief that for at least the next five years, each cloud provider will define its own set of user interfaces, SLAs, performance parameters, security methods, etc.  The more cloud providers, the more chaos and confusion will reign.  Therefore, we believe an urgent, accelerated standards effort is needed for (at least) the network aspects of cloud computing, e.g. UNI and NNI, SLAs and validation/compliance.  I would’ve thought by now that the major players would’ve gotten together to create such an organization, or combined several interested standards bodies/forums/alliances into one.  We hope that ITU-T will be the standards organization to set the reference network architecture for cloud computing.  Other standards bodies and/or forums will be needed to provide the computing framework and related standards.

 
Note that this author is an ITU-T member and has access to all documents, including contributions and meeting reports for the Cloud Computing Focus Group.  Please contact me if your organization might be interested in a consulting arrangement to monitor or research this new activity.

Video Surveillance and Video Analytics: Technologies whose time has come?

Introduction:

The IEEE ComSoc SCV chapter April 14th, 2010 meeting was very well attended, with more than 80 people present.  This was our first joint meeting with TiE (The Indus Entrepreneurs). The meeting, titled “Architectures and Applications of Video Surveillance and Video Analytics Systems,” featured talks plus a panel discussion on those topics.

The speakers scheduled to participate in the talks and panel session were Professor Suhas Patil, Chairman and CEO of Cradle Technologies; Basant Khaitan, Co-founder and CEO of Videonetics; and Robb Henshaw, VP Marketing & Channels, Proxim Wireless. Robb Henshaw, who was scheduled to speak on “A Primer on Wireless Network Architectures and Applications for Video Surveillance,” could not attend the meeting due to illness and was replaced for the presentations section of the evening by Alan J Weissberger, IEEE ComSoc SCV chairman. The panel session was moderated by Lu Chang, Vice-Chairman of IEEE ComSoc SCV. This article was co-authored with Alan J. Weissberger, who contributed the comment and analysis section, raised several unaddressed but nevertheless pertinent questions, and provided references to background articles on video surveillance and video analytics.

Presentation Highlights:

While presenting on behalf of Robb Henshaw on Wireless Network Architectures, Alan J. Weissberger noted that several new technologies are now converging which will make video surveillance a growing market and viable business.  These include: higher-quality IP digital cameras, improved and cost-effective video compression technologies (e.g. H.264/MPEG4 and HDTV), fixed broadband point-to-point and point-to-multipoint networks (including fixed WiMAX and proprietary technologies), and mobile broadband (including 3G+, mobile WiMAX and LTE).

To support the claim of a growing market for video surveillance and video analytics, Alan cited several key examples of applications for these technologies, such as: security and surveillance, emergency and disaster management, and asset and community protection through monitoring of buildings and parking lots, public entries/exits, sensitive areas such as ATMs, high-traffic areas like highways, bridges and tunnels, public areas such as parks and walkways, infrastructure like dams and canals, and buildings like cafeterias, halls and libraries. Other applications include securing sensitive areas like runways and waterways, perimeter security for military installations, remote monitoring of production on factory floors, and tele-medicine/eHealth initiatives.

Alan explained that Proxim believes that HDTV is going to be the technology of choice for video compression because users will be demanding higher-quality video images.  Furthermore, Proxim thinks that the wireless communication networks which convey the video streams are best built in a point-to-point or point-to-multipoint topology, rather than (WiFi) mesh, which has fallen out of favor. He noted that Proxim’s broadband wireless transport systems that operate over these point-to-point and point-to-multipoint topologies do so over a private network (as opposed to connecting via the Internet like Cradle’s systems do, covered later in this article). Moreover, 95% of Proxim’s installations use fixed broadband wireless (both fixed WiMAX, i.e. IEEE 802.16d-2004, and a proprietary technology to increase speed and/or distance) rather than mobile broadband wireless connections.

Alan’s talk elicited two questions from the audience. The first questioner asked why analog video surveillance technologies have found favor in deployment while digital video surveillance technologies were placed on the back burner after initial deployment.  In his answer, Alan pointed out that digital video surveillance technologies not only need high-quality digital cameras, but also require a reliable transmission network (wired or wireless) which can provide steady bandwidth to transmit the video surveillance data to a point of aggregation like a central video server. In the absence of sufficient constant bit rate bandwidth, the resulting digital video stream quality will be unacceptable due to jitter or freezing of the image (caused by an empty playback buffer). The lack of sufficient network bandwidth was a major cause of digital video surveillance technologies failing to gain a large market share compared to analog systems. The second question related to the impact of electromagnetic interference (EMI) on the video information. Alan explained that the new wireless broadband communication systems (both WiMAX and LTE) employ a multicarrier modulation scheme, orthogonal frequency division multiplexing (OFDM), which is fairly resistant to EMI. Furthermore, OFDM can also be combined with multiple input multiple output (MIMO) transmission schemes to minimize the likelihood of errors at the receiver end.
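Alan's point about needing steady, constant-bit-rate bandwidth can be illustrated with a toy playback-buffer simulation. This is a hypothetical sketch; the rates and buffer size below are assumed for illustration and are not from the talk:

```python
# Toy playback-buffer model (illustrative; all numbers are assumed).
# The decoder drains one second of video (playout_kbps kilobits) per tick,
# while the network delivers a fluctuating number of kilobits per tick.
# Whenever the buffer cannot supply a full second of video, the image freezes.

def count_freezes(arrival_kbit, playout_kbps=2000, start_buffer_kbit=4000):
    """Return how many 1-second intervals suffer a buffer underrun (freeze)."""
    buffer_kbit = start_buffer_kbit
    freezes = 0
    for delivered in arrival_kbit:          # kilobits arriving this second
        buffer_kbit += delivered
        if buffer_kbit >= playout_kbps:     # enough data for one second of video
            buffer_kbit -= playout_kbps
        else:                               # underrun: playback buffer is empty
            buffer_kbit = 0
            freezes += 1
    return freezes

# Constant-bit-rate delivery matching the stream rate: smooth playback.
print(count_freezes([2000] * 10))                                              # 0
# Bursty delivery: the buffer drains during the slow seconds and the image freezes.
print(count_freezes([3000, 500, 3000, 500, 500, 3000, 500, 500, 500, 3000]))   # 2
```

The same total data delivered in bursts produces freezes that constant-rate delivery avoids, which is why jittery networks hurt digital surveillance even when average throughput looks adequate.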

In his talk on “Video Surveillance, Security and Smart Energy Management Systems,” Suhas Patil explained that the recent improvements in semiconductor chip set capabilities and new computer architectures have promoted the growth of digital video surveillance technologies such as those which employ network video recorders to aggregate video from an entire city. He also pointed out that a key contributor to the adoption of video surveillance systems in many parts of the world (despite concerns of invasion of personal privacy) was the possibility of, and actual occurrence of, terrorism, with the city of London (U.K.) being a pioneer in this regard. Suhas, while describing the structure of the video surveillance system, noted that a critical requirement for these systems is that they be resilient to any fault at any time. These faults can include network breakdowns, power failures, control room disablement or faults caused by extreme ambient temperatures or extreme climatic conditions. As far as access technologies are concerned, digital video surveillance systems can use WiFi mesh networks, WiMAX networks or proprietary communication systems. According to Suhas, the technology underlying a digital video surveillance system is a highly complex one, employing state-of-the-art hardware such as cameras, storage systems and servers, and state-of-the-art software including operating systems and transmission technologies. Thus, the entire system needs very careful design in order to maximize its efficacy. Suhas also briefly explained video analytics as a system which can detect an object, such as a human being, behaving in a manner which would be difficult for a human observer to notice, such as a party guest moving around the room in a random or rapid manner compared to other party guests.
Finally, the talk then pointed out several major challenges faced by video surveillance systems including the need to keep abreast of the rapidly changing technologies as well as practical deployment challenges.

In response to a query on working on still images in a pixel-by-pixel fashion, Suhas explained that it is possible for a basic camera to capture raw information about a scene and then have the data processed pixel by pixel in order to maximize the dynamic range before the image is converted to a JPEG format. During the Q&A, Suhas briefly recounted the history of his company, Cradle Technologies, as a spinoff from Cirrus Logic that built multicore processors long before anyone else thought the technology valuable. Additionally, while clarifying his assertion about digital versus analog video surveillance technologies, Suhas noted that while analog technologies are better suited to low-light conditions and offer better dynamic range than their digital counterparts, digital signals allow for better image resolution than analog signals. However, it is difficult to claim one technology as being clearly superior to the other.

The final presentation of the evening, by Basant Khaitan, titled “The Role of Video Analytics in Video Servers and Network Edge Products,” explained the nature of video analytics (VA) as a young field which has also been referred to as video content processing or video content analysis. Basant explained that VA can be defined as the real-time classification and tracking of objects like people and vehicles by using their (objects’) outlines rather than any bodily or facial features. The analytics system can be either co-located with the camera itself (at the network’s edge), or situated at a central server which receives the video streams from the various cameras at the network edges. Additionally, Basant pointed out that while a video frame’s size is of the order of megabytes, the corresponding analytics information for that frame is often no more than a few hundred bytes in size. While explaining the technical details of VA, Basant opined that modern VA systems produce results which are sufficiently reliable for practical use despite the presence of artifacts born from poor ambient light or dust-filled air. Basant then elaborated on a practical VA system built by his company and currently being used by the police in Calcutta, India for traffic management. According to Basant, VA systems built for purposes such as traffic control are highly mission-critical and require 100% reliable operation. In conditions such as those found in developing countries, VA systems face severe challenges due to the presence of dense populations and poor public compliance with traffic laws. Furthermore, in tropical countries, extreme climatic conditions such as hot weather, rain-flooded streets and dust-filled air can also hamper the quality of the analytics results. In the case of Calcutta, all of the above conditions are faced by the VA system which is being used to control the city’s traffic lights.
In this case, network-edge deployed analytics information is sent to a local video server (which, incidentally, was developed in cooperation with Cradle Technologies) from where the information can be remotely retrieved and viewed. Basant pointed out that several intersections in Calcutta are now monitored by the VA system, which has replaced the previous system that was monitored entirely by human beings.
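Basant's megabytes-versus-hundreds-of-bytes observation implies a dramatic backhaul saving when analytics run at the network edge. A back-of-envelope calculation makes this concrete; the frame size, metadata size and frame rate below are assumed for illustration, not figures from the talk:

```python
# Back-of-envelope backhaul comparison for one camera (illustrative numbers).
FRAME_BYTES = 2_000_000   # "order of megabytes" per video frame (assumed value)
META_BYTES = 300          # "a few hundred bytes" of analytics per frame (assumed)
FPS = 15                  # assumed surveillance frame rate

def backhaul_mbps(bytes_per_frame, fps=FPS):
    """Sustained backhaul, in megabits per second, to ship every frame."""
    return bytes_per_frame * fps * 8 / 1_000_000

print(backhaul_mbps(FRAME_BYTES))  # 240.0 Mbps to ship raw frames
print(backhaul_mbps(META_BYTES))   # 0.036 Mbps to ship edge analytics only
```

Under these assumptions, sending only edge-computed analytics cuts per-camera backhaul by roughly four orders of magnitude, which is why edge placement matters so much for city-scale deployments over constrained links.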

Panel Discussion Q and A:

The panel session was more of a collaborative Q and A with Suhas Patil and Basant Khaitan.  There were many questions from the meeting attendees, several of which could not be entertained due to a shortage of time (see list of unaddressed questions below).

In response to a question about how many video surveillance cameras are connected wirelessly versus via wireline, Basant mentioned that none of the cameras in their deployed systems are connected wirelessly at this time; all camera connections are of the wireline type. Suhas noted that while most cameras are connected via a wired connection like a CAT5 cable, access to the content can be accomplished wirelessly via a cellular service (such as in the Indian scenario) or by a WiFi connection. WiFi-based access to the video server is also available if the server is connected to the Internet (Cradle’s server is one such example). The sensors in the video surveillance network themselves can also be connected wirelessly via a ZigBee mesh network, although they have not yet been so connected.

A question was raised about whether video processing is ready to see any major innovation such as what MapReduce technology did for Google’s text processing. Suhas responded by explaining that the video analytics for applications like license-plate recognition can be done in the cloud. When queried about how Cradle’s technology can help mitigate the impact of, or altogether prevent, future terrorist attacks, Suhas pointed out that contemporary video surveillance systems had either failed altogether or had malfunctioned during terrorist attacks. Cradle’s system, on the other hand, continually monitors the deployed system for functional integrity via a central server cloud in order to ensure that it (the video surveillance system) is fully operational at all times.

The panelists were then asked what the preferred mode of connection is for a city-wide array of cameras. Suhas invoked the example of Cradle’s approach to digital video surveillance, where fixed broadband wireless access via WiMAX or WiFi mesh is used to connect their networked video server to the Internet. Furthermore, an IP-VPN client (such as a PC or other screen-based device) is connected to the networked video server through the public Internet via a 3G or mobile WiMAX connection. In response to a question regarding the need for video analytics in countries such as those in Asia where labor is abundant and cheap, the panelists pointed out that since an average human being’s concentrated attention span is only about 10-15 minutes, and since, for the overwhelming majority of the time, nothing significant occurs that warrants raising an alarm, an automated VA system is imperative for such applications as were mentioned earlier in this article.

The final question of the evening asked how real-time bandwidth fluctuations within networks such as mesh networks affect VA performance. Suhas Patil mentioned that placing the video server as close to the network drop-off point (i.e., close to the camera) as possible allows good-quality video to be streamed to the server. Thereafter, special network access techniques which circumvent the fluctuating bandwidth can be used to remotely retrieve the video information stored on the server.

After the panel session concluded at 9pm, several attendees stayed on for one-on-one interaction with the speakers.  This continued until 9:15pm, when the lights were turned off and we were forced to vacate the auditorium.

Unaddressed questions (submitted by Alan Weissberger to ComSocSCV Discussion List):

  • Where is video surveillance used now and what are the driving applications?
  • Are most of the video surveillance network architectures fixed point to point or point to multi-point, rather than mobile/wireless broadband? 
  • What role will 3G (EVDO, HSPA), WiMAX (fixed and mobile), and LTE play in delivering video content?  Why is mobile broadband required for video client access?
  • Are proprietary wireless technologies more cost effective for the performance they offer?  Is this a concern for the customer?
  • What type of security and network management is being used in video surveillance systems, e.g. for authentication and to prevent intrusion or monitoring?
  • What role does video analytics play to augment the potential and power of a video surveillance system?  Can it also be used as a stand-alone offering?
  • Why are IP VPNs needed to convey and deliver the video content? Why not use a dedicated private network instead?
  • Is there any intersection between high end video conferencing and video surveillance systems? Are the same cameras, video transport facilities, and network management used for each?  What are the key differences?
  • What new technologies or business models are necessary for video surveillance to become a really big market?
  • What current barriers/obstacles to success are the video surveillance and video analytics markets now experiencing?
  • How have terrorist attacks (e.g. the Mumbai attacks in late 2008) and natural disasters (e.g. earthquakes) affected the video surveillance market?  What is the opportunity here?

Comment and Analysis (from Alan J Weissberger):

1.  Proxim’s answer to the question, “Why Video Surveillance?” included these bullet points:

·   Perimeter, public monitoring solutions are becoming a key component for enterprises

·   Educational, healthcare and financial institutions are beginning to rely on surveillance systems to ensure safety within their premises

·   Public safety organizations depend on archived data from video monitoring systems to reduce vandalism in troubled neighborhoods

·   Live traffic surveillance is increasingly being used as a tool in community protection

·   Terrorist threats and public safety challenges continue to drive the need for high quality remote surveillance and timely response

Additionally, we’d include production plant and factory floor (remote) monitoring to prevent schedule slips and ensure good quality control.

2.  The role of broadband wireless networks in stimulating video surveillance:

-Fixed broadband wireless point-to-point and point-to-multipoint networks and equipment (e.g. Motorola Canopy and Proxim’s products) that replace equivalent-topology wireline networks for delivering video over a private network.  Both proprietary fixed broadband wireless technology and IEEE 802.16d fixed WiMAX are used.  Those broadband wireless networks cost a fraction of the equivalent wireline networks and can be provisioned in a much shorter timeframe.  Fixed WiMAX could also be used to access the broadband Internet in an IP VPN scenario.

-Mobile broadband (3G+, mobile WiMAX, LTE), which adds a whole new dimension to video surveillance and enables many new applications, e.g. an IP VPN mobile client observing video images in remote locations, cameras in police cars transmitting video to police headquarters while moving at high speed, and emergency vehicles transmitting videos of natural disasters (hurricanes, earthquakes, etc.) to the first-responder locations that will deal with the problem(s).

3.  It’s important to distinguish between the broadband wireless network architectures and topologies of Proxim (a wireless broadband transmission/backhaul company) and Cradle (a networked video server/client solutions company):

a] Proxim makes broadband wireless transport systems that operate over a point-to-point or point-to-multipoint PRIVATE network.  Those systems backhaul video surveillance and other traffic to one or multiple destinations.  Proxim says that 95% of their installations use fixed (rather than mobile) broadband wireless connections.

b] Cradle uses fixed BWA (WiMAX or mesh WiFi) from their Networked Video Server to access the Internet.  On the client side, Cradle uses 3G or mobile WiMAX to connect the IP VPN client PC or other screen-based device to the Networked Video Server through the public Internet.  The key issue with that approach is that the end-to-end IP VPN server-to-client connection has to be high bandwidth and near constant bit rate, while the client access needs a high-bandwidth, steady-state mobile broadband connection to observe the MPEG-4 coded video over the IP VPN connection while in motion.  Otherwise the video image will be unacceptable or will freeze.
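The end-to-end requirement described above reduces to a simple rule: every hop from the networked video server to the mobile client must sustain at least the stream's bit rate, because the slowest hop sets the ceiling. A minimal sketch of that check follows; the hop rates and the 2 Mbps stream rate are invented for illustration and are not Cradle's figures:

```python
# End-to-end bandwidth check (illustrative; all rates are assumed values).
def path_supports_stream(hop_mbps, stream_mbps=2.0):
    """True if the slowest hop in the server-to-client path can carry
    a video stream of stream_mbps without the image freezing."""
    return min(hop_mbps) >= stream_mbps

# Fixed WiMAX uplink -> public Internet -> 3G client: the 3G hop is the bottleneck.
print(path_supports_stream([10.0, 100.0, 1.5]))   # False
# Same path with a faster mobile-client connection.
print(path_supports_stream([10.0, 100.0, 4.0]))   # True
```

This is why the mobile client access link, not the server's fixed BWA uplink, is usually the part of the path that makes or breaks the viewing experience while in motion.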

4.  Basant’s example of controlling Calcutta traffic lights using video analytics integrated with a Networked Video server was a great demonstration of the underlying technology and proof of how valuable it is.

References:

Here are a few background articles on video surveillance and analytics:

Video Surveillance and WiMAX- a great marriage or not meant for each other? Four companies weigh in!  (all 3 speaker/panelists+ Sprint were interviewed for this article)

 http://www.wimax360.com/profiles/blogs/video-surveillance-and-wimax-a

The Wireless Video Surveillance Opportunity: Why WiMAX is not just for Broadband Wireless Access  by Robb Henshaw

http://www.wimax.com/commentary/blog/blog-2009/august-2009/the-wireless-…

Video Surveillance Going Fwd, Suhas Patil, ScD

http://0101.netclime.net/1_5/048/174/0e9/scan0046.pdf

Remote Access Video Surveillance & Analytics,  Cradle Technologies

http://cradle.com/about_us.html

INTELLIGENT VIDEO ANALYTICS, a Whitepaper

http://www.videonetics.com/VivaWhitePaper.pdf

Exclusive Interview: Robb Henshaw of Proxim Wireless!

http://www.goingwimax.com/exclusive-robb-henshaw-on-proxim-wireless-5857/

Video Surveillance Product Guide

www.video-surveillance-guide.com/

FCC’s National Broadband Plan overview and IEEE ComSoc SCV March 10, 2010 meeting report

Introduction:

The IEEE ComSoc SCV chapter’s March 10th, 2010 meeting featured a very informative talk by William B. Wilhelm Jr., Partner, Telecommunications, Media and Technology Group at Bingham McCutchen LLP, titled “Effects of Broadband Policy and Economic Stimulus on Innovation at the Edge and in the Cloud.” The meeting was chaired by Simon Ma, Secretary, IEEE ComSoc SCV, and was attended by approximately 30 chapter members. Despite the relatively low turnout, the number of questions raised and discussed during the talk and subsequent Q&A reflected the keen interest among the attendees in the broad topic of the Federal Communications Commission’s (FCC) National Broadband (NB) Plan.

Presentation Highlights:

Mr. Wilhelm explained that the FCC (on behalf of the federal government) believes that broadband can form a strong foundation for economic success and has hence drafted the NB plan. The FCC’s primary objective for the NB plan is to spur broadband deployment nationwide through innovation in devices and applications, which in turn, it is hoped, will drive broadband adoption amongst the United States populace. Furthermore, the FCC has designated that the plan must seek “to ensure that all people of the United States have access to broadband capability” and establish benchmarks to meet that goal. In fact, the foregoing statement also delineates the current top internal priority of the FCC. According to Mr. Wilhelm, a data rate of 3 Mbps is regarded as “broadband” within the United States. Underlining the non-trivial nature of the NB plan objectives, Mr. Wilhelm pointed out that several key challenges will need to be overcome to ensure the plan’s success. These challenges include agency and administrative action among the FCC, National Telecommunications and Information Administration (NTIA) and Rural Utilities Service (RUS), legislative action by Congress and a fair competition policy determined by the Federal Trade Commission and protected by the Department of Justice.

Regarding the objective of ensuring that all people of the United States (US) have access to broadband capability, Mr. Wilhelm noted that the American Recovery and Reinvestment Act of 2009 (ARRA) has allocated $7.2 billion in stimulus funds for the expansion of broadband facilities and services to so-called unserved, underserved and rural areas of the country. Additionally, other ARRA-born programs including health care, smart grid and transportation may also promote large-scale broadband adoption. Describing the response to the first round of funding applications, the talk indicated that nearly 2,200 applications were received, requesting a total of $28 billion, with $23 billion requisitioned for broadband infrastructure. The presentation also elaborated on the fact that, in addition to the $7.2 billion in stimulus funds for broadband expansion, over $19 billion has been earmarked for Health Information Technology (HIT), including over $16 billion in medical provider incentives for deploying HIT. The aforementioned funding for HIT is aimed towards developing a nationwide health IT infrastructure which allows for electronic storage, transmission and retrieval of healthcare-related information. The talk also provided attendees with a view into the workings of the FCC with regard to new policy generation, such as Notice of Inquiry (NOI) release, and the holding of workshops to close gaps in the comments obtained from the NOI release.

Mr. Wilhelm then described the current broadband scenario in the US in terms of deployment, user adoption, as well as a qualitative description of the state-of-the-art in hardware and software systems as found in US homes and offices. It was interesting to note that, while the US leads the world in internetworking equipment, semiconductor chipsets, software and internet services and applications, the US suffers from conditions which are fairly unexpected for a country of its economic stature. These conditions include the fact that 50-80% of homes may get the broadband speeds they need from only one service provider, that broadband adoption is lagging in certain customer segments, and that deployment costs for various geographies differ significantly. Further elaborating on the shortcomings in the broadband services faced by users in the US, Mr. Wilhelm pointed out that, for the median user during peak hours, actual download speeds are only about half of the advertised speed! Moreover, around 5 million homes get less than the advertised 786 kbps and approximately 35 million homes get less than 10 Mbps. Other broadband service drawbacks faced by US-based customers include the fact that several market segments show penetration rates significantly below the 63% average, and that the lack of widespread adoption may entail a social cost in the future in terms of lowered access to jobs, education, government services and information. For example, high school and university students who have little to no Internet connectivity will be at a growing disadvantage compared to students who have materially good quality access to the Internet.

Thereupon, the talk pointed out how high-quality broadband connectivity enables innovations across a broad swath of national priorities – for example, health care (electronic health records, telemedicine and remote/mobile monitoring), energy and environment (smart grid, smart home applications and smart transportation), education (STEM, eBooks and content, electronic student data management), government operations (service delivery and efficient administration, transparency in governance and civic engagement), economic opportunity (job creation, job training and placement, and community development) and public safety (next generation 9-1-1, alerts and cybersecurity). On being queried whether retail services are currently the dominant application of broadband communications, Mr. Wilhelm acknowledged the pertinence of the question, but was unable to comment further on the topic since the FCC report had not been released at the time of this talk.

The presentation then delved into topics such as regulation and deregulation of broadband networks, network neutrality, spectrum policy, investment in telecom systems and services, and next-generation 9-1-1 systems. Explaining the significance of internet services like DSL being removed from Title II of the Telecommunications Act as a result of deregulation, Mr. Wilhelm pointed out that since the DSL service is no longer under Title II, the FCC can no longer protect DSL customers and small DSL companies from being controlled by telcos or network service providers. With regard to net neutrality, the case of Comcast versus the FCC, wherein the former alleged that the Internet was not under the purview of Title II, was briefly touched upon. A question was then posed on whether managed services were expected to crowd out non-managed services such as best-effort services. An audience member proffered his knowledge that the very same issue is being discussed in the public domain and that no clear consensus has been reached on this topic. On the subject of spectrum policy, the talk reiterated the oft-heard chorus in telecom circles that the currently allocated spectrum is woefully inadequate to meet projected future demands (especially for mobile broadband applications). Mr. Wilhelm then elaborated on the need for investment in telecom services and technology, since venture capital investment in these sectors has fallen significantly in recent years. According to Mr. Wilhelm, investment in telecom is a key ingredient to promoting innovation across the hardware, software, network and services ecosystem, and the absence of strong investment could result in reduced value of services to end-users.

Pointing out that broadband communications can support public safety and homeland security efforts, Mr. Wilhelm then touched upon the prominent areas of public safety which can be improved as a result of a new broadband initiative such as the national broadband plan. These areas are next-generation 9-1-1 systems, cybersecurity, alerting and a nationwide public safety network. For 9-1-1 systems, Mr. Wilhelm suggested the possibility of having an all-IP based system and to also allow users to submit recorded video to the 9-1-1 operators who could then dispatch the user videos to first responders.

Analysis:

The national broadband plan which the FCC will release (which, at the time of the writing of this article, has been released) is a key step in promoting the widespread adoption of broadband connectivity within the US. If a large portion of the US population gains access to broadband communication systems, the US can continue leading the world in technology innovations in telecom hardware, software and services sectors. Indeed, we believe that it is imperative that the FCC’s objectives of widespread broadband adoption be met in order to help meet other national goals such as homeland security, economic opportunity, healthcare and education. However, as was pointed out by Mr. Wilhelm, the adoption and retention of broadband communications among US users will entail significant investment in the telecom services and technology fields by venture capitalists as well as the federal and state governments. The lack of adoption could result in the exacerbation of the digital divide, especially in the education sector where students from schools which are not well-funded may fall behind in acquiring the skills and knowledge necessary to compete in higher education and (subsequent) job markets. On the other hand, the successful adoption of broadband communications could contribute an order of magnitude improvement in the quality of life for American citizens and further their nation’s leadership in the technology arena.

CSO Perspectives and SaaS Con report: Cloud Computing Security Remains a Conundrum

Abstract:

Prospective and existing cloud computing users often cite security as one of their biggest concerns, particularly with public or hybrid clouds.  The lack of standards for security, federated identity, and data handling integrity hasn’t done anything to alleviate those worries.  For example, Software as a Service (SaaS) or Platform as a Service security contracts often lack contingency plans for what would happen if one or more of the companies involved suffer a disruption or data breach. And it is not generally known what type of security exists when data passes between clouds (private-to-public or public-to-public).   There’s even talk of Virtual Private Clouds, but no one really knows what that is either.

Enterprise customers, cloud providers and vendors are having difficulty sorting out the many potential problems and resolving the finger-pointing over who is responsible for what in the event of a data breach or other security trouble, especially over a shared infrastructure.  In particular, there is no standard way of gathering the required information or isolating the problem in a multi-vendor cloud environment.  In fact, cascading security breaches are possible, which would play havoc with cloud users’ data and apps.

Users and vendors are just starting to seriously examine these unresolved issues through industry associations, such as the year-old Cloud Security Alliance.  So the cloud security sessions at the co-located CSO Perspectives and SaaSCon conferences took on an increased sense of importance and urgency.

Conference Highlights:

1.  Panelists at a joint session on Cloud Security made the following observations:

- Security problem isolation and prevention of cascading security breaches must be specified in the cloud contract or SLA.
- The cloud vendor should log all inappropriate or unauthorized access incidents.
- The cloud security market needs to understand the nuances of data loss due to security breaches.

2.  At a minimum, a Cloud Computing SLA should include:

a] Security of data, e.g. encryption mechanism
b] Up time/availability
c] Forensics for each security breach, especially across a shared infrastructure
d] Data portability to accommodate multiple vendor relationships
e] The ability to change the server OS (e.g. Windows to Linux) without disrupting existing applications
f] Business continuity and contingency planning in the event of a failure(s)
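A contract checklist like the one above lends itself to a simple automated review. The sketch below is purely illustrative: the term names are our own shorthand for items a]-f], not taken from any standard SLA template.

```python
# Hypothetical minimum cloud SLA provisions (names are illustrative,
# keyed to items a]-f] above; not from any standards body).
REQUIRED_SLA_TERMS = {
    "data_encryption",      # a] security of data, e.g. encryption mechanism
    "uptime_guarantee",     # b] up time / availability
    "breach_forensics",     # c] forensics for each security breach
    "data_portability",     # d] portability across multiple vendors
    "os_migration",         # e] change server OS without disrupting apps
    "business_continuity",  # f] contingency planning for failures
}

def missing_terms(contract_terms):
    """Return which minimum provisions a proposed SLA omits."""
    return REQUIRED_SLA_TERMS - set(contract_terms)
```

A contract covering only encryption and uptime, for example, would be flagged as missing forensics, portability, OS migration and continuity clauses.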

3.  The following items were said to be needed, but currently missing from the cloud computing environment:

a] Standards or Interoperability Agreements
b] Benchmarks to compare cloud services with one another
c] Federation of identities to facilitate a single sign-on procedure for multiple inter-connected clouds
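Item c] is essentially the single sign-on problem: an identity provider issues a signed assertion once, and every federated cloud verifies it without re-authenticating the user. A minimal sketch of that idea follows; the shared HMAC key and token format are deliberate simplifications for illustration (real federations use SAML or similar standards, with certificates rather than a single shared secret).

```python
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"demo-federation-key"  # illustrative only; real systems use per-party keys/certs

def issue_token(user_id):
    """Identity provider signs an assertion once per login."""
    payload = json.dumps({"sub": user_id}).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode(), sig

def verify_token(payload_b64, sig):
    """Any federated cloud verifies the assertion without a second login."""
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

The point of the sketch is the trust relationship, not the crypto: without an agreed token format and key exchange between clouds, each provider ends up with its own incompatible login, which is exactly the gap the panelists described.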

4. Interesting quotes:

a] Jim Reavis, co-founder of the Cloud Security Alliance, said, “It’s important we understand there isn’t just one cloud out there. It’s about layers of services. We’ve seen an evolution where SaaS providers ride atop the other layers, delivered in public and private clouds.”  I believe the implication was that Infrastructure as a Service is layer 1 (the data center layer), Platform as a Service is layer 2 (the application development/tools layer), and SaaS is layer 3 (the application run-time layer).
b] Ed Bellis of on-line travel agency Orbitz said, “It’s a challenge, working with partners to get on same page.  Early on there were many things we didn’t expect. Federation of identities in our internal systems became a challenge because of differences between our internal procedures and those of the SaaS provider.”   “In your SLAs, you need to have clear language for how data will be handled and encrypted and, in the event of a security breach, the contract must have clear language on who is responsible for specific aspects of the investigation. Build these considerations into the contract side.”
c] Keith Waldorf, VP of operations at Doctor Dispense, a point-of-care online medication and e-pharmacy provider, said one of his company’s most painful experiences in this area was on the contract side. “The lack of common standards really surprised us.”  Waldorf said he once was a client of an (anonymous) cloud service provider that upgraded its offerings, but his company was unable to take advantage of the upgraded services because the original SLA locked him in to using only the software and hardware that was available at the time he initially signed the contract.
d]  Jeff Spivey, president of Security Risk Management Inc., said “the vendors are driving the service, rather than the market defining its needs.”  The previous day, Jeff presented on the threat of “black swan-like” security threats and cautioned the security oriented audience to monitor for “weak signals (of potential threats).”

5.  Microsoft reiterates that they “are all in” with respect to Cloud Computing.

Tim O’Brien, Microsoft Platform Strategy Group manager, said that what really matters is what cloud-based service delivery can do for the customer.  Microsoft will be moving “category leading products and platforms to the cloud.”  For example, Exchange Online (e-mail), SharePoint Online (collaboration), Dynamics CRM Online (business apps), SQL Azure (structured storage) and AD/Live ID (Active Directory access) are its lead services for businesses.  All of these are designed to run on Windows Server 2008 in the data center and integrate with the corresponding on-premises applications. They will also work with standard Microsoft client software, including Windows 7, Windows Phone, Office and Office Mobile.

In addition, the company is offering its own data centers and its own version of Infrastructure as a Service for hosting client enterprises’ apps and services. It is using Azure—a full online stack consisting of the Windows Azure OS, the SQL Azure database and additional Web services—as a platform as a service for developers.  Microsoft Online Services are up and running; they include Business Productivity Online Suite, Exchange Hosted Services, Microsoft Dynamics CRM Online and MS Office Web Apps.  On the consumer side, Microsoft soft-launched a cloud backup service called SkyDrive about two weeks ago. SkyDrive is an online storage repository for files that users can access from anywhere via the Web.  The web edition of MS Office 2010 will be free to all Windows Live account holders this May. (We wonder how that will affect the company’s profits, which have always depended on desktop sales of MS Office.)

In summary, it’s clear that Microsoft has a comprehensive strategy in place; users will now have to try the cloud-based products and services and decide how integrated they really are.

The following from Tim O’Brien provides additional information and insight on Cloud Security and Web version of MS Office 2010:

Relative to cloud security, there are a number of resources you can access on our technical sites, some of which I’ve included here:

http://technet.microsoft.com/en-us/security/ee519613.aspx

http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=3269a73d-9a74-4cbf-aa6c-11fbafdb8257

http://www.microsoft.com/downloads/details.aspx?FamilyID=7C8507E8-50CA-4693-AA5A-34B7C24F4579&displaylang=en&displaylang=en

http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=2ab57b5c-8c4f-4b8c-a260-0fe77b5b713f

“For Office, you simply sign into http://skydrive.live.com with your Windows Live ID, and you can use the document workspace for your Office docs, and view/edit them in the browser using the Office Web Apps (specifically, Word, Excel, PowerPoint, and OneNote).  To create a file, you can click on “New” for a drop down menu of these four apps, and off you go…”

References:

1. Frustrations with cloud computing mount
– Lack of standards, industry agreements get more attention as industry expands

Cloud computing lacks standards for data handling and security practices, and there’s not even any agreement about whether a vendor has an obligation to tell users if their data is in the U.S. or not.
The cloud computing industry has some of the characteristics of a Wild West boom town. But the local saloon’s name is Frustration. That’s the one word that seems to be popping up more and more in discussions about the cloud, particularly at the SaaScon 2010 conference here this week.

That frustration about the lack of standards grows as cloud-based services take root in enterprises. Take Orbitz LLC, the big travel company with multiple businesses that offer an increasingly broad range of services, such as scheduling golf tee times and booking concerts and cruises. 

http://www.computerworld.com/s/article/9175102/Frustrations_with_cloud_computing_mount

2.  SaaS, Security and the Cloud: It’s All About the Contract
-Security practitioners have learned the hard way that contract negotiations are critical if their SaaS, cloud and security goals are to work. A report from CSO Perspectives and SaaScon 2010.

Perhaps the most important lesson is that contract negotiation between providers is everything. The problem is that you don’t always know which questions to ask when the paperwork is being written.  Panelists cited key problems in making the SaaS-Cloud-Security formula work: SaaS contracts often lack contingency plans for what would happen if one or more of the companies involved suffers a disruption or data breach. The partners — the enterprise customer and the vendors — rarely find it easy to get on the same page in terms of who is responsible for what in the event of trouble. Meanwhile, they say, there’s a lack of clear standards on how to proceed, especially when it comes to doing things in the cloud.  Add to that the basic misunderstandings companies have about just what the cloud is all about, said Jim Reavis, co-founder of the Cloud Security Alliance.  Somewhere in the mix, plenty can go wrong.

“If you’re in a public cloud situation and Company B is breached, a lot of finger pointing between that company and different partners will ensue,” Reavis said. “If this isn’t covered in the terms of agreement up front, you have no hope of recovering data (or damages).”

Security vendors can be part of the problem as well. In a recent CSO article about five mistakes one such vendor made in the cloud, Nils Puhlmann, co-founder of the Cloud Security Alliance and previously CISO for such entities as Electronic Arts and Robert Half International, noted that the vendor — who was not named — did “everything you can possibly do wrong” when rolling out the latest version of its SaaS product, leading to users uninstalling their solution in large numbers.

http://www.csoonline.com/article/589963/SaaS_Security_and_the_Cloud_It_s_All_About_the_Contract

3.  Microsoft is moving ever deeper into the data center, exploring frontiers it hasn’t frequented in the past.

SANTA CLARA, Calif.—Only a year ago, the idea of Microsoft showing cloud computing services at an event like SaaSCon would not have computed one bit.
The world’s largest software company has been late to the party on a few things—the Internet being a classic example—but times and its corporate attitude have changed. They had to.  Microsoft, whose executives not long ago were often quoted as hating cloud computing because it cuts directly into their core business, already has swallowed its pride to embrace open source—well, to a certain extent. The company also is trying to move deeper into the data center, exploring frontiers it hasn’t frequented in the past.  At SaasCon 2010 here at the Santa Clara Convention Center April 6 and 7, Microsoft had its first booth dedicated strictly to business cloud services.  It’s an ambitious plunge into a market already full of veteran players and bright newcomers alike.

http://www.eweek.com/c/a/Cloud-Computing/Microsoft-Positioning-Itself-for-Cloud-Service-Business-656834/

4.  A Tale of Two Clouds

The cloud is the answer to all our IT problems — from poor performance to lack of scale to high energy costs. The cloud is a sucker’s game that merely shifts responsibility for IT infrastructure to different hands, leads to performance issues of its own and leaves your data more open to theft.   If both of those statements happened to be true — and we won’t know for sure until it starts to amass significant workloads — would that alter your plans to deploy cloud infrastructure in any way? Apparently not, if the latest research is to be believed.

On the one hand, we have reports from groups like Global Industry Analysts that predict the cloud services market is set to top $200 billion in the next five years. That would represent a blazingly fast growth curve, driven largely by enterprise needs to cut costs and expand capabilities in what is likely to be a mediocre economy at best.   But it’s tough to square that level of acceptance with the increasing anecdotal evidence that suggests a large number of IT professionals are hesitant to place too much reliance on the cloud due to security concerns and a lack of interoperable standards.

http://www.itbusinessedge.com/cm/blogs/cole/a-tale-of-two-clouds/?cs=40604

The need for a Unified Set of Cloud Computing Standards within IEEE

The recent Cloud Connect Conference in Santa Clara, CA was a very sobering experience for me.  While I knew cloud computing was way overhyped, I thought that one or more standards organizations had claimed ownership.  I also thought that all the functional requirements and specifications done for grids, web services, and SOA (e.g. distributed management, federation, SLA requests and validation, etc.) would not have to be re-invented and redone for clouds.  Wow, that’ll be a huge undertaking.
 
It’s my belief that for at least the next five years, each cloud provider will define its own set of user interfaces, SLAs, performance parameters, security methods, etc.  The more cloud providers, the more chaos and confusion will reign.
 
To a much lesser extent, it reminds me of my first experience in standards.  In 1978, each X.25 public packet switched network had its own specification, loosely based on the 1976 CCITT X.25 recommendation.  IBM authored a game changing paper that year, which showed it was not economically practical to build a single X.25 DTE (e.g. host computer or workstation) that would operate on more than one of the four X.25 public packet switched networks studied: Transpac (France), Datex-P (Germany), Telenet (U.S.), and KDD (Japan).  As a result of that paper, ANSI X3S3.7 in the U.S. and the CCITT internationally worked with great urgency to complete a truly global X.25 standard in 1980 (the ANSI standard referred to the CCITT recommendation).  I was part of that effort as both a U.S. contributor and CCITT SG VII/WP2 secretary, and I represented the U.S. in the ITU-T as a subject matter expert until 2003 (optical networks was my last project there).
 
Maybe such an urgent, accelerated standards effort is needed for (at least) the network aspects of cloud computing, e.g. UNI and NNI, SLAs and validation/compliance.  I would’ve thought that by now the major players would’ve gotten together to create such an organization, or combined several interested standards bodies/forums/alliances into one.
While the cloud computing market is forecast to be very big by IDC, Gartner Group, etc., there seems to be a lot of confusion regarding the service delivery method.  For example, the venture capital firm the Sand Hill Group has concluded that cloud computing represents one of the largest new investment opportunities on the horizon. At the Cloud Connect conference, a representative of that firm asked IBM’s VP of Cloud Services Ric Telford what he thought: “I have no problem with those numbers (40% in three years; 70% in five) as long as you include the caveat, it could be any one of five delivery models.”  So maybe we also need to define and standardize the methods of delivering cloud applications to users.
 
What do readers think? Would love to hear your opinions.  Here are a few comments from conference participants:

From Kevin Walsh of UCSD:

I think that establishing a unified cloud standards framework under the auspices of a standards organization such as the IEEE deserves further discussion. (I know it would be well received by my government customer.)

Pointers below….

See –

http://standards.ieee.org/announcements/2009/pr_cloudsecuritystandards.html

http://www.elasticvapor.com/2009/11/iso-forms-group-for-cloud-computing.html
which is covered by this ISO section –
http://www.iso.org/iso/standards_development/technical_committees/other_bodies/iso_technical_committee.htm?commid=601355

The ACM is also in the mix with a conference planned for early June.  See

http://research.microsoft.com/en-us/um/redmond/events/socc2010/index.htm
see the accepted papers thus far –
http://research.microsoft.com/en-us/um/redmond/events/socc2010/program.htm

In general, I like the IEEE process, and the organization is well respected from my point of view.  Their standardization process is mature.  See
http://standards.ieee.org/guides/opman/index.html
and
http://standards.ieee.org/resources/development/index.html


From Robert Grossman:

The URL for the new IEEE Cloud Computing Standards Study Group:

 http://www.computer.org/portal/web/standards/cloud

I’ll send a separate note to Steve Diamond, who is coordinating it.

I am looking forward to following up with you regarding the virtual networks effort.

There is also an IRTF research group on virtual networks, the Virtual Networks Research Group (VNRG).


From Gary Mazzaferro:

I enjoyed your comments about the need for cloud standards initiatives. I’m of the same opinion and have slowly been moving towards a more collaborative initiative. The largest challenge is balancing time and funding. I have little time because of the lack of funding for the project. 🙂  I do have an idea of how to make it work and gain participation by the user community.


From InformationWeek:

The bigger message was that there is still much work to do in this area. With a ton of standards bodies emerging today, and vendors coming to market with their own unique APIs, it’s becoming difficult to have one voice.

If cloud is going to gain any kind of traction, let alone achieve the nirvana of the Inter-cloud, then we must have some level of standards in place to make it happen. As we’ve seen historically, not having standards in place has created challenges around interoperability, as well as vendor lock-in. The value proposition around cloud computing is negated if interoperability is not possible. It’s as simple as that. No ifs, ands, or buts.

http://www.informationweek.com/cloud-computing/blog/archives/2010/03/4_thoughts_from.html

NIST, a federal agency that has been instrumental in defining cloud computing, will take on an additional role as a central publisher of cloud use cases accompanied by a recommended reference technology implementation. “But the airing of strong use cases where a technology set is deemed suitable for a particular problem could lead to a specification for a standard,” a NIST representative said Wednesday in an interview at the Cloud Connect show in Santa Clara, Calif.

http://www.informationweek.com/news/government/cloud-saas/showArticle.jhtml?articleID=224000007


Please feel free to leave a comment below or email me, and I’ll include it.

[email protected]


Here is a link to view what others have written about the Cloud Connect Conference I attended in Santa Clara, CA:

http://www.cloudconnectevent.com/2010/in-the-news.php

IDC Market Forecasts for Mobile Broadband and LTE

At the March 10th IDC 2010 Directions Conference in Santa Clara, IDC analysts Amy Lind and Carrie MacGillivray predicted a 32% compound annual growth rate (CAGR) for global mobile broadband connections, which were projected to reach over 350M by 2013.  [We wonder if those figures include M2M connections, which are potentially much more numerous than human-held device connections.]
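For readers who want to sanity-check forecasts like this, a CAGR projection is just compound growth. The base figure below (roughly 115M connections in 2009) is our own back-calculated assumption, not a number IDC published:

```python
def project(base, cagr, years):
    """Value after `years` of compound annual growth at rate `cagr`."""
    return base * (1 + cagr) ** years

# Assumption (ours, for illustration): ~115M global mobile broadband
# connections in 2009. Growing at IDC's 32% CAGR for four years lands
# in the neighborhood of the 350M+ projected for 2013.
projected_2013 = project(115e6, 0.32, 4)
```

Running the same arithmetic against the 471% LTE CAGR quoted below shows why that figure implies a tiny 2009 base: multiplying by 5.71 each year, even a few hundred thousand connections explode into tens of millions within three years.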

More significantly, LTE was predicted to have a CAGR of 471%, with 2012 (and later for some countries) as the critical inflection point for LTE mass adoption.  The technology was said to offer improved capacity and full mobility (vs. “mobile” WiMAX portability), to be initially oriented toward PCs, and to have pricing in flux as operators continue to rethink their business models.

By 2013, IDC predicts:

  • Mobile broadband will be ubiquitous and the de facto way of communicating
  • Business models will be focused on revenues per subscriber or device   
  • Global mobile services spending will surpass $975 billion
  • Iconic 4G devices will be critical to success

The two IDC analysts offered their essential guidance to session attendees:

  • Wireless carriers should place emphasis on data services, which are essential for revenue growth.  
  • Detailed market segmentation is required to focus devices (and apps) on relevant audiences.  To achieve this objective, IDC believes that wireless network operators will deepen partnerships with device and application vendors (AT&T and Clearwire are already doing this now).
  • Integration is key to staving off wireless displacement and driving mobile broadband adoption.

In a separate presentation, IDC Research Manager Godfrey Chua was very optimistic about LTE.  This author was stunned to hear Mr. Chua predict that LTE infrastructure equipment sales would overtake all WiMAX infrastructure sales by 4Q 2011!  That’s less than 18 months from now!   According to Mr. Chua, both AT&T and VZW are looking to LTE to deliver high quality mobile broadband service at the lowest cost per bit possible (through the more cost-efficient OFDM based modulation and multi-carrier transport).  He sees 2012-2013 as the LTE market inflection point, which is consistent with the opinion of other IDC analysts.  Why have all the major global cellular operators made such an early commitment to LTE?  Here are a few reasons given: 

  • To deliver high quality mobile broadband at the lowest cost per bit
  • To relieve 3G capacity pressure by migrating laptop users to LTE
  • To create a more robust platform for applications and services –that lead to new business models and therefore revenue streams         

Godfrey next compared the rationale and position of LTE (vs WiMAX):

  • To address capacity pressure in 3G networks (vs. WiMAX addressing underserved broadband connectivity demand)
  • Full mobility as the value proposition (vs. WiMAX portability for netbook/notebook access)
  • Geared toward developed markets (vs. WiMAX’s orientation toward emerging markets)
  • Relevance to emerging markets not until 2015 (vs. WiMAX being always relevant to emerging markets)

2010 will be a critical year for LTE network equipment companies as they all seek to build momentum in the forthcoming global market.  Mr. Chua sees Ericsson and Huawei as early leaders in providing LTE gear.  He says that Alcatel-Lucent’s Verizon Wireless win is key, but now they must convert trials into contracts.  Meanwhile, Nokia Siemens Networks is looking to maintain relevance in LTE.  The competitive pressure will surely intensify as other players – Motorola, ZTE, NEC and Fujitsu – seek to up the competitive ante.

In closing, Godfrey offered the following essential guidance: 

  • Realization of the long-held vision for the network is near
  • Mobile data traffic will continue to explode
  • Network transformation is critical; it is key to remaining competitive
  • Green efforts will persist; they go hand in hand with the transformation process
  • Vendor positions will continue to shift

Some additional predictions from IDC Analysts:

John Gantz, Chief Research Analyst: 

-By the end of 2010, there will be 1B mobile Internet users and 500K mobile phone apps.  1.2 billion mobile phones will be sold, including 220 million smartphones; 630 million laptops will be in place, including 80 million netbooks.

-There will be many intelligent devices communicating with machines/computers.  M2M is a potential high-growth area.

-Complexity will increase 10X in the next 10 years

-By 2020, there will be 31B connected devices, 2.6 billion phones, 25M apps, 450B interactions per day, 1.3T tags/sensors

Rick Nicholson, Vice President, IDC Energy Insights:

Workshop Report: Clearwire on track with rollouts and app tools, but MSO partners struggle with Business Models

Disclaimer: Unlike many “would be journalists” who are either always negative on WiMAX or are perennial Pollyannas producing an endless stream of recycled “happy talk,” this author tries to be balanced and objective about WiMAX in general and the WiMAX events covered in particular. We have been covering WiMAX for over 6 years now, with more than 200 published articles on the technology.  This author has no business relationship with Clearwire or any other WiMAX-related company or entity. Please read on……

 Introduction

Clearwire briefed potential application developers at a well-attended CLEAR Developer workshop in Santa Clara, CA on March 2, 2010. The key sessions were Upcoming 4G WiMAX APIs and Tools, The 4G WiMAX Business Opportunity for Developers, and a wrap-up session revealing where Clearwire is now and where they’re going. You can find all the sessions and speakers here.

We will skip the discussion of WiMAX APIs and Tools, which was already covered in detail at the Feb 10th IEEE ComSoc SCV meeting (you can access the slides at:  http://www.ewh.ieee.org/r6/scv/comsoc/Talk_021010_CLEARDeveloperOverview.pdf).

Nonetheless, we noticed a lot of keen interest among developers who were accessing Clearwire’s Silicon Valley 4G Innovation Network using 4G USB sticks attached to their notebook PCs. Indoor coverage seemed to work fine in the Santa Clara Convention Center, where the workshop was held.

However, we were quite disappointed that neither Comcast nor TW Cable had any new services to tell us about, despite the video content and managed networks they each own. More about this later in the article.

The Wholesale Opportunity

Clearwire (CLWR) was said to own more licensed spectrum in major cities than any other wireless network operator. Their “4G”+ network, known as CLEAR, covered more than 34 million points of presence (POPs) as of 4Q 2009. It’s commercially available in 28 different U.S. cities including Seattle, Honolulu and Maui. Clearwire plans to build out their mobile WiMAX network to reach 120 million POPs by the end of 2010. They’ll have launched CLEAR service in most major U.S. cities by the end of the year, including New York, San Francisco, Boston, Houston, Kansas City and Washington, DC. By this time next year, the CLEAR network will stretch from coast to coast and cover all the major U.S. cities.

In addition to selling “4G” fixed and mobile wireless broadband Internet access, Clearwire has MVNO (wholesale) agreements with three of their large investors – Sprint, Comcast, TW Cable – who are reselling the service under their respective brand names. These partners were said to have a combined customer base of approximately 75M subscribers, and their well known brand names should help the combined entities achieve a critical mass of customers much more quickly than if only Clearwire were selling WiMAX services. Wholesale resellers will also drive WiMAX ecosystem development and investment, according to Randy Dunbar, Vice President, Wholesale Marketing & Strategy, Clearwire.

Mr. Dunbar told the audience that Clearwire will be signing up more MVNO resellers in the near future. These may include companies involved in consumer electronics, retail, CLECs, pre-paid/targeted market segments, smart grid and M2M (the least understood by Clearwire, but with tremendous potential). The new resellers will help mobile WiMAX deployment in diverse market segments such as the mobile consumer, home entertainment, power Internet users, SOHO, small business, large enterprise, vertical businesses, and road warriors (business travelers).

Currently, there is only one known handheld device available for CLEAR: the Samsung Mondi. “4G” access is currently obtained using an external USB modem or “dongle,” embedded WiMAX in a PC, or a “personal” WiFi hotspot (many of which require an external USB dongle to access the WiMAX network). But Mr. Dunbar said that a “range of connected devices” is coming for CLEAR, including smart phones, STBs, DVRs, mobile modems, MIDs, and consumer electronics gadgets (such as portable media players). Randy hit my hot button when he stated that programmed video and time/place shifted video would be delivered via the 4G CLEAR network (see the next section of this article).

+ IEEE 802.16e-2005 based Mobile WiMAX (being deployed by Clearwire and partners) is actually 3G according to the ITU-R;   IEEE 802.16m will be the 4G version of mobile WiMAX, but Clearwire has not committed to that yet.

Cable (MSO) MVNOs reselling Clearwire’s mobile WiMAX network

Comcast, the largest MSO in the U.S., resells the CLEAR network as “High Speed 2Go.” Its branded mobile WiMAX service is available in Portland, Atlanta, Chicago, Philadelphia, and the Seattle/Bellingham area. Katie Graham, Director, Wireless Business Development, said there were two ways mobile WiMAX could be purchased from Comcast:

  • Fast pack: cable Internet (home access) bundled with High Speed 2Go
  • Bolt on: 4G mobile WiMAX only, or 3G/4G (using Sprint’s EVDO network for 3G)

A free WiFi router is included with a High Speed 2Go subscription. More details on the Comcast mobile WiMAX service are at: http://www.comcast.com/highspeed2Go/#/highspeed2go

TW Cable has been completely spun off from Time Warner as a separate company (which means it doesn’t own any video content). Its CEO recently stated that high speed Internet was replacing video as the firm’s core product. TW Cable currently serves 14.6M customers in 28 states and claims to be the third largest broadband ISP in the U.S., with 9M subscribers. Brian Coughlin, Manager, Wireless Platforms for TW Cable, told the audience that data oriented wireless products and services would be the company’s first priority, with voice and mobile phones to come later. Brian stated that “Digital media and service must be adaptable” and that an ecosystem would be required for this. I took this to mean that digital media and video services needed to be able to adapt to broadband access via mobile WiMAX, but I was wrong (see below for the reason).

The two MSO behemoths were asked by this author why they haven’t offered any premium video services or VoD over mobile WiMAX, and they appeared to be stumped. Some of the explanations given were:

“The technology is ahead of the business models.” – Clearwire

“The industry hasn’t figured out how to monetize the video applications.” – TW Cable and Clearwire

“It’s definitely on our radar screen, but we don’t have anything we can announce at this time.” – TW Cable

“Digital content rights are based on a given device, not on a service.” – Comcast

We were perplexed by these statements. In particular, we do not understand why Comcast can offer On Demand digital video* over their managed network and cable Internet service, but not over mobile WiMAX.

* For details on Comcast On Demand On line service please visit:

http://www.comcast.net/on-demand-online/

Kittur Nagesh, Service Provider Marketing Manager at Cisco, also participated in this panel. He made three statements I thought were very important:

  • “Video will be 66% of mobile data traffic by 2013.”
  • “The spectrum Clearwire owns is remarkably important. It’s important to make use of the spectrum (a wireless network operator) you have. It doesn’t matter if it’s used for WiMAX or LTE.”
  • “M2M applications will be phenomenally important. It will be an inflection point (for the broadband wireless industry). Innovation will explode in an unbounded fashion.”

Shortly after this event, Cisco withdrew from the WiMAX RAN equipment market. They had been selling WiMAX base stations (from the Navini acquisition), but they now see better opportunities in the mobile packet core via their acquisition of Starent Networks.

Wrap Up Session: Clearwire now and in the near future

Dow Draper, Clearwire Vice President for Product Development and Innovation, told the audience that the average Clearwire customer downloads 7 GB of data per month — a number that Clearwire only expects to increase over time. That compares with an average 3G data card download of 1.4 GB per month and an iPhone 3G average download of 200 MB per month.

Mr. Draper also said that the S.F. Bay Area can expect commercial WiMAX service by “late 2010,” and that “multiple smart phones” would be running on the Clearwire network before year’s end. Dow also hinted at other upcoming devices for CLEAR: MIDs, Portable Media Players (PMPs), tablets and embedded devices. He distinguished between category 1 devices which are tested and sold by Clearwire and category 2 devices which are sold through channels (and presumably retail stores).

“Clearwire will support multiple operating systems, especially Android,” said Mr. Draper. In summing up, he said that third-party developers, differentiated devices, services, and applications are all critical in attracting customers for Clearwire and their MVNO resellers. While we completely agree with that statement, we think the devices need to come to market very quickly (they’ve been promised for quite some time by Intel but haven’t materialized). But even more important are the differentiated services, such as video, whether for entertainment, education, or surveillance.

Next Clearwire workshop:

4G WiMAX Developers Symposium, June 15, 10:00 AM to 5:00 PM, Stanford University

Topics Include:

  • The latest on 4G WiMAX APIs and tools
  • 4G WiMAX 101 basics for developers & network and device architects
  • Market opportunities for 4G developers with symposium sponsors: Clear, Time Warner Cable, Sprint, Intel, Comcast, Cisco
  • Business sessions from leading 4G industry executives
  • 4G trends and forecasts
  • Open discussion on the future of mobile internet innovation

Details at: http://scpd.stanford.edu/search/publicCourseSearchDetails.do?method=load&courseId=6650469
