ComSocSCV Meeting Report: 40/100 Gigabit Ethernet – Market Needs, Applications, and Standards


 

Introduction

 

At its October 13, 2010 meeting, IEEE ComSocSCV was most fortunate to have three subject matter experts present and discuss 40G/100G Ethernet, the first dual-speed IEEE 802.3 Ethernet standard.  The market drivers, targeted applications, architecture and overview of the recently ratified IEEE 802.3ba standard, and the important PHY layer were all explored in detail.  A lively panel discussion followed the three presentations.  In addition to pre-planned questions from the moderator (ComSocSCV Emerging Applications Director Prasanta De), there were many relevant questions from the audience.  Of the 74 meeting attendees, 52 were IEEE members.

 

The presentation titles and speakers were as follows:

1. Ethernet’s Next Evolution – 40GbE and 100GbE by John D’Ambrosia of Force10 Networks

2. The IEEE Std 802.3ba-2010 40Gb/s and 100Gb/s Architecture by Ilango Ganga of Intel Corp

3. Physical Layer (PCS/PMA) Overview by Mark Gustlin of Cisco Systems

Note:  All three presentation PDFs may be downloaded from the IEEE ComSocSCV web site – 2010 Meeting Archives section (http://www.ewh.ieee.org/r6/scv/comsoc/ComSoc_2010_Presentations.php)

 

Summary of Presentations 

 

1.  The IEEE 802.3ba standard was ratified on June 17, 2010 after several years of hard work.  What drove the market need for this standard?  According to John D’Ambrosia, the “bandwidth explosion” has created bottlenecks everywhere.  In particular, an increased number of users, faster access rates and methods, and new video-based services have created the need for higher speeds in the core network.  Mr. D’Ambrosia stated, “IEEE 802.3ba standard for 40G/100G Ethernet will eliminate these bottlenecks by providing a robust, scalable architecture for meeting current bandwidth requirements and laying a solid foundation for future Ethernet speed increases.”  John sees 40G/100G Ethernet as an enabler of many new network architectures and high-bandwidth/low-latency applications.

 

Three such core networks were seen as likely candidates for higher-speed Ethernet penetration: campus/enterprise, data center, and service provider networks.  John showed many illustrative graphs that corroborated the need for higher speeds in each of these application areas.  The “Many Roles and Options for Ethernet Interconnects (in the Data Center),” “Ethernet 802.3 Umbrella,” and “Looking Ahead - Growing the 40GbE / 100GbE Family” charts were especially enlightening.  We were surprised to learn of the breadth and depth of the 40G/100G Ethernet standard, which can be used to reduce the number of links for chip-to-chip/modules, backplane, twinax, twisted pair (data center), MMF, and SMF.  According to Mr. D’Ambrosia, this also improves energy efficiency.

 

Looking beyond 100GbE, John noted that the industry is being challenged on two fronts: low-cost, high-density 100GbE and the next rate of Ethernet.  To be sure, the IEEE 802.3ba Task Force cooperated with ITU-T Study Group 15 to ensure the new 40G/100G Ethernet rates are transportable over optical transport networks (i.e., the OTN).  But what about higher fiber-optic data rates?  Mr. D’Ambrosia identified the key higher-speed market drivers as data centers, Internet exchanges, and carriers’ optical backbone networks.  He predicted that the economics of the application will dictate the solution.

 

2.  Ilango Ganga presented an Overview of the IEEE 802.3ba standard, which has the following characteristics:

 

  • Addresses the needs of computing, network aggregation and core networking applications
  • Uses a Common architecture for both 40 Gb/s and 100 Gb/s Ethernet
  • Uses IEEE 802.3 Ethernet MAC frame format
  • The architecture is flexible and scalable
  • Leverages existing 10 Gb/s technology where possible
  • Defines physical layer technologies for backplane, copper cable assembly and optical fiber medium

 

Mr. Ganga noted there were several sublayers that comprise the IEEE 802.3ba standard:

 

  • MAC (Medium Access Control) – Data encapsulation, Ethernet framing, addressing, error detection (e.g., CRC).  The term “Medium Access Control” is a carryover from the days when Ethernet used CSMA/CD to transmit on a shared medium.  Today, almost all Ethernet MACs just use the Ethernet frame format and operate over non-shared, point-to-point physical media.
  • RS (Reconciliation Sublayer) – Converts the MAC serial data stream to the parallel data paths of XLGMII (40 Gb/s) or CGMII (100 Gb/s).  It also provides alignment at the beginning of the frame, while maintaining the total MAC transmit IPG (inter-packet gap)
  • 40GBASE-R and 100GBASE-R PCS (Physical Coding Sublayer) – Encodes 64-bit data and 8-bit control of XLGMII or CGMII into 66-bit code groups for communication with the 40GBASE-R and 100GBASE-R PMA (64B/66B encoding).  Distributes data to multiple physical lanes, and provides lane alignment and deskew (needed because signals on each lane arrive at the receiver at different times).  There is also a management interface to control and report status
  • Forward Error Correction (FEC) sublayer – Optional sublayer for 40GBASE-R and 100GBASE-R to improve the BER performance of copper and backplane PHYs.  FEC operates on a per-PCS-lane basis at a rate of 10.3125 GBd for 40G and 5.15625 GBd for 100G
  • 40GBASE-R and 100GBASE-R PMA (Physical Medium Attachment) – Adapts the PCS to a range of PMDs.  Provides: bit-level multiplexing or mapping from n lanes to m lanes; clock and data recovery; optional loopback and test pattern generation/checking functions
  • 40GBASE-R and 100GBASE-R PMD (Physical Medium Dependent) – Interfaces to various transmission media (e.g., backplane, copper or optical fiber).  Handles transmission/reception of data streams to/from the underlying wireline physical medium.  Provides signal detect and fault functions to detect fault conditions.  There are different PMDs for each of the two speeds (40 and 100 Gb/s):

            -40G PMDs:  40GBASE-KR4, 40GBASE-CR4, 40GBASE-SR4, 40GBASE-LR4  

            -100G PMDs: 100GBASE-CR10, 100GBASE-SR10, 100GBASE-LR4, 100GBASE-ER4

  • Auto-Negotiation –  used for copper and backplane PHYs to detect the capabilities of the link partners and configure the link to the appropriate mode.  Allows FEC capability negotiation, and provides parallel detection capability to detect legacy PHYs
  • Management interface – Uses the optional MDIO/MDC management data interface specified for management of 40G and 100G Ethernet Physical layer devices
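As a rough illustration of the PMA’s n-to-m lane multiplexing described above, here is a simplified Python sketch (this author’s illustration, not code or procedure from the standard) that bit-interleaves 20 PCS lanes onto 10 physical lanes, as a 100GBASE-R implementation with a 2:1 mux ratio might, and recovers them at the far end:

```python
def pma_mux(pcs_lanes, physical_lane_count):
    """Bit-interleave n PCS lanes onto m physical lanes (n a multiple of m).
    Each physical lane round-robins one bit at a time from its group of
    PCS lanes. Simplified model: assumes an ordered lane-to-lane mapping."""
    n = len(pcs_lanes)
    assert n % physical_lane_count == 0
    group = n // physical_lane_count  # PCS lanes carried per physical lane
    physical = []
    for p in range(physical_lane_count):
        members = pcs_lanes[p * group:(p + 1) * group]
        out = []
        for bits in zip(*members):  # one bit from each member lane in turn
            out.extend(bits)
        physical.append(out)
    return physical

def pma_demux(physical_lanes, pcs_lane_count):
    """Inverse operation: recover the PCS lanes from the physical lanes."""
    m = len(physical_lanes)
    group = pcs_lane_count // m
    pcs = [[] for _ in range(pcs_lane_count)]
    for p, lane in enumerate(physical_lanes):
        for i, bit in enumerate(lane):
            pcs[p * group + i % group].append(bit)
    return pcs
```

In the real standard the receiver need not know in advance which PCS lane arrives on which physical lane or position: unique per-lane alignment markers let the receive PCS identify and reorder them. This sketch omits that and assumes a fixed mapping.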

 

These were illustrated for both 40G and 100G Ethernet with several layer diagrams showing each functional block and the inter-sublayer interfaces.  For the electrical interfaces, either chip-to-chip or chip-to-module electrical specifications may be implemented.  It was noted that the PMD specification defines the MDI electrical characteristics.  Next, 40G and 100G Ethernet functional block diagram implementation examples were shown.  Finally, Ilango identified two future standards efforts related to IEEE Std 802.3ba:

 

  • The IEEE P802.3bg Task Force is developing a standard for a 40 Gb/s serial single-mode fiber PMD
  • 100 Gb/s backplane and copper cable assemblies Call For Interest scheduled for Nov’10

 

3.  Mark Gustlin explained the all-important PHY layer, which is the heart of the 802.3ba standard.  The two key PHY sublayers are the PCS (Physical Coding Sublayer) and the PMA (Physical Medium Attachment).

 

  • The PCS performs the following functions:  Delineates Ethernet frames.  Supports the transport of fault information. Provides the data transitions which are needed for clock recovery on SerDes and optical interfaces. It bonds multiple lanes together through a striping/distribution mechanism. Supports data reassembly in the receive PCS – even in the face of significant parallel skew and with multiple multiplexing locations
  • The PMA performs the following functions: Bit level multiplexing from M lanes to N lanes. Clock recovery, clock generation and data drivers.  Loopbacks and test pattern generation and detection

 

Mark drilled down to detail the important multi-lane PHY functions of transmit data striping and receiver data alignment.  These mechanisms are necessary because all 40G/100G Ethernet PMDs have multiple physical paths or “lanes”: either multiple fibers, coax cables, wavelengths, or backplane traces.  Individual lane bit rates are 10.3125 Gb/s or 25.78125 Gb/s (a new serial PMD will have a rate of 41.25 Gb/s).  Module interfaces are also multi-lane, and do not always have the same number of lanes as the PMD interface.  Therefore, the PCS must support a mechanism to distribute data to multiple lanes on the transmit side, and then reassemble the data in the face of skew on the receive side before passing it up to the MAC sublayer.
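The striping and deskew mechanism Mark described can be modeled in a few lines of Python. This is a hypothetical, much-simplified sketch (the real PCS works on 66-bit blocks, uses distinct per-lane markers, and inserts them every 16383 blocks; the values below are this author’s illustration):

```python
MARKER = "ALIGN"      # stand-in for a per-lane 66-bit alignment marker
MARKER_PERIOD = 4     # blocks between markers (16383 in the real standard)

def pcs_stripe(blocks, num_lanes):
    """Transmit side: round-robin 66-bit blocks onto the PCS lanes,
    inserting a periodic alignment marker on every lane."""
    lanes = [[] for _ in range(num_lanes)]
    for i, blk in enumerate(blocks):
        lanes[i % num_lanes].append(blk)
    out = []
    for lane in lanes:
        with_markers = []
        for i, blk in enumerate(lane):
            if i % MARKER_PERIOD == 0:
                with_markers.append(MARKER)
            with_markers.append(blk)
        out.append(with_markers)
    return out

def pcs_deskew(lanes):
    """Receive side: align each lane to its first marker (compensating
    for skew), strip the markers, and re-interleave the blocks in
    round-robin order to rebuild the original stream."""
    aligned = []
    for lane in lanes:
        start = lane.index(MARKER)          # skew compensation point
        aligned.append([b for b in lane[start:] if b != MARKER])
    blocks = []
    for row in zip(*aligned):               # round-robin reassembly
        blocks.extend(row)
    return blocks
```

Because every lane carries the marker at a known period, the receiver can line the lanes up on their markers even when each lane has experienced a different delay, which is exactly the property the parallel PMDs depend on.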

 

Like Ilango, Mark touched on the topic of higher-speed (than 100G) Ethernet.  He speculated that the next higher speed might be 400 Gb/s, or even 1 Tb/s.  Mr. Gustlin opined that it was too early to tell.  He noted that the IEEE 802.3ba architecture is designed to be scalable.  In the future, it can support higher data rates by increasing the bandwidth per PCS lane and the number of PCS lanes.  He suggested that for 400 Gb/s, for example, the architecture could be 16 lanes at 25 Gb/s each, with the same block distribution and alignment marker methodology.  Mark summed up by reminding us that the 40G/100G Ethernet standard supports an evolution of optics and electrical interfaces (for example, a new single-mode PMD will not require a change to the PCS), and that the same architecture (sublayers and the interfaces between them) can support future, faster Ethernet speeds.
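The scalability argument is, at bottom, lane arithmetic: aggregate rate = number of lanes × per-lane rate. A quick sketch makes the point (only the 16 × 25 Gb/s option for 400 Gb/s was the speaker’s example; the other configurations and the nominal post-encoding rates are this author’s illustrations):

```python
def aggregate_rate_gbps(lanes, per_lane_gbps):
    """Nominal aggregate data rate for a multi-lane interface."""
    return lanes * per_lane_gbps

configs = [
    ("40GbE (4 x 10G)", 4, 10),          # e.g. 40GBASE-SR4/CR4
    ("100GbE (10 x 10G)", 10, 10),       # e.g. 100GBASE-SR10/CR10
    ("100GbE (4 x 25G)", 4, 25),         # e.g. 100GBASE-LR4 wavelengths
    ("400GbE idea (16 x 25G)", 16, 25),  # Mark Gustlin's suggested example
]
for name, lanes, rate in configs:
    print(f"{name}: {aggregate_rate_gbps(lanes, rate)} Gb/s")
```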

 

Panel Discussion/ Audience Q and A Session 

 

The ensuing panel session covered 40G/100G Ethernet market segments, applications (data center, Internet exchanges, WAN aggregation on the backbone, campus/enterprise, etc.), competing technologies (e.g., InfiniBand for the data center), and the timing of implementations (e.g., on servers, switches, and network controllers).  There were also a few technical questions for clarification and research related to single-lane high-speed links.  It was noted by this author that, almost 10 years after standardization, servers in the data center have only recently included 10G Ethernet port interfaces, while 10G Ethernet switches only now can switch multiple ports at wire-line rates.  So how long will it take for 40G/100G Ethernet to be widely deployed in its targeted markets?  The panelists concurred that more and more traffic is being aggregated onto 10G Ethernet links, which will drive the need for 40G Ethernet in the data center.  Mark Gustlin said, “100GE is needed today for uplinks in various layers of the network.”  But the timing is uncertain.  Higher-speed uplinks on Ethernet switches, high-performance data centers (e.g., Google), Internet exchanges, wide area network aggregation, and box-to-box communications were seen as the first real markets for 40G/100G Ethernet.  Each market segment/application area will evolve at its own pace, but the 40G/100G Ethernet standard will surely be an enabler of all of them.

 

The final question was asked by former IEEE 802.3 Chair Geoff Thompson.  Geoff first noted that the 40G/100G Ethernet standard and all the higher-speed Ethernet studies being worked in IEEE 802.3 are for the core enterprise or carrier backbone network.  He then asked the panelists when there would be big enough technological advances in the access or edge network to enable higher speeds there, i.e., the on-ramps/off-ramps to the core network.  The panelists could not answer this question, as it was too far from their areas of expertise.  In particular, nothing was said about the very slow-to-improve telco wireline access network (DSL or fiber) and the need to build out fiber closer to business and residential customers to achieve higher access rates.  Nonetheless, the audience was very pleased to learn the 802.3ba architecture is scalable and seems to be future-proof for higher-speed Ethernet.


Author Notes on 40G/ 100G Ethernet Market: 

 

  • The 802.3ba standard also complements efforts aimed at delivering greater broadband access.  An example is the Federal Communications Commission’s “Connecting America” National Broadband Plan, which calls for 100 Mbit/sec access for a minimum of 100 million homes across the U.S.  If that were to happen, higher-speed optical links would be needed between telco central offices and in the core and backbone networks.
  • We think that this standard will accelerate the adoption of 10G Ethernet now that higher-speed 40G/100G pipes are available to aggregate scores of 10G Ethernet links. By simplifying current link aggregation schemes, it will provide concrete benefits such as lowered operating expense costs and improved energy efficiencies.
  • Key stakeholders for IEEE 802.3ba will include users as well as makers of systems and components for servers, network storage, networking systems, high-performance computing, and data centers.  Telecommunications carriers and multiple system operators (MSOs) should also benefit, as they can offer much better cost/performance to their customers.

References:

 

1.  For further discussion and comments on 40G/100G Ethernet, such as server virtualization and converged networks driving the need for higher network data rates, please refer to this article: When will 40G/100G Ethernet be a real market? https://techblog.comsoc.org/2010/09/09/when-will-40g100g-ethernet-be-a-r…

 

2.  IEEE ComSocSCV web site – 2010 Meeting Archives section (http://www.ewh.ieee.org/r6/scv/comsoc/ComSoc_2010_Presentations.php) for presentation slides.

A Perspective of Triple Play Services: AT&T U-Verse vs Verizon FiOS vs Comcast Xfinity

Note:  This article is co-authored by IEEE ComSocSCV officers Sameer Herlekar and Alan J Weissberger.  Some information used in this article was gathered during a July visit of ATT Labs in San Ramon, CA.

With the recent proliferation of triple-play (high-speed Internet, high-definition television, and phone) services being offered by telcos (such as Verizon and AT&T) and MSOs/cable operators (including Comcast and Time Warner Cable), subscribers may be able to choose among an array of telecommunications services to meet their needs.  In some geographical areas, the MSO is the only choice for true triple-play services, because the telco has not built out its advanced network to cover every U.S. city.  For example, if one lives in Santa Clara, CA, in the heart of Silicon Valley, triple-play services are available only from Comcast.  In fact, if you are not a U-Verse customer, the ADSL-based Internet service you can obtain is much lower speed than the VDSL2-based High Speed Internet AT&T offers as part of U-Verse.

Many questions arise as to the efficacy of these triple-play services delivered by the telcos and MSOs.  Are these services accessible to all potential subscribers, and what do subscribers think about them?

A recent thread on the IEEE ComSoc SCV email Discussion group (free registration for all IEEE members at www.comsocscv.org) yielded a wealth of first-hand information on precisely the aforementioned issues.

Telecommunications service provisioning, like any other business, is driven by customer demand.  That demand, in turn, is determined by the subscribers’ perceived need for the service(s), the quality of the offered service(s), and subscriber awareness of the services’ availability (which is shaped by how the providers market them).

The explosive growth of social networking sites including Facebook, Twitter and MySpace, video-sharing websites like YouTube and online gaming websites such as Final Fantasy and World of Warcraft indicates that subscriber demand for high-bandwidth internet services is at an all-time high. Combined with the growing demand for high-definition (HD) television programming, overall subscriber demand for bandwidth is growing exponentially. Consequently, both telcos and cable operators have been forced to upgrade their network hardware and architectures to accommodate the ever-burgeoning demand for bandwidth. At the same time, the key business objective to stay profitable has not been lost on the service providers who have responded by offering customers the so-called triple-play services of high-speed internet, HD television and digital phone service.

The two principal telcos in the telecommunications services sector are Verizon (VZ) and AT&T. According to a report released by Information Gatekeepers Inc. (IGI) on July 15, 2010 the two companies in a recent year combined for 76% of total capital expenditure by major phone companies and over 46% of the total capital spent that year by all telecommunication carriers.

According to a Wall Street Journal article in July titled “Verizon’s fiber optic hole” by Martin Peers, VZ has invested $23 billion on their triple-play service offering FiOS which is based on fiber-to-the-home (FTTH) technology. On the other hand, AT&T’s U-verse service features fiber-to-the-curb (FTTC)1 with copper cables reaching individual subscriber premises over a digital subscriber line (DSL) access line.

Footnote 1.  FTTC is often referred to as Fiber to the Node (FTTN) or Fiber to the Cabinet.

On a recent visit to AT&T Labs in San Ramon, CA, several IEEE ComSoc SCV officers learned that AT&T is pouring money into U-verse as it foresees tremendous growth potential for the DSL-FTTC market.  The ComSocSCV officers went on a very impressive tour of AT&T’s U-Verse Lab, which appeared to be much bigger than most telco Central Offices!  AT&T is testing an FTTC/VDSL2 arrangement that will deliver three HD TV channels, High Speed Internet, and either digital voice (VoIP) or POTS.

In terms of technology, VZ’s FiOS represents a significant telco plant upgrade compared to U-verse, since the high-bandwidth capable fibers are terminated at the subscriber premises rather than at the curb or cabinet.  For AT&T’s U-verse, it is the quality of the DSL link (from the network node to the subscriber premises) which determines the perceived quality of the overall service. 

Therefore, one would be led to believe that FiOS, built on Fiber to the Premises (FTTP) technology and backed by a major telco (VZ), would be holding a large, if not the largest, portion of the telecommunications services market. However, it is surprising to note that U-verse has, in fact, been outselling FiOS by a whopping 35-40% according to the report by IGI (http://www.igigroup.com/st/pages/FIOS_UVERSE.html).

Sameer Herlekar, IEEE ComSoc SCV Technical Activities Director (and a co-author of this article), believes that the reason for the discrepancy is the larger per-connection cost entailed in deploying FiOS compared to the per-connection cost of U-verse deployments.  Moreover, according to WSJ’s Martin Peers, VZ has recently down-sized its promotions and added only 174,000 net connections to the FiOS network in Q2 2010, compared to 300,000 a year earlier.  On the other hand, according to Todd Spangler of Multichannel News, AT&T’s revenues from U-verse TV, Internet and voice services nearly tripled over 2009 and are approaching an annual run rate of $3 billion as the company “continues to pack on video and broadband subscribers.”

However, not all potential subscribers for U-Verse can get it, while others who have just had it installed “like it a lot, when it was working.”  A recent thread on the IEEE ComSoc SCV discussion group indicated that U-Verse is simply not available in parts of Santa Clara, CA, despite U-Verse cabinets being installed in the area.  The installation problems experienced by some Discussion Group members seem to have been resolved, but they highlight the “growing pains” AT&T is experiencing in making the service work reliably and correctly.

Mr Herlekar states that “according to AT&T network planners, those subscribers served directly from the central office (CO) receive, at present, limited bandwidths sometimes in the order of just hundreds of bits per second. Furthermore, while some subscribers have high-speed connectivity via ADSL2 (newer installations) others have a slower connection with ADSL (older installations), both of which are slower than the state-of-the-art VDSL2 technology.”

Another key issue is technical support and customer service – troubleshooting problems and resolving them.  From the perspective of co-author Alan J. Weissberger, AT&T seems to do a much better job in this area.  Again, from the IEEE ComSocSCV Discussion list, we read of a U-Verse customer who received excellent tech support from AT&T – including customer care from an AT&T Labs Executive in San Ramon, to resolve his installation problems with TV service.  Perhaps, because AT&T is the new kid on the triple play service delivery block, it seems “they try harder.”

Yet we’ve read that Comcast is gaining market share over the telcos in the broadband Internet market.  We suspect this is because non-triple-play telco customers can’t get the higher speeds offered by the MSOs.  Those unlucky customers have to live with older and much slower wireline access technologies (ADSL or ADSL2) from the telcos, rather than the much higher-speed Internet available with VDSL2 for U-Verse or FTTP/BPON/GPON for FiOS.  How fast will AT&T and VZ build out their triple-play delivery systems?  We suspect that they are not now available in a majority of geographical areas in the U.S.

Dave Burstein of DSL Prime

U.S. Cable Clobbers DSL, U-Verse, FiOS

“Comcast added 399K new cable modem customers in 2010 Q1 to 16.329M. That’s more adds than the total of AT&T (255K to 16,044K) combined with Verizon (90K to 9.3M). Time Warner was also far ahead of Verizon with 212K to 9,206K.  John Hodulik of UBS estimates 67% of the Q1 net adds will go to cable, a remarkable change from less than 50% a year ago. This is not because of DOCSIS 3.0, which at $99+ is not selling well.

Overall, cable added about 1M to over 40M. Telcos added about half a million to 33M. Add between 5% and 10% for the companies too small to appear in the chart below. While this could be the start of a precipitous decline, for now we might just be seeing the effect of price increases (Verizon, +12% in one key measure according to Bank of America) and the dramatic cut in U-Verse and now FiOS deployment.

My take is that the telcos would be damned fools not to hold more of the market so that femtocells/WiFi will provide them more robust and profitable wireless networks. Blair Levin came to a similar conclusion, that it’s too early to claim cable is the inevitable winner. But Verizon cutting FiOS by 2-4M homes is exactly the kind of damned fool move that will hurt them in the long run.  U.S. broadband is a two player game with many different possible strategies I can’t predict.”

For the complete article, including graphs and tables, please see:

http://www.dslprime.com/docsisreport/163-c/2957-us-cable-clobbers-dsl-u-verse-fios


In closing, Mr. Weissberger would like to make two key points:

1.  If U-Verse or FiOS is not offered in your geographical area, you will have to go to Comcast, TW Cable (or the other MSO in your area) by default to get high-speed Internet and digital cable TV with On Demand.  Those non-triple-play-reachable customers can NOT get high-speed Internet access from AT&T or VZ, because those telcos haven’t upgraded their cable plant in many areas, e.g., from ADSL to FTTN with VDSL2 for U-Verse, or from ADSL to FTTP for FiOS.  In my opinion, those unreachable triple-play customers are being neglected or even discriminated against by the two big telcos.

Hence, Comcast (or TW Cable, or whoever holds the cable franchise in the geographical area) wins by default.  Perhaps that’s why Comcast is signing up many more high-speed Internet (above 5 or 6 Mb/sec downstream) customers than AT&T or VZ.

2.  All triple-play customers are in danger of losing all three services in an outage (cable break, power failure, CO/head-end server failure, etc.).  The exception is U-Verse with POTS, where you’d still be able to make voice calls (but U-Verse VoIP customers would be dead in the water!).  Hence, you need to have a working cell phone in case your access or ISP fails.  And that’s not always possible if you are in a remote area, or in the hills where cell phone coverage is bad.

When will 40G/100G Ethernet be a real market? Oct 13 ComSocSCV meeting + Postscript

10 Gigabit Ethernet took almost 10 years from the time the standard was ratified until it was deployed in large quantities within campuses, data centers and for WAN aggregation.  What are the driving applications/business needs for 40G/100G Ethernet?  And when will we see line-rate multi-port switches as commercial products?

The biggest market for 40G Ethernet is in data centers; 100G Ethernet is more for core network aggregation of 10G and 40G Ethernet links.  Within the data center, the main competition for 40GbE is InfiniBand.  By comparison, higher-speed Ethernet will capitalize on the large installed base of Gigabit (and 10G) Ethernet.  New 40G and 100G products will become less expensive and more available over time, and will be supported by many silicon and equipment vendors.  The Ethernet standard is also universally understood by data center network administrators, so the relative costs for managing and troubleshooting Ethernet are much lower than for a niche fabric such as InfiniBand.

We will explore these issues and many others at the Oct 13 IEEE ComSocSCV meeting “40/100 Gigabit Ethernet – Market needs, Applications, and Standards.”  There will be 3 presentations followed by a panel session to be moderated by ComSocSCV officer Prasanta De.  Here is a summary of the talks:

1. Ethernet’s Next Evolution – 40GbE and 100GbE by John D’Ambrosia

This talk will provide an overview of the Ethernet Eco-system and the applications within that drove the need for the development of IEEE Std. 802.3baTM-2010 40Gb/s and 100Gb/s Ethernet Standard. Technology trends in computing and network aggregation and their role in driving the market need for 40GbE and 100GbE will be discussed.

2. The IEEE Std 802.3ba-2010 40Gb/s and 100Gb/s Architecture by Ilango Ganga

This session provides an overview of IEEE Std 802.3ba-2010 40Gb/s and 100Gb/s Ethernet specifications, objectives, architecture and interfaces.
The next generation higher speed Ethernet addresses the needs of computing, aggregation and core networking applications with dual data rates of 40Gb/s and 100 Gb/s. The 40/100 Gigabit Ethernet (GbE) architecture allows flexibility, scalability and leverages existing 10 Gigabit standards and technology where possible. The IEEE Std 802.3ba-2010 provides physical layer specifications for Ethernet communication across copper backplane, copper cabling, single-mode and multi-mode optical cabling systems.
The 40/100 Gigabit Ethernet utilizes the IEEE 802.3 Media Access Control sublayer (MAC) coupled to a family of 40 and 100 Gigabit physical layer devices (PHY). The layered architecture includes multilane physical coding sublayer (PCS), physical medium attachment sublayer (PMA) and physical medium dependent sublayers (PMD) for interfacing to various physical media. It also includes an Auto-Negotiation sublayer (AN) and an optional forward error correction sublayer (FEC) for backplane and copper cabling PHYs. The optional management data input/output interface (MDIO) is used for connection between 40/100 GbE physical layer devices and station management entities. The architecture includes optional 40 and 100 Gigabit Media Independent Interfaces (XLGMII and CGMII) to provide a logical interconnection between the MAC and the Physical Layer entities. It includes 40 and 100 Gigabit attachment unit interfaces (XLAUI and CAUI), a four- or ten-lane interface, intended for use in chip-to-chip or chip-to-module applications. It also includes a 40 and 100 Gigabit parallel physical interface (XLPPI and CPPI), a four- or ten-lane non-retimed interface, intended for use in chip-to-module applications with certain optical PHYs. The presentation will also outline the applications for some of the above interfaces.

3. Physical Layer (PCS/PMA) Overview by Mark Gustlin, Principal Engineer, Cisco Systems

This paper describes the Physical Coding Sublayer (PCS) and the Physical Medium Attachment (PMA) for the 40-Gb/s and 100-Gb/s Ethernet interfaces currently under standardization within the IEEE 802.3ba task force. Both of these speeds will initially be realized with a parallel PMD approach which requires bonding multiple lanes together through a striping methodology. The PCS protocol has the following attributes: Re-uses the 10GBASE-R PCS (64B/66B encoding and scrambling), just running at 4x or 10x as fast to provide for all of the required PCS functions for the data which will traverse multiple PMD lanes. Part of the PCS is a striping protocol which stripes the data to the PMD lanes on a per 66 bit block basis in a round robin fashion. Periodically an alignment block is added to each PMD lane. This alignment block acts as a marker which allows the receive side to deskew all lanes in order to compensate for any differential delay that the individual PMD lanes experience. The PMA sublayer provides the following functions: Provides per input-lane clock and data recovery, bit level multiplexing to change the number of lanes, clock generation, signal drivers and optionally provides loopbacks and test pattern generation/checking.


Presentations are posted at 2010 Meeting archives (top left) of  www.comsocscv.org

Meeting summary is at:  https://techblog.comsoc.org/2010/10/27/comsocscv-meeting-report-40100-gigabit-ethernet-market-needs-applications-and-standards

Please see numerous comments which update the market status.

Sept 25 Workshop on Smart Grids, M2M Platforms and "the Internet of Things"

Smart Grids, M2M Platforms, “the Internet of Things” and Other Networks for Smart Devices

12:30pm – 8:00pm, September 25, 2010, Saturday at Benson Center (Bldg 301), Santa Clara University

The applications and communications aspects of smart grids, Machine to Machine (M2M) platforms and smart/ embedded devices are the focus areas of this workshop. “The Internet of things” is often used to denote the wide variety and huge number of networked devices that are now emerging. We will explore that as well as other networks (e.g. WiFi/IEEE 802.11n and home networks) which provide connectivity for emerging devices and the smart grid.

We will also examine how M2M networks will be managed and provisioned for so many embedded devices (Ericsson has predicted 20B connected devices by 2020 and other companies predict even more). IEEE ComSocSCV and NATEA are very fortunate to have so many well known speakers, including the Chair of the TIA Smart Devices Standards Committee, Sprint’s M2M Platform Manager, and the IEEE ComSoc officer who is a world expert on Power Line Communications.

There will be four presentations in each of two workshop Tracks.  The talks are followed by a panel session for each track.  The program is as follows:

Track I: Communications aspects of the Smart Grid

Smart Grid Communications: Enabling a Smarter Grid
Claudio Lima, Vice Chair of IEEE P2030 Smart Grid Architecture Standards WG

Power Line Communications and the Smart Grid
Stefano Galli, Lead Scientist Panasonic R&D

Wireless Communications for Smart Grid
Kuor-Hsin Chang, Principal System Engineer – Standards, Elster Solutions

Role of WiFi/ IEEE 802.11n and Related Protocols in Smart Grid
Venkat Kalkunte, CTO, Datasat Technologies

Track II: Smart Devices, M2M platforms, and Home Networks

Standardization as a Catalyst of M2M Market Expansion
Jeffrey Smith, CTO, Numerex Corp & Chairman, TIA TR-50 Smart Device Communications Standard Committee

Operations and Management of Mobility Applications and M2M Networks
Jason Porter, AVP, AT&T

Sprint’s Machine-to-Machine and Service Enablement Platform
Michael Finegan, West Area M2M Manager of Solutions Engineering, Sprint Emerging Solutions Group

Noise and Interference in Home Networks
Arvind Mallya, Lead Member of Technical Staff, AT&T Network Operations

There will be a panel session at the end of each Track, after the presentations.


More info at:  http://m2m.natea.org/

Postscript:

As a follow-on to our very successful Smart Grid/M2M workshop at SCU, here is an article just published on M2M market forecasts and an assessment of the network changes that are needed to realize those forecasts.

Success! Aug 24 ComSocSCV Social with tech discussions, networking and inter-personal communications

Aug 24, 6pm-9pm, China Stix, Santa Clara, CA  www.comsocscv.org

During our 6pm -7:15pm networking session we had breakout groups to discuss:

  • Sprint’s M2M platform and initiatives (Led by Sprint’s M2M product mgr)
  • High speed transmission on twisted pair:  10G BaseT (LANs) and xDSL  (Sept 8 meeting topic)
  • Smart Grid, Smart Devices, Internet of Things (Sept 25 workshop topic)
  • 40G/100G Ethernet (Oct 13 meeting topic)
16 people (2 from Sprint) attended our gala dinner at China Stix in Santa Clara.  In addition, two came for the free networking session that preceded the dinner.  We had attendees driving to Santa Clara from all over the SF Bay Area, Monterey, Sacramento/Folsom and Orinda.  Lots of great conversation, camaraderie, fine food and wine.  Two lucky individuals took home a bottle of premium wine as a gift.

Here are a few of the comments/ testimonials received via email with my acknowledgement at the end of this chain:

Alan —

Thanks again for the wonderful time last Tuesday–I will make every one of these events from now on.  I hope to make the 25 Sep event, though I think I may be on a plane coming in to San Jose at the time.  You run a great meeting and social event, my friend–many thanks!

Cheers,

Karl

— KARL D. PFEIFFER, PhD, Lt Col, USAF
Assistant Professor


Thanks for a very well-organized Aug 24th social!

Alan, thanks a million from my side as well. It was well-organized and everyone enjoyed the evening. I finally got to meet with Alan Earman and within minutes found out that we know several people in common. The world of connectivity does these wonders and ComSoc is a big part of it!

MP Divakar


Hello Alan and everyone,

Thank you for a great event last night.  Thank you for having us there.  We had a great time meeting everyone, the food was delicious, and the wines were tasteful.  I hope we can meet again soon.  Have a great day everyone!


Yes Alan, I agree with Sameer – this was great! I wish more people came and enjoyed the social.
Thanks,
Prasanta


Hi Alan,
Just wanted to say thank you once again for a very well-organized social yesterday at China Stix. My wife Sumi and I enjoyed meeting with everyone, the stimulating conversations and the very good food & drink.
Great job and thanks again!
Sincerely,
Sameer


Hi Alan,

Thank you so much for a great evening. I enjoyed meeting with everyone there, as well as your good choice of food. I appreciate all your good words about me. I look forward to future collaborations.

Alice


Hi Alan

I had a wonderful time at the ComSoc social. The food, wine, and conversation were fantastic. Thank you for inviting me to attend.

Thank you for sharing the panel questions with me. Perhaps we should discuss security and/or privacy aspects as they relate to regulatory issues.

I’m sure there will be a few folks interested in IPv4 addressing and connecting billions of machines; at some point we will run out of IPv4 addresses.

Kind regards,

Michael Finegan

Manager, M2M Solutions Engineering

Emerging Solutions Group at

Sprint Nextel


Chairman’s Response:

Dear All

Thanks for your compliments on the social.  Alice had earlier acknowledged she had a great time and I hope everyone else did too!
We need more of these events to make people feel good, improve their networking/ inter-personal communications skills, and exchange information and opinions.  It gets your mind off all the financial/economic/political problems of the day.  And there are sure a lot of those!
I encourage all of the attendees (see To: list) to communicate with one another and build your personal network of contacts.  You can continue the lively dinner conversation via email and phone.  Just reach out to those people you’d like to know better or exchange ideas/proposals with.  Let the round two communications begin upon receipt of this email!

Thanks again for coming last night.  We especially appreciate the dedication of those who had very long commutes: Olu from Sacramento/Folsom, Karl from Monterey, Michael from Orinda, Erin from?  We really appreciate your attendance at our gala social!

Warmest regards and best wishes
Alan J Weissberger, ScD, retired Prof SCU EE Dept
IEEE ComSoc SCV Chairman www.comsocscv.org

Sept 8 ComSocSCV meeting backgrounder: High Speed Transmission on Twisted Pair in LANs and xDSL

IEEE ComSocSCV Sept 8 meeting:   High Speed Transmission on Twisted Pair in LANs and DSL
 
Our Sept 8th meeting features talks on high speed transmission in both LANs/data centers and DSL access networks.  Details at
 
Several of your ComSocSCV officers spent yesterday afternoon and early evening at AT&T Labs in San Ramon.  We were surprised to learn of AT&T’s extensive use of VDSL2 in FTTN deployments of U-Verse, their triple play bundled service that includes TV/VoD via IPTV, high speed Internet access, and voice (either POTS or VoIP).  They are also using VDSL to reach subscribers in Multi Dwelling Units (MDUs).  Separate from U-Verse, ADSL2 is being used for single point Internet access.
 
Our other talk will cover the status of 10G BaseT for LANs and data centers.  It’s amazing that in 1993, “high speed” meant only 100 Mbits/sec.  Now 1G BaseT Ethernet is widely deployed and 10G BaseT is coming along fast (the standard has been completed).

 
Here is a brief history of twisted pair based Ethernet and xDSL, based on personal observations in the 1980s and 1990s:

 
Twisted Pair based LANs
 
In the mid 1980s, AT&T had a 1 Mb/s twisted pair transmission system named “STAR-LAN.”  It never went anywhere, as a cheaper version of coax based Ethernet (10Base2) was more popular.  Then in the late 1980s, Manchester coded 10BaseT became very popular.

Notes on nomenclature: 
 
“BASE” is short for baseband, meaning that there is no frequency-division multiplexing (FDM) or other frequency shifting modulation in use; each signal has full control of the wire, on a single frequency. 
 
“T” designates twisted pair cable, where the pair of wires for each signal is twisted together to reduce radio frequency interference and crosstalk between pairs (FEXT and NEXT).
 
“UTP” = Unshielded twisted pair, as in UTP-3 (voice grade) and UTP-5 (data grade) twisted pair.
 
“PMD” is the lowest sublayer in the IEEE 802.3 PHY layer.  It stands for Physical Medium Dependent; the coax, twisted pair, or fiber optic transmission system itself is the essence of the PMD sublayer.
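To tie the nomenclature together, here is a small illustrative Python sketch (the regex and media table are my own simplification, not taken from IEEE 802.3) that splits a PHY name into its speed, signaling, and medium parts:

```python
import re

def parse_phy_name(name):
    """Split an IEEE 802.3-style PHY name (e.g. '100BASE-TX') into parts.
    Illustrative only; real 802.3 names have many more variants."""
    m = re.match(r"(\d+)(BASE)-?(\w+)", name.upper().replace(" ", ""))
    if not m:
        raise ValueError(f"unrecognized PHY name: {name}")
    speed_mbps, _signaling, medium = m.groups()
    media = {
        "T": "twisted pair",
        "TX": "twisted pair (two pairs, UTP-5)",
        "2": "thin coax",
    }
    return {
        "speed_mbps": int(speed_mbps),
        "signaling": "baseband",   # 'BASE' = no FDM; the signal owns the wire
        "medium": media.get(medium, medium),
    }

print(parse_phy_name("10BASE-T"))
print(parse_phy_name("100BASE-TX"))
```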

Continuing the story……………….
 
Sometime in 1992, ANSI X3T9.5 began developing a standard for 100 Mb/s FDDI on twisted pair, called “TP-PMD.”  Discussion Group member (and former IEEE 802.3 Chair/Vice-Chair) Geoff Thompson and I participated in that committee, which was chaired by a representative from DEC.  In 1993 there was a performance test between the two competing twisted pair transmission technologies that were candidates for the TP-PMD standard, conducted at an independent test lab in New Hampshire.  Crescendo’s technology (based on a 3-state Pulse Amplitude Modulation code called “MLT-3”) beat out National Semiconductor’s and was chosen as the TP-PMD standard.
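For readers unfamiliar with MLT-3, here is a toy Python sketch of the basic idea (my own illustration, not the actual TP-PMD/100BaseTX PHY, which also includes 4B/5B coding and scrambling): each 1 bit advances the line through the repeating cycle 0, +1, 0, -1, while each 0 bit holds the current level, which keeps signal energy at lower frequencies than Manchester coding.

```python
def mlt3_encode(bits):
    """Toy MLT-3 encoder: advance one step through the cycle
    0, +1, 0, -1 for every 1 bit; hold the level for every 0 bit."""
    cycle = [0, +1, 0, -1]
    state = 0  # start at level 0
    out = []
    for b in bits:
        if b:
            state = (state + 1) % 4
        out.append(cycle[state])
    return out

print(mlt3_encode([1, 0, 1, 1, 0, 1]))  # → [1, 1, 0, -1, -1, 0]
```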
 
Also during 1993, there was a “Fast Ethernet” standards war, with HP’s 100VG-AnyLAN (new MAC and PHY) battling 100BaseT (where the Ethernet MAC was not changed).  100BaseT had one version for UTP-3 and another for UTP-5.  It was the latter version, known as 100BaseTX, that dominated the market.  Grand Junction seemed to be the ringleader of that camp, although Intel was a staunch supporter.  100BaseTX used the PMD from TP-PMD without any changes.  Ironically, Crescendo and Grand Junction (as well as Kalpana) were all acquired by Cisco, and that’s how Cisco came to dominate the LAN switching market.
 
Years later, 1G Base T and now 10G Base T became IEEE 802.3 PMD standards.  I have not followed the market acceptance of those, but I’m sure our Sept 8 speakers from Teranetics will fill us in.

Digital Subscriber Loop (xDSL)
 
The first version of DSL was for the Basic Rate ISDN U interface (between the Network Terminating Unit and the voice grade twisted pair access network).  In North America, it was based on the 2B1Q line code (Pulse Amplitude Modulation), which was selected by the T1E1.4 committee in August 1986 as a compromise, because the committee couldn’t decide among 3 completely different transmission systems.  (I was actually at that meeting in Monterey, CA and heard the “dark horse” presentation by Andrew Siroka of Mitel Semiconductor.  He claimed BT had done extensive tests showing that 2B1Q outperformed the other systems and that Mitel could make a transceiver with a significantly smaller die size (lower cost and power dissipation) than the other proposed systems.)
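The 2B1Q idea (“2 binary, 1 quaternary”) can be shown in a few lines of Python. This is a sketch of the mapping commonly cited for the ISDN U interface (first bit gives the sign, second the magnitude); consult ANSI T1.601 for the normative table:

```python
def encode_2b1q(bits):
    """Toy 2B1Q mapper: each pair of bits becomes one 4-level PAM symbol.
    Mapping assumed here: first bit = sign, second bit = magnitude."""
    table = {(1, 0): +3, (1, 1): +1, (0, 1): -1, (0, 0): -3}
    assert len(bits) % 2 == 0, "2B1Q needs an even number of bits"
    return [table[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(encode_2b1q([1, 0, 0, 0, 1, 1, 0, 1]))  # → [3, -3, 1, -1]
```

Because each symbol carries two bits, the symbol rate is half the bit rate, which is what made 160 kbps feasible on ordinary voice grade loops.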
 
Bellcore’s Joe Lechleider, a member of the T1E1.4 committee (as was I), had suggested that asymmetry would allow higher speeds than ISDN’s 160 kbps, perhaps as high as 1.5 Mbps.  The theoretical results wound up being a lot more than 1.5 Mbps, depending on line length, bridge taps, condition of the copper, etc.  However, no standards group seemed to be interested.
 
In the early 1990s, there was a new vision of telco TV: initially on fiber optic cable to the home, but some thought it might be feasible to transmit one video stream in one direction over a 1.5 Mbit/s twisted pair, using the frequencies above 100 kHz.  In 1992 the T1E1.4 standards committee took on the Asymmetric Digital Subscriber Loop (ADSL) project.  There were several entries in the official T1E1.4 standards competition:
 
  • Stanford/Amati DMT led by John Cioffi and his grad students
  • Bellcore/UCLA/Broadcom QAM
  • AT&T’s CAP (Carrierless Amplitude/Phase modulation, actually a DSP based version of QAM)
The adaptive multicarrier system known as Discrete Multi-Tone or DMT (with bit swapping between bins to track noise changes) won the competition by having much better noise margins.  The closest test showed an 11 dB advantage for DMT; some of the tests showed 30 dB improvements.  T1E1.4 picked DMT on March 10, 1993.  That standard did not become popular, however.  Instead, AT&T spin-offs (Globespan, Lucent and Paradyne) started making CAP based DSL chips and equipment, which AT&T and other telcos started to deploy.  In 1996, T1E1.4 took on an ADSL version 2 standard (T1.413v2).  Due to a lot of controversy, there was a primary standard based on DMT (which I contributed to and wrote several sections of) and an Appendix on CAP.  DMT quickly prevailed as Alcatel started designing and deploying DSLAMs based on it.  The ADSL Forum only recognized DMT and not CAP.  Carriers all over the world (like Pac Bell, Singapore Tel and many others) signed exclusive deals with Alcatel, and DMT based DSL became dominant.  CAP was then dead. 
Aware led a consortium of groups who did not want to license the Stanford/Amati patents by introducing “G.lite,” which tried to remove the very essential bit swapping of DMT and reduced the number of tones from 256 to 128 to “reduce cost.”  While G.lite became an ITU-T standard (the editor was from Intel, which later exited the xDSL business), it failed in the marketplace.  Instead, G.dmt (and then ADSL2+) went the other direction, toward higher speeds and an actual increase in the number of tones.  DMT emerged as the worldwide transmission system for ADSL.
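The core of DMT modulation can be sketched in a few lines of Python with numpy (an illustrative toy with a made-up tone count, and no cyclic prefix, bit loading, or bit swapping): QAM symbols are loaded onto frequency bins, and an IFFT with Hermitian symmetry produces one real-valued time-domain symbol for the line.

```python
# Minimal DMT modulator sketch (illustrative; not T1.413-compliant).
import numpy as np

def dmt_modulate(qam_symbols):
    """qam_symbols: complex QAM points for tones 1..N-1 (tone 0 unused).
    Returns one real-valued time-domain DMT symbol."""
    n_tones = len(qam_symbols) + 1
    spectrum = np.zeros(2 * n_tones, dtype=complex)
    spectrum[1:n_tones] = qam_symbols                     # positive-frequency bins
    spectrum[n_tones + 1:] = np.conj(qam_symbols[::-1])   # mirror => real output
    return np.fft.ifft(spectrum).real

symbols = np.array([1 + 1j, -1 + 1j, 1 - 1j])  # 4-QAM points on 3 tones
print(dmt_modulate(symbols))
```

Bit swapping, the feature G.lite tried to remove, would sit on top of this: the transmitter and receiver agree to move bits from a noisy bin to a quieter one, changing the QAM constellation size per tone as line noise changes.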
 
VDSL was moving on a parallel track, but was perceived to be a smaller market due to distance limitations.  There was an equally fierce CAP vs DMT standards war, such that T1E1.4 could not select a clear winner.  VDSL was envisioned to use ATM exclusively for layer 2 transport, not Ethernet.  But the IEEE 802.3ah Ethernet in the First Mile (EFM) committee selected DMT-VDSL as the short range copper interface and defined a convergence sublayer to let the Ethernet MAC ride over VDSL (and SHDSL-2).  Despite much fanfare, I don’t believe that standard was ever implemented.  But now, AT&T has deployed VDSL2 as part of its U-Verse triple play transmission system, from the fiber cabinet in the node to the customer premises.  Unlike the earlier versions of the VDSL and EFM standards, there is a POTS band reserved in VDSL2 for analog telephony.
Note:  I exited the DSL space over 10 years ago, so have not kept up with its progress.  However, I got the following information from a very credible source that must remain anonymous:
 
“There was a second VDSL Olympics in 2003, also hosted by Bellcore and also BT.   DMT VDSL systems were submitted by Alcatel and by Ikanos, while QAM/CAP systems were submitted by Lucent and by Metalink/Infineon.   The results were similar to the first VDSL Olympics in that the advantage was roughly 10 dB.   That did it, CAP/QAM died at a June 2003 T1E1.4 meeting where DMT was selected for VDSL2.”
 
Yesterday we learned that AT&T is using VDSL2 for its U-Verse FTTN transmission system, between the Optical Network Unit (ONU) and the subscriber’s Network Termination unit (NT), over a copper twisted pair.  They are also using it as a U-Verse distribution system within Multi-Dwelling Units.  I was astonished that AT&T claimed they got very good performance at 5K feet of line length and often deployed longer VDSL loops.  They are now testing 4 HD video streams + high speed Internet + POTS over a single VDSL2 loop!  We would expect test results to be announced later this year.
 
We will get filled in on all the gaps by our ASSIA speaker on Sept 8th.  I invited Prof Cioffi to the meeting, but he wrote that he might have to travel to Asia.  Cioffi and I taught DSL classes at IEEE Infocom and we co-authored several T1E1.4 standards contributions.  Later, I taught many ADSL architecture classes at private companies that were implementing or deploying ADSL/SDSL.  My partner was Amati’s chief engineer John Bingham (with whom I had worked at Fairchild in the spring and summer of 1970).  Bingham and I got so good at teaching the ADSL class that we joked we could swap sections (he did the modulation and transmission, while I did the architecture and OAM section).  He asked me to write the introduction and the second chapter of his book on ADSL network architecture.  You can read it online (isn’t this a copyright violation?):
 
 
You can buy the book here:
 
 
Book Review from IEEE Communications Mag, Sept 2001:
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=948376&userType=inst  (IEEE Explore account required to access full text)
End of Story……………………………………………………………………..

Cloud Computing: Impact on IT Architecture, Data Centers and the Network: July 14th IEEE ComSocSCV meeting

Three keynote talks from VMware, Microsoft and Ericsson will be followed by a lively panel/Q and A session, with Juniper Networks also participating. We are all very excited about this comprehensive and well balanced look at cloud computing from both a computing and communications perspective. It should be one of the best technical meetings of the year!

www.comsocscv.org

Presentation Titles and Abstracts

 

Building Many Bridges to the Cloud, Robin Ren, Director of R&D, Cloud Applications and Services, VMware

Cloud computing is on every CIO’s top priority list nowadays. However, like any “game-changing” technology in history, today’s cloud computing field can appear to be both exciting and chaotic. Most large technology companies claim to have at least one cloud product or service. Many start-up companies are also trying different ideas. In the introduction, we will offer answers to some basic questions:

-What is Cloud Computing?
-Why does Cloud matter?
-How will Cloud change the IT industry?

We’ll look at the major cloud computing players, trying to analyze the big trends and compare different approaches. In the end, there are several valid ways to move from traditional IT to the cloud, targeted at different audiences and workloads. It is important to understand how you can participate in and benefit from this new “IT gold rush.”

Cloud Data Centers and Networking Trends, Alan G. Hakimi, Senior Cloud Architect, Microsoft Services Enterprise Strategy and Architecture

The data center is at the heart of cloud computing. It brings dynamic virtualized server and storage environments to users via networks that provide cloud connectivity. The networks used to access cloud services will need more intelligence in several areas. They will have to quickly react to changes in the computing/storage environment, recovering from faults, and increasing or decreasing scale. This session will describe some architectural patterns in IaaS with respect to designing around resiliency and bandwidth. We will discuss the differences between traditional data centers and cloud data centers including intra-data center and inter data center communications. This session will also address networking trends with respect to federating clouds and providing secure, high quality network access to the data center.

Cloud Connectivity – offensive or defensive play? Arpit Joshipura, VP of Strategy and Market Development, Ericsson Silicon Valley
Cloud services and advanced devices are worthless without connectivity. At the same time, cloud services are increasing in value with the addition of mobility. This talk focuses on the value of connectivity to the cloud and discusses the mobile aspects that an operator can leverage. With the asset of connectivity, an operator can use the cloud as both an offensive and a defensive strategy. This talk outlines the details of that strategy and identifies requirements on connectivity, including type of access, SLA, QoS, interoperability and standardization.

Additional Panelist: Colin Constable, Chief Enterprise Architect within the office of the CTO, Juniper Networks

Bio’s:

Robin Ren is a Director of R&D at VMware in Palo Alto, California. He manages an engineering team in the new Cloud Applications and Services BU. He is involved in many of VMware’s cloud initiatives at the Infrastructure-, Platform-, and Application-as-a-Service layers. He is also the headquarters ambassador for the VMware R&D Center in Beijing, China.

Alan Hakimi joined Microsoft in 1996 as a member of the Microsoft Consulting Services group. Alan is an IEEE member and has MCA and CITA-P architect certifications. He is currently working in Microsoft Services leading efforts on Enterprise Strategy and Cloud Architecture. Alan enjoys cycling, hiking, making music, cooking, and studying philosophy. His blog on Zen and the Art of Enterprise Architecture is located at http://blogs.msdn.com/zen.

Arpit Joshipura heads up strategy and market development for Ericsson in Silicon Valley. In this role, he is responsible for network operator architecture strategies including IP, convergence, and cloud. He is a valley veteran and has worked in several startups and established companies in leadership roles, both business and engineering. Arpit is a veteran speaker and panelist at ComSocSCV meetings. He also gives Indian classical music performances and plays the harmonium.

Colin Constable joined Juniper Networks in September 2008. He previously spent twelve years at Credit Suisse, most recently as the Chief Network Architect & EMEA Infrastructure CTO. In that role he created and published the “Credit Suisse Network Vision 2020,” focused on seven sub-domains of networking. He built a governance framework leveraging the strategy’s structure to ensure cross technology tower engagement and decision making, both technical and financial. He also led numerous programs to increase cross-technology technical knowledge.

July 14th (6pm-9pm) at National Semiconductor, Santa Clara, CA  

Timeline:

6pm-6:30pm Refreshments and Networking

6:30pm-6:40pm Opening Remarks

6:40pm-8pm Presentations (3)

8pm-8:45pm Panel Session + Audience Q and A

8:45pm-9pm Informal Q and A with panelists

ITU Cloud Computing Focus group and IEEE Cloud Computing Standards Study Group- will they fill the standards void?

Introduction- The need for Cloud Computing Standards
 
Cloud computing deployments are being announced on an almost daily basis.  Cloud computing speeds and streamlines application deployment without upfront capital costs for servers and storage. For this reason, many enterprises, governments and network/service providers are now considering adopting cloud computing to provide more efficient and cost effective network services.  The venture capital firm Sand Hill Group has concluded that cloud computing represents one of the largest new investment opportunities on the horizon.  The cloud computing market is forecast to be very big by IDC, Gartner Group, and other market research firms.  But there seems to be a lot of confusion regarding the service delivery method, and a lack of interoperability.  And there are no solid standards for Infrastructure as a Service, Platform as a Service or Software as a Service. This results in difficulties in exchanging information between cloud service providers and for users that change providers.  It may also present a problem when bursting between a private cloud and different public clouds.  Interoperability facilitates secure information exchange across platforms.

Camille Mendler, Vice President of Research at Yankee Group, put it this way: “Cloud computing is the future of ICTs. It’s urgent to address interoperability issues which could stall global diffusion of new services. Collaboration between private and public sectors is required.”  That the lack of interoperability is a huge problem was highlighted at the recent Cloud Connect Conference (in Santa Clara, CA in March), a very sobering experience for this author.  At the conference, it was revealed that there is no umbrella set of standards for cloud computing and no single standards body claims ownership of comprehensive cloud computing specifications.  IBM’s VP of Cloud Services Ric Telford was asked what he thought about the huge growth forecast for cloud computing.  Mr. Telford said: “I have no problem with those numbers (40% in three years; 70% in five) as long as you include the caveat, it could be any one of five delivery models.”  So the industry needs to define and standardize on those methods of delivering cloud services and applications to users, he said.

ITU-T Establishes Cloud Computing Focus Group

A new ITU-T Focus Group on Cloud Computing has been formed to enable a global cloud computing ecosystem where interoperability facilitates secure information exchange across platforms. The group will take a global view of standards activity in the field and will define a future path for greatest efficiency, creating new standards where necessary while also taking into account the work of others and proposing them for international standardization.

Malcolm Johnson, Director of ITU’s Telecommunication Standardization Bureau, said: “Cloud is an exciting area of ICTs where there are a lot of protocols to be designed and standards to be adopted that will allow people to best manage their digital assets. Our new Focus Group aims to provide some much needed clarity in the area.”

ITU-T study groups were invited to accelerate their work on cloud at the fourth World Telecommunication Policy Forum (Lisbon, 2009) and at an ITU-hosted meeting of CTOs in October 2009. The CTOs highlighted network capabilities as a particular area of concern, where increased services and applications using cloud computing may result in the need for new levels of flexibility in networks to accommodate unforeseen and elastic demands.

Vladimir Belenkovich, Chairman of the ITU Focus Group on Cloud Computing: “The Focus Group will investigate requirements for standardization in cloud computing and suggest future study paths for ITU. Specifically, we will identify potential impacts in standards development in other fields such as NGN, transport layer technologies, ICTs and climate change, and media coding.”

A first brief exploratory phase will determine standardization requirements and suggest how these may be addressed within ITU study groups. Work will then quickly begin on developing the standards necessary to support the global rollout of fully interoperable cloud computing solutions. The Focus Group will, from the standardization view points and within the competences of ITU-T, contribute with the telecommunication aspects, i.e., the transport via telecommunications networks, security aspects of telecommunications, service requirements, etc., in order to support services/applications of “cloud computing” making use of telecommunication networks; specifically:

  • identify potential impacts on standards development and priorities for standards needed to promote and facilitate telecommunication/ICT support for cloud computing
  • investigate the need for future study items for fixed and mobile networks in the scope of ITU-T
  • analyze which components would benefit most from interoperability and standardization
  • familiarize ITU-T and standardization communities with emerging attributes and challenges of telecommunication/ICT support for cloud computing
  • analyze the rate of change for cloud computing attributes, functions and features for the purpose of assessing the appropriate timing of standardization of telecommunication/ICT in support of cloud computing

The Focus Group will collaborate with worldwide cloud computing communities (e.g., research institutes, forums, academia), including other SDOs and consortia.  The first meeting of the FG Cloud is 14-16 June 2010 in Geneva, Switzerland.  ITU-T TSAG is the parent group of this Focus Group.  More information on that meeting:

1st Meeting of Cloud Focus Group (ITU-T Members only):

The official combined announcement of ITU-T FG Cloud establishment and first meeting is contained in TSB Circular 114:   
If you wish to participate in the first meeting, note that there is an online registration form posted at:
  
The Focus Group web page:
will be updated as required. I recommend checking it regularly for new information.
From the Focus Group web page, you may subscribe to the mailing list and access the meeting documentation for the June meeting.
The deadline for contributions is 7 June 2010.
http://ifa.itu.int/t/fg/cloud/docs/1006-gva/i
http://ifa.itu.int/t/fg/cloud/docs/1006-gva/
[email protected]
http://www.itu.int/ITU-T/focusgroups/cloud/
http://www.itu.int/cgi-bin/htsh/edrs/ITU-T/studygroup/edrs.registration.form?_eventid=3000151
http://www.itu.int/md/T09-TSB-CIR-0114/en

ITU-T Distributed Computing Backgrounder:

A recently published ITU-T Technology Watch Report titled ‘Distributed Computing: Utilities, Grids and Clouds’ describes the advent of clouds and grids, the applications they enable, and their potential impact on future standardization.

For further information, please refer to the ITU-T web site:

http://www.itu.int/ITU-T/newslog/ITU+Group+To+Offer+Global+View+Of+Cloud+Standardization.aspx

http://www.itu.int/ITU-T/focusgroups/cloud/

ITU-T contacts:

Sarah Parkes                                                                          
Senior Media Relations Officer
ITU
Tel: +41 22 730 6135
Mobile: +41 79 599 1439
E-mail: [email protected]                                                                              

Toby Johnson
Senior Communications Officer
ITU
Tel: +41 22 730 5877
Mobile: +41 79 249 4868
E-mail: [email protected]

IEEE Cloud Computing Standards Study Group

Call for Participation

This is a call for participation in the IEEE Cloud Computing Standards Study Group, sponsored by the IEEE Computer Society Standards Activities Board (SAB). An IEEE Standards Study Group is the initial step in the process of developing an IEEE standard and is open to all interested individuals.

Cloud computing is a new, rapidly growing model of computing which, according to the U.S. National Institute of Standards and Technology, “is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

While there is significant effort on specific cloud computing-related standards on the part of multiple entities, a major impediment to the growth of cloud computing is the lack of comprehensive high-level portability (how applications use clouds) and interoperability (how clouds work with each other) standards.

The mission of the IEEE Cloud Computing Standards Study Group is to determine the feasibility of developing an open standards profile which defines options for portability and interoperability of cloud computing resources. These profiles should address issues such as interfaces to computing, storage, network, and content resources, as well as workload (program and data) interoperability and migration, security, fault-tolerance, agency, legal and regulatory, intra-cloud policy negotiation, and financial relationships. It is expected that there will be multiple architectural approaches from which to choose.

The profiles should also support the Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS) service models, and the private cloud, community cloud, hybrid cloud, and public cloud deployment models.

Existing standards and those under development by Standards Developing Organizations (SDOs), and appropriate industry alliances, community collaboration efforts, and other groups will be used whenever practical. The Study Group will proactively reach out to such groups to facilitate their early involvement.

For further information and/or to be added to the IEEE CCSSG mailing list, please contact Steve Diamond, Chair, IEEE Cloud Computing Standards Study Group, at ieee-ccssg-chair [at] intercloud [dot] org

IEEE to Participate in White House Led Cloud Computing Strategy Discussion Focus is on Strategy to Accelerate the Adoption of Cloud Computing 

WASHINGTON, May 20 /PRNewswire/ — Dr. Alexander Pasik, Chief Information Officer (CIO) of IEEE, the world’s largest technical professional association, has been selected to join industry leaders and key administration officials to discuss the creation and adoption of national standards for cloud computing at a leadership meeting taking place today.

Dr. Pasik will join United States Deputy Secretary of Commerce Dennis Hightower, CIO Vivek Kundra and the Administration’s Cyber Security Coordinator, Howard Schmidt, along with other prominent industry thought leaders to discuss the challenges and opportunities of cloud computing adoption. Dr. Pasik brings to the meeting deep expertise in emerging technologies and their impact on business models, service-oriented architecture (SOA) and the technical and security characteristics of cloud computing models.

“As an IEEE member and CIO of the organization, I am very honored and excited to be a part of this initiative and to collaborate with the highest caliber of technology thought leaders on this critical issue,” Pasik said. “Our discussion is a significant step toward furthering technology standards to advance the implementation of cloud computing operations and establish national standards. It is a historic event in the legacy of U.S. technology.”

In March 2010, IEEE, in partnership with the Cloud Security Alliance, released findings from a survey of IT professionals that revealed overwhelming agreement on the importance and urgency of cloud computing security.

 

Conclusions:

While I knew cloud computing was way overhyped, I thought that one or more standards organizations had claimed ownership of it.  I also thought that all the functional requirements and specifications done for grids, web services, and SOA (e.g. distributed management, federation, SLA requests and validation, etc.) would not have to be re-invented and redone for clouds.  Wow, that'll be a huge undertaking.  Which standards organization might step in to fill this void?  ITU, IEEE, other?

Without a set of unified cloud computing standards, it's my belief that for at least the next five years each cloud provider will define its own set of user interfaces, SLAs, performance parameters, security methods, etc.  The more cloud providers there are, the more chaos and confusion will reign.  Therefore, we believe an urgent, accelerated standards effort is needed for (at least) the network aspects of cloud computing, e.g. UNI and NNI, SLAs and validation/compliance.  I would've thought that by now the major players would have gotten together to create such an organization, or combined several interested standards bodies/forums/alliances into one.  We hope that ITU-T will be the standards organization to set the reference network architecture for cloud computing.  Other standards bodies and/or forums will be needed to provide the computing framework and related standards.

 
Note that this author is an ITU-T member and has access to all documents, including contributions and meeting reports for the Cloud Computing Focus Group.  Please contact me if your organization might be interested in a consulting arrangement to monitor or research this new activity.

Video Surveillance and Video Analytics: Technologies whose time has come?

Introduction:

The IEEE ComSoc SCV chapter April 14th, 2010 meeting was very well attended with more than 80 people present.  This was our first joint meeting with TiE (The Indus Entrepreneurs) organization. The meeting was titled "Architectures and Applications of Video Surveillance and Video Analytics Systems" and featured talks plus a panel discussion on those topics.

The speakers scheduled to participate in the talks and panel session were Professor Suhas Patil, Chairman and CEO of Cradle Technologies, Basant Khaitan, Co-founder and CEO of Videonetics, and Robb Henshaw, VP Marketing & Channels, Proxim Wireless. Robb Henshaw, who was scheduled to speak on "A Primer on Wireless Network Architectures and Applications for Video Surveillance," could not attend the meeting due to illness and was replaced for the presentations section of the evening by Alan J Weissberger, IEEE ComSoc SCV chairman. The panel session was moderated by Lu Chang, Vice-Chairman of IEEE ComSoc SCV. This article has been co-authored with Alan J. Weissberger, who contributed the comment and analysis section, raised several unaddressed but nevertheless pertinent questions, and also provided references to background articles on video surveillance and video analytics.

Presentation Highlights:

While presenting on behalf of Robb Henshaw on Wireless Network Architectures, Alan J. Weissberger noted that several new technologies are now converging which will make video surveillance a growing market and viable business.  These include: higher-quality IP digital cameras, improved and cost-effective video compression technologies (e.g. H.264/MPEG4 and HDTV), fixed broadband point-to-point and point-to-multipoint networks (including fixed WiMAX and proprietary technologies), and mobile broadband (including 3G+, mobile WiMAX and LTE).

To support the claim of a growing market for video surveillance and video analytics, Alan cited several key examples of applications for these technologies such as: security and surveillance applications, emergency and disaster management, asset and community protection by monitoring of buildings and parking lots, public entry/exits, sensitive areas such as ATMs, as well as high-traffic areas like highways, bridges, tunnels, public areas such as parks and walkways, infrastructure like dams and canals and buildings like a cafeteria, halls and libraries. Other applications include securing of sensitive areas like runways and waterways, perimeter security for military installations, remote monitoring of production on factory-floors and tele-medicine/eHealth initiatives.

Alan explained that Proxim believes that HDTV is going to be the technology of choice for video compression because users will be demanding higher quality video images.  Furthermore, Proxim thinks that the wireless communication networks which convey the video streams are best built in a point-to-point and point-to-multipoint topology, rather than (WiFi) mesh, which has fallen out of favor. He noted that Proxim's broadband wireless transport systems that operate over these point-to-point and point-to-multipoint topologies do so over a private network (as opposed to connecting via the Internet as Cradle's systems do, covered later in this article). Moreover, 95% of Proxim's installations use fixed broadband wireless (both fixed WiMAX, i.e. IEEE 802.16d-2004, and a proprietary technology to increase speed and/or distance) rather than mobile broadband wireless connections.

Alan's talk elicited two questions from the audience. The first questioner asked why analog video surveillance technologies have found favor in practical deployments while digital video surveillance technologies were placed on the back burner after seeing initial deployment.  In his answer, Alan pointed out that digital video surveillance technologies not only need high-quality digital cameras, but also require a reliable transmission network (wired or wireless) which can provide steady bandwidth to transmit the video surveillance data to a point of aggregation like a central video server. In the absence of sufficient constant bit rate bandwidth, the resulting digital video stream quality will be unacceptable due to jitter or freezing of the image (caused by an empty playback buffer). The lack of sufficient network bandwidth was a major cause of digital video surveillance technologies failing to gain a large market share compared to analog systems. The second question related to the impact of electromagnetic interference (EMI) on the video information. Alan explained that the new wireless broadband communication systems (both WiMAX and LTE) employ a multicarrier modulation scheme, orthogonal frequency division multiplexing (OFDM), which is fairly resistant to EMI. Furthermore, OFDM can also be combined with multiple input multiple output (MIMO) transmission schemes to minimize the likelihood of errors at the receiver end.
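The playback-buffer point can be illustrated with a toy model (this sketch and all of its rates and buffer sizes are my own hypothetical figures, not numbers from the talk): even when the network's average rate matches the video rate, fluctuations can empty the buffer and freeze the image, which is why near constant bit rate bandwidth matters.

```python
# Toy simulation of a video playback buffer (illustrative only; the
# bitrates and buffer size are hypothetical).  The player drains the
# buffer at a constant video rate; the network fills it at a possibly
# fluctuating rate.  Whenever the buffer empties, the image freezes.

def simulate_playback(fill_rates_kbps, drain_rate_kbps=2000,
                      buffer_kb=100, interval_s=1.0):
    """Return the number of intervals during which playback froze."""
    buffered_kb = buffer_kb          # start with a full playback buffer
    freezes = 0
    for fill in fill_rates_kbps:
        # net change in buffered data this interval (kbps -> kilobytes)
        buffered_kb += (fill - drain_rate_kbps) / 8 * interval_s
        if buffered_kb <= 0:
            freezes += 1             # empty buffer: the image freezes
            buffered_kb = 0
        buffered_kb = min(buffered_kb, buffer_kb)  # buffer is finite
    return freezes

# Steady bandwidth exactly at the drain rate: no freezes.
print(simulate_playback([2000] * 10))        # prints 0
# Fluctuating bandwidth with the SAME average rate: playback freezes.
print(simulate_playback([3000, 1000] * 5))   # prints 5
```

The second call shows the key point: average bandwidth is not enough; the instantaneous rate must stay near the video's constant bit rate or the playback buffer repeatedly underruns.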

In his talk on "Video Surveillance, Security and Smart Energy Management Systems," Suhas Patil explained that recent improvements in semiconductor chip set capabilities and new computer architectures have promoted the growth of digital video surveillance technologies, such as those which employ network video recorders to aggregate video from an entire city. He also pointed out that a key contributor to the adoption of video surveillance systems in many parts of the world (despite concerns of invasion of personal privacy) was the possibility, and actual occurrence, of terrorism, with the city of London (U.K.) being a pioneer in this regard. Suhas, while describing the structure of the video surveillance system, noted that a critical requirement for these systems is that they be resilient to any fault at any time. These faults can include network breakdowns, power failures, control room disablement, or faults caused by extreme ambient temperatures or extreme climatic conditions. As far as access technologies are concerned, digital video surveillance systems can use WiFi mesh networks, WiMAX networks or proprietary communication systems. According to Suhas, the technology underlying a digital video surveillance system is highly complex, employing state-of-the-art hardware such as cameras, storage systems and servers, and state-of-the-art software including operating systems and transmission technologies. Thus, the entire system needs very careful design in order to maximize its efficacy. Suhas also briefly explained video analytics as a system which can detect an object, such as a human being, behaving in a manner which would be difficult for a human observer to notice, for example a party guest moving around the room in a random or rapid manner compared to other guests.
Finally, the talk then pointed out several major challenges faced by video surveillance systems including the need to keep abreast of the rapidly changing technologies as well as practical deployment challenges.

In response to a query on working on still images in a pixel-by-pixel fashion, Suhas explained that it is possible for a basic camera to capture raw information about a scene and then have the data processed pixel by pixel in order to maximize the dynamic range before the image is converted to JPEG format. During the Q&A, Suhas briefly recounted the history of his company, Cradle Technologies, a spinoff from Cirrus Logic which built multicore processors long before anyone else thought the technology valuable. Additionally, while clarifying his assertion about digital versus analog video surveillance technologies, Suhas noted that while analog technologies are better suited to low-light conditions and offer better dynamic range than their digital counterparts, digital signals allow for better image resolution than analog signals. However, it is difficult to claim one technology as being clearly superior to the other.

The final presentation of the evening, by Basant Khaitan, titled "The Role of Video Analytics in Video Servers and Network Edge Products," explained the nature of video analytics (VA) as a young field which has also been referred to as video content processing or video content analysis. Basant explained that VA can be defined as the real-time classification and tracking of objects like people and vehicles by using the objects' outlines rather than any bodily or facial features. The analytics system can either be co-located with the camera itself (at the network's edge) or situated at a central server which receives the video streams from the various cameras at the network edges. Additionally, Basant pointed out that while a video frame's size is of the order of megabytes, the corresponding analytics information for that frame is often no more than a few hundred bytes in size. While explaining the technical details of VA, Basant opined that modern VA systems produce results which are sufficiently reliable for practical use despite the presence of artifacts born of poor ambient light or dust-filled air. Basant then elaborated on a practical VA system built by his company and currently being used by the police in Calcutta, India for traffic management. According to Basant, VA systems built for purposes such as traffic control are highly mission-critical and require 100% reliable operation. In conditions such as those found in developing countries, VA systems face severe challenges due to the presence of dense populations and poor public compliance with traffic laws. Furthermore, in tropical countries, extreme climatic conditions such as hot weather, rain-flooded streets and dust-filled air can also hamper the quality of the analytics results. In the case of Calcutta, all of the above conditions are faced by the VA system which is being used to control the city's traffic lights.
In this case, the network-edge analytics information is sent to a local video server (which, incidentally, was developed in cooperation with Cradle Technologies), from where the information can be remotely retrieved and viewed. Basant pointed out that several intersections of Calcutta are now monitored by the VA system, which has replaced the previous system that was monitored entirely by human beings.
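Basant's megabytes-versus-hundreds-of-bytes comparison is easy to sanity-check with a sketch (the record layout and every field name below are hypothetical illustrations, not Videonetics' actual format): a per-frame analytics record carrying object classes, outline bounding boxes, and track IDs serializes to a few hundred bytes, orders of magnitude smaller than the raw frame it describes.

```python
# Hypothetical per-frame analytics record (field names invented for
# illustration): object class, outline bounding box, and track ID.
import json

analytics_record = {
    "frame": 18342,
    "timestamp": "2010-04-14T20:15:07.040",
    "objects": [
        {"track_id": 7,  "class": "vehicle",
         "outline_bbox": [412, 310, 508, 377], "speed_px_s": 14.2},
        {"track_id": 11, "class": "person",
         "outline_bbox": [120, 255, 148, 330], "speed_px_s": 2.1},
    ],
}

encoded = json.dumps(analytics_record).encode("utf-8")
raw_frame_bytes = 1920 * 1080 * 3        # one uncompressed 1080p RGB frame

print(len(encoded))                      # a few hundred bytes
print(raw_frame_bytes // len(encoded))   # ratio on the order of 10,000:1
```

This compactness is exactly what makes network-edge analytics attractive: shipping the metadata instead of (or alongside a low-rate copy of) the video drastically reduces the backhaul bandwidth needed.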

Panel Discussion Q and A:

The panel session was more of a collaborative Q and A with Suhas Patil and Basant Khaitan.  There were many questions from the meeting attendees, several of which could not be entertained due to a shortage of time (see list of unaddressed questions below).

In response to a question on the number of video surveillance cameras which are connected wirelessly versus via wireline, Basant mentioned that none of the cameras in their deployed systems are connected wirelessly at this time; all camera connections are of the wireline type. Suhas noted that while most cameras are connected via a wired connection like a CAT5 cable, access to the content can be accomplished wirelessly via a cellular service (as in the Indian scenario) or by a WiFi connection. WiFi-based access to the video server is also available if the server is connected to the Internet (Cradle's server is one such example). The sensors in the video surveillance network themselves could also be connected wirelessly via a ZigBee mesh network, although they have not yet been so connected.

A question was raised about whether video processing is ready to see any major innovation such as what Map Reduce technology did for Google’s text processing. Suhas responded by explaining that the video analytics for applications like license-plate recognition can be done on the cloud. When queried about how Cradle’s technology can help mitigate the impact of, or altogether prevent future terrorist attacks, Suhas pointed out that the contemporary video surveillance systems had either failed altogether or had malfunctioned during terrorist attacks. Cradle’s system, on the other hand, continually monitors the deployed system for functional integrity via a central server cloud in order to ensure that it (the video surveillance system) is fully operational at all times.

The panelists were then asked what the preferred mode of connection is for a city-wide array of cameras. Suhas invoked the example of Cradle's approach to digital video surveillance, where fixed broadband wireless access via WiMAX or WiFi mesh is used to connect their networked video server to the Internet. Furthermore, an IP-VPN client (such as a PC or other screen-based device) is connected to the networked video server through the public Internet via a 3G or mobile WiMAX connection. In response to a question regarding the need for video analytics in countries such as those in Asia, where labor is abundant and cheap, the panelists pointed out that an average human being's concentrated attention span is only about 10-15 minutes, and that for the overwhelming majority of the time nothing significant occurs which warrants raising an alarm; it is therefore imperative that an automated VA system be put in place for applications such as those mentioned earlier in this article.

The final question of the evening asked how real-time bandwidth fluctuations within networks such as mesh networks affect VA performance. Suhas Patil mentioned that placing the video server as close to the network drop-off point (i.e., close to the camera) as possible allows good quality video to be streamed to the server. Thereafter, special network access techniques which circumvent the fluctuating bandwidth can be used to remotely retrieve the video information stored on the server.

After the panel session concluded at 9pm, several attendees stayed on for one-on-one interaction with the speakers.  This continued until 9:15pm, when the lights were turned off and we were forced to vacate the auditorium.

Unaddressed questions (submitted by Alan Weissberger to ComSocSCV Discussion List):

  • Where is video surveillance used now and what are the driving applications?
  • Are most of the video surveillance network architectures fixed point to point or point to multi-point, rather than mobile/wireless broadband? 
  • What role will 3G (EVDO, HSPA), WiMAX (fixed and mobile), and LTE play in delivering video content?  Why is mobile broadband required for video client access?
  • Are proprietary wireless technologies more cost effective for the performance they offer?  Is this a concern for the customer?
  • What type of security and network management is being used in video surveillance systems, e.g. for authentication and to prevent intrusion or monitoring?
  • What role does video analytics play to augment the potential and power of a video surveillance system?  Can it also be used as a stand alone offering?
  • Why are IP VPNs needed to convey and deliver the video content? Why not use a dedicated private network instead?
  • Is there any intersection between high end video conferencing and video surveillance systems? Are the same cameras, video transport facilities, and network management used for each?  What are the key differences?
  • What new technologies or business models are necessary for video surveillance to become a really big market?
  • What current barriers/obstacles to success are the video surveillance and video analytics markets now experiencing?
  • How have terrorist attacks (e.g. Mumbai attack in late 2008) and natural disasters (e.g. earthquakes) affected the video surveillance market?  What is the opportunity here?

Comment and Analysis (from Alan J Weissberger):

1.  Proxim's answer to the question, "Why Video Surveillance?" included these bullet points:

·   Perimeter, public monitoring solutions are becoming a key component for enterprises

·   Educational, healthcare and financial institutions are beginning to rely on surveillance systems to ensure safety within their premises

·   Public safety organizations depend on archived data from video monitoring systems to reduce vandalism in troubled neighborhoods

·   Live traffic surveillance is increasingly being used as a tool in community protection

·   Terrorist threats and public safety challenges continue to drive the need for high quality remote surveillance and timely response

Additionally, we’d include production plant and factory floor (remote) monitoring to prevent schedule slips and ensure good quality control.

2.  The role of broadband wireless networks in stimulating video surveillance:

-Fixed broadband wireless point-to-point and point-to-multipoint networks and equipment (e.g. Motorola Canopy and Proxim's products) that replace equivalent-topology wireline networks for delivering video over a private network.  Either proprietary fixed broadband wireless technology or IEEE 802.16d fixed WiMAX is used.  Those broadband wireless networks cost a fraction of the equivalent wireline networks and can be provisioned in a much shorter timeframe.  Fixed WiMAX could also be used to access the broadband Internet in an IP VPN scenario.

-Mobile broadband (3G+, mobile WiMAX, LTE), which adds a whole new dimension to video surveillance and enables many new applications, e.g. an IP VPN mobile client observing video images in remote locations, cameras in police cars transmitting video to a police headquarters building while moving at high speed, and emergency vehicles transmitting videos of natural disasters (hurricanes, earthquakes, etc.) to first-responder locations that will deal with the problem(s).

3.   It's important to distinguish between the broadband wireless network architectures and topologies of Proxim (a wireless broadband transmission/backhaul company) and Cradle (a networked video server/client solutions company):

a] Proxim makes broadband wireless transport systems that operate over a point-to-point or point-to-multipoint private network.  Those systems backhaul video surveillance and other traffic to one or multiple destinations.  Proxim says that 95% of their installations use fixed (rather than mobile) broadband wireless connections.

b] Cradle uses fixed BWA (WiMAX or mesh WiFi) from their Networked Video Server to access the Internet.  On the client side, Cradle uses 3G or mobile WiMAX to connect the IP VPN client PC or other screen-based device to the Networked Video Server through the public Internet.  The key issue with that approach is that the end-to-end IP VPN server-to-client connection has to be high bandwidth and near constant bit rate, while the client access needs a high-bandwidth, steady-state mobile broadband connection to observe the MPEG-4 coded video over the IP VPN connection while in motion.  Otherwise the video image will be unacceptable or will freeze.
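Some back-of-the-envelope arithmetic illustrates why the mobile client access is the weak link (the per-stream rate and VPN overhead figures below are my own assumptions, not Cradle's specifications):

```python
# Rough sustained-bandwidth estimate for watching MPEG-4 coded
# surveillance video over an IP VPN from a mobile client.
# All figures are hypothetical, for illustration only.

def required_kbps(streams, per_stream_kbps, vpn_overhead=0.10):
    """Sustained downlink rate the mobile client must see, in kbps."""
    return streams * per_stream_kbps * (1 + vpn_overhead)

# One standard-definition MPEG-4 stream at ~1 Mbps, plus ~10%
# IP VPN (tunnel/encryption) overhead:
print(required_kbps(1, 1000))   # ~1100 kbps, sustained
# A 4-camera quad view multiplies the requirement accordingly:
print(required_kbps(4, 1000))   # ~4400 kbps, sustained
```

The point is that these rates must be sustained, not peak: a 3G or mobile WiMAX link whose instantaneous rate dips below the stream rate for more than the playback buffer can absorb will freeze the image, which matches the "steady-state mobile broadband connection" requirement above.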

4.  Basant’s example of controlling Calcutta traffic lights using video analytics integrated with a Networked Video server was a great demonstration of the underlying technology and proof of how valuable it is.

References:

Here are a few background articles on video surveillance and analytics:

Video Surveillance and WiMAX- a great marriage or not meant for each other? Four companies weigh in!  (all 3 speaker/panelists+ Sprint were interviewed for this article)

 http://www.wimax360.com/profiles/blogs/video-surveillance-and-wimax-a

The Wireless Video Surveillance Opportunity: Why WiMAX is not just for Broadband Wireless Access  by Robb Henshaw

http://www.wimax.com/commentary/blog/blog-2009/august-2009/the-wireless-…

Video Surveillance Going Fwd, Suhas Patil, ScD

http://0101.netclime.net/1_5/048/174/0e9/scan0046.pdf

Remote Access Video Surveillance & Analytics,  Cradle Technologies

http://cradle.com/about_us.html

INTELLIGENT VIDEO ANALYTICS, a Whitepaper

http://www.videonetics.com/VivaWhitePaper.pdf

Exclusive Interview: Robb Henshaw of Proxim Wireless!

http://www.goingwimax.com/exclusive-robb-henshaw-on-proxim-wireless-5857/

Video Surveillance Product Guide

www.video-surveillance-guide.com/

FCC’s National Broadband Plan overview and IEEE ComSoc SCV March 10, 2010 meeting report

Introduction:

The IEEE ComSoc SCV chapter's March 10th, 2010 meeting featured a very informative talk by William B. Wilhelm Jr., Partner, Telecommunications, Media and Technology Group at Bingham McCutchen LLP, titled "Effects of Broadband Policy and Economic Stimulus on Innovation at the Edge and in the Cloud." The meeting was chaired by Simon Ma, Secretary, IEEE ComSoc SCV and was attended by approximately 30 chapter members. Despite the relatively low turnout, the number of questions which were raised and discussed during the talk and subsequent Q&A reflected the keen interest amongst the attendees in the broad topic of the Federal Communications Commission's (FCC) National Broadband (NB) Plan.

Presentation Highlights:

Mr. Wilhelm explained that the FCC (on behalf of the federal government) believes that broadband can form a strong foundation for economic success and has hence drafted the NB plan. The FCC’s primary objective for the NB plan is to spur broadband deployment nationwide through innovation in devices and applications, which in turn, it is hoped, will drive broadband adoption amongst the United States populace. Furthermore, the FCC has designated that the plan must seek “to ensure that all people of the United States have access to broadband capability” and establish benchmarks to meet that goal. In fact, the foregoing statement also delineates the current top internal priority of the FCC. According to Mr. Wilhelm, a data rate of 3 Mbps is regarded as “broadband” within the United States. Underlining the non-trivial nature of the NB plan objectives, Mr. Wilhelm pointed out that several key challenges will need to be overcome to ensure the plan’s success. These challenges include agency and administrative action among the FCC, National Telecommunications and Information Administration (NTIA) and Rural Utilities Service (RUS), legislative action by Congress and a fair competition policy determined by the Federal Trade Commission and protected by the Department of Justice.

Regarding the objective of ensuring that all people of the United States (US) have access to broadband capability, Mr. Wilhelm noted that the American Recovery and Reinvestment Act of 2009 (ARRA) has allocated $7.2 billion in stimulus funds for the expansion of broadband facilities and services to so-called unserved, underserved and rural areas of the country. Additionally, other ARRA-born programs including health care, smart grid and transportation may also promote large-scale broadband adoption. Describing the response to the first round of funding applications, the talk indicated that nearly 2,200 applications were received, requesting a total of $28 billion with $23 billion requisitioned for broadband infrastructure. The presentation also elaborated on the fact that, in addition to the $7.2 billion in stimulus funds for broadband expansion, over $19 billion has been earmarked for Health Information Technology (HIT) including over $16 billion in medical provider incentives for deploying HIT. The aforementioned funding for HIT is aimed towards developing a nationwide health IT infrastructure which allows for electronic storage, transmission and retrieval of healthcare-related information. The talk also provided attendees with a view into the workings of the FCC with regard to new policy generation, such as the release of a Notice of Inquiry (NOI) and the holding of workshops to close gaps in the comments obtained from the NOI release.

Mr. Wilhelm then described the current broadband scenario in the US in terms of deployment, user adoption as well as a qualitative description of the state-of-the-art in hardware and software systems as found in US homes and offices. It was interesting to note that, while the US leads the world in internetworking equipment, semiconductor chipsets, software and internet services and applications, the US suffers from conditions which are fairly unexpected for a country of its economic stature. These latter conditions include the fact that 50-80% of homes may get the broadband speeds they need from only one service provider, the fact that broadband adoption is lagging in certain customer segments, and the fact that deployment costs for various geographies are significantly different. Further elaborating on the shortcomings in the broadband services faced by users in the US, Mr. Wilhelm pointed out that, for the median user during peak hours, actual download speeds are only about half of the advertised speed! Moreover, around 5 million homes get less than the advertised 768 kbps and approximately 35 million homes get less than 10 Mbps. Other broadband service drawbacks faced by US-based customers include the fact that several market segments show penetration rates significantly below the 63% average and that the lack of widespread adoption may entail a social cost in the future in terms of lowered access to jobs, education, government services and information. For example, high school and university students who have little to no Internet connectivity will be at a growing disadvantage compared to students who have materially good quality access to the Internet.

Thereupon, the talk pointed out how high-quality broadband connectivity enables innovations across a broad swath of national priorities – for example, health care (electronic health records, telemedicine and remote/mobile monitoring), energy and environment (smart grid, smart home applications and smart transportation), education (STEM, eBooks and content, electronic student data management), government operations (service delivery and efficient administration, transparency in governance and civic engagement), economic opportunity (job creation, job training and placement, and community development) and public safety (next generation 9-1-1, alerts and cybersecurity). On being queried whether retail services are currently the dominant application of broadband communications, Mr. Wilhelm acknowledged the pertinence of the question, but was unable to comment further on the topic since the FCC report had not been released at the time of this talk.

The presentation then delved into topics such as regulation and deregulation of broadband networks, network neutrality, spectrum policy, investment in telecom systems and services, and next-generation 9-1-1 systems. Explaining the significance of internet services like DSL being removed from Title II of the Telecommunications Act as a result of deregulation, Mr. Wilhelm pointed out that since the DSL service is no longer under Title II, the FCC cannot protect DSL customers and small DSL companies anymore from being controlled by telcos or network service providers. With regard to net neutrality, the case of Comcast versus the FCC, wherein the former alleged that the Internet was not under the purview of Title II, was briefly touched upon. A question was then posed on whether managed services were expected to crowd out non-managed services such as best-effort services. An audience member offered that the very same issue is being discussed in the public domain and that no clear consensus has been reached on this topic. On the subject of spectrum policy, the talk reiterated the oft-heard chorus in telecom circles that the currently allocated spectrum is woefully inadequate to meet projected future demands (especially for mobile broadband applications). Mr. Wilhelm then elaborated on the need for investment in telecom services and technology, since venture capital investments in these sectors have fallen significantly in recent years. According to Mr. Wilhelm, investment in telecom is a key ingredient to promoting innovation across the hardware, software, network and services ecosystem and the absence of strong investment could result in reduced value of services to end-users.

Pointing out that broadband communications can support public safety and homeland security efforts, Mr. Wilhelm then touched upon the prominent areas of public safety which can be improved as a result of a new broadband initiative such as the national broadband plan. These areas are next-generation 9-1-1 systems, cybersecurity, alerting and a nationwide public safety network. For 9-1-1 systems, Mr. Wilhelm suggested the possibility of having an all-IP based system and to also allow users to submit recorded video to the 9-1-1 operators who could then dispatch the user videos to first responders.

Analysis:

The national broadband plan which the FCC will release (which, at the time of the writing of this article, has been released) is a key step in promoting the widespread adoption of broadband connectivity within the US. If a large portion of the US population gains access to broadband communication systems, the US can continue leading the world in technology innovations in telecom hardware, software and services sectors. Indeed, we believe that it is imperative that the FCC’s objectives of widespread broadband adoption be met in order to help meet other national goals such as homeland security, economic opportunity, healthcare and education. However, as was pointed out by Mr. Wilhelm, the adoption and retention of broadband communications among US users will entail significant investment in the telecom services and technology fields by venture capitalists as well as the federal and state governments. The lack of adoption could result in the exacerbation of the digital divide, especially in the education sector where students from schools which are not well-funded may fall behind in acquiring the skills and knowledge necessary to compete in higher education and (subsequent) job markets. On the other hand, the successful adoption of broadband communications could contribute an order of magnitude improvement in the quality of life for American citizens and further their nation’s leadership in the technology arena.
