ITU-T SG13 Focus Group on Future Networks (FG-FN) concludes 7th meeting in Busan, Korea

Introduction
The final report of the FG-FN 7th meeting in Busan, Korea is now available to ITU-T member companies.  The 8th meeting, in Ljubljana, Slovenia, runs from 29 November to 3 December and includes a mini-workshop (see comments below).  Here is some background information on this FG-FN:
 
The Focus Group on Future Networks (FN), by collaborating with worldwide communities (e.g., research institutes, forums, and academia), aims to:
  • collect and identify visions of future networks, based on new technologies,
  • assess the interactions between future networks and new services,
  • familiarize ITU-T and standardization communities with emerging attributes of future networks, and
  • encourage collaboration between ITU-T and FN communities.
The objective of the Focus Group is to document results that would be helpful for developing Recommendations for future networks.

To achieve this objective, the Focus Group will:

  • gather new ideas relevant to Future Networks and identify potential study areas on Future Networks,
  • describe visions of the Future Networks,
  • identify a timeframe of Future Networks,
  • identify potential impacts on standards development, and
  • suggest future ITU-T study items and related actions.
 
More Detail on Objectives
  
New network requirements are emerging from the changing social environment, new application areas, etc. Against this background, the group selected the following four objectives as ones that are not addressed, or not satisfactorily met, by current networks. These objectives are candidate characteristics that clearly differentiate FNs and motivate the development and investigation of FNs.
 
Environment awareness
Future Networks should be environment friendly. The architectural design, the resulting implementation, and the operation of Future Networks should minimize their environmental impact, e.g., by minimizing the usage of materials, energy consumption, and greenhouse gas (GHG) emissions. Future Networks should also be designed and implemented so that they can easily be used to reduce other sectors’ environmental impact, e.g., by being machine-to-machine ready.
(Comment: ‘environmental footprint’ is a wrong phrase. ‘Ecological footprint’ or ‘environmental impact’ is the correct wording.)
Service awareness
Future Networks should provide services that are customized for users, with the appropriate functionalities to meet the needs of the applications under consideration. Service explosion, i.e., the creation and distribution of an enormous number and wide range of services, will occur, and Future Networks should accommodate these services by enabling the creation of multiple networks that have optimal or customized functions to realize these services efficiently.
Comment: network virtualization should be only an example, because there are other methods, e.g., in-service management, to solve this complicated situation.
Comment: creation and distribution: are these the only two?
Future Networks should provide services that are customized for users, with appropriate functionalities to meet the needs of applications. This leads to service explosion, i.e., the number and range of services will explode. Future Networks should provide means to accommodate these services without a drastic increase in OPEX, e.g., by enabling the creation of multiple networks that have optimal or customized functions to realize these services in an efficient manner.
 
Data awareness
Future Networks should have an architecture that is optimized for handling enormous amounts of data. The main objective of current networks is to establish connections between terminals, so their architectures were designed as location-based networks. The essential demand of network users, however, is to retrieve desired information or data from the network. Therefore, Future Networks should be designed so that users can easily retrieve data regardless of its location.
Comments: the text in section 8.4 seems more generic and better.
Comments: is location independence essential? Or is it just one example method that makes data access easier?
Comments: the text does not flow. For example, the first sentence does not link with the rest.
FNs should enable users to access desired data easily, quickly, and accurately, considering the fact that contemporary and future networks are used mainly to access specific data or content, not a specific node or location. Since the amount of data or content FNs need to maintain and reach is becoming enormous, FNs should provide efficient and safe means to handle it.
 
Social-economic awareness
Future Networks should offer social-economic incentives to reduce barriers to entry for the various participants in the telecommunication sector. Also, each participant should be able to receive a proper return according to their contribution.
Comments: I have no idea what to do here, but the text is still vague… and should we say something about other issues, e.g., network neutrality, or ossification of technology because of the lack of economic incentives?
FN architecture and technology should be designed and selected so as to be deployable and sustainable in a social and economic sense, e.g., affordable cost and maintenance for service universalization, reduced barriers to entry for various participants into the telecommunication sector, and proper interface or reference point design for sustainable competition.
# reference point? Demarcation point?
 
Economic Incentives
FNs should provide mechanisms to exchange incentives among the various participants, e.g., users, telecommunication providers, governments, and IPR holders.
Explanation: Participants in FNs could be grouped in terms of industrial fields and/or nations. Firstly, in terms of industrial fields, participants could include users, commercial ISPs, private-sector network providers, governments, intellectual property rights holders, and providers of content and/or higher-level services.[1] Secondly, in terms of nations, participants could include the telecommunications sector in not only developed but also developing countries, so that the operation, provisioning, and management capabilities of FNs would be simple enough to be supported by all participants. This also means that FNs should be deployable as well as operable even in less economically attractive areas. Barriers to entry for participants would be almost nonexistent in FNs.
 
FNs should be designed to provide a sustainable competitive environment for the various participants in the telecommunication ecosystem, e.g., users, various providers, governments, and IPR holders, by providing, for example, proper economic incentives or freedom of selection.
 
Explanation: Many technologies have failed to be deployed, to flourish, or to be sustainable because of insufficient or inappropriate decisions on their economic or social aspects. Lack of QoS mechanisms blocked streaming services such as IPTV on TCP/IP networks. One reason for this failure comes from the simple interface between IP and TCP. IPv4 did not provide an appropriate QoS abstraction model of the lower layers, e.g., it did not provide a method for TCP to know whether QoS was guaranteed end-to-end. This erased the possibility for providers to compete with sophisticated QoS mechanisms at the TCP/IP layer interface, and destroyed the freedom of QoS mechanism selection for customers. The other reason was the lack of proper economic incentives. Various QoS mechanisms, e.g., IntServ, DiffServ, and RSVP, were developed and standardized but failed to be deployed because they were not accompanied by proper economic incentives for network providers to implement them. Together with various other reasons, these blocked the introduction of QoS guarantee mechanisms and streaming services in TCP/IP networks, even when a participant in the telecommunication ecosystem tried to customize networks, or asked others to provide customized networks, to start a new service and share its benefit with others. It is therefore important to pay enough attention to economic and social aspects, such as economic incentives, in designing and implementing the requirements, architecture, and protocols of FNs.

What role, if any, should IEEE ComSoc play in studying Future Networks?  Should we leverage this ITU-T FG-FN or start our own project?  Please respond by commenting in the box below.

What is the role of local chapters in the Internet Era?

These days, any ComSoc member can learn about various topics from online resources such as the digital library, Tutorials Now, and Industry Now, besides YouTube and IEEE TV. The question is: why should IEEE members take the time to commute to a local chapter meeting if they can get the same content, with a more professional delivery method, online in the comfort of their office or home? Can we offer services and value that will attract members to local chapter events? Which parameters are most important: the topic, the networking opportunities, or maybe the speaker? Can we combine online and local activities to provide even more benefits for our members? I will be eagerly waiting to see what you think. Post your comments and thoughts here in this blog.

ComSocSCV Meeting Report: 40/100 Gigabit Ethernet – Market Needs, Applications, and Standards


 

Introduction

 

At its October 13, 2010 meeting, IEEE ComSocSCV was most fortunate to have three subject matter experts present and discuss 40G/100G Ethernet, the first dual-speed IEEE 802.3 Ethernet standard.  The market drivers, targeted applications, architecture and overview of the recently ratified IEEE 802.3ba standard, and the important PHY layer were all explored in detail.  A lively panel discussion followed the three presentations.  In addition to pre-planned questions from the moderator (ComSocSCV Emerging Applications Director Prasanta De), there were many relevant questions from the audience.  Of the 74 meeting attendees, 52 were IEEE members.

 

The presentation titles and speakers were as follows:

1. Ethernet’s Next Evolution – 40GbE and 100GbE by John D’Ambrosia of Force10 Networks

2. The IEEE Std 802.3ba-2010 40Gb/s and 100Gb/s Architecture by Ilango Ganga of Intel Corp

3. Physical Layer (PCS/PMA) Overview by Mark Gustlin of Cisco Systems

Note:  All three presentation PDFs may be downloaded from the IEEE ComSocSCV web site – 2010 Meeting Archives section (http://www.ewh.ieee.org/r6/scv/comsoc/ComSoc_2010_Presentations.php)

 

Summary of Presentations 

 

1.  The IEEE 802.3ba standard was ratified on June 17, 2010 after several years of hard work.  What drove the market need for this standard?  According to John D’Ambrosia, the “bandwidth explosion” has created bottlenecks everywhere.  In particular, an increased number of users, faster access rates and methods, and new video-based services have created the need for higher speeds in the core network.  Mr. D’Ambrosia stated, “The IEEE 802.3ba standard for 40G/100G Ethernet will eliminate these bottlenecks by providing a robust, scalable architecture for meeting current bandwidth requirements and laying a solid foundation for future Ethernet speed increases.”  John sees 40G/100G Ethernet as an enabler of many new network architectures and high-bandwidth/low-latency applications.

 

Three such core networks were seen as likely candidates for higher speed Ethernet penetration: campus/enterprise, data center, and service provider networks.  John showed many illustrative graphs that corroborated the need for higher speeds in each of these application areas.  The “Many Roles and Options for Ethernet Interconnects (in the Data Center),” “Ethernet 802.3 Umbrella,” and “Looking Ahead – Growing the 40GbE/100GbE Family” charts were especially enlightening.  We were surprised to learn of the breadth and depth of the 40G/100G Ethernet standard, which can be used to reduce the number of links for: chip-to-chip/modules, backplane, twin-ax, twisted pair (data center), MMF, and SMF.  This also improves energy efficiency, according to Mr. D’Ambrosia.

 

Looking beyond 100GbE, John noted that the industry is being challenged on two fronts: low cost, high density 100GbE, and the next rate of Ethernet (?).  To be sure, the IEEE 802.3ba Task Force cooperated with ITU-T Study Group 15 to ensure the new 40G/100G Ethernet rates are transportable over optical transport networks (i.e., the OTN).  But what about higher fiber optic data rates?  Mr. D’Ambrosia identified the key higher speed market drivers as data centers, Internet exchanges, and carriers’ optical backbone networks.  He predicted that the economics of the application will dictate the solution.

 

2.  Ilango Ganga presented an overview of the IEEE 802.3ba standard, which has the following characteristics:

 

  • Addresses the needs of computing, network aggregation and core networking applications
  • Uses a common architecture for both 40 Gb/s and 100 Gb/s Ethernet
  • Uses the IEEE 802.3 Ethernet MAC frame format
  • The architecture is flexible and scalable
  • Leverages existing 10 Gb/s technology where possible
  • Defines physical layer technologies for backplane, copper cable assembly and optical fiber medium

 

Mr. Ganga noted there were several sublayers that comprise the IEEE 802.3ba standard:

 

  • MAC (Medium Access Control) – Data encapsulation, Ethernet framing, addressing, and error detection (e.g., CRC).  The term “Medium Access Control” is a carryover from the days when Ethernet used CSMA/CD to transmit on a shared medium.  Today, nearly all Ethernet MACs just use the Ethernet frame format and operate over non-shared, point-to-point physical media.
  • RS (Reconciliation Sublayer) – Converts the MAC serial data stream to the parallel data paths of the XLGMII (40 Gb/s) or CGMII (100 Gb/s).  It also provides alignment at the start of the frame, while maintaining the total MAC transmit IPG.
  • 40GBASE-R and 100GBASE-R PCS (Physical Coding Sublayer) – Encodes 64-bit data and 8-bit control of the XLGMII or CGMII into 66-bit code groups for communication with the 40GBASE-R and 100GBASE-R PMA (64B/66B encoding).  Distributes data to multiple physical lanes and provides lane alignment and deskew (needed because signals on each lane arrive at the receiver at different times); see the transmit-side sketch after this list.  There is also a management interface to control and report status.
  • Forward Error Correction (FEC) sublayer – Optional sublayer for 40GBASE-R and 100GBASE-R to improve the BER performance of copper and backplane PHYs.  FEC operates on a per-PCS-lane basis at a rate of 10.3125 GBd for 40G and 5.15625 GBd for 100G.
  • 40GBASE-R and 100GBASE-R PMA (Physical Medium Attachment) – Adapts the PCS to a range of PMDs.  Provides: bit-level multiplexing or mapping from n lanes to m lanes; clock and data recovery; and optional loopback and test pattern generation/checking functions.
  • 40GBASE-R and 100GBASE-R PMD (Physical Medium Dependent) – Interfaces to the various transmission media (e.g., backplane, copper, or optical fiber).  Handles transmission/reception of data streams to/from the underlying wireline physical medium.  Provides signal detect and fault functions to detect fault conditions.  There are different PMDs for each of the two speeds (40 and 100 Gb/s):

            -40G PMDs:  40GBASE-KR4, 40GBASE-CR4, 40GBASE-SR4, 40GBASE-LR4  

            -100G PMDs: 100GBASE-CR10, 100GBASE-SR10, 100GBASE-LR4, 100GBASE-ER4

  • Auto-Negotiation – Used for copper and backplane PHYs to detect the capabilities of the link partners and configure the link to the appropriate mode.  Allows FEC capability negotiation and provides parallel detection capability to detect legacy PHYs
  • Management interface – Uses the optional MDIO/MDC management data interface specified for management of 40G and 100G Ethernet Physical layer devices
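
To make the PCS lane distribution concrete, here is a minimal Python sketch of the transmit side (our illustration, not code from the standard or the talks).  It round-robins stand-in "66-bit blocks" across PCS lanes and inserts a per-lane alignment marker at a toy period; the real standard uses 4 PCS lanes for 40G and 20 for 100G, a unique 66-bit marker pattern per lane, and a far longer marker period (on the order of one marker per 16K blocks per lane).

```python
# Transmit-side sketch of 802.3ba-style PCS lane distribution. Each
# "block" here is a bytes object standing in for one 66-bit code group.
NUM_LANES = 4        # 40GBASE-R uses 4 PCS lanes; 100GBASE-R uses 20
MARKER_PERIOD = 8    # placeholder; the standard's period is ~16K blocks

def alignment_marker(lane):
    # Placeholder; the standard defines a unique 66-bit pattern per lane.
    return b"AM" + str(lane).encode()

def distribute(blocks, num_lanes=NUM_LANES, period=MARKER_PERIOD):
    """Round-robin blocks onto PCS lanes, inserting a per-lane alignment
    marker every `period` blocks so the receiver can deskew the lanes."""
    lanes = [[] for _ in range(num_lanes)]
    sent = [0] * num_lanes
    for i, blk in enumerate(blocks):
        lane = i % num_lanes               # per-66-bit-block round robin
        if sent[lane] % period == 0:       # periodic alignment marker
            lanes[lane].append(alignment_marker(lane))
        lanes[lane].append(blk)
        sent[lane] += 1
    return lanes

data = [("B%02d" % i).encode() for i in range(32)]
for lane, stream in enumerate(distribute(data)):
    print(lane, stream[:5])
```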

 

These sublayers were illustrated for both 40G and 100G Ethernet with several layer diagrams showing each functional block and the inter-sublayer interfaces.  For the electrical interfaces, either chip-to-chip or chip-to-module electrical specifications might be implemented.  It was noted that the PMD specification defines the MDI electrical characteristics.  Next, 40G and 100G Ethernet functional block diagram implementation examples were shown.  Finally, Ilango identified two future standards efforts related to IEEE Std 802.3ba:

 

  • The IEEE P802.3bg task force is developing a standard for a 40 Gb/s serial single-mode fiber PMD
  • A Call For Interest on 100 Gb/s backplane and copper cable assemblies is scheduled for Nov ’10

 

3.  Mark Gustlin explained the all-important PHY layer, which is the heart of the 802.3ba standard.  The two key PHY sublayers are the PCS (Physical Coding Sublayer) and the PMA (Physical Medium Attachment).

 

  • The PCS performs the following functions: delineates Ethernet frames; supports the transport of fault information; provides the data transitions needed for clock recovery on SerDes and optical interfaces; bonds multiple lanes together through a striping/distribution mechanism; and supports data reassembly in the receive PCS, even in the face of significant parallel skew and multiple multiplexing locations.
  • The PMA performs the following functions: bit-level multiplexing from M lanes to N lanes; clock recovery, clock generation, and data drivers; and loopbacks plus test pattern generation and detection (a simplified sketch of the lane multiplexing follows this list).
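
As referenced above, here is a minimal sketch of the PMA's m:n lane multiplexing idea (our illustration, with simplified bit interleaving; the helper name pma_mux is ours, and this is not the normative 802.3ba procedure): several PCS lanes are bit-interleaved onto each physical lane, the way 100GBASE-R can carry 20 PCS lanes over 10 physical lanes.

```python
# Sketch of PMA bit-level multiplexing: map n PCS lanes onto m physical
# lanes by bit-interleaving n/m PCS lanes per physical lane.

def pma_mux(pcs_lanes, num_phys):
    """pcs_lanes: equal-length lists of bits. Returns num_phys physical
    lanes, each interleaving len(pcs_lanes) // num_phys PCS lanes."""
    group = len(pcs_lanes) // num_phys
    phys = []
    for p in range(num_phys):
        members = pcs_lanes[p * group:(p + 1) * group]
        out = []
        for bits in zip(*members):   # one bit from each member in turn
            out.extend(bits)
        phys.append(out)
    return phys

# Example: 20 PCS lanes onto 10 physical lanes (100GBASE-R style ratio).
# Each "bit" is tagged with its PCS lane id so the interleave is visible.
pcs = [[lane] * 4 for lane in range(20)]
for i, lane in enumerate(pma_mux(pcs, 10)):
    print(i, lane)
```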

 

Mark drilled down into the important multi-lane PHY functions of transmit data striping and receiver data alignment.  These mechanisms are necessary because all 40G/100G Ethernet PMDs have multiple physical paths, or “lanes”: multiple fibers, coax cables, wavelengths, or backplane traces.  Individual lane bit rates are 10.3125 Gb/s or 25.78125 Gb/s (a new serial PMD will have a rate of 41.25 Gb/s).  Module interfaces are also multiple lanes, and not always the same number of lanes as the PMD interface.  Therefore the PCS must support a mechanism to distribute data to multiple lanes on the transmit side, and then reassemble the data in the face of skew on the receive side before passing it up to the MAC sublayer.
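
Here is a small receive-side sketch of that reassembly step (again our simplified illustration; the real PCS uses defined 66-bit marker patterns and bounded skew budgets): each lane is aligned on its first placeholder alignment marker, the markers are dropped, and the blocks are re-interleaved in round-robin order.

```python
# Receive-side sketch: align each PCS lane on its (placeholder) alignment
# marker to remove arrival skew, drop the markers, and re-interleave the
# 66-bit blocks in round-robin order to rebuild the original stream.
import random

def deskew_and_reassemble(lanes, marker_prefix=b"AM"):
    aligned = []
    for stream in lanes:
        # Everything before the first alignment marker is skew/garbage.
        start = next(i for i, blk in enumerate(stream)
                     if blk.startswith(marker_prefix))
        aligned.append([b for b in stream[start:]
                        if not b.startswith(marker_prefix)])
    out = []
    for row in zip(*aligned):          # round-robin across the lanes
        out.extend(row)
    return out

# Four lanes carrying blocks B00..B07 round-robin, each with random skew.
lanes = []
for lane in range(4):
    blocks = [b"AM" + str(lane).encode()] + \
             [("B%02d" % i).encode() for i in range(lane, 8, 4)]
    lanes.append([b"skew"] * random.randint(0, 3) + blocks)
print(deskew_and_reassemble(lanes))    # -> [b'B00', b'B01', ..., b'B07']
```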

 

Like Ilango, Mark touched on the topic of higher speed (than 100G) Ethernet.  He speculated that the next higher speed might be 400 Gb/s, or even 1 Tb/s; Mr. Gustlin opined that it was too early to tell.  He noted that the IEEE 802.3ba architecture is designed to be scalable.  In the future, it can support higher data rates by increasing the bandwidth per PCS lane and the number of PCS lanes.  He suggested that for 400 Gb/s, the architecture could be, for example, 16 lanes at 25 Gb/s, with the same block distribution and alignment marker methodology.  Mark summed up by reminding us that the 40G/100G Ethernet standard supports an evolution of optics and electrical interfaces (for example, a new single-mode PMD will not need a change to the PCS), and that the same architecture (sublayers and the interfaces between them) can support future, faster Ethernet speeds.

 

Panel Discussion/ Audience Q and A Session 

 

The ensuing panel session covered 40G/100G Ethernet market segments, applications (data center, Internet exchanges, WAN aggregation on the backbone, campus/enterprise, etc.), competing technologies (e.g., InfiniBand for the data center), and the timing of implementations (e.g., on servers, switches, and network controllers).  There were also a few technical questions for clarification, and research questions related to single-lane high speed links.  It was noted by this author that, almost 10 years after standardization, servers in the data center have only recently included 10G Ethernet port interfaces, while 10G Ethernet switches only now can switch multiple ports at wire-line rates.  So how long will it take for 40G/100G Ethernet to be widely deployed in its targeted markets?  The panelists concurred that more and more traffic is being aggregated onto 10G Ethernet links, and that this will drive the need for 40G Ethernet in the data center.  Mark Gustlin said, “100GE is needed today for uplinks in various layers of the network.”  But the timing is uncertain.  Higher speed uplinks on Ethernet switches, high performance data centers (e.g., Google), Internet exchanges, wide area network aggregation, and box-to-box communications were seen as the first real markets for 40G/100G Ethernet.  Each market segment/application area will evolve at its own pace, but the 40G/100G Ethernet standard will surely be an enabler of all of them.

 

The final question was asked by former IEEE 802.3 Chair Geoff Thompson.  Geoff first noted that the 40G/100G Ethernet standard, and all the higher speed Ethernet studies being worked in IEEE 802.3, are for the core enterprise or carrier backbone network.  He then asked the panelists when there would be big enough technological advances in the access or edge network to enable higher speeds there, i.e., the on-ramps/off-ramps to the core network.  The panelists could not answer this question, as it was too far from their areas of expertise.  In particular, nothing was said about the very slow-to-improve telco wireline access network (DSL or fiber) and the need to build out fiber closer to business and residential customers to achieve higher access rates.  Nonetheless, the audience was very pleased to learn that the 802.3ba architecture is scalable and seems to be future-proof for higher speed Ethernet.


Author Notes on 40G/ 100G Ethernet Market: 

 

  • The 802.3ba standard also complements efforts aimed at delivering greater broadband access.  An example is the Federal Communications Commission’s “Connecting America” National Broadband Plan, which calls for 100 Mbit/s access for a minimum of 100 million homes across the U.S.  If that were to happen, higher speed optical links would be needed between telco central offices and in the core and backbone networks.
  • We think that this standard will accelerate the adoption of 10G Ethernet now that higher-speed 40G/100G pipes are available to aggregate scores of 10G Ethernet links. By simplifying current link aggregation schemes, it will provide concrete benefits such as lowered operating expense costs and improved energy efficiencies.
  • Key stakeholders for IEEE 802.3ba will include users as well as makers of systems and components for servers, network storage, networking systems, high-performance computing, data centers.  Telecommunications carriers, and multiple system operators (MSOs) should also benefit as they can offer much better cost/ performance to their customers.

References:

 

1.  For further discussion and comments on 40G/ 100 G Ethernet, such as server virtualization and converged networks driving the need for higher network data rates, please refer to this article: When will 40G/100G Ethernet be a real market? https://techblog.comsoc.org/2010/09/09/when-will-40g100g-ethernet-be-a-r…

 

2.  IEEE ComSocSCV web site – 2010 Meeting Archives section (http://www.ewh.ieee.org/r6/scv/comsoc/ComSoc_2010_Presentations.php) for presentation slides.

A Perspective of Triple Play Services: AT&T U-Verse vs Verizon FiOS vs Comcast Xfinity

Note:  This article is co-authored by IEEE ComSocSCV officers Sameer Herlekar and Alan J. Weissberger.  Some information used in this article was gathered during a July visit to AT&T Labs in San Ramon, CA.

With the recent proliferation of triple-play (high-speed Internet, high-definition television, and phone) services being offered by telcos (such as Verizon and AT&T) and MSOs/cable operators (including Comcast and Time Warner Cable), subscribers may be able to choose among an array of telecommunications services to meet their needs.  In some geographical areas, however, the MSO is the only choice for true triple play services, because the telco has not built out its advanced network to cover every U.S. city.  For example, if you live in Santa Clara, CA, the heart of Silicon Valley, you can only get triple play services from Comcast.  In fact, if you are not a U-Verse customer, the ADSL-based Internet service you can obtain is much lower speed than the VDSL2-based High Speed Internet AT&T offers as part of U-Verse.

Many questions arise as to the efficacy of these triple-play services delivered by the telcos and MSOs.  Are these services accessible to all potential subscribers, and what do subscribers think about them?

A recent thread on the IEEE ComSoc SCV email Discussion group (free registration for all IEEE members at www.comsocscv.org) yielded a wealth of first-hand information on precisely the aforementioned issues.

Telecommunications service provisioning, like any other business, is driven by customer demand, which in turn is determined by subscribers’ perceived need for the service(s), the quality of the offered service(s), and subscriber awareness of their availability (shaped by the marketing of the services by their respective providers).

The explosive growth of social networking sites including Facebook, Twitter and MySpace, video-sharing websites like YouTube, and online games such as Final Fantasy and World of Warcraft indicates that subscriber demand for high-bandwidth Internet services is at an all-time high. Combined with the growing demand for high-definition (HD) television programming, overall subscriber demand for bandwidth is growing exponentially. Consequently, both telcos and cable operators have been forced to upgrade their network hardware and architectures to accommodate the ever-burgeoning demand for bandwidth. At the same time, the key business objective of staying profitable has not been lost on the service providers, who have responded by offering customers the so-called triple-play services of high-speed Internet, HD television and digital phone service.

The two principal telcos in the telecommunications services sector are Verizon (VZ) and AT&T. According to a report released by Information Gatekeepers Inc. (IGI) on July 15, 2010, the two companies in a recent year combined for 76% of total capital expenditure by major phone companies and over 46% of the total capital spent that year by all telecommunications carriers.

According to a Wall Street Journal article in July titled “Verizon’s fiber optic hole” by Martin Peers, VZ has invested $23 billion on their triple-play service offering FiOS which is based on fiber-to-the-home (FTTH) technology. On the other hand, AT&T’s U-verse service features fiber-to-the-curb (FTTC)1 with copper cables reaching individual subscriber premises over a digital subscriber line (DSL) access line.

Footnote 1.  FTTC is often referred to as Fiber to the Node (FTTN) or Fiber to the Cabinet.

On a recent visit to AT&T Labs in San Ramon, CA, several IEEE ComSoc SCV officers learned that AT&T is pouring money into U-verse, as it foresees tremendous growth potential for the DSL-FTTC market.  The ComSocSCV officers went on a very impressive tour of AT&T’s U-Verse Lab, which appeared to be much bigger than most telco central offices!  AT&T is testing a FTTC/VDSL2 arrangement that will deliver three HD TV channels, High Speed Internet, and either digital voice (VoIP) or POTS.

In terms of technology, VZ’s FiOS represents a significant telco plant upgrade compared to U-verse, since the high-bandwidth capable fibers are terminated at the subscriber premises rather than at the curb or cabinet.  For AT&T’s U-verse, it is the quality of the DSL link (from the network node to the subscriber premises) which determines the perceived quality of the overall service. 

Therefore, one would be led to believe that FiOS, built on Fiber to the Premises (FTTP) technology and backed by a major telco (VZ), would be holding a large, if not the largest, portion of the telecommunications services market. However, it is surprising to note that U-verse has, in fact, been outselling FiOS by a whopping 35-40% according to the report by IGI (http://www.igigroup.com/st/pages/FIOS_UVERSE.html).

Sameer Herlekar, IEEE ComSoc SCV Technical Activities Director (and a co-author of this article), believes that the reason for the discrepancy is the larger per-connection cost entailed in deploying FiOS compared to the per-connection cost of U-verse deployments. Moreover, according to WSJ’s Martin Peers, VZ has recently down-sized its promotions and added only 174,000 net connections to the FiOS network in Q2 2010, compared to 300,000 a year earlier. On the other hand, according to Todd Spangler of Multichannel News, AT&T’s revenues from U-verse TV, Internet and voice services nearly tripled over 2009 and are approaching an annual run rate of $3 billion as it “continues to pack on video and broadband subscribers.”

However, not all potential subscribers for U-Verse can get it, while others who have just had it installed “like it a lot, when it was working.”  A recent thread on the IEEE ComSoc SCV discussion group indicated that U-Verse is simply not available in parts of Santa Clara, CA, despite U-verse cabinets being installed in the area.  The installation problems experienced by some Discussion Group members seem to have been resolved, but they highlight the “growing pains” AT&T is experiencing to make the service work reliably and correctly.

Mr Herlekar states that “according to AT&T network planners, those subscribers served directly from the central office (CO) receive, at present, limited bandwidths sometimes in the order of just hundreds of bits per second. Furthermore, while some subscribers have high-speed connectivity via ADSL2 (newer installations) others have a slower connection with ADSL (older installations), both of which are slower than the state-of-the-art VDSL2 technology.”

Another key issue is technical support and customer service – troubleshooting problems and resolving them.  From the perspective of co-author Alan J. Weissberger, AT&T seems to do a much better job in this area.  Again, from the IEEE ComSocSCV Discussion list, we read of a U-Verse customer who received excellent tech support from AT&T – including customer care from an AT&T Labs Executive in San Ramon, to resolve his installation problems with TV service.  Perhaps, because AT&T is the new kid on the triple play service delivery block, it seems “they try harder.”

Yet, we’ve read that Comcast is gaining market share over the telcos in the broadband Internet market.  We suspect this is because non-triple-play telco customers can’t get the higher speeds offered by the MSOs.  Those unlucky customers have to live with older and much slower wireline access technologies (ADSL or ADSL2) from the telcos, rather than the much higher speed Internet available with VDSL2 for U-Verse or FTTP/BPON/GPON for FiOS.  How fast will AT&T and VZ build out their triple play delivery systems?  We suspect that they are not now available in a majority of geographical areas in the U.S.

Dave Burstein of DSL Prime

U.S. Cable Clobbers DSL, U-Verse, FiOS

“Comcast added 399K new cable modem customers in 2010 Q1 to 16.329M. That’s more adds than the total of AT&T (255K to 16,044K) combined with Verizon (90K to 9.3M). Time Warner was also far ahead of Verizon with 212K to 9,206K.  John Hodulik of UBS estimates 67% of the Q1 net adds will go to cable, a remarkable change from less than 50% a year ago. This is not because of DOCSIS 3.0, which at $99+ is not selling well.

Overall, cable added about 1M to over 40M. Telcos added about half a million to 33M. Add between 5% and 10% for the companies too small to appear in the chart below. While this could be the start of a precipitous decline, for now we might just be seeing the effect of price increases (Verizon, +12% in one key measure according to Bank of America) and the dramatic cut in U-Verse and now FiOS deployment.

My take is that the telcos would be damned fools not to hold more of the market so that femtocells/WiFi will provide them more robust and profitable wireless networks. Blair Levin came to a similar conclusion, that it’s too early to claim cable is the inevitable winner. But Verizon cutting FiOS by 2-4M homes is exactly the kind of damned fool move that will hurt them in the long run.  U.S. broadband is a two player game with many different possible strategies I can’t predict.”

For the complete article, including graphs and tables, please see:

http://www.dslprime.com/docsisreport/163-c/2957-us-cable-clobbers-dsl-u-verse-fios


In closing, Mr. Weissberger would like to make two key points:

1.  If U-Verse or FiOS is not offered in your geographical area, you will have to go to Comcast, TW Cable (or the other MSO in your area) by default to get high speed Internet and digital cable TV with On Demand.  Those non-triple-play-reachable customers can NOT get high speed Internet access from ATT or VZ, because those telcos haven’t upgraded their cable plant in many areas, e.g., from ADSL to FTTN with VDSL2 for U-Verse, or from ADSL to FTTP for FiOS.  In my opinion, those unreachable triple play customers are being neglected, or even discriminated against, by the two big telcos.

Hence, Comcast (or TW Cable, or whoever is the cable franchise holder in the given geographical area) wins by default.  Perhaps that’s why Comcast is signing up many more high speed Internet (above 5 or 6 Mb/s downstream) customers than AT&T or VZ.

2.  All triple play customers are in danger of losing all three services in an outage (cable break, power failure, CO/head-end server failure, etc.).  The exception is U-Verse with POTS, where you would still be able to make voice calls (but U-Verse VoIP customers would be dead in the water!).  Hence, you need to have a working cell phone in case your access network or ISP fails.  And that’s not always possible if you are in a remote area, or in the hills where cell phone coverage is bad.

When will 40G/100G Ethernet be a real market? Oct 13 ComSocSCV meeting + Postscript

10 Gigabit Ethernet took almost 10 years from the time the standard was ratified until it was deployed in large quantities within campuses, data centers, and WAN aggregation.  What are the driving applications/business needs for 40G/100G Ethernet?  And when will we see line rate multi-port switches as commercial products?

The biggest market for 40G Ethernet is in data centers.  100G Ethernet is aimed more at core network aggregation of 10G and 40G Ethernet links.  Within the data center, the main competition for 40G Ethernet is InfiniBand.  By comparison, higher-speed Ethernet will capitalize on the large installed base of Gigabit (and 10G) Ethernet.  New 40G and 100G products will become less expensive and more available over time, and will be supported by many silicon and equipment vendors.  The Ethernet standard is also universally understood by data center network administrators, so the relative costs of managing and troubleshooting Ethernet are much lower than for a niche fabric such as InfiniBand.

We will explore these issues and many others at the Oct 13 IEEE ComSocSCV meeting “40/100 Gigabit Ethernet – Market needs, Applications, and Standards.”  There will be 3 presentations followed by a panel session to be moderated by ComSocSCV officer Prasanta De.  Here is a summary of the talks:

1. Ethernet’s Next Evolution – 40GbE and 100GbE by John D’Ambrosia

This talk will provide an overview of the Ethernet ecosystem and the applications within it that drove the need for the development of the IEEE Std 802.3ba-2010 40Gb/s and 100Gb/s Ethernet Standard. Technology trends in computing and network aggregation, and their role in driving the market need for 40GbE and 100GbE, will be discussed.

2. The IEEE Std 802.3ba-2010 40Gb/s and 100Gb/s Architecture by Ilango Ganga

This session provides an overview of IEEE Std 802.3ba-2010 40Gb/s and 100Gb/s Ethernet specifications, objectives, architecture and interfaces.
The next generation higher speed Ethernet addresses the needs of computing, aggregation and core networking applications with dual data rates of 40Gb/s and 100 Gb/s. The 40/100 Gigabit Ethernet (GbE) architecture allows flexibility, scalability and leverages existing 10 Gigabit standards and technology where possible. The IEEE Std 802.3ba-2010 provides physical layer specifications for Ethernet communication across copper backplane, copper cabling, single-mode and multi-mode optical cabling systems.
The 40/100 Gigabit Ethernet utilizes the IEEE 802.3 Media Access Control sublayer (MAC) coupled to a family of 40 and 100 Gigabit physical layer devices (PHY). The layered architecture includes a multilane physical coding sublayer (PCS), physical medium attachment sublayer (PMA), and physical medium dependent sublayers (PMD) for interfacing to various physical media. It also includes an Auto-Negotiation sublayer (AN) and an optional forward error correction sublayer (FEC) for backplane and copper cabling PHYs. The optional management data input/output interface (MDIO) is used for connection between 40/100 GbE physical layer devices and station management entities. The architecture includes optional 40 and 100 Gigabit Media Independent Interfaces (XLGMII and CGMII) to provide a logical interconnection between the MAC and the Physical Layer entities. It includes the 40 and 100 Gigabit attachment unit interfaces (XLAUI and CAUI), four- or ten-lane interfaces intended for use in chip-to-chip or chip-to-module applications. It also includes the 40 and 100 Gigabit parallel physical interfaces (XLPPI and CPPI), four- or ten-lane non-retimed interfaces intended for use in chip-to-module applications with certain optical PHYs. The presentation will also outline the applications for some of the above interfaces.

3. Physical Layer (PCS/PMA) Overview by Mark Gustlin, Principal Engineer, Cisco Systems

This paper describes the Physical Coding Sublayer (PCS) and the Physical Medium Attachment (PMA) for the 40 Gb/s and 100 Gb/s Ethernet interfaces standardized within the IEEE 802.3ba task force. Both of these speeds will initially be realized with a parallel PMD approach, which requires bonding multiple lanes together through a striping methodology. The PCS protocol has the following attributes: it re-uses the 10GBASE-R PCS (64B/66B encoding and scrambling), just running 4x or 10x as fast, to provide all of the required PCS functions for the data, which will traverse multiple PMD lanes. Part of the PCS is a striping protocol which stripes the data to the PMD lanes on a per-66-bit-block basis in a round robin fashion. Periodically, an alignment block is added to each PMD lane. This alignment block acts as a marker which allows the receive side to deskew all lanes, in order to compensate for any differential delay that the individual PMD lanes experience. The PMA sublayer provides the following functions: per-input-lane clock and data recovery; bit-level multiplexing to change the number of lanes; clock generation; signal drivers; and, optionally, loopbacks and test pattern generation/checking.


Presentations are posted at 2010 Meeting archives (top left) of  www.comsocscv.org

Meeting summary is at:  https://techblog.comsoc.org/2010/10/27/comsocscv-meeting-report-40100-gigabit-ethernet-market-needs-applications-and-standards

Please see numerous comments which update the market status.

Sept 25 Workshop on Smart Grids, M2M Platforms and "the Internet of Things"

Smart Grids, M2M Platforms, “the Internet of Things” and Other Networks for Smart Devices

12:30pm – 8:00pm, September 25, 2010, Saturday at Benson Center (Bldg 301), Santa Clara University

The applications and communications aspects of smart grids, Machine to Machine (M2M) platforms and smart/ embedded devices are the focus areas of this workshop. “The Internet of things” is often used to denote the wide variety and huge number of networked devices that are now emerging. We will explore that as well as other networks (e.g. WiFi/IEEE 802.11n and home networks) which provide connectivity for emerging devices and the smart grid.

We will also examine how M2M networks will be managed and provisioned for so many embedded devices (Ericsson has predicted 20B connected devices by 2020 and other companies predict even more). IEEE ComSocSCV and NATEA are very fortunate to have so many well known speakers, including the Chair of the TIA Smart Devices Standards Committee, Sprint’s M2M Platform Manager, and the IEEE ComSoc officer who is a world expert on Power Line Communications.

There will be four presentations in each of two workshop Tracks.  The talks are followed by a panel session for each track.  The program is as follows:

Track I: Communications aspects of the Smart Grid

Smart Grid Communications: Enabling a Smarter Grid
Claudio Lima, Vice Chair of IEEE P2030 Smart Grid Architecture Standards WG

Power Line Communications and the Smart Grid
Stefano Galli, Lead Scientist Panasonic R&D

Wireless Communications for Smart Grid
Kuor-Hsin Chang, Principal System Engineer – Standards, Elster Solutions

Role of WiFi/ IEEE 802.11n and Related Protocols in Smart Grid
Venkat Kalkunte, CTO, Datasat Technologies

Track II: Smart Devices, M2M platforms, and Home Networks

Standardization as a Catalyst of M2M Market Expansion
Jeffrey Smith, CTO, Numerex Corp & Chairman, TIA TR-50 Smart Device Communications Standard Committee

Operations and Management of Mobility Applications and M2M Networks
Jason Porter, AVP, AT&T

Sprint’s Machine-to-Machine and Service Enablement Platform
Michael Finegan, West Area M2M Manager of Solutions Engineering, Sprint Emerging Solutions Group

Noise and Interference in Home Networks
Arvind Mallya, Lead Member of Technical Staff, AT&T Network Operations

There will be a panel session at the end of each Track, after the presentations


More info at:  http://m2m.natea.org/

Postscript:

As a follow-on to our very successful Smart Grid/M2M workshop at SCU, here is an article just published on M2M market forecasts and an assessment of the network changes that are needed to realize those forecasts.

Success! Aug 24 ComSocSCV Social with tech discussions, networking and inter-personal communications

Aug 24 6pm-9pm  China Stix,  Santa Clara, CA  www.comsocscv.org

During our 6pm -7:15pm networking session we had breakout groups to discuss:

  • Sprint’s M2M platform and initiatives (Led by Sprint’s M2M product mgr)
  • High speed transmission on twisted pair:  10G BaseT (LANs) and xDSL  (Sept 8 meeting topic)
  • Smart Grid, Smart Devices, Internet of Things (Sept 25 workshop topic)
  • 40G/100G Ethernet (Oct 13 meeting topic)
16 people (2 from Sprint) attended our gala dinner at China Stix in Santa Clara.  In addition, two came for the free networking session that preceded the dinner.  We had attendees driving to Santa Clara from all over the SF Bay Area, Monterey, Sacramento/Folsom and Orinda.  Lots of great conversation, camaraderie, fine food and wine.  Two lucky individuals took home a bottle of premium wine as a gift.

Here are a few of the comments/ testimonials received via email with my acknowledgement at the end of this chain:

Alan —

Thanks again for the wonderful time last Tuesday–I will make every one of these events from now on.  I hope to make the 25 Sep event, though I think I may be on a plane coming in to San Jose at the time.  You run a great meeting and social event, my friend–many thanks!

Cheers,

Karl

— KARL D. PFEIFFER, PhD, Lt Col, USAF
Assistant Professor


Thanks for a very well-organized Aug 24th social!

Alan, thanks a million from my side as well. It was well-organized and every one enjoyed the evening. I finally got to meet with Alan Earman and within minutes found out that we know several people in common. The world of connectivity does these wonders and Comsoc is a big part of it!

MP Divakar


Hello Alan and everyone,

Thank you for a great event last night.  Thank you for having us there.  We had a great time meeting everyone, the food was delicious, and the wines were tasteful.  I hope we can meet again soon.  Have a great day everyone!


Yes Alan, I agree with Sameer – this was great! I wish more people came and enjoyed the social.
Thanks,
Prasanta


Hi Alan,
Just wanted to say thank you once again for a very well-organized social yesterday at China Stix. My wife Sumi and I enjoyed meeting with everyone, the stimulating conversations and the very good food & drink.
Great job and thanks again!
Sincerely,
Sameer


Hi Alan,

Thank you so much for a great evening. I enjoyed meeting with everyone there, as well as your good choice of food. I appreciate all your good words about me. I look forward to future collaborations.

Alice


Hi Alan

I had a wonderful time at the ComSoc social. The food, wine, and conversation were fantastic. Thank you for inviting me to attend.

Thank you for sharing the panel questions with me. Perhaps we should discuss security and/or privacy aspects as they relate to regulatory issues.

I’m sure there will be a few folks interested in IPv4 addressing and connecting billions of machines; at some point we will run out of IPv4 addresses.

Kind regards,

Michael Finegan

Manager, M2M Solutions Engineering

Emerging Solutions Group at

Sprint Nextel


Chairman’s Response:

Dear All

Thanks for your compliments on the social.  Alice had earlier acknowledged she had a great time and I hope everyone else did too!
We need more of these events to make people feel good, improve their networking/ inter-personal communications skills, and exchange information and opinions.  It gets your mind off all the financial/economic/political problems of the day.  And there are sure a lot of those!
I encourage all of the attendees (see To: list) to communicate with one another and build your personal network of contacts.  You can continue the living dinner conversation via email and phone conversations.  Just reach out to those people you’d like to know better or exchange ideas/proposals with.  Let the round-two communications begin upon receipt of this email!

Thanks again for coming last night.  We especially appreciate the dedication of those who had very long commutes: Olu from Sacramento/Folsom, Karl from Monterey, Michael from Orinda, Erin from?  We really appreciate your attendance at our gala social!

Warmest regards and best wishes
Alan J Weissberger, ScD, retired Prof SCU EE Dept
IEEE ComSoc SCV Chairman www.comsocscv.org

Sept 8 ComSocSCV meeting backgrounder: High Speed Transmission on Twisted Pair in LANs and xDSL

IEEE ComSocSCV Sept 8 meeting:   High Speed Transmission on Twisted Pair in LANs and DSL
 
Our Sept 8th meeting features talks on high speed transmission in both LANs/data centers and DSL access networks.  Details at
 
Several of your ComSocSCV officers spent yesterday afternoon and early evening at ATT Labs in San Ramon.  We were surprised to learn of ATT’s extensive use of VDSL2 in FTTN deployments of U-Verse (their triple play bundled service that includes TV/VoD via IPTV, high speed Internet access, and voice, either POTS or VoIP).  They are also using VDSL to reach subscribers in Multi-Dwelling Units (MDUs).  Separate from U-Verse, ADSL2 is being used for stand-alone Internet access.
 
Our other talk will cover the status of 10G BaseT for LANs and data centers.  It’s amazing that in 1993, high speed networking meant only 100 Mbit/s.  Now 1G BaseT Ethernet is widely deployed and 10G BaseT is coming along fast (the standard has been completed).

 
Here is a brief history of twisted pair based Ethernet and xDSL, based on personal observations in the 1980s and 1990s:

 
Twisted Pair based LANs
 
In the mid 1980s, ATT had a 1 Mb/s twisted pair transmission system named “StarLAN.”  It never went anywhere, as a cheaper version of coax-based Ethernet (10Base2) was more popular.  Then in the late 1980s, Manchester-coded 10BaseT became very popular.

Notes on nomenclature: 
 
“BASE” is short for baseband, meaning that there is no frequency-division multiplexing (FDM) or other frequency-shifting modulation in use; each signal has full control of the wire, on a single frequency.
 
“T” designates twisted pair cable, where the pair of wires for each signal is twisted together to reduce radio frequency interference and crosstalk between pairs (FEXT and NEXT).
 
“UTP” = unshielded twisted pair, as in UTP-3 (voice grade) and UTP-5 (data grade) twisted pair.
 
“PMD” is the lowest sublayer in the IEEE 802.3 PHY layer.  It stands for Physical Medium Dependent.  Any coax, twisted pair, or fiber optic transmission system is, in essence, a PMD sublayer.

Continuing the story……………….
 
Sometime in 1992, ANSI X3T9.5 began developing a standard for 100 Mb/s FDDI on twisted pair, called “TP-PMD.”  Discussion Group member (and former IEEE 802.3 Chair/Vice-Chair) Geoff Thompson and I participated in that committee, which was chaired by a fellow from DEC.  In 1993 there was a performance test between the two competing twisted pair transmission technologies that were candidates for the TP-PMD standard, conducted at an independent test lab in New Hampshire.  Crescendo’s technology (based on a 3-level pulse amplitude modulation code called “MLT-3”) beat out National Semiconductor’s and was chosen as the TP-PMD standard.
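 
For readers who haven't seen it, the MLT-3 coding rule is easy to sketch (a toy illustration, not production code): the line level cycles through 0, +1, 0, -1, advancing one step on each '1' bit and holding on each '0' bit, so the fundamental frequency is at most one quarter of the bit rate. That spectral property is what made 100 Mb/s practical on voice-friendly copper.

```python
# Minimal sketch of MLT-3 line coding (as used by TP-PMD / 100Base-TX):
# the output cycles through 0, +1, 0, -1, advancing one step on each
# '1' bit and holding its current level on each '0' bit.

def mlt3_encode(bits):
    cycle = [0, +1, 0, -1]
    state = 0
    out = []
    for bit in bits:
        if bit:                      # a '1' advances the cycle
            state = (state + 1) % 4
        out.append(cycle[state])     # a '0' holds the current level
    return out

print(mlt3_encode([1, 1, 1, 1, 0, 1, 0, 0, 1]))
# -> [1, 0, -1, 0, 0, 1, 1, 1, 0]
```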
 
Also during 1993, there was a “Fast Ethernet” standards war, with HP’s 100VG-AnyLAN (new MAC and PHY) battling 100BaseT (where the Ethernet MAC was not changed).  100BaseT had one version for UTP-3 and another for UTP-5.  It was the latter version, known as 100Base-TX, that dominated the market.  Grand Junction seemed to be the ringleader of that camp, although Intel was a staunch supporter.  100Base-TX used the PMD from TP-PMD without any changes.  Ironically, Crescendo, Grand Junction, and Kalpana were all acquired by Cisco, and that’s how Cisco came to dominate the LAN switching market.
 
Years later, 1G Base T and now 10G Base T became IEEE 802.3 PMD standards.  I have not followed the market acceptance of those, but I’m sure our Sept 8 speakers from Teranetics will fill us in.

Digital Subscriber Loop  (xDSL)
 
The first version of DSL was for the Basic Rate ISDN U interface (between the network terminating unit and the voice grade twisted pair access network).  In North America, it was based on the 2B1Q line code (pulse amplitude modulation), which was selected by the T1E1.4 committee in August 1986 as a compromise, because the committee couldn’t decide among 3 completely different transmission systems.  (I was actually at that meeting in Monterey, CA, and heard the “dark horse” presentation by Andrew Siroka of Mitel Semiconductor.  He claimed BT had done extensive tests showing that 2B1Q outperformed the other systems, and that Mitel could make a transceiver with a significantly smaller die size (lower cost and power dissipation) than the other proposed systems.)
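 
The 2B1Q mapping itself is simple enough to sketch (our illustration; the level assignment below follows the common T1.601 convention): each pair of bits becomes one of four PAM levels, so the symbol rate is half the bit rate and the 160 kb/s U interface runs at 80 kbaud.

```python
# Sketch of the 2B1Q line code (ISDN U interface, ANSI T1.601): each
# 2-bit group maps to one of four PAM levels, so the symbol rate is
# half the bit rate (160 kb/s -> 80 kbaud).
LEVELS = {(1, 0): +3, (1, 1): +1, (0, 1): -1, (0, 0): -3}

def encode_2b1q(bits):
    assert len(bits) % 2 == 0, "2B1Q consumes bits two at a time"
    return [LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(encode_2b1q([1, 0, 1, 1, 0, 1, 0, 0]))   # -> [3, 1, -1, -3]
```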
 
Bellcore’s Joe Lechleider, a member of the T1E1.4 committee (as was I), had suggested that asymmetry would allow higher speeds than ISDN’s 160 kb/s, perhaps as high as 1.5 Mb/s.  The theoretical results wound up being a lot more than 1.5 Mb/s, depending on line length, bridge taps, the condition of the copper, etc.  However, no standards group seemed to be interested at the time.
 
In the early 1990s, there was a new vision of telco TV: initially over fiber optic cable to the home, but some thought it might be feasible to transmit one video stream in one direction over a 1.5 Mb/s twisted pair, using the frequencies above 100 kHz.  In 1992 the T1E1.4 standards committee took on the Asymmetric Digital Subscriber Line (ADSL) project.  There were several entries in the official T1E1.4 standards competition:
 
  • Stanford/Amati DMT led by John Cioffi and his grad students
  • Bellcore/UCLA/Broadcom QAM
  • ATT’s CAP (Carrierless Amplitude/Phase modulation, actually a DSP-based version of QAM)
The adaptive multicarrier system known as Discrete Multi-Tone, or DMT (with bit swapping between bins to track noise changes), won the competition by having much better noise margins.  The closest test showed an 11 dB advantage for DMT; some of the tests showed 30 dB improvements.  T1E1.4 picked DMT on March 10, 1993.  That standard did not become popular at first.  Instead, ATT spin-offs (Globespan, Lucent and Paradyne) started making CAP-based DSL chips and equipment, which ATT and other telcos started to deploy.  In 1996, T1E1.4 took on an ADSL version 2 standard (T1.413v2).  Due to a lot of controversy, there was a primary standard based on DMT (to which I contributed, writing several sections) and an appendix on CAP.  DMT quickly prevailed as Alcatel started designing and deploying DSLAMs based on it.  The ADSL Forum recognized only DMT, not CAP.  Carriers all over the world (like Pac Bell, Singapore Telecom and many others) signed exclusive deals with Alcatel, and DMT-based DSL became dominant.  CAP was then dead.
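The essence of DMT's advantage is per-tone adaptation. Here is a minimal bit-loading sketch (a textbook-style simplification, not Amati's actual algorithm): each tone carries roughly log2(1 + SNR/gap) bits, where the gap reflects the target error rate plus margin, so bits naturally migrate away from noisy bins; bit swapping performs the same tracking at run time as the noise changes.

```python
# Minimal DMT bit-loading sketch (a simplification, not the actual
# Amati/T1.413 algorithm): each tone carries about log2(1 + SNR/gap)
# bits, capped per tone, so loading adapts to the measured channel.
import math

def bit_load(snr_db_per_tone, gap_db=9.8, max_bits=15):
    gap = 10 ** (gap_db / 10)        # SNR gap for target BER plus margin
    bits = []
    for snr_db in snr_db_per_tone:
        snr = 10 ** (snr_db / 10)
        b = int(math.log2(1 + snr / gap))
        bits.append(min(max(b, 0), max_bits))
    return bits

# A loop whose SNR rolls off with frequency, plus a noisy notch (e.g.,
# AM radio ingress) around tones 6-7 -- those bins get zero bits.
snr = [55, 52, 48, 44, 40, 36, 8, 6, 28, 24]
print(bit_load(snr), "total bits/symbol =", sum(bit_load(snr)))
```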
Aware led a consortium of groups who did not want to license the Stanford/Amati patents, introducing “G.lite,” which tried to remove DMT’s very essential bit swapping and reduced the number of tones from 256 to 128 to “reduce cost.”  While G.lite became an ITU-T standard (the editor was from Intel, which later exited the xDSL business), it failed in the marketplace.  Instead, G.dmt (and then ADSL2+) went the other direction, toward higher speeds and an actual increase in the number of tones.  DMT emerged as the worldwide transmission system for ADSL.
 
VDSL was moving on a parallel track, but was perceived to be a smaller market due to distance limitations.  There was an equally fierce CAP vs. DMT standards war, such that T1E1.4 could not select a clear winner.  VDSL was envisioned to use ATM exclusively for layer 2 transport, not Ethernet.  But the IEEE 802.3ah Ethernet in the First Mile (EFM) committee selected DMT-VDSL as the short range copper interface and defined a convergence sublayer to make the Ethernet MAC ride over VDSL (and SHDSL-2).  Despite much fanfare, I don’t believe that standard was ever implemented.  But now, ATT has deployed VDSL2 as part of its U-Verse triple play transmission system, from the fiber cabinet at the node to the customer premises.  Unlike the earlier versions of the VDSL and EFM standards, VDSL2 reserves a POTS band for analog telephony.
Note:  I exited the DSL space over 10 years ago, so I have not kept up with its progress.  However, I got the following information from a very credible source, who must remain anonymous:
 
“There was a second VDSL Olympics in 2003, also hosted by Bellcore and also BT.   DMT VDSL systems were submitted by Alcatel and by Ikanos, while QAM/CAP systems were submitted by Lucent and by Metalink/Infineon.   The results were similar to the first VDSL Olympics in that the advantage was roughly 10 dB.   That did it, CAP/QAM died at a June 2003 T1E1.4 meeting where DMT was selected for VDSL2.”
 
Yesterday we learned that ATT is using VDSL2 for its U-Verse FTTN transmission system, between the Optical Network Unit (ONU) and the subscriber’s Network Termination unit (NT), over a copper twisted pair.  They are also using it as a U-Verse distribution system within Multi-Dwelling Units.  I was astonished that ATT claimed very good performance at 5,000 feet of line length, and that they often deploy even longer VDSL loops.  They are now testing 4 HD video streams + high speed Internet + POTS over a single VDSL2 loop!  We expect test results to be announced later this year.
 
Our ASSIA speaker will fill in all the gaps on Sept 8th.  I invited Prof. Cioffi to the meeting, but he wrote that he might have to travel to Asia.  Cioffi and I taught DSL classes at IEEE Infocom, and we co-authored several T1E1.4 standards contributions.  Later, I taught many ADSL architecture classes at private companies that were implementing or deploying ADSL/SDSL.  My partner was Amati’s chief engineer John Bingham (with whom I had worked at Fairchild in the spring and summer of 1970).  Bingham and I got so good at teaching the ADSL class that we joked we could swap sections (he did the modulation and transmission section, while I did the architecture and OAM section).  He asked me to write the introduction and the second chapter of his book on ADSL network architecture.  You can read it online (isn’t this a copyright violation?):
 
 
You can buy the book here:
 
 
Book Review from IEEE Communications Mag, Sept 2001:
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=948376&userType=inst  (IEEE Explore account required to access full text)
End of Story……………………………………………………………………..

Cloud Computing: Impact on IT Architecture, Data Centers and the Network: July 14th IEEE ComSocSCV meeting

Three keynote talks from VMware, Microsoft and Ericsson will be followed by a lively panel/Q&A session with Juniper Networks also participating. We are all very excited about this comprehensive and well balanced look at cloud computing from both a computing and a communications perspective. It should be one of the best technical meetings of the year!

www.comsocscv.org

Presentation Titles and Abstracts

 

Building Many Bridges to the Cloud, Robin Ren, Director of R&D, Cloud Applications and Services, VMware

Cloud computing is on every CIO’s top priority list nowadays. However, like any “game-changing” technology in history, today’s cloud computing field can appear both exciting and chaotic. Most large technology companies claim to have at least one cloud product or service, and many start-ups are trying different ideas as well. In the introduction, we will offer answers to some basic questions:

-What is Cloud Computing?
-Why does Cloud matter?
-How will Cloud change the IT industry?

We’ll look at the major cloud computing players, analyzing the big trends and comparing the different approaches. In the end, there are several valid ways to move from traditional IT to the cloud, targeted at different audiences and workloads. It is important to understand how you can participate in, and benefit from, this new “IT gold rush.”

Cloud Data Centers and Networking Trends, Alan G. Hakimi, Senior Cloud Architect, Microsoft Services Enterprise Strategy and Architecture

The data center is at the heart of cloud computing. It brings dynamic virtualized server and storage environments to users via networks that provide cloud connectivity. The networks used to access cloud services will need more intelligence in several areas: they will have to react quickly to changes in the computing/storage environment, recover from faults, and scale up or down. This session will describe some architectural patterns in IaaS with respect to designing for resiliency and bandwidth. We will discuss the differences between traditional data centers and cloud data centers, including intra- and inter-data center communications. This session will also address networking trends with respect to federating clouds and providing secure, high-quality network access to the data center.

Cloud Connectivity – offensive or defensive play?, Arpit Joshipura, VP of Strategy and Market Development, Ericsson Silicon Valley
Cloud services and advanced devices are worthless without connectivity. At the same time, cloud services increase in value with the addition of mobility. This talk focuses on the value of connectivity to the cloud and discusses the mobile aspects that an operator can leverage. With the asset of connectivity, an operator can use the cloud as both an offensive and a defensive play. The talk outlines the details of that strategy and identifies the requirements on connectivity, including type of access, SLAs, QoS, interoperability and standardization.

Additional Panelist: Colin Constable, Chief Enterprise Architect within the office of the CTO, Juniper Networks

Bios:

Robin Ren is a Director of R&D at VMware in Palo Alto, California. He manages an engineering team in the new Cloud Applications and Services BU and is involved in many of VMware’s cloud initiatives at the Infrastructure-, Platform-, and Application-as-a-Service layers. He is also the headquarters ambassador for the VMware R&D Center in Beijing, China.

Alan Hakimi joined Microsoft in 1996 as a member of the Microsoft Consulting Services group. Alan is an IEEE member and has MCA and CITA-P architect certifications. He is currently working in Microsoft Services leading efforts on Enterprise Strategy and Cloud Architecture. Alan enjoys cycling, hiking, making music, cooking, and studying philosophy. His blog on Zen and the Art of Enterprise Architecture is located at http://blogs.msdn.com/zen.

Arpit Joshipura heads up Strategy & Market Development for Ericsson in Silicon Valley. In this role, he is responsible for network operator architecture strategies, including IP, convergence, and cloud. He is a valley veteran who has held leadership roles, both business and engineering, at several startups and established companies. Arpit is a veteran speaker and panelist at ComSocSCV meetings. He also gives Indian classical music performances and plays the harmonium.

Colin Constable joined Juniper Networks in September 2008. He previously spent twelve years at Credit Suisse, most recently as Chief Network Architect & EMEA Infrastructure CTO. In that role he created and published the “Credit Suisse Network Vision 2020,” focused on seven sub-domains of networking. He built a governance framework, leveraging the strategy’s structure to ensure cross-technology-tower engagement and decision making, both technical and financial. He also led numerous programs to increase cross-technology technical knowledge.

July 14th (6pm-9pm) at National Semiconductor, Santa Clara, CA  

Timeline:

6pm-6:30pm Refreshments and Networking

6:30pm-6:40pm Opening Remarks

6:40pm-8pm Presentations (3)

8pm-8:45pm Panel Session + Audience Q and A

8:45pm-9pm Informal Q and A with panelists

ITU Cloud Computing Focus Group and IEEE Cloud Computing Standards Study Group – will they fill the standards void?

Introduction- The need for Cloud Computing Standards
 
Cloud computing deployments are being announced on an almost daily basis. Cloud computing speeds and streamlines application deployment without upfront capital costs for servers and storage. For this reason, many enterprises, governments and network/service providers are now considering adopting cloud computing to provide more efficient and cost-effective network services. The venture capital firm Sand Hill Group has concluded that cloud computing represents one of the largest new investment opportunities on the horizon, and the cloud computing market is forecast to be very big by IDC, Gartner Group, and other market research firms. But there is a lot of confusion regarding service delivery methods, and a lack of interoperability. There are no solid standards for Infrastructure as a Service, Platform as a Service or Software as a Service. This makes it difficult to exchange information between cloud service providers, and for users to change providers. It may also present a problem when bursting between a private cloud and different public clouds. Interoperability facilitates secure information exchange across platforms.
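To make the interoperability problem concrete, here is a toy Python sketch of the situation described above. Both provider APIs are entirely hypothetical; the point is that the same operation, launching a server, takes incompatible forms on each cloud:

# Hypothetical, incompatible IaaS APIs -- not real SDKs.

def launch_on_provider_a(image_id: str, size: str) -> str:
    """Hypothetical Provider A: instance sizes are named ('small'/'large')."""
    return f"a-instance:{image_id}:{size}"

def launch_on_provider_b(template: str, cpu: int, ram_gb: int) -> str:
    """Hypothetical Provider B: explicit CPU/RAM counts instead of named sizes."""
    return f"b-vm:{template}:{cpu}x{ram_gb}"

# Bursting from a private cloud to either public cloud means writing and
# maintaining a separate adapter per provider -- the confusion described above.
print(launch_on_provider_a("ubuntu-10.04", "small"))
print(launch_on_provider_b("ubuntu-10.04", cpu=1, ram_gb=2))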

Camille Mendler, Vice President of Research at Yankee Group, said: “Cloud computing is the future of ICTs. It’s urgent to address interoperability issues which could stall global diffusion of new services. Collaboration between private and public sectors is required.” The fact that lack of interoperability is a huge problem was highlighted at the recent Cloud Connect Conference (Santa Clara, CA, March), which was a very sobering experience for this author. At the conference, it was revealed that there was no umbrella set of standards for cloud computing and that no single standards body claimed ownership of comprehensive cloud computing specifications. IBM’s VP of Cloud Services Ric Telford was asked what he thought about the huge growth forecast for cloud computing. Mr. Telford said: “I have no problem with those numbers (40% in three years; 70% in five) as long as you include the caveat, it could be any one of five delivery models.” So the industry needs to define and standardize those methods of delivering cloud services and applications to users, he said.

ITU-T Establishes Cloud Computing Focus Group

A new ITU-T Focus Group on Cloud Computing has been formed to enable a global cloud computing ecosystem where interoperability facilitates secure information exchange across platforms. The group will take a global view of standards activity in the field and will define a future path for greatest efficiency, creating new standards where necessary while also taking into account the work of others and proposing them for international standardization.

Malcolm Johnson, Director of ITU’s Telecommunication Standardization Bureau, said: “Cloud is an exciting area of ICTs where there are a lot of protocols to be designed and standards to be adopted that will allow people to best manage their digital assets. Our new Focus Group aims to provide some much needed clarity in the area.”

ITU-T study groups were invited to accelerate their work on cloud at the fourth World Telecommunication Policy Forum (Lisbon, 2009) and at an ITU-hosted meeting of CTOs in October 2009. The CTOs highlighted network capabilities as a particular area of concern, where increased services and applications using cloud computing may result in the need for new levels of flexibility in networks to accommodate unforeseen and elastic demands.

Vladimir Belenkovich, Chairman of the ITU Focus Group on Cloud Computing: “The Focus Group will investigate requirements for standardization in cloud computing and suggest future study paths for ITU. Specifically, we will identify potential impacts in standards development in other fields such as NGN, transport layer technologies, ICTs and climate change, and media coding.”

A first, brief exploratory phase will determine standardization requirements and suggest how these may be addressed within ITU study groups. Work will then quickly begin on developing the standards necessary to support the global rollout of fully interoperable cloud computing solutions. From the standardization viewpoint, and within the competences of ITU-T, the Focus Group will contribute the telecommunication aspects (i.e., transport via telecommunications networks, security aspects of telecommunications, service requirements, etc.) in order to support services/applications of cloud computing that make use of telecommunication networks; specifically, it will:

  • identify potential impacts on standards development and priorities for standards needed to promote and facilitate telecommunication/ICT support for cloud computing
  • investigate the need for future study items for fixed and mobile networks in the scope of ITU-T
  • analyze which components would benefit most from interoperability and standardization
  • familiarize ITU-T and standardization communities with emerging attributes and challenges of telecommunication/ICT support for cloud computing
  • analyze the rate of change for cloud computing attributes, functions and features for the purpose of assessing the appropriate timing of standardization of telecommunication/ICT in support of cloud computing

The Focus Group will collaborate with worldwide cloud computing communities (e.g., research institutes, forums, academia), including other SDOs and consortia. The first meeting of the FG Cloud is 14-16 June 2010 in Geneva, Switzerland. ITU-T TSAG is the parent group of this Focus Group. More information on that meeting:

1st Meeting of Cloud Focus Group (ITU-T Members only):

The official combined announcement of the ITU-T FG Cloud establishment and first meeting is contained in TSB Circular 114: http://www.itu.int/md/T09-TSB-CIR-0114/en

If you wish to participate in the first meeting, note that there is an online registration form posted at: http://www.itu.int/cgi-bin/htsh/edrs/ITU-T/studygroup/edrs.registration.form?_eventid=3000151

The Focus Group web page (http://www.itu.int/ITU-T/focusgroups/cloud/) will be updated as required; I recommend checking it regularly for new information. From that page, you may subscribe to the mailing list ([email protected]) and access the meeting documentation for the June meeting (http://ifa.itu.int/t/fg/cloud/docs/1006-gva/).

The deadline for contributions is 7 June 2010.

ITU-T Distributed Computing Backgrounder:

A recently published ITU-T Technology Watch Report titled  ‘Distributed Computing: Utilities, Grids and Clouds’ describes the advent of clouds and grids, the applications they enable, and their potential impact on future standardization.

For further information, please refer to the ITU-T web site:

http://www.itu.int/ITU-T/newslog/ITU+Group+To+Offer+Global+View+Of+Cloud+Standardization.aspx

http://www.itu.int/ITU-T/focusgroups/cloud/

ITU-T contacts:

Sarah Parkes                                                                          
Senior Media Relations Officer
ITU
Tel: +41 22 730 6135
Mobile: +41 79 599 1439
E-mail: [email protected]                                                                              

Toby Johnson
Senior Communications Officer
ITU
Tel: +41 22 730 5877
Mobile: +41 79 249 4868
E-mail: [email protected]

IEEE Cloud Computing Standards Study Group

Call for Participation

This is a call for participation in the IEEE Cloud Computing Standards Study Group, sponsored by the IEEE Computer Society Standards Activities Board (SAB). An IEEE Standards Study Group is the initial step in the process of developing an IEEE standard and is open to all interested individuals.

Cloud computing is a new, rapidly growing model of computing which, according to the U.S. National Institute of Standards and Technology, “is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

While there is significant effort on specific cloud computing-related standards on the part of multiple entities, a major impediment to the growth of cloud computing is the lack of comprehensive high-level portability (how applications use clouds) and interoperability (how clouds work with each other) standards.

The mission of the IEEE Cloud Computing Standards Study Group is to determine the feasibility of developing an open standards profile which defines options for portability and interoperability of cloud computing resources. These profiles should address issues such as interfaces to computing, storage, network, and content resources, as well as workload (program and data) interoperability and migration, security, fault-tolerance, agency, legal and regulatory, intra-cloud policy negotiation, and financial relationships. It is expected that there will be multiple architectural approaches from which to choose.

The profiles should also support the Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS) service models, and the private cloud, community cloud, hybrid cloud, and public cloud deployment models.
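As a thought experiment, here is a minimal Python sketch of the kind of provider-neutral interface such a portability profile might standardize. All names are illustrative assumptions on my part; no such IEEE profile exists yet:

from abc import ABC, abstractmethod

class CloudComputeProfile(ABC):
    """Hypothetical provider-neutral operations a portability profile could mandate."""

    @abstractmethod
    def provision(self, image: str, vcpus: int, ram_gb: int) -> str:
        """Create a compute instance; returns a portable instance ID."""

    @abstractmethod
    def migrate(self, instance_id: str, target: "CloudComputeProfile") -> str:
        """Move a workload (program and data) to another conforming cloud."""

    @abstractmethod
    def release(self, instance_id: str) -> None:
        """Tear the instance down and release its resources."""

If every IaaS vendor implemented one such interface, the per-provider adapters shown earlier would collapse into a single code path, which is precisely the portability the Study Group is investigating.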

Existing standards and those under development by Standards Developing Organizations (SDOs), and appropriate industry alliances, community collaboration efforts, and other groups will be used whenever practical. The Study Group will proactively reach out to such groups to facilitate their early involvement.

For further information and/or to be added to the IEEE CCSSG mailing list, please contact Steve Diamond, Chair, IEEE Cloud Computing Standards Study Group, at ieee-ccssg-chair [at] intercloud [dot] org

IEEE to Participate in White House-Led Cloud Computing Strategy Discussion; Focus is on Strategy to Accelerate the Adoption of Cloud Computing

WASHINGTON, May 20 /PRNewswire/ — Dr. Alexander Pasik, Chief Information Officer (CIO) of IEEE, the world’s largest technical professional association, has been selected to join industry leaders and key administration officials to discuss the creation and adoption of national standards for cloud computing at a leadership meeting taking place today.

Dr. Pasik will join United States Deputy Secretary of Commerce Dennis Hightower, CIO Vivek Kundra and the Administration’s Cyber Security Coordinator, Howard Schmidt, along with other prominent industry thought leaders to discuss the challenges and opportunities of cloud computing adoption. Dr. Pasik brings to the meeting deep expertise in emerging technologies and their impact on business models, service-oriented architecture (SOA) and the technical and security characteristics of cloud computing models.

“As an IEEE member and CIO of the organization, I am very honored and excited to be a part of this initiative and to collaborate with the highest caliber of technology thought leaders on this critical issue,” Pasik said. “Our discussion is a significant step toward furthering technology standards to advance the implementation of cloud computing operations and establish national standards. It is a historic event in the legacy of U.S. technology.”

In March 2010, IEEE, in partnership with the Cloud Security Alliance, released findings from a survey of IT professionals that revealed overwhelming agreement on the importance and urgency of cloud computing security.

 

Conclusions:

While I knew cloud computing was way overhyped, I thought that there were one or more standards organizations that claimed ownership. I also thought that all the functional requirements and specifications done for grids, web services, and SOA (e.g., distributed management, federation, SLA requests and validation, etc.) would not have to be re-invented and redone for clouds. Wow, that’ll be a huge undertaking. Which standards organization might step in to fill this void? ITU, IEEE, or another?

Without a set of unified cloud computing standards, it’s my belief that, for at least the next five years, each cloud provider will define its own set of user interfaces, SLAs, performance parameters, security methods, etc. The more cloud providers there are, the more chaos and confusion will reign. Therefore, we believe an urgent, accelerated standards effort is needed for (at least) the network aspects of cloud computing, e.g., the UNI and NNI, and SLAs with validation/compliance (see the sketch below). I would have thought that by now the major players would have gotten together to create such an organization, or combined several interested standards bodies/forums/alliances into one. We hope that ITU-T will be the standards organization to set the reference network architecture for cloud computing. Other standards bodies and/or forums will be needed to provide the computing framework and related standards.
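As an example of the SLA validation/compliance piece argued for above, here is a minimal Python sketch; the thresholds and KPI field names are assumptions for illustration only:

# Hypothetical contracted SLA for network access to a cloud service.
SLA = {"availability_pct": 99.9, "max_latency_ms": 50.0}

def sla_compliant(measured: dict) -> bool:
    """Compare measured network KPIs against the contracted SLA thresholds."""
    return (measured["availability_pct"] >= SLA["availability_pct"]
            and measured["latency_ms"] <= SLA["max_latency_ms"])

# Without a standard, every provider reports these KPIs differently,
# so even this trivial check must be re-implemented per provider.
print(sla_compliant({"availability_pct": 99.95, "latency_ms": 42.0}))  # True
print(sla_compliant({"availability_pct": 99.5,  "latency_ms": 42.0}))  # False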

 
Note that this author is an ITU-T member and has access to all documents, including contributions and meeting reports for the Cloud Computing Focus Group.  Please contact me if your organization might be interested in a consulting arrangement to monitor or research this new activity.