Does IEEE ComSoc have a role in today’s "High Tech" World?
KCBS Radio reported this morning that Sunnyvale, CA has successfully reinvented itself over the years, from agriculture to engineering and semiconductors to (the new) high tech. What exactly is that? Yahoo and other web software companies were cited as examples of “high tech.”
What I found so strange about this KCBS comment was that engineering and semiconductors were no longer considered “high tech.” Yet there is justification for that notion.
Venture capitalists are NOT investing in semiconductor start-ups (save for a few mixed signal companies), because they claim the risk to reward ratio is unfavorable. The VCs are concerned about the high capital costs (mostly design tools, engineering workstations, etc.) to get a semiconductor start-up going. And the payoff, in terms of market share, revenues and profits, is highly uncertain.
This same argument is made to justify avoiding almost all IT hardware companies: computing, storage, enterprise network, wireless network infrastructure, wireline (mostly fiber optic based) telecom equipment, etc. Instead, the VCs see a major opportunity in web software companies. That includes social networking, e-commerce, location based advertising, mobile payments, online gaming, and cloud computing (for both enterprises and consumers).
This author has lived in Santa Clara, CA for over 41 years and has NEVER seen such an extended period where all forms of IT hardware were out of favor. While the balance of tech work has steadily shifted from hardware to software, it never resulted in a “hardware wipeout” as has been the case for the last few years. This is especially true in telecommunications and networking technology, where only the largest revenue companies – Ericsson, Alcatel-Lucent, Huawei, Cisco – are big enough to fund research projects and pursue genuine innovation. The second and third tier communications companies have become almost invisible or have disappeared completely.
Is a college education needed to be successful in the new world of high tech? Apparently not! The Financial Times (FT) reports that there is an increase in the number of students dropping out of US universities to follow their dreams and launch start-up companies in Silicon Valley. “Professors at MIT, Stanford and the University of California at Berkeley, three universities with a strong tradition as IT powerhouses, confirm an uptick in entrepreneurial dropouts as students seek to emulate the examples of famously successful non-graduates such as Bill Gates at Microsoft, Steve Jobs at Apple and Mark Zuckerberg at Facebook,” according to the FT.
http://www.ft.com/cms/s/0/f9849650-9eb0-11e0-a4f1-00144feabdc0.html#ixzz1QxjTlpdi
About a dozen college dropouts interviewed by the Financial Times said that they knew others who had made a similar choice. All confirmed investor willingness to fund them. “They want to see that you believe your story enough to risk everything for it,” said Julia Hu, who left MIT when she got funding to build her sleeping device company, Lark. “They don’t like to fund non-committed entrepreneurs. In that sense, it’s in their interest not to deter you when you say you are dropping out of school.”
Harj Taggar, a partner with Y Combinator, an incubator founded in 2005 that funds young entrepreneurs, said applications from students were rising. He noted that there was strong interest from angel investors who were “willing to fund these 18 and 19-year-old kids.”
With the total collapse of the communications eco-system, many talented and experienced EEs find they are no longer needed. Once out of work, they have had a very difficult time finding a new job. Many have had to change careers due to prolonged unemployment after being laid off. Others (like this author) were forced into early retirement when technical consulting jobs dried up. IEEE ComSoc SCV had to cancel this year’s social event, because very few of our members could afford the $35 price of the dinner (which included wine and beer).
So if communications engineering skills are no longer needed today, what’s the role of IEEE ComSoc? Should it re-invent itself to better cater to the current web software fad (or mania)? That is, focus on middleware or the higher layers of the communication protocol stack, which are implemented in software? Or is that really Computer Science, which is within the scope of the IEEE Computer Society and NOT IEEE ComSoc? We submit that most, if not all, of the hot “high tech” areas today are based on web (platform or applications) software, which is not the purview of IEEE ComSoc.
Network infrastructure has become almost a dirty word, since it is eschewed by VCs and neglected by end users. For example, it seems that almost no one cares about the network infrastructure, UNIs and NNIs for cloud computing. It is just assumed to be there (which in many cases is an invalid assumption, especially if the public Internet is used for Cloud access and service delivery).
Yet the (wireless and wireline) communications infrastructure MUST be in place for cloud computing, mobile apps, social networking and e-commerce to function. It must also grow with network traffic, which is predicted to continue its steep upward trajectory. How will that network infrastructure evolve if only a few (large) companies are involved in progressing it?
Should ComSoc play some role in trying to re-create a communications eco-system focused on network infrastructure? If not, what should ComSoc’s role be in this new environment that places no value on EE or math/science education or hardware design skills/experience?
Note: This author is working with Steve Weinstein (and hopefully other members of the ComSoc Strategic Planning committee) on a Strategic Plan for IEEE ComSoc. We welcome your inputs, but please recognize that IEEE USA is responsible for lobbying to create or maintain jobs.
IEEE CIO Says Cloud Interoperability a Bigger Problem than Security!
We’ve repeatedly stated that in the absence of any meaningful Cloud Computing standards, there would be a total lack of interoperability, resulting in vendor lock-in. And that would be a major obstacle to wide-scale adoption of Cloud Computing. Now someone at IEEE has confirmed this thesis.
Here is what I wrote in a recent article at www.viodi.com: What’s the Impact of No Cloud Standards?
The absence of standards translates into a lack of interoperability, which locks the customer into a single cloud service provider. For example, the lack of a standard IaaS management API means that cloud user software must be particularized to a single Cloud Service Provider. This is because every cloud service provider has a different version of web services protocols (mostly REST based, but a few use SOAP) and software interfaces that the customer must adhere to.
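To make that lock-in concrete, here is a minimal Python sketch. The providers, endpoints, and response fields are entirely hypothetical (not any real cloud API); the point is that, with no standard management API, the client needs a bespoke adapter per provider:

```python
# Illustrative only: hypothetical providers and endpoints. With no
# standard IaaS management API, client code needs a bespoke adapter
# for each Cloud Service Provider.
class ProviderAdapterA:
    """REST-style provider (hypothetical JSON endpoint)."""
    def launch_instance(self, image_id):
        # e.g. POST https://api.provider-a.example/v1/instances
        return {"provider": "A", "image": image_id, "state": "pending"}

class ProviderAdapterB:
    """SOAP-style provider: different verbs, different payload shape."""
    def launch_instance(self, image_id):
        # e.g. a SOAP RunInstances envelope with provider-specific fields
        return {"provider": "B", "image": image_id, "state": "queued"}

def launch(adapter, image_id):
    # The caller is "particularized": moving to another provider means
    # writing and testing a whole new adapter, not changing a config line.
    return adapter.launch_instance(image_id)
```

That adapter-rewriting cost is exactly what a standard management API would remove.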
I have also been pounding the table that there is no UNI for (shared) private network access to the Cloud and no NNI for public-public, private-public (AKA Hybrid Cloud), or private-private Cloud communications!
According to Dr. Alexander Pasik, CIO at IEEE and an early advocate of cloud computing as an analyst at Gartner in the 1990s: “To achieve the economies of scale that will make cloud computing successful, common platforms are needed to ensure users can easily navigate between services and applications regardless of where they’re coming from, and enable organizations to more cost-effectively transition their IT systems to a services-oriented model.”
“The greatest challenge facing longer-term adoption of cloud computing services is not security, but rather cloud interoperability and data portability, say cloud computing experts from IEEE, a technical professional association. At the same time, IEEE’s experts say cloud providers could reassure customers by improving the tools they offer enterprise customers to give them more control over their own data and applications while offering a security guarantee. Today, many public cloud networks are configured as closed systems and are not designed to interact with each other. The lack of integration between these networks makes it difficult for organizations to consolidate their IT systems in the cloud and realize productivity gains and cost savings. To overcome this challenge, industry standards must be developed to help cloud service providers design interoperable platforms and enable data portability, the organization said.”
According to industry research firm IDC, revenue from public cloud computing services is expected to reach $55.5 billion by 2014, up from $16 billion in 2009. Cloud computing plays an important role in people’s professional and personal lives by supporting a variety of software-as-a-service (SaaS) applications used to store healthcare records, critical business documents, music and e-book purchases, social media content, and more.
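Those IDC figures imply strikingly fast growth. A back-of-envelope check (this calculation is ours, not IDC's) puts the compound annual growth rate at roughly 28% over the five-year span:

```python
# Implied compound annual growth rate (CAGR) of the IDC public cloud
# revenue forecast cited above: $16B in 2009 to $55.5B in 2014.
def cagr(start, end, years):
    """CAGR = (end/start)^(1/years) - 1."""
    return (end / start) ** (1.0 / years) - 1

rate = cagr(16.0, 55.5, 5)
print(f"Implied CAGR: {rate:.1%}")  # roughly 28% per year
```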
However, the IEEE said lack of interoperability still presents challenges for organizations interested in consolidating a host of enterprise IT systems on the cloud. According to IEEE Fellow Elisa Bertino, professor of Computer Science at Purdue University and research director at the Center for Education and Research in Information Assurance, the interoperability issue is more pressing than perceived data security concerns.
“Security in the cloud is no different than security issues that impact on-premises networks. Organizations are not exposing themselves to greater security risks by moving data to the cloud. In fact, an organization’s data is likely to be more secure in the cloud because the vendor is a technology specialist whose business model is built on data protection.”
However, Steve O’Donnell, an IEEE member and former global head of Data Centres at BT in the United Kingdom, suggested much of the concern is about control for IT managers. “There’s a lack of enterprise tools that enable management of security and availability in the cloud in the same way as in a data center,” he said. “Enterprises believe their own data centers are secure and available, and want to own the management of cloud security and availability rather than outsourcing it to a third party.”
For its part, in April IEEE’s Standards Association announced its Cloud Computing Standards Initiative, claimed to be the first broad-scope, forward-looking effort put forth by a global standards development organization aimed at addressing cloud portability and interoperability. We don’t believe that at all and are not expecting much from this initiative, as there has been very little information conveyed to potential IEEE Cloud standards participants.
Here is some background info on the two new IEEE Cloud Computing WGs, which will meet next month in San Jose, CA:
As part of its leadership in advancing cloud computing technologies, the IEEE Standards Association (IEEE-SA) has formed two new Working Groups (WGs) around IEEE P2301 and IEEE P2302. IEEE P2301 will provide profiles of existing and in-progress cloud computing standards in critical areas such as application, portability, management, and interoperability interfaces, as well as file formats and operation conventions. With capabilities logically grouped so that it addresses different cloud audiences and personalities, IEEE P2301 will provide an intuitive roadmap for cloud vendors, service providers, and other key stakeholders. When completed, the standard will aid users in procuring, developing, building, and using standards-based cloud computing products and services, enabling better portability, increased commonality, and greater interoperability across the industry.
IEEE P2302 defines essential topology, protocols, functionality, and governance required for reliable cloud-to-cloud interoperability and federation. The standard will help build an economy of scale among cloud product and service providers that remains transparent to users and applications. With a dynamic infrastructure that supports evolving cloud business models, IEEE P2302 is an ideal platform for fostering growth and improving competitiveness. It will also address fundamental, transparent interoperability and federation much in the way SS7/IN did for the global telephony system, and naming and routing protocols did for the Internet.
IEEE Cloud Computing References:
http://www.informationweek.com/news/cloud-computing/software/231000719
http://www.computer.org/portal/web/computingnow/news/computing-commodities-market-in-the-cloud
Cloud Computing Conference Leadership from a Master IT Journalist, Content and Editorial Director
The just concluded Cloud Leadership Forum (June 20-21, 2011) brought together CIOs, IT managers, vendors, and analysts to discuss the current status, future directions and important caveats of Cloud Computing. This very successful IDG event was organized and chaired by IDC Chief Analyst Frank Gens and IDG Enterprise Sr VP John Gallant. Both did an excellent job of running the conference, which provided very useful information to the many attendees.
An IDC/IDG survey revealed that information technology executives believe that cloud computing will have significant impact on IT organizations and IT vendors, as well as the enterprises they support. Here are just a few data points:
- More than 70 percent of those surveyed said they believed that by 2014, a third of all IT organizations will be providers of cloud services to customers or business partners.
- Almost 80 percent of respondents felt that cloud service brokers and service aggregators will provide integration, management, security and other services across public cloud offerings by 2015.
- More than 80 percent of the respondents said that one-third of Fortune 1000 enterprises will deploy at least one business-critical system in the cloud. More than half of the IT executives surveyed believe that mobile-optimized cloud services will be a primary interface with customers by 2014.
- In 2010, more than 40% of storage purchased was for cloud use (cloud computing and ISP internal use). This percentage is expected to increase this year and next, especially in light of Netflix’s move from in-house to cloud-resident data centers.
The key messages and lessons learned from the 2011 Cloud Leadership Forum will be detailed in a separate article. In this piece, we’d like to provide a lesson in leadership for IT publications (print and on-line) as well as organizers of technology conferences and similar events. We think there are important takeaways and messages here for IEEE ComSoc executives, including the President, VP of Publications, On Line Content Board and CIO.
John Gallant has been organizing and chairing IT conferences for a very long time. He is an expert at moderating panel sessions and conducting “fireside chats” with industry experts. Those sessions reveal important points, critical issues, and “what to look out for” items that would otherwise not be readily obvious to the audience. John is also a long time survivor in the world of IT journalism, a claim that not many can make.
In the 1980s and early 1990s the three most popular Network and Communication “for profit” publications+ were Data Communications, Communications Week, and Network World. Only the latter has survived and is celebrating its 25th birthday this month! In addition to Network World, IDG Enterprise also publishes the venerable ComputerWorld, CIO, and CSO print publications (which are also available on line). They also produce three on-line only pubs: ITWorld, InfoWorld, and the new CFOWorld. How did IDG succeed in the IT publishing business, while so many similar publications failed? We think the reason was the foresight, agility and leadership of John Gallant.
As a graduate of Boston College, with a dual major in Economics and Political Science, John found that law school was not for him. So he obtained a Masters Degree in Journalism from Boston University and became a reporter for ComputerWorld (CW) in 1983, covering mainframe software. At CW, John soon met up with Bruce Hoard, who had created a quarterly supplement to Computerworld called “On Communications.” Mr. Hoard was the Editor in Chief of On Communications and the early Network World (NW). Mr. Gallant came aboard as the transition to a weekly publication was happening. He soon took over as Chief Editor and later became Publisher of the magazine, which is still going strong 25 years later!
How did NW survive, while so many other similar network and communications publications bit the dust? The short answer is foresight, agility, and execution.
John led a team at NW that determined early on NOT to focus on pure networking technologies. They had the opinion that network infrastructure was secondary to the goals and objectives of the enterprise and IT managers (their primary audience). The NW editorial team also thought that a magazine that only dealt with enterprise network infrastructure and architecture would NOT attract sufficient advertising. Cisco was dominating the enterprise network space and there were few other companies that would likely advertise in NW. John wrote in a follow up email, “We didn’t think that network infrastructure should be the only focus, albeit it has always been a very important one.”
With a broadened coverage scope, the NW editorial focus shifted to two important industry dynamics:
1. What does the Network change for the IT organization?
2. What new applications and services change the Network?
These two broad categories encompassed issues like security, manageability, electronic data interchange, Unified Communications and (later) the mobile enterprise work force. It reflected the growing concerns of the NW readers (mostly IT end users) and helped attract advertisers.
The NW staff also looked beyond printed content. They were very early to recognize the move to on-line publishing and produced a web edition in 1995, coincident with the introduction of Netscape’s Internet browser. The NW brain trust started holding events – initially paid seminars, but later “town hall” meetings which were free of charge to “qualified” IT professionals. These have evolved into multiple topic “IT Roadmap” events that are held in selected cities each year. These events helped NW establish a personal connection with their audience and readership. The synergism between the print publication, on-line publication, and regional events attracted sponsors and became a very lucrative business for NW (and IDG) in the early 1990s.
During the deep recession of 2008, IDG merged their IT print and on-line publications, which were previously run as self contained, independent business entities. IDG Enterprise was created to oversee content and editorial direction for those publications and John was promoted to Chief Content Officer for that new organization. In this role, Mr. Gallant leads the editorial teams of CIO, CSO, Computerworld, InfoWorld, ITworld, Network World and CFOworld to set content strategy and ensure that the brands continue to serve their respective audiences with the best products and services in the industry.
John also helps drive IDG Enterprise’s strategic efforts around social media and social media marketing. All of their on-line publications have links to various social media sites so that people can recommend the publications to their Facebook (and other social media) friends. John carefully describes his role in this context: “I’m responsible for editorial social media strategy, but not social media marketing, which is a very different function.”
What are the current hot topics in IT journalism? John ticked off the following:
1. Cloud Computing: Tradeoffs, public vs private vs hybrid, impediments to adoption, strategic advantages, etc.
2. Consumerization of IT: How to deal with end users consuming any application on any device at any time and at any location
3. Desktop Virtualization: Being able to deliver the same image to any device that the end user is responsible for
4. Mobile Computing: How to create new capabilities that will positively impact a business. Dealing with multiple mobile OSs, a plethora of devices/gadgets with different screen sizes, wireless networks with varying capabilities, mobile data offload (to WiFi hot spots), roaming and charging for data plans.
And what’s the future of IT publications, given the demise of so many print publications that depended on advertising revenue? John says that the Internet is a tremendous disintermediary of value. IT executives still like to read print publications, but advertisers don’t see value there. Recognizing this fact of life, IDG Enterprise has made a very aggressive push to online video, user interaction (such as polling), and other tools to engage their readership. They are tied in to social media to recommend content and build communities. For example, there are over 40K members of IDG’s “CIO Forum” on LinkedIn.
Mr. Gallant states, “As for the future, I would like to clarify that there are a number of advertisers who do value print. For others, the trackability of online (its measurability) is of greater value. I don’t want to make it sound as though all advertisers have dismissed print, as they have not. But it’s clear that a lot of the sponsor focus has shifted to online, which is something that all media companies have dealt with and are dealing with. Our online-centric approach has helped us navigate this transition.”
Takeaways and Potential Action Items for IEEE ComSoc:
Are there any implications or messages for IEEE ComSoc with respect to IDG’s successful evolution as a print/on-line media and conference company? Even though the ComSoc audience is mostly academia, technology development and marketing people, could similar IDG initiatives also work for IEEE ComSoc on line publications, especially the two ComSoc web sites (this Community site as well as www.comsoc.org)? What can ComSoc do to get their readership more involved? Specifically, are there any on-line “bells and whistles” that could be added to IEEE Communications and IEEE Network on-line magazines? Should ComSoc provide regional conferences that provide more tangible value to attendees? Plenty here to think about. Please let us know your opinions and suggestions.
+ Note that IEEE ComSoc journals and magazines, like IEEE Communications, are non profit publications.
ITU-T Smart Grid Focus Group Architecture Document identifies candidate networking technologies & applications
Introduction:
ITU-T Focus Group on Smart Grid (FG Smart) was established by ITU-T TSAG agreement at its meeting in Geneva, 8-11 February 2010. The FG Smart will collaborate with worldwide smart grid communities (e.g., research institutes, forums, academia) including other SDOs and consortia.
ITU-T FG Smart has had seven meetings. The most recent one was in Jeju, Korea, 8-15 June 2011. At that meeting, FG Smart progressed a “deliverable document” which describes the architecture for smart grid. The document uses NIST’s conceptual model [Publication 1108, NIST Framework and Roadmap for Smart Grid Interoperability Standards, Release 1.0, January, 2010] as a starting point.
Five domains are defined:
- Grid domain (bulk generation, distribution and transmission)
- Smart metering (AMI)
- Customer domain (smart appliances, electric vehicles, premises networks (HAN, BAN, IAN))
- Communication network(s)
- Service provider domain (markets, operators, service providers).
These five domains are viewed in three planes: the Service/Applications plane, the Communication plane, and the Energy plane. But the document does not say much about relevant Communications Technologies:
Several different communication technologies (listed below) could be used as the links between nodes in a Home Area Network (HAN), Neighborhood Area Network (NAN), and Wide Area Network (WAN). How those technologies are to be used depends on many factors, such as performance, ease of implementation, and availability. Here is a short list of candidate network technologies:
-3rd Generation Secure Radio systems within the IMT-2000 family
-4th Generation Secure Radio systems within the IMT-Advanced family
-Wireless Local Area Networks specified in IEEE 802.11
-Wireless Personal Area Networks, such as Bluetooth and Zigbee specified in IEEE 802.15
-WiMAX specified by IEEE 802.16
-Short distance wireless communication, such as infrared communication
-IEEE 802.3 Ethernet – many variants are possible for smart grid use.
-PLC specified in various standard bodies, such as ITU-T G.9960/9961 (G.hn), G.9955/9956 (G.hnem), and IEEE 1901/1901.2
-Technology over coaxial cable, such as DOCSIS (Data Over Cable Service Interface Specifications), G.9954 (HomePNA), and G.9960/9961 (G.hn)
-Technologies over copper cable specified in ITU-T G.992 series, G.993 series (xDSL)
-Technologies over fiber cable specified in various standard bodies, such as ITU-T G.983 series (B-PON), G.984 series (G-PON), G.987 series (XG-PON), G.985/G.986 (point-to-point Ethernet based optical access system), IEEE 802.3ah (GE PON), and IEEE 802.3av (10GE PON)
ITU-T Editor’s Note: The actual physical mediums and communication technologies available for smart grid networks have not been selected or specified yet. There is a need to consider: Medium: fiber, Ethernet cable (twisted pair), coaxial, powerline, wireless – 3G, 4G, satellite, WiFi, WiMAX, short distance; NGN (protocol or medium).
Author’s Note: The individual subnetworks used for grid communications are very loosely defined as HAN, NAN and WAN.
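One way to make the subnetwork/technology relationship concrete is to group the candidates listed above by subnetwork. The grouping below is this author's illustrative reading, not part of the ITU-T draft, sketched as a simple Python lookup:

```python
# Illustrative grouping of the ITU-T candidate technologies by the
# loosely defined smart grid subnetworks. The assignment of each
# technology to a subnetwork is this author's assumption, not the draft's.
CANDIDATES = {
    "HAN": ["IEEE 802.11 WLAN", "IEEE 802.15 (Bluetooth, ZigBee)",
            "IEEE 802.3 Ethernet", "G.hn / G.hnem powerline", "HomePNA"],
    "NAN": ["IEEE 802.16 WiMAX", "IEEE 1901 PLC", "DOCSIS"],
    "WAN": ["IMT-2000 (3G)", "IMT-Advanced (4G)", "xDSL (G.992/G.993)",
            "PON (G.983/G.984/G.987, IEEE 802.3ah/av)"],
}

def candidates_for(subnetwork):
    """Return the candidate technology list for a subnetwork name."""
    return CANDIDATES.get(subnetwork.upper(), [])
```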
Fig.1. Simplified Smart Grid Domain Model in ICT Perspective
Figure 1 shows five interfaces across the planes and between domains, marked with numbers in circles. These are the places where communications and information exchange take place between the Communication network and the other four domains, and between the smart metering domain and the customer domain. They are the focal points of standards specifications and thus are called Reference Points.
Smart Grid Applications:
There are several representative applications for smart grid, including energy distribution, renewable energy management and storage, electric vehicle-to-grid, grid monitoring and load management, and smart metering. There is a particular focus on smart metering and load management, because those two fundamental applications have the most interaction with the ICT area. This covers functions commonly called the Advanced Metering Infrastructure (AMI) plus additional functions to support Plug-in Electric Vehicle (PEV) charging and energy generation in the End-User domain.
Network Functions for Smart Metering and Load Control:
-Metering Networks: provide connectivity for meters in a small geographical area, data aggregation for meter readings in that area, and connectivity for End-User Functions in homes or buildings through Gateway functions.
-Core Network/Transport: provides connectivity over a wide geographical area, concentrating meter reading information from Neighbourhood Area Networks, and back-hauls data to the Applications Functions and Energy Control Functions.
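The two network functions above can be sketched as a minimal data-flow example. Class and field names are illustrative assumptions, not terms from the ITU-T draft:

```python
# Minimal sketch of the aggregation path: meters in a Neighbourhood
# Area Network report readings to a concentrator (Metering Network
# function), which back-hauls a summary toward the Applications and
# Energy Control Functions (Core Network/Transport function).
class NeighbourhoodConcentrator:
    def __init__(self):
        self.readings = {}  # meter_id -> latest reading in kWh

    def report(self, meter_id, kwh):
        # Metering network side: collect individual meter readings.
        self.readings[meter_id] = kwh

    def backhaul_summary(self):
        # Core network side: forward an aggregate, not every raw reading,
        # to reduce wide-area traffic.
        return {"meters": len(self.readings),
                "total_kwh": sum(self.readings.values())}
```

The design point the sketch illustrates is concentration: the wide-area core carries aggregated data rather than a flow per meter.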
Author’s Note: This document endeavors to compare the ITU-T FG-Smart Grid and IEEE P2030 architectures in a Table which has not been finalized yet.
IEEE P2030 provides guidelines for smart grid interoperability. “This guide provides a knowledge base addressing terminology, characteristics, functional performance and evaluation criteria, and the application of engineering principles for smart grid interoperability of the electric power system with end-use applications and loads.”
http://grouper.ieee.org/groups/scc21/2030/2030_index.html
Closing Comment: It remains to be seen whether this FG Smart Grid Architecture draft recommendation or any other Smart Grid standard will clearly define the specific network technology for HAN, AMI network, NAN (including outdoor mesh wireless), WAN (including fiber optic).
Will NSN Survive with no bidders for a stake in the venture?
The Financial Times reports that Nokia and Siemens’ desire to sell a major stake in their Nokia Siemens Networks (NSN) venture suffered another setback this week, as private equity groups KKR and TPG decided such a deal was not a fit for them. The two firms decided to pass because they could not agree with the owners on price and on how much control they would have over the company. With KKR and TPG out of the running, the Gores Group and Platinum Equity are the only remaining bidding consortium for NSN. Not surprisingly, Nokia, KKR and TPG are not commenting, other than to say “We continue to be in constructive talks with several interested parties.”
Nokia and Siemens have been searching for a bidder while NSN has struggled to make a profit. The joint venture has been losing money for much of the time since it was formed in 2007. In the first quarter, NSN lost $157 million. Yet with €12.7bn in revenues in its last year, NSN remains one of the world’s biggest telecoms equipment makers. It recently completed the $975m acquisition of network infrastructure assets from Motorola of the US.
NSN is faced with aggressive competitors led by Huawei and ZTE of China, but also from Alcatel-Lucent and LM Ericsson.
Analysts said Nokia’s crisis had meant management had not concentrated on the operational problems of NSN. “If I was running Nokia, the last management distraction I would want would be having to deal with NSN,” said Ben Uglow, analyst at Morgan Stanley. “The most logical thing to do would be to dispose of it.”
Nokia’s reluctance to act has led to growing frustration within Siemens. “NSN is bigger than some Dax companies but it is being managed as if it was a simple division within Nokia,” said one Siemens manager.
Under their original agreement, Nokia and Siemens are both tied to NSN until 2013 unless they agree to a change in ownership structure. An initial public offering is viewed as one possible option for the business after the joint venture agreement comes to an end.
http://www.ft.com/intl/cms/s/0/8e1ac730-92c2-11e0-bd88-00144feab49a.html#axzz1OtLgRl1g
The news of KKR and TPG’s withdrawal comes on the heels of Nokia CEO Stephen Elop denying rumors that the company — once the undisputed global leader in mobile phones — is for sale amid speculation that its plunging market value has made it a target.
http://www.reuters.com/article/2011/06/10/nokia-nsn-idUSLDE7590H920110610
Reference: NSN in Talks to Sell Majority Stake after Motorola & Huawei Settle Dispute
Comment: The problem NSN faces comes on top of the collapse and bankruptcy of Nortel, one of the top telecom equipment vendors of the 1980s and 1990s. Just this week, optical network player Ciena warned of a poor outlook for its business.
The NSN problems are just one more sign that the telecom industry has not recovered from the bust in the early part of the last decade. We describe the state of the industry as a massive eco-system collapse! Both the carriers and equipment vendors are under tremendous price pressures and continue to think more about cutting costs than expanding operations.
Keynotes and Smart Grid Communications Highlights of Connectivity Week Conference in Santa Clara, CA
On May 26th, U.S. Commerce Secretary Gary Locke (soon to be U.S. Ambassador to China) delivered a short but impressive keynote address at the Connectivity Week conference in Santa Clara, CA. The audience didn’t have an opportunity to ask questions, because Mr. Locke’s talk was delivered via a video recording.
Here were his main points:
-Smart Grid could reduce global power demand by 20% or more and significantly reduce carbon emissions.
-Power outages cost the U.S. $500 per person per year. Smart grid technology could reduce the number of power failures and quicken time to recover power.
-Economic benefits are expected to be three or four times the cost of building the smart grid (but what about maintaining it, e.g. OPEX?).
-Smart Grid products will be controlled over the Internet by consumers.
-Canada, Brazil, European Union, China, and India are all working on Smart Grid projects.
-We must move forward with all deliberate speed to advance Smart Grid standards, which are essential for interoperability and for driving costs down.
Note: It seems NIST is the Smart Grid umbrella standards organization in the U.S.
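The keynote's cost/benefit numbers invite a quick sanity check. Below is a back-of-the-envelope sketch in Python; apart from the $500/person/year outage cost and the 3-4x benefit multiple cited above, every input (population, build cost, OPEX rate, horizon) is an illustrative assumption, chosen mainly to show how heavily OPEX weighs on the cited benefit multiple.

```python
# Back-of-the-envelope check on the Smart Grid cost/benefit claims above.
# All inputs are illustrative assumptions except the $500/person/year
# outage cost and the 3-4x benefit multiple cited in the keynote.

US_POPULATION = 310e6          # approximate 2011 U.S. population
OUTAGE_COST_PER_PERSON = 500   # $/person/year, per the keynote
BUILD_COST = 400e9             # assumed total build-out cost ($), illustrative
BENEFIT_MULTIPLE = 3.5         # midpoint of the cited 3-4x range
ANNUAL_OPEX_RATE = 0.05        # assumed OPEX at 5% of CAPEX per year
YEARS = 20                     # assumed evaluation horizon

annual_outage_cost = US_POPULATION * OUTAGE_COST_PER_PERSON
total_benefit = BENEFIT_MULTIPLE * BUILD_COST
total_cost = BUILD_COST * (1 + ANNUAL_OPEX_RATE * YEARS)

print(f"Annual U.S. outage cost: ${annual_outage_cost / 1e9:.0f}B")
print(f"Benefit (3.5x build):    ${total_benefit / 1e9:.0f}B")
print(f"Cost incl. 20yr OPEX:    ${total_cost / 1e9:.0f}B")
```

At an assumed 5% of CAPEX per year, two decades of OPEX doubles the total cost, which is exactly why the parenthetical question about OPEX matters.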
During his May 25th keynote, Aneesh Chopra, CTO of the United States, delivered a strong challenge to the audience of Smart Grid innovators, influencers, technologists, and decision makers.
Chopra challenged the audience to find solutions and answers to this question: “How can we safely and securely provide customers electronic access to their energy information, thereby supporting the continuing development of innovative new products and services in the energy sector?”
He spoke to one of the federal government’s key agenda items — fostering innovation around Smart Grid, energy efficiency, and renewable energy. Meeting the President’s goal of 80 percent clean energy by 2035 “demands a modernized electrical grid,” Chopra stated. In the context of discussing the entrepreneurial opportunities in the Smart Grid and smart energy arenas, he said, “My thesis is there’s never been a better time to be an innovator.”
“Aneesh Chopra energized the ConnectivityWeek crowd and delivered a clear message for entrepreneurs here in Silicon Valley and throughout the country,” said Anto Budiardjo, president and CEO, Clasma Events. “We are thrilled to have such a strong commitment to smart energy from the top levels of government and to provide the opportunity for industry-wide collaboration in support of these goals.”
Here are a few network- and communications-related takeaways from this excellent conference (more in follow-up articles):
-There are no Smart Grid standards or recommendations for either Home Area Networks (HANs) or the access network the Advanced Meter Infrastructure (AMI) will use for meter reading and for control of other utility-owned instruments. The AMI fixed-line access network could be proprietary wireless, Broadband over Power Line (BoPL), or another network technology.
-BoPL is not likely to be used for critical Smart Grid communications, because it won't be available when it is needed most, i.e., in the event of a power failure.
-The AMI access network usually feeds into a proprietary wireless mesh network that may carry other traffic types, e.g., commands to utility substations. Most of those networks use lightly licensed or unlicensed spectrum and a mesh node topology, with either an IEEE 802.11n (WiFi) or a proprietary OFDM PHY protocol carrying Ethernet MAC frames.
-Klaus Bender of UTC (an advocate of utility telecommunications interests) writes that there are several potential Smart Grid telecom networks: the corporate enterprise backbone network, the field-force voice dispatch/mobile data terminal network, the AMI meter reading network, and the command/control network for the power grid itself.
-Many utilities own their own fiber and provide a fiber optic backbone within their city or district. This is the case for Silicon Valley Utilities (formerly Santa Clara Municipal Utilities). Very few utilities are planning to deploy Fiber to the Home or Business.
-Most utilities prefer to operate their own network, sometimes hiring a third party to run it for them. They don't trust public network providers, especially regarding availability/reliability in the event of a natural disaster or emergency. However, some utilities expressed a desire for a hybrid network, in which critical tasks are conveyed over a private network and less critical tasks are carried over a public network.
-Almost all utilities want meter/instrument data from customer premises to be processed in the cloud. This imposes stringent security requirements, and such networks must be very reliable and always available. Yet one company, Heart Transverter of Costa Rica, has taken a completely different approach: it puts all the logic and decision making for control and management of energy systems within the customer premises. We were impressed with this iconoclastic approach. It was said that consumers want more engagement when it comes to smart grid meters, instruments, and energy management systems (whether located on or off premises).
-The FCC has been almost singularly focused on broadband for unserved and underserved areas and has neglected utilities' need for additional licensed spectrum that could be used for Smart Grid projects. Utilities currently use licensed spectrum for voice calls and narrowband data transmission/telemetry readings, but many of them don't have sufficient spectrum to build a robust wireless mesh network. Hence, they resort to mesh WiFi over unlicensed spectrum.
http://www.connectivityweek.com/2011/
Since 2004, the Buildy Awards have been presented at Connectivity Week to leaders, visionaries, and implementers of smart devices and smart systems. The five Buildy Award winners are listed by category:
-Smart Buildings: BuildingIQ for building energy optimization, which is helping to reduce energy consumption by more than 30 percent in commercial buildings in Australia. The technology is now available in North America.
-Smart Homes: Heart Transverter for efforts to build the Smart Grid from the bottom up, one house at a time, a truly innovative concept for driving home-to-grid connectivity with integrated energy storage, energy security, and more.
-Smart Industrial: Powerit Solutions for delivering integrated energy efficiency and automated demand response (AutoDR), yielding energy and demand savings for Four Star, a table grape producer in Delano, Calif.
-Smart Grid: The Electric Power Research Institute (EPRI) for the Électricité de France (EDF) PREMIO Project — a collaborative five-year demonstration project with 19 utility members to optimize integration of distributed energy resources, enabling load relief, network support, and CO2 reduction in the southeast of France.
-Connectivity Visionary: The GridWise® Architecture Council (GWAC) for leadership in the following areas: focusing vendor, policymaker, utility, and media attention on interoperability; fostering cooperation among engineers and policymakers to effectively address technical issues in ways non-technical communities understand; raising interoperability capabilities; and educating all stakeholders about interoperability.
“At Connectivity Week, we bring smart energy leaders and visionaries together from both sides of the meter — including the utility side and the consumption side,” said Anto Budiardjo, President and CEO of Clasma Events. “This year’s Buildy Award winners represent the leaders who are driving the smart energy vision to become a mainstream reality. They are true industry role models.”
Future articles on Connectivity Week will cover the Communications oriented sessions. Here is one of them:
Please see Daniel Wong’s summary of the Mobile Data Offload Panel Session at Connectivity Week: https://techblog.comsoc.org/2011/06/06/summary-of-connectivity-week-panel-session-on-mobile-data-offload
Summary of Connectivity Week panel session on Mobile Data Offload
The Wireless Communications Alliance (WCA) Mobile SIG organized a session on "Mobile Data Offload" on May 24th, co-located with the Connectivity Week 2011 conference. It ran from 1:30pm to 3:30pm, not 3:30pm to 5:30pm as advertised. Nonetheless, engaging and informative discussions occurred during those two hours.
The moderator was Stefan Pongratz, Financial Analyst at Dell'Oro Group. The panelists were Michael Luna (CTO, SEVEN Networks), Steve Sifferman (VP, Advanced Radio Solutions, Powerwave Technologies), Vikta McClelland (VP, Technologies Services for Internet Solutions, Ericsson) and Steve Shaw (VP, Marketing, Kineto Wireless). Greg Williams (BelAir Networks) was on the program but did not show up.
WCA provided an abstract of the panel:
"Service providers need to address annual mobile data growth of 109% with minimal or zero increase in ARPU. To manage this, providers need to leverage mobile data offload. For operators, the main motivation for offloading is congestion of the cellular networks. Two choke points in the mobile network are the radio access network (RAN) and the core network. RAN build-outs are expensive and constrained by permitting, site surveys, etc., and the core network needs to perform deep packet inspection (DPI) on massive packet streams. This packet processing workload is beyond the architectural capability of most installed systems.
Large established companies and start-ups have innovations to address the capacity and cost issues by routing mobile packets directly to the Internet. The main complementary network technologies used for the mobile data offload are Wi-Fi and Femtocell.
In this seminar, a panel of experts in network infrastructure will discuss the new proposed solutions and their impact on the major wireless carriers. The panel will address the role of mobile packet offload for both currently deployed (3G) and next generation (4G/LTE) systems.”
Pongratz divided the challenges into two categories, namely financial and technical:
a) Financial challenges include flat ARPU accompanying higher and rapidly growing data usage.
b) Technical challenges: in today's world, to get more capacity, operators go for more spectrum, more spectral efficiency, and more sites. In tomorrow's world, those will still be the tools, plus small cells and offloading mobile data from the RAN directly to the Internet. The challenge can also be offset by WiFi offload, traffic control, and network optimization.
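The 109% annual growth figure from the panel abstract, set against flat ARPU, is the whole "scissor effect" in one number. A minimal sketch (the five-year horizon is our assumption) shows how quickly revenue per byte collapses:

```python
# The "scissor effect": traffic compounds at 109%/year while ARPU stays flat,
# so the revenue earned per unit of traffic collapses. The growth rate is from
# the panel abstract; the 5-year horizon is an illustrative assumption.

GROWTH = 1.09  # 109% annual growth => traffic multiplies by 2.09 each year

traffic = 1.0  # normalized to year 0
for year in range(1, 6):
    traffic *= (1 + GROWTH)
    revenue_per_unit = 1.0 / traffic  # revenue is flat, so $/unit falls as 1/traffic
    print(f"Year {year}: traffic x{traffic:.1f}, revenue per unit {revenue_per_unit:.3f}")
```

After five years of 109% growth, the network carries roughly 40x the traffic for the same revenue, which is why offload to cheaper WiFi capacity becomes attractive.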
The points raised during the panel discussion included the following:
– We are running out of spectrum, with estimates putting 2013 or 2014 as the time when we really run out; thus we need WiFi offload. There are more and more smartphones. The 700 MHz, 2100 MHz, and other bands will be deployed.
– WiFi offload is different from the femtocell model, which is more for voice.
– Michael Luna said: there will be a capacity problem in the access network. Coverage is the second issue, as WiFi coverage is not everywhere. The third layer is that IP is not the same in every network; many developers care only about the abstraction of the network and don't think about the challenges of carrying data over cellular.
– Steve Shaw said: offload happens behind the scenes and will not be advertised to users. It improves coverage and drives more capacity. Kineto can direct traffic to any ISP behind the network, viewed by customers as a mini base station. Kineto views the huge pipes coming into buildings, plus WiFi access, as symbiotic.
– Times Square cell access was maxed out. They added WiFi; now that is also maxed out. Users will do more if connectivity is increased: from Twitter to photos to video.
– Ericsson perspective: there are heterogeneous networks with WiFi, cellular, etc.; they need to work together and not interfere with each other. New business models are needed to overcome the scissor effect, with data doubling every year over the next 10 years. In early 2008, when Ericsson was planning its voice-versus-data strategy, data was only 20% of total traffic. By the end of 2008, after the Huawei dongle had been released, data was the great majority of the traffic. It was a great paradigm shift; now operators must build for data.
– Kineto: phone minutes are dropping, and costs are dropping for voice service. A whole new generation doesn't know what a wired phone is. T-Mobile now incentivizes use of WiFi by its users: it stopped charging for voice over WiFi, which encourages people to turn on their WiFi at home. So that is one idea to consider.
– QoS issues are a key aspect. How to provide minimum guarantee?
– How would operators monetize VoIP?
– Ericsson: carriers have to decide what is free and what is a value-added service; otherwise they will not be able to stay ahead on billing for all the new models. Consumers expect to pay for voice and not for data, so ISPs need to manage expectations. Kineto disagreed, arguing that it is hard to segment services out. Ericsson pointed to the Telefonica model: APIs are offered to developers for free, and the operator collects revenues and shares them with the developers, working with the over-the-top players; 500 developers have signed up. Operators bring QoS and knowledge of user information.
-How about picocells? Is a big push starting in 2012?
Ericsson: the capacity crunch is not here yet, and managing handoffs across heterogeneous networks is hard. On refarming: other countries, in Europe and Asia, have done more of it. Capacity is not the problem right now; signaling is.
– Who would manage WiFi offload? Perhaps it would be managed, integrated, and controlled by the cellular operators.
Reference: From a recent Light Reading post on this topic:
“Recent research conducted in South Korea showed smartphones there were within Wi-Fi coverage areas 63 percent of the time and remained in those coverage areas an average of two hours, an indication that Wi-Fi offload is already widely used on an unmanaged basis, especially in a country with such dense Wi-Fi coverage. Wi-Fi offload is only one option that wireless network operators are pursuing to address the wireless data bandwidth problem. Among the others are use of femtocells and picocells, adding to the cellular network.”
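The 63% coverage statistic suggests a rough upper bound on how much traffic unmanaged offload could divert. A small sketch, assuming (purely for illustration) that data demand is uniform over time; the per-user daily usage figure is hypothetical:

```python
# Rough upper bound on offloadable traffic from the coverage statistic above.
# Assumes (for illustration only) that data demand is spread uniformly over
# time, so time-in-coverage approximates traffic-in-coverage. The daily usage
# figure is a hypothetical input, not from the cited research.

wifi_coverage_fraction = 0.63   # smartphones in Wi-Fi coverage 63% of the time
daily_traffic_mb = 100          # assumed per-user daily data usage, illustrative

offloadable_mb = wifi_coverage_fraction * daily_traffic_mb
cellular_mb = daily_traffic_mb - offloadable_mb
print(f"Offloadable: {offloadable_mb:.0f} MB/day; stays on cellular: {cellular_mb:.0f} MB/day")
```

In practice the offloadable share would be lower, since usage is not uniform and not every in-coverage session actually switches to WiFi, but even a discounted figure represents a large relief for the RAN.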
http://www.lightreading.com/document.asp?doc_id=208591&site=mio&
IEEE Communications magazine shows great improvement in past year thanks to Editor in Chief Steve Gorshe
One of the key benefits of IEEE ComSoc membership is to be able to download individual articles or the entire issue of IEEE Communications magazine -ComSoc’s flagship publication. The magazine was always a good read, but it’s improved greatly in the last year, thanks to Editor in Chief Steve Gorshe. Steve is a former IEEE Communications Society Director of Magazines and had organized special issues of IEEE Communications magazine in the past. He has also been editor of several optical networking standards (including SONET and GFP) for ATIS and ITU-T.
While many IEEE publications have drifted toward journals oriented at academia, IEEE Communications magazine has actually increased the number of articles from industry that highlight new communications technologies. Practicing engineers and marketing professionals have found those articles of keen interest and value. We encourage ComSoc members and those with an IEEE Xplore account to browse recent issues of the magazine.
Steve’s goals for IEEE Communications magazine are all progressing very nicely:
1: Establishing a proper balance between articles and feature topics targeted at engineers working in industry and those in academia.
2: Continuation of the new history column, edited by long-time colleague and ComSoc Strategic Planning Committee Chairman Steve Weinstein.
3: Reducing the time from submission to first decision on standalone (open call) papers.
4: Providing balanced coverage of current hot topics as well as emerging and less well-known topics, giving the magazine's readers diverse, broader coverage.
5: Maintaining the high quality of the articles through continuing careful peer review of feature topic proposals and all papers. Four of the top five papers from Communications Society publications were from Communications Magazine, according to recent IEEE Xplore paper download statistics.
Mr. Gorshe writes, “In the past year, there have been a significant number of articles and feature topics from industry, achieving a good balance relative to those from academia. The balance between hot topics and new topics has also been good. The popular History Column continues, now edited by Steve Weinstein. The time from article submission to first decision has been significantly reduced, with more initiatives in the works to further reduce the time. Download data from IEEE Xplore confirms that the magazine has maintained the high quality of its papers. My thanks to the hardworking editor team and ComSoc staff. In the coming year, I am working on better correspondence with authors and implementing several new ComSoc publications initiatives.”
This author has co-authored standards contributions and worked with Mr. Gorshe at NEC-America and PMC-Sierra. I have always been impressed with his knowledge, experience and work ethic.
From 1999-2000, we co-authored several contributions on Virtual Concatenation and on mapping Ethernet MAC frames directly to SONET/SDH (work that later became the Generic Framing Procedure, or GFP). In a guest editorial on GFP and Data over SONET/OTN, Steve wrote: "The protocol work officially began in American National Standards Institute (ANSI) working group T1X1.5 in 1999 in response to a contribution from Alan Weissberger."
IEEE Communications Magazine – May 2002 page 61
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01000214
Steve's credentials are impeccable. He is a Fellow of the IEEE and has 29 years of experience in research and development of telecommunications systems and ICs for PMC-Sierra, NEC America, and GTE. He is also technical editor for multiple ITU-T standards, including G.7041 (Generic Framing Procedure – GFP) and G.Sup43 (Transport of IEEE 10G Base-R in Optical Transport Networks (OTN)), and has received awards for his technical and editorial contributions to the North American ATIS COAST Committee (formerly T1X1/OPTXS) telecom standards committee. Mr. Gorshe has 32 patents issued or pending, over 24 published papers, and is co-author of a telecommunications textbook and of three additional book chapters. Steve obtained a PhD while working in industry, which is quite an accomplishment.
Let’s all tip our hats to Steve Gorshe for the excellent job he’s done to improve the quality of IEEE Communications magazine!
ComSocSCV June 8 meeting to offer extraordinary insight into planning, design, maintenance and evolution of SCU's Campus Network
Ever wondered how a contemporary university campus network is planned, designed, built and maintained? How it has evolved to support diverse traffic types (including VoIP, Internet, storage retrieval, OTT video, IPTV, games, etc.), more users, and higher-bandwidth applications? What a huge impact mobility has on the network? How the data center works and is accessed by students, staff and faculty? June 8th is your chance to find out about these and all other aspects of Santa Clara University's (SCU) state-of-the-art campus network and data center.
Three experts from SCU's IT Department will address how the network evolved, the architecture, design and development challenges, traffic types supported, maintenance issues, security protection (e.g. from DoS attacks), and how to grow the network to meet new user demands and applications. Mobility, video and high resolution imaging posed critical challenges for the network. How are these being addressed?
We’ll also examine Data Center issues, including how servers are interconnected, how the data center is remotely accessed, impact of hosted virtual desktops and tradeoffs in supporting high resolution imaging apps. Finally, we’ll take a hard look at the motivations and critical issues in migrating the data center to a private, hybrid or public cloud computing environment.
A lively panel session will follow the presentation with audience participation encouraged. Get your questions ready!
Session Moderator: Alan J Weissberger, ComSocSCV Chairman, ex-SCU Adjunct EECS Dept & SCU Osher Member
Speakers
- Carl Fussell — Director of Information Technology, Santa Clara University
- Todd Schmitzer — Manager of Network, Telecom & Security, Santa Clara University
- Eddie Butler — Lead/Senior Network Engineer, Santa Clara University
Don't miss this extra special event. It starts at 6pm with a food/drinks/networking session, with opening remarks at 6:30pm, at the National Semiconductor Auditorium in Santa Clara, CA.
More details and RSVP information at: http://www.ewh.ieee.org/r6/scv/comsoc/index.php
Sponsorship Opportunity: ComSocSCV's June 8th meeting offers an outstanding corporate sponsorship opportunity. We have recruited the top brass of the SCU IT Dept to talk about their campus network & data center: the Managing Director of IT, the Head of Network Planning & Operations, and the Chief Networking Engineer. Sponsors get a table during the networking session, which attendees tend to congregate around. That's a chance for corporate image enhancement and additional networking. We list sponsors on our rotating slides, web site, FB page and LinkedIn group. Great exposure! Contact: [email protected] if you are interested in co-sponsoring refreshments for this meeting.
Metro WiFi Reborn: City Wide Mega-Hot Spot for Mobile Data Offload
Fixed WiMAX operator Towerstream is building out a dense Wi-Fi zone in New York City, described by Bloomberg Businessweek as seven square miles in Manhattan. This is a new business model for Towerstream, which up till now has only provided fixed broadband wireless links to enterprise customers in large cities. For this NYC project, the firm is deploying WiFi equipment from Ruckus Wireless, including 1,000 routers. It's using its own building-top-to-building-top fixed wireless connections for backhaul. That same wireless backhaul, in NYC and other densely populated cities, has proven successful for the point-to-multipoint connections Towerstream offers today as an n x DS1 or DS3 wireless replacement service.
While the Muni WiFi business model didn't pan out (there was no payback to operators for the free WiFi provided), this Metro WiFi project is different. It's intended to be used for mobile data traffic offload from 3G and 4G networks. Data traffic that's offloaded to a Wi-Fi network doesn't use local cell towers or capacity on cellular backhaul connections. In Towerstream's planned metro WiFi network, wireless data (from iPhones, Android phones, tablets, mobile game players, and other gadgets with built-in WiFi) would be detected by the nearest Wi-Fi antenna, then passed off to other WiFi antennas until it reached one of nine large base stations Towerstream operates around NYC, including one at the top of the Empire State Building. From there, the data traffic would be routed to the Internet.
The WiFi antennas are much cheaper and less obtrusive than cell towers. They’re about the size of a football, cost about $800 apiece, and sit on poles or rooftops; cell towers can run upwards of $200,000. Towerstream representatives have fanned out in Manhattan, persuading landlords and building owners to let the company install the devices on their property. The company pays $50 to $1,000 per installation per month, depending on location.
“AT&T, China Telecom, and many others are doing this kind of ‘Wi-Fi offload'” on a smaller scale, says Michael Howard, co-founder of market research firm Infonetics Research and an IEEE member who spoke at an IEEE ComSocSCV workshop on Mobile Infrastructure and Apps last year.
Both AT&T and VZW have announced they are off-loading 3G/4G traffic to WiFi hot spots. However, those WiFi zones aren’t as dense or extensive in coverage as what Towerstream is planning for NYC and other large cities (Chicago and SF could be next). As such, Towerstream could become a vendor-neutral cost-effective alternative to carriers building WiFi zones for high bandwidth mobile data offload.
Bloomberg Business Week reports that there’s little doubt about consumer demand. “Last year, Towerstream conducted a three-month test of a 200-device Wi-Fi network in Manhattan. Without any promotion, the network handled 20 million Web sessions by consumers who happened to spot Towerstream when trolling for a Wi-Fi connection. That’s a fifth of the Wi-Fi traffic generated by AT&T during the same three months at its hotspots, which include most Starbucks (SBUX) and McDonald’s (MCD). Demand is expected to increase, even as cellular networks go from today’s 3G technology to 4G. While 4G is roughly four times faster than 3G, overall data traffic is projected to rise more than 30 percent per year, according to multiple studies. “If any of these estimates are even close to true, those 4G networks will be filled up almost immediately,” says Towerstream CEO Jeff Thompson.”
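Thompson's "filled up almost immediately" claim can be checked against the two figures quoted in the same passage: a 4x capacity step from 3G to 4G, and traffic growth of more than 30% per year. A short sketch computes how long the headroom lasts at that growth floor:

```python
# How long a 4x capacity increase (4G vs 3G) lasts if traffic grows 30%/year.
# Both figures are from the Businessweek passage quoted above; 30% is a floor,
# so the real answer is likely shorter.
import math

capacity_gain = 4.0     # 4G is roughly 4x faster than 3G
annual_growth = 0.30    # traffic projected to rise more than 30% per year

# Solve (1 + g)^n = capacity_gain for n.
years_until_full = math.log(capacity_gain) / math.log(1 + annual_growth)
print(f"4x headroom consumed in about {years_until_full:.1f} years")
```

Even at the conservative 30% floor, the 4x headroom is gone in just over five years; at the higher growth rates other studies project, it disappears far sooner.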
http://www.businessweek.com/magazine/content/11_23/b4231036687850.htm
Smartphone and tablet users will likely also benefit from this offloading, as they'll realize a much faster rate of service from a dense, high-speed Wi-Fi network than from comparable 3G or "4G" service. Equally important, the traffic offloaded to WiFi doesn't count against the monthly byte caps in most 3G/4G service plans. Therefore, mobile data subscribers can send/receive much more wireless data without hitting limits or paying overages on their 3G/4G bills.
In a statement accompanying its 1Q2011 earnings release, Towerstream CEO Jeff Thompson said, "We continue to achieve significant progress building what we believe is the largest and fastest Wi-Fi offload network in Manhattan. The NYC network will be ready for customers in the second half of 2011." The cost of Towerstream's Wi-Fi mobile data offload program totaled $1.2 million in the first quarter of 2011.
This January, the company revealed it was getting into the Wi-Fi hotzone business with an aim of becoming a wholesale provider to operators desiring to offload heavy mobile data traffic. Mr. Thompson tipped off the company's plans to enter this market in a Wimax360 post last year:
"Most smartphones now come with built-in Wi-Fi, which is a mature and secure technology that even encompasses Quality of Service (QOS). Wi-Fi is now capable of carrying up to 200Mbps (the older Wi-Fi started at 11MB, 1 MB less than Verizon's new LTE network, according to them). The carriers can no longer ignore these extremely fast, inexpensive Wi-Fi networks and chip sets. Certainly, it will take some work. Similar to the network Towerstream is piloting in Manhattan, Wi-Fi networks must allow consumers to use their phones as phones (SMS, calls) and not just to access data. It requires a bit more capital to support a carrier class network that experiences very low latency and can handle QOS. These networks must also improve in order to allow seamless connectivity and hand-off capabilities. For example, I hate when my iPad constantly prompts me to join a Wi-Fi network. This is not user-friendly.
Finally, let’s talk about the new WhiteFi announcement from the FCC. Building a city wide Wi-Fi network does take a significant amount of hotspots, but they possess far more bandwidth than 4G. In her article, CNET’s Maggie Reardon discusses the limitations of implementing femtocells to offload wireless traffic onto wired broadband connection. She quotes a Nielsen SVP, “You can only split the cell sites so far. There are physical limitations.” The introduction of new white space to carrier class Wi-Fi networks could possibly result in a seamless high-capacity network, a great alternative to which carriers could offload. WhiteFi would enable users to travel long distances on little power. We are excited to see which hardware vendors develop gear for this initiative. We will be first in line!”
http://wimax360.com/profiles/blogs/the-fcc-helps-align-the-stars?xg_source=activity
On the heels of Towerstream's mega WiFi hot spot in NYC, we've just learned that the city of Santa Clara (my hometown for the last 41 years) will be offering free WiFi as part of its IEEE 802.11n buildout later this year. That mesh WiFi network uses Tropos equipment. That will be another story for another time.