Google’s largest internal network interconnects its data centers using Software Defined Networking (SDN) in the WAN

Google’s use of SDN in its internal WAN backbone:

Urs Hölzle, Sr. VP of Technical Infrastructure at Google, presented the opening keynote speech at the 2012 Open Networking Summit, April 17 in Santa Clara, CA.  The audience was surprised to learn that Google had built its own switches and SDN controllers for use in its internal backbone network – the one used to interconnect its data centers.

Here are the key points made in Mr. Hölzle’s presentation:

Google currently operates two WAN backbones, according to Hölzle:

1] I-Scale is the public Internet-facing backbone that carries user traffic to and from Google’s data centers. It must have bulletproof performance.

2] G-Scale is the internal backbone that carries traffic between Google’s data centers worldwide. The G-Scale network has been used to experiment with SDN.

  • Google chose to pursue SDN in order to separate hardware from software. This enables the company to choose hardware based on necessary features and to choose software based on protocol requirements.
  • SDN provides logically centralized network control. The goal is to be more deterministic, more efficient and more fault-tolerant.
  • SDN enables better centralized traffic engineering, such as the ability for the network to converge quickly to a target optimum after a link failure.  Deterministic behavior should simplify planning versus over-provisioning for worst-case variability.
  • The SDN controller uses modern server hardware, giving it more flexibility than conventional routers.
  • Switches are virtualized and run real OpenFlow, and the company can attach real monitoring and alerting servers. Testing is vastly simplified.
  • The move to SDN is really about picking the right tool for the right job.
  • Google’s OpenFlow WAN activity really started moving in 2010. Less than two years later, Google is now running the G-Scale network on OpenFlow-controlled switches. 100% of its production data center to data center traffic is now on this new SDN-powered network.
  • Google built its own OpenFlow switch because none were commercially available. The switch was built from merchant silicon and has scaled to hundreds of nonblocking 10GE ports.
  • Google’s practice is to simplify every software stack and hardware element as much as possible, removing anything that is not absolutely necessary.
  • Multiple switch chassis are used in each domain.
  • Google is using open source routing stacks for BGP and IS-IS.
  • The OpenFlow-controlled switches (designed and built by Google) look like regular routers. BGP/IS-IS/OSPF now interface with the OpenFlow controller to program the switch state.
  • A preliminary version of the OpenFlow protocol is being used now.  (The OpenFlow standard is still maturing.)
  • All data center backbone traffic is now carried by this new SDN-based network. The old network has been shut down.
  • Google started rolling out centralized traffic engineering in January.
  • Google is already seeing higher network utilization and gaining the benefit of flexible management of end-to-end paths for maintenance.
  • Over the past six months, the new network has seen a high degree of stability with minimal outages.
  • The new SDN-powered network is meeting the company’s SLA objectives.
  • It is still too early to quantify the economics.
  • A key SDN benefit is the unified view of the network fabric — higher QoS awareness and predictability.
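The centralized control and fast re-convergence described above can be sketched with a toy path computation. This is a hypothetical illustration only: the site names, link costs and three-node topology are invented, and a real SDN controller would push the resulting paths into switch flow tables via OpenFlow rather than return them as Python lists.

```python
import heapq

def shortest_path(links, src, dst):
    """Dijkstra over a dict {(a, b): cost} of bidirectional links."""
    adj = {}
    for (a, b), cost in links.items():
        adj.setdefault(a, []).append((b, cost))
        adj.setdefault(b, []).append((a, cost))
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in adj.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk predecessors back from dst to src to recover the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Hypothetical three-site topology (names and costs are invented).
links = {("dc1", "dc2"): 1, ("dc2", "dc3"): 1, ("dc1", "dc3"): 3}

# Normal operation: the controller picks the cheaper two-hop path.
print(shortest_path(links, "dc1", "dc3"))   # ['dc1', 'dc2', 'dc3']

# Link dc2-dc3 fails: with a global view, the controller re-converges
# to the direct path and reprograms only the affected switches.
del links[("dc2", "dc3")]
print(shortest_path(links, "dc1", "dc3"))   # ['dc1', 'dc3']
```

Because the controller sees the whole topology at once, the new target optimum is computed in one place, rather than emerging hop-by-hop from distributed protocol convergence.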

Mr. Hölzle said that Google’s software-defined networking system has been running for about six months and that it was therefore too early to accurately benchmark cost savings. “This will have a bigger impact in costs than any technical change like a larger router, or 10 gigabit optical switches instead of 2.5 gigabit.  I would expect the cost reduction to come from better system utilization, and substantially easier management,” he said.

“In utilization alone, we are hoping for a 20 percent to 30 percent reduction,” he continued.  Google’s very specific network applications, like search, made it hard to say what others could expect to save. Hölzle thought that the savings would be enough to compel large Internet service providers to change their systems to SDN over the next five years.

Surprisingly, perhaps, Mr. Hölzle thought that the incumbent networking providers would lead the transition. Start-up networking companies like Nicira have created a stir with their SDN approaches, but Mr. Hölzle thought that the big service providers would have a level of trust with the incumbent network equipment companies. (We don’t necessarily agree: there are no incumbent networking companies that are leaders in SDN.)

“The natural players are the ones already in the field – Cisco, Alcatel, Juniper,” he said, noting that NEC was an early leader in SDN. “They have the networking management software, just at the level of hardware ports, not data flows.” Google talks with all of these companies about their SDN plans, Mr. Hölzle said. Within a year or two, he thought, Google would be purchasing SDN-related products from one or more of these companies.

SDN use in global WANs:

There were also presentations from NEC, NTT, Verizon and other WAN players endorsing SDN and OpenFlow at this conference.  NEC said its OpenFlow controller, together with IBM switches, would be deployed in the WAN as early as this July.   NTT stated that a global cloud virtualization service that leverages SDN will also be launched this summer.

The complete program, with selected presentations, is at:


GigaOM wrote that the conference was like “a giant science fair for the networking industry. There are arcane demonstrations detailing how software-defined networks and the OpenFlow protocol will change the way networks are built, managed and operated. There are speakers from Google, Verizon and Yahoo detailing their projects and successes with OpenFlow as well as investors and bankers swarming the whole event.”

“The creation of the OpenFlow protocol, which separates the act of directing how packets move across a network from the physical act of moving those packets, has helped create excitement around networking, and is precipitating change. The change is actually the creation of software-defined networks that are programmable (for the record, a software defined network doesn’t need OpenFlow). There’s also a third change that’s been going on regarding the commoditization of networking hardware and the rise of merchant silicon.”

Opinion Piece:

Forbes magazine pointed out that SDN networks are “more secure, more dependable and much easier to manage” because the software that controls network traffic is separate from the physical routers and switches.

“By separating the software that controls network traffic from the physical routers and switches, SDN should make networks more secure, more dependable and much easier to manage. Because SDN runs on commodity hardware, it could translate into significant savings for network operators. Perhaps most important, it opens up the network to the possibility of vast innovation.

For Google, software defined networking represented a better way of moving traffic between its global data centers. According to Hölzle, things that were hard to do on processors embedded in a networking box become much easier when they are designed separately and merely communicate with the hardware using OpenFlow. “You can use all the [computer] tools for software development and that makes it faster to develop software that is higher in quality,” he said.

One of the big advantages for Google is better traffic management—this new approach basically ensures that every lane on its global network of data highways is smoothly moving packets toward their destinations. “Soon we will be able to get very close to 100 percent utilization of our network,” Holzle said. This is a big increase from the industry expectation of thirty to forty percent utilization.”
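The utilization figures quoted above imply a simple capacity calculation: the capacity a network must provision to carry a fixed peak load scales inversely with the utilization it can safely run at. A quick sketch, using the percentages quoted in the article (the peak-traffic figure is invented for illustration, not Google's actual load):

```python
# Capacity needed to carry a fixed peak load at a given utilization:
#   provisioned_capacity = peak_traffic / utilization
peak_tbps = 10.0  # hypothetical peak inter-datacenter load

conventional = peak_tbps / 0.35   # ~30-40% utilization, the industry norm
sdn = peak_tbps / 0.95            # close to 100% with centralized TE

print(round(conventional, 1))           # 28.6 Tb/s provisioned
print(round(sdn, 1))                    # 10.5 Tb/s provisioned
print(f"{1 - sdn / conventional:.0%}")  # 63% less capacity for the same load
```

On these assumed numbers, running links near full instead of at roughly a third full cuts the capacity (and the long-haul fiber and router ports behind it) needed for the same traffic by almost two-thirds, which is where Hölzle expects the cost reduction to come from.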

The market segments where SDN might be advantageously used include:

  • Cloud Services Providers / large website data centers
  • Universities and research campus networks
  • Metro Area CSP data center interconnect
  • Enterprise data centers
  • Internet service provider core routed networks
  • Campus LAN
  • Enterprise WAN

Author’s NOTE: Also see this blog post:

Check out the IEEE ComSocSCV July 11 meeting: 6:00pm-8:45pm @ Texas Instruments Building E, 2900 Semiconductor Dr., Santa Clara, CA 95051. Software Defined Networking (SDN) Explained – New Epoch or Passing Fad?

Session Abstract: After several years of research, Software Defined Networking (SDN) has finally become a reality. At this year’s Open Networking Summit, Google announced it had already deployed its own SDN design in the backbone network that interconnects all its data centers. NTT and Verizon hinted that they would deploy SDN soon, while network equipment vendors indicated they were committed to the concept. IT executives and managers are also taking notice. One pundit predicted a ‘new epoch’ in networks based on SDN – for data centers, campus networks and WANs. But what exactly is SDN, and what is the associated OpenFlow protocol that the Open Networking Foundation (ONF) is standardizing?


6:00 – 6:30pm Networking and Refreshments
6:30 – 6:40pm Opening remarks
6:40 – 7:25pm SDN Overview & Research Projects by Guru Parulkar
7:25 – 8:10pm ONF: Taking OpenFlow and SDN from lab to market by Dan Pitt
8:10 – 8:40pm Panel Session, Discussion & Q&A


More info at: