TMR: Data Center Networking Market sees shift to user-centric & data-oriented business + CoreSite DC Tour
TMR Press Release edited by Alan J Weissberger, followed by a CoreSite Data Center Talk & Tour for IEEE ComSocSCV and Power Electronics members
TMR Executive Summary and Forecast:
The global data center networking market is expected to become highly competitive as demand for networking components rises.
The major players operating in the global data center networking market include Hewlett Packard Enterprise, Cisco Systems, Inc., Arista Networks, Microsoft Corporation, and Juniper Networks. These key players are pursuing business strategies such as mergers and acquisitions to strengthen their existing technologies, and are investing heavily in research and development to sustain their market lead. They also aim to broaden their product portfolios in order to expand their global reach and gain an edge over competitors.
The global data center networking market is likely to gain momentum as firms rapidly shift to more user-centric and data-oriented business models. According to a recent report by Transparency Market Research (TMR), the market is expected to expand at a steady CAGR of 15.5% over the forecast period from 2017 to 2025, from a valuation of about US$63.05 bn in 2016 to about US$228.40 bn by 2025.
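As a quick sanity check, the growth figures quoted above are internally consistent; the short calculation below recomputes the implied CAGR from the 2016 and 2025 valuations (nine compounding years):

```python
# Recompute the implied CAGR from TMR's 2016 and 2025 market valuations.
start_value = 63.05    # US$ bn, 2016
end_value = 228.40     # US$ bn, projected 2025
years = 2025 - 2016    # nine compounding periods

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~15.4%, consistent with the quoted 15.5%
```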
On the basis of component, the global data center networking market is segmented into hardware, software, and services. The hardware segment led the market in 2016 with about 52.0% of revenue. However, the software segment is projected to post a higher CAGR than the other segments and to emerge as a key contributor to market growth. Geographically, North America led the global market in 2016, while Asia Pacific is expected to register the highest CAGR of 17.3% over the 2017–2025 forecast period.
Rising Demand for Networking Solutions to Propel Growth in Market
Increased demand for networking solutions is driving firms to transform data centers into collective, automated resource pools that offer the flexibility to shift workloads across clouds and improve operational efficiency.
The rising number of internet users worldwide requires high-speed interfaces, and companies depend heavily on data centers to reduce operational costs and improve productivity.
However, virtualization and rising demand for end-user devices are the major restraints likely to hamper growth in the data center networking market in the coming years. Growing use of mobile devices and cloud services is also slowing the market's steady strides.
Popularity of Big Data to Add to Market Development in Future:
The rising popularity of big data and cloud services among both enterprises and consumers is anticipated to fuel growth in the global data center networking market. Advantages such as low operational costs, flexibility, better security, and improved performance are likely to propel market growth.
Disaster recovery and business continuity offerings have simplified data center networking, saving companies both money and time. These financial advantages, along with the technology itself, are likely to boost demand for data center networking and cloud computing.
Companies expect data center solution providers to perform efficiently and effectively, delivering better productivity and higher profit at lower cost. Meeting these goals requires high-end networking technologies and upgraded server performance, as well as proper integration between a simplified networking framework and the servers to reach optimal performance.
The study presented here is based on a report by Transparency Market Research (TMR) titled “Data Center Networking Market (Component Type – Hardware, Software, and Services; Industry Vertical – Telecommunications, Government, Retail, Media and Entertainment, BFSI, Healthcare, and Education) – Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2017 – 2025.”
Get PDF Brochure at:
https://www.transparencymarketresearch.com/sample/sample.php?flag=B&rep_id=21257
Request PDF Sample of Data Center Networking Market:
https://www.transparencymarketresearch.com/sample/sample.php?flag=S&rep_id=21257
About TMR:
Transparency Market Research is a next-generation market intelligence provider, offering fact-based solutions to business leaders, consultants, and strategy professionals.
Our reports are single-point solutions for businesses to grow, evolve, and mature. Our real-time data collection methods, along with the ability to track more than one million high-growth niche products, are aligned with your aims. The detailed, proprietary statistical models used by our analysts offer insights for making the right decisions in the shortest span of time. For organizations that require specific but comprehensive information, we offer customized solutions through ad hoc reports. These requests are delivered with the right combination of fact-oriented problem-solving methodologies and existing data repositories.
TMR believes that pairing solutions to client-specific problems with the right research methodology is the key to helping enterprises reach the right decisions.
Contact
Mr. Rohit Bhisey
Transparency Market Research
State Tower
90 State Street,
Suite 700,
Albany, NY – 12207
United States
Tel: +1-518-618-1030
USA – Canada Toll Free: 866-552-3453
Email: [email protected]
Website: https://www.transparencymarketresearch.com
Research Blog: http://www.europlat.org/
Press Release:
………………………………………………………………………………………………………..
CoreSite Data Center Tour:
On May 23, 2019, IEEE ComSocSCV and IEEE Power Electronics members were treated to a superb talk and tour of the CoreSite Multi-Tenant Data Center (MTDC) in Santa Clara, CA.
CoreSite is a Multi-Tenant Data Center owner that competes with Equinix. CoreSite offers the following types of Network Access for their MTDC colocation customers:
•Direct Access to Tier-1 and Eyeball Networks
•Access to Broad Range of Network Services (Transit/Transport/Dark Fiber)
•Direct Access to Public Clouds (Amazon, Microsoft, Google, etc)
•Direct Access to Optical Ethernet Fabrics
………………………………………………………….
CoreSite also provides POWER distribution and backup on power failures:
•Standby Generators
•Large Scale UPS
•Resilient Design
•Power Quality
•A/B Power Delivery
•99.999% Uptime
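A five-nines availability target like the one listed above translates into a concrete annual downtime budget; the short calculation below makes that concrete:

```python
# Convert a 99.999% ("five nines") availability target into the
# maximum allowed downtime per year.
availability = 0.99999
minutes_per_year = 365.25 * 24 * 60  # average year incl. leap days

downtime_minutes = minutes_per_year * (1 - availability)
print(f"Allowed downtime: {downtime_minutes:.2f} minutes/year")  # ~5.26
```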
….and PHYSICAL SECURITY:
•24/7 OnSite Security Personnel
•Dual-Authentication Access
•IP DVR for All Facility Areas
•Perimeter Security
•Equipment Check-In/Out Process
•Access-Control Policies (Badge Deactivation, etc)
……………………………………………………………………………………………………..
Twenty-eight network operators and cloud service providers have brought fiber into the CoreSite Santa Clara MTDC campus. This enables customers to share fiber network/cloud access at much higher speed and lower cost than would otherwise be possible via premises-based network/cloud access.
While not all of the network and cloud service providers’ names could be disclosed, the network providers include Verizon, AT&T, CenturyLink, and Zayo. In addition, AWS Direct Connect, Microsoft Azure ExpressRoute, Alibaba Cloud, Google Cloud interconnection, and other unnamed cloud providers were said to provide direct fiber-to-cloud connectivity for CoreSite’s Santa Clara MTDC customers.
Here’s how network connectivity is achieved within and outside the CoreSite MTDC:
The SMF or MMF from each customer’s colocation cage is physically routed (under the floor) to a fiber cross-connect/patch panel maintained by CoreSite. The output fibers are then routed to a private room where the network/cloud providers maintain their own fiber optic gear (fiber optic multiplexers/switches, DWDM transponders, and other fiber transmission equipment), which connects to the outside plant fiber optic cable(s) of each network/cloud services provider.
Outside plant fiber fault detection and restoration are handled by each network/cloud provider, either via a mesh-topology fiber optic network or via 1:1 or N:1 hot standby. CoreSite’s responsibility ends when it delivers the fiber to the provider cages. It does, however, have network engineers responsible for maintenance and troubleshooting in the DC when necessary.
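The N:1 hot-standby protection mentioned above can be sketched in a few lines: N working fibers share a single standby, and on a fault the first affected fiber claims it. This is a minimal illustration only; fiber names are made up, and real restoration runs in the optical/control plane:

```python
# Minimal sketch of N:1 protection switching: N working fibers share
# one hot-standby fiber. All names are illustrative.
from typing import Dict, List, Optional, Tuple

def restore(working_ok: Dict[str, bool],
            standby_user: Optional[str]) -> Tuple[Optional[str], List[str]]:
    """Return (fiber now carried on the standby, faults left unrestored)."""
    faulted = [f for f, ok in working_ok.items() if not ok]
    if standby_user is None and faulted:
        # The first detected fault claims the shared standby.
        return faulted[0], faulted[1:]
    # Standby already busy (or no faults): remaining faults stay down.
    return standby_user, [f for f in faulted if f != standby_user]

user, down = restore({"fiber_a": False, "fiber_b": True, "fiber_c": False}, None)
print(user, down)  # fiber_a ['fiber_c']
```

With 1:1 protection each working fiber gets its own standby, trading extra fiber cost for the guarantee that simultaneous faults are all restored.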
Instead of using private lines or private IP connections, CoreSite offers an Interconnect Gateway-SM that provides its enterprise customers a dedicated, high-performance interconnection solution between their cloud and network service providers, while establishing a flexible IT architecture that allows them to adapt to market demands and rapidly evolving technologies.
CoreSite’s gateway directly integrates enterprises’ WAN architecture into CoreSite’s native cloud and carrier ecosystem using high-speed fiber and virtual interconnections. This solution includes:
-Private network connectivity to the CoreSite data center
-Dedicated cabinets and network hardware for routing, switching, and security
-Direct fiber and virtual interconnections to cloud and network providers
-Technical integration, 24/7/365 monitoring and management from a certified CoreSite Solution Partner
-Industry-leading SLA
Merchant silicon is burrowing its way deeper into data center networks. It will be in 63 percent of all Ethernet switches that ship in 2022, a jump from 56 percent last year, by IHS Markit analysts’ estimate.
“It’s going along with the trends we’ve seen in the market in terms of adoption by many enterprises, as well as telco and other cloud service providers, whether it be hyperscalers or tier-twos,” says Devan Adams, IHS principal analyst for cloud and data center switching.
“When you have a vendor like Cisco start to accept merchant silicon being used within even some of their up-and-coming switches – like the recent announcement they made releasing their 400G switches, which are very hyped – that’s big.”
Adams says that “everyone’s looking to try to introduce 400G.” After Juniper and Arista made their 400G switch announcements, many expected Cisco to follow with models based on its own proprietary silicon, “which was the case, but only half the case. They made the announcement that they’re also going to offer the merchant silicon version.”
It’s a big win for customers, he says, pointing to Cisco’s announcement of Innovium as a third-party silicon vendor. “That’s interesting because they’re very new; they’re really a small silicon vendor.”
Homogenizing the Data Center
While Broadcom is the undisputed leader in merchant silicon, there is a trend of increased adoption by white-box and traditional switch vendors that used to deploy primarily their own custom silicon, according to IHS’s Data Center Network Equipment Market Tracker report. The report predicts proprietary or custom silicon will fall to 25 percent of all units shipped in 2023, with programmable silicon accounting for 12 percent. This compares to 38 percent proprietary and 6 percent programmable in 2018.
“Proprietary or custom silicon vendors are mainly traditional switch vendors, but many are expanding their portfolios to include non-proprietary third-party commodity silicon,” Adams says.
Juniper is one example, with its QFX10003 switch, which will use its proprietary ZX ASICs, and its QFX5220 switch, based on Broadcom merchant silicon. Both will offer 400GE speeds.
“Virtually everybody’s moving towards merchant silicon and the scale-out model,” says Mike Bushong, VP of enterprise and cloud marketing for Juniper. “Adoption in the data center is going to be aggressively on the merchant-silicon side. There’s going to be a place in the data center for custom silicon, although I’d say that in terms of buying behavior, I fully expect merchant silicon to dominate.”
The first benefit end users will see is economic, Bushong says, and it’s the main reason they care about the move to merchant silicon. “But it’s not the only benefit they’ll see.” There’s a very real agility angle to merchant silicon. “In data centers, if I can reduce diversity, if I can make everything look the same, then I can become more efficient. I can become faster,” he says.
Part of what merchant silicon does is provide operational uniformity even with different vendors, Bushong says, meaning whether you’re buying from Juniper or from someone else, you leverage some of the same underlying characteristics.
The question people doing procurement should ask is whether they are making decisions that make their operations more uniform, he says.
“Who’s in the room when you evaluate?” Bushong says. “If end users view merchant silicon as only about what the box looks like and how they’re buying it – if they fail to make the connection to how they are going to manage it and what is the operational side – then they’re not getting the full benefit of merchant silicon. They’re making an uninformed decision and only looking at part of the problem space.”
What’s stalling things now is consideration of architectural changes, he says.
“Are the protocols different?” Bushong says. “The hardware’s largely made that transition out. What some of the other companies are trying to do now is say that if that transition is there, then is there a monopoly that can be broken up? This is what Barefoot [Networks] is trying to do. If you can break those pieces apart, they think they can provide greater economic leverage. That’s kind of the next frontier.”
AT&T’s Programmable Network
AT&T in 2016 was the first telecommunications provider to announce it was using programmable switches from Barefoot in its network. It installed Tofino-based white boxes running SnapRoute’s FlexSwitch network operating system in parts of its existing MPLS-based networks. AT&T then exploited data-plane programmability via a language called P4, which Barefoot says has since been adopted by many large data center operators. The telco not only applied merchant silicon to its core MPLS network but also used it for in-band network telemetry (INT) on all traffic traversing a link between San Francisco and Washington, D.C.
It was significant that the INT concept was used on such a fundamental part of AT&T’s network, says Ed Doe, Barefoot’s chief business officer, “and one where they wanted to be able to measure, with very fine control, exactly what the latency, the utilization, and the SLAs – service level agreements – were that they were able to maintain.”
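The INT idea is that each switch along a path appends per-hop metadata (e.g. ingress/egress timestamps and queue depth) to packets in the data plane, and a sink extracts it to compute per-hop and end-to-end latency. The toy sketch below illustrates only the arithmetic; the field names are illustrative, not the actual INT specification’s header layout:

```python
# Toy illustration of the in-band network telemetry (INT) concept:
# each hop appends metadata, and the sink derives per-hop latency.
# Field names and switch IDs are illustrative, not the INT spec layout.

hop_records = [  # appended by each switch along the path, in order
    {"switch_id": "sfo-core-1", "ingress_ns": 1_000, "egress_ns": 4_500},
    {"switch_id": "chi-core-3", "ingress_ns": 20_000_000, "egress_ns": 20_012_000},
    {"switch_id": "iad-core-2", "ingress_ns": 38_000_000, "egress_ns": 38_003_000},
]

# Queueing/processing delay inside each switch, in microseconds.
per_hop_us = {r["switch_id"]: (r["egress_ns"] - r["ingress_ns"]) / 1_000
              for r in hop_records}
# End-to-end: ingress at the first hop to egress at the last hop.
path_latency_us = (hop_records[-1]["egress_ns"] - hop_records[0]["ingress_ns"]) / 1_000

print(per_hop_us)
print(path_latency_us)
```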
Tencent’s Mixing and Matching
Doe says the concept of telemetry has gone into many hyperscale data center customers. As recently as late last year, other hyperscale customers started to go on the record about using technology from Barefoot and applying the telemetry concept to it.
One of those hyperscalers was China’s Tencent. The gaming and social media company is using Cisco switches that contain Barefoot silicon to advance telemetry in the core of its standard cloud offering. Barefoot’s Doe says customers like Tencent are paving the way for more mainstream enterprises, “especially if they can take advantage of a typical switch from a company like Cisco that harnesses the Barefoot technology in P4 and telemetry.”
Tencent used not only Cisco’s NX-OS (the standard operating system on these switches) but also incorporated SONiC (Software for Open Networking in the Cloud), an open-source network OS developed by Microsoft.
“That was unique in that Cisco worked with Tencent to bring that to bear onto these same switches powered by the Barefoot Tofino tech and using the P4 programming language,” Doe says.
Incumbents Come Onboard
Customers are benefiting because new entrants into the merchant space, like Barefoot, are giving people not just an alternative but also the benefit of new technology. “This new tech is being brought to them or packaged and made easily accessible by companies like Arista and Cisco, which are customers of ours, and providing these solutions to their customers, which are usually different-sized data center companies,” Doe says.
Until vendors like Arista and Cisco came on board with merchant silicon, end users had to be more technologically advanced to take advantage of white boxes, often writing their own software to recreate what the hyperscalers were doing. “But now, with those two – and in the near future many more – you’re going to have many options … no matter what the size is,” he says.
Why Stordis Chose Barefoot Over Broadcom
One company working closely with Barefoot is Stordis, which announced its own switches at the Open Compute Project (OCP) global summit in April. Having started as a niche distribution company in the storage space, Stordis is repositioning itself as a networking-focused business. The German company started doing open networking several years ago.
“We got interested in having a different switch ASIC than the usual ones, and we always liked the concept of programmability that Barefoot offered,” Stordis CEO Alexander Jeffries says. “We were selling all these bare-metal white boxes, just with a Broadcom base, and then we also started selling quite a few of the Barefoot switches.”
Those were essentially the same switches as the Broadcom-based ones, but with Tofino chips.
“Since we were doing quite a lot of work in the OpenFlow space, we actually knew that customers had specific requirements to be able to use the switches properly,” Jeffries says, adding that existing switches didn’t have enough compute power, memory, RAM, or storage space. “We also saw there was not a single switch with time synchronization and time stamping, so we added all these features and got these new switches built. There was basically nothing else available with this kind of feature set [and] making use of the Barefoot Tofino.”
The Stordis switch is designed like a bare metal switch, Jeffries says, “so you have commercial-grade software like Kaloom, but we will also offer open-source software like ONL [Open Network Linux].”
Stordis is signed up to the ONF Stratum project, as well as OCP on the hardware side. “We’re also looking at supporting things like Redfish and OpenBMC for the management of the switches to give the statistics of the switch, like fan speeds and temperature,” the CEO says. “We’re trying to support as much open source as possible with these units.”
While most other switches have dual- or quad-core CPUs, Stordis puts an 8-core CPU into the switch for extra processing power. The switch has 32GB of memory and a 128GB SSD.
From a hardware perspective, much of the box’s functionality would have been possible with a Broadcom switch, Jeffries says. The reason Stordis went with Barefoot is that it’s programmable, enabling the chipset to be programmed for a specific feature or function. A Cisco switch comes with an abundance of software and functionality, which is frequently overkill. The beauty of Tofino is that you can program the switch to perform only the functionality you need.
“The kind of customers we’re talking to for these kinds of switches are quite a bit different to what we were talking to in the past,” Jeffries says. “We’re getting a lot of interest from service providers – so telecommunication companies – the security environments, lots of interest in academic research, [interest] from all kinds of use cases.”
In-band telemetry is a huge plus, Jeffries says, “so when you have traffic queues and congestion in your network, this kind of monitoring you can do very well with P4.” He says Stordis engineers have many ideas for new switches and new switch models, “so it will be interesting to see how quickly we grow and where we are a year or two from now.”
Stordis customers are talking about what they need to put in place if they’re no longer going with Juniper or Cisco, but they also don’t want to use open networking gear, Jeffries says.
“What does it actually mean in terms of resources, training people, perhaps having to develop your own code?” he says. “How market-ready is the open-source option? Many of the open-source things look very, very interesting, but it probably will take another year, or two, or three, to get really market-ready and stable. It’s a real challenge to get these open-source things going because at the moment it’s more like proof-of-concept or basic foundation for many of these open-source initiatives, and one needs to take what’s there and actually get it into production-grade software that you actually can deploy in a mission-critical environment. This is the real challenge, because not everyone’s got the resources and funds like the Facebooks and Amazons and Microsoft Azures. [The hyperscalers] can go out and employ whatever they need to at whatever cost, but that’s not as easy with other kinds of organizations, and that’s going to be the challenge for the next two to three years.”
But if it does go there, Jeffries says, “you could see the networking industry – or these open-source options – becoming like a Red Hat to networking, where you have your open-source switch and some kind of enterprise license, which offers support as well. You could see something similar to that happen in networking and I think that’s not too unrealistic.”
https://www.datacenterknowledge.com/networks/why-merchant-silicon-taking-over-data-center-network-market
Alan, thanks for arranging the CoreSite MTDC tour. It’s always good to get a view of where the bits actually flow. So much thought has to go into the physical, practical things, like supplying power … it was interesting to hear that power substations built specifically for datacenter usage have become a trend (they were saying Silicon Valley Power (SVP) was able to do something like that because of that corridor of data centers running along Central Expressway in Santa Clara, CA.) Great stuff – many thanks!
Slides posted: IEEE/CoreSite Presentation and Tour: Multi-Tenant Data Centers
http://comsocscv.org/docs/MTDC%20CoreSite%20Tour%2023May2019%20IEEE-Comsoc.pdf