Synergy Research Group: Hyperscale Data Center Count > 500 as of 3Q-2019

New data from Synergy Research Group shows that the total number of large data centers operated by hyperscale providers increased to 504 at the end of the third quarter, having tripled since the beginning of 2013. The EMEA and Asia-Pacific regions continue to have the highest growth rates, though the US still accounts for almost 40% of the major cloud and internet data center sites.

The next most popular locations are China, Japan, the UK, Germany and Australia, which collectively account for another 32% of the total. Over the last four quarters, new data centers were opened in 15 different countries, with the US, Hong Kong, Switzerland and China seeing the largest number of additions. Among the hyperscale operators, Amazon and Microsoft opened the most new data centers in the last twelve months, accounting for over half of the total, with Google and Alibaba the next most active companies. Synergy’s research indicates that over 70% of all hyperscale data centers are located in facilities that are leased from data center operators or owned by partners of the hyperscale operators.

Chart: Hyperscale data center count, Q3 2019 (Source: Synergy Research Group)

……………………………………………………………………………………………………………………………………………………………………………………………………

Backgrounder:

One vendor in the data center equipment space recently called hyperscale “too big for most minds to envision.” Scalability has always been about creating opportunities to do small things using resources that happen to encompass a very large scale.

IDC, which provides research and advisory services to the tech industry, classifies any data center with at least 5,000 servers and 10,000 square feet of available space as hyperscale, but Synergy Research Group focuses less on physical characteristics and more on “scale-of-business criteria” that assess a company’s cloud, e-commerce, and social media operations.
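The contrast between the two definitions is easy to make concrete. IDC's physical thresholds can be written down directly; the sketch below is a toy illustration only (Synergy's "scale-of-business" criteria are qualitative and are not modeled):

```python
# Toy illustration of IDC's physical hyperscale thresholds
# (at least 5,000 servers and 10,000 sq ft of available space).
# Synergy's scale-of-business criteria are qualitative and not modeled here.

def is_hyperscale_idc(servers: int, square_feet: int) -> bool:
    """True if a facility meets IDC's physical hyperscale thresholds."""
    return servers >= 5000 and square_feet >= 10000

print(is_hyperscale_idc(6200, 12000))  # True: clears both thresholds
print(is_hyperscale_idc(800, 15000))   # False: roomy, but too few servers
```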

A hyperscale data center is distinguished from a multi-tenant data center: the former is owned and operated by a mega cloud provider (Amazon, Microsoft, Google, Alibaba, etc.), while the latter is owned and operated by a real estate company that leases cages to tenants who supply their own IT equipment.

A hyperscale data center accomplishes the following functions:

  • Maximizes cooling efficiency. The largest operational expense in most data centers worldwide — more so than powering the servers — is powering the climate control systems. A hyperscale structure may be partitioned to compartmentalize high-intensity computing workloads, and concentrate cooling power on the servers hosting those workloads. For general-purpose workloads, a hyperscale architecture optimizes airflow throughout the structure, ensuring that hot air flows in one direction (even if it’s a serpentine one) and often reclaiming the heat from that exhaust flow for recycling purposes.
  • Allocates electrical power in discrete packages. In facilities designed to be occupied by multiple tenants, “blocks” are allocated like lots in a housing development. Here, the racks that occupy those blocks are allocated a set number of kilowatts — or, more recently, fractions of megawatts — from the main power supply. When a tenant leases space from a colocation provider, that space is often phrased not in terms of numbers of racks or square footage, but kilowatts. A design that’s more influenced by hyperscale helps ensure that kilowatts are available when a customer needs them.
  • Ensures electricity availability. Many enterprise data centers are equipped with redundant power sources (engineers call this configuration 2N), often backed up by a secondary source or generator (2N + 1). A hyperscale facility may utilize one of these configurations as well, although in recent years, workload management systems have made it feasible to replicate workloads across servers, making the workloads redundant rather than the power, reducing electrical costs. As a result, newer data centers don’t require all that power redundancy. They can get away with just N + 1, saving not just equipment costs but building costs as well.
  • Balances workloads across servers. Because heat tends to spread, one overheated server can easily become a nuisance for the other servers and network gear in its vicinity. When workloads and processor utilization are properly monitored, the virtual machines and/or containers housing high-intensity workloads may be relocated to, or distributed among, processors that are better suited to their functions, or that are simply not being utilized as heavily at the moment. Even distribution of workloads directly correlates to temperature reduction, so how a data center manages its software is just as important as how it maintains its support systems.
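The balancing idea in the last bullet can be sketched as a greedy loop that repeatedly shifts work from the most-utilized server to the least-utilized one. The utilization model and threshold below are illustrative assumptions, not any vendor's actual algorithm:

```python
# A minimal greedy sketch of the workload-balancing idea above: repeatedly
# migrate the smallest workload off the most-utilized server onto the
# least-utilized one until utilization (a rough proxy for heat) evens out.
# The utilization model and max_spread threshold are illustrative assumptions.

def rebalance(servers, max_spread=0.2, max_moves=50):
    """servers: dict mapping server name -> list of per-workload loads."""
    for _ in range(max_moves):
        load = {name: sum(jobs) for name, jobs in servers.items()}
        hot = max(load, key=load.get)
        cold = min(load, key=load.get)
        if load[hot] - load[cold] <= max_spread or not servers[hot]:
            break  # spread is acceptable (or nothing left to move)
        job = min(servers[hot])
        if job >= load[hot] - load[cold]:
            break  # any move would overshoot; stop rather than oscillate
        servers[hot].remove(job)
        servers[cold].append(job)
    return servers

fleet = {"rack1": [0.5, 0.4, 0.3], "rack2": [0.1], "rack3": [0.2, 0.1]}
print({k: round(sum(v), 2) for k, v in rebalance(fleet).items()})
```

Total work is conserved; only its placement changes, which is the point of the bullet: software placement, not extra cooling, evens out the thermal load.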

References:

https://www.zdnet.com/article/how-hyperscale-data-centers-are-reshaping-all-of-it/

https://www.vxchnge.com/blog/rise-of-hyperscale-data-centers

………………………………………………………………………………………………………………………………………………………………………………………………………..

Synergy’s research is based on an analysis of the data center footprint of 20 of the world’s major cloud and internet service firms, including the largest operators in SaaS, IaaS, PaaS, search, social networking, e-commerce and gaming. The companies with the broadest data center footprint are the leading cloud providers – Amazon, Microsoft, Google and IBM. Each has 60 or more data center locations with at least three in each of the four regions – North America, APAC, EMEA and Latin America. Oracle also has a notably broad data center presence. The remaining firms tend to have their data centers focused primarily in either the US (Apple, Facebook, Twitter, eBay, Yahoo) or China (Alibaba, Baidu, Tencent).

“There were more new hyperscale data centers opened in the last four quarters than in the preceding four quarters, with activity being driven in particular by continued strong growth in cloud services and social networking,” said John Dinsdale, Chief Analyst and Research Director at Synergy Research Group.

“This is good news for wholesale data center operators and for vendors supplying the hardware that goes into those data centers. In addition to the 504 current hyperscale data centers we have visibility of a further 151 that are at various stages of planning or building, showing that there is no end in sight to the data center building boom.”

Reference:

https://www.srgresearch.com/articles/hyperscale-data-center-count-passed-500-milestone-q3

…………………………………………………………………………………………………………………………………………………………………………………………………………

About Synergy Research Group:

Synergy provides quarterly market tracking and segmentation data on IT and Cloud related markets, including vendor revenues by segment and by region. Market shares and forecasts are provided via Synergy’s uniquely designed online database tool, which enables easy access to complex data sets. Synergy’s CustomView™ takes this research capability one step further, enabling our clients to receive on-going quantitative market research that matches their internal, executive view of the market segments they compete in.

Synergy Research Group helps marketing and strategic decision makers around the world via its syndicated market research programs and custom consulting projects. For nearly two decades, Synergy has been a trusted source for quantitative research and market intelligence. Synergy is a strategic partner of TeleGeography.

To speak to an analyst or to find out how to receive a copy of a Synergy report, please contact sales@srgresearch.com or 775-852-3330 extension 101.

 

 

Verizon Software-Defined Interconnect: Private IP network connectivity to Equinix global data centers

Verizon today announced the launch of Software-Defined Interconnect (SDI), a solution that works with Equinix Cloud Exchange Fabric™ (ECX Fabric™) to give organizations with a Private IP network direct connectivity to 115 Equinix International Business Exchange™ (IBX®) data centers around the globe within minutes.

Verizon claims its new Private IP service [1] provides a faster, more flexible alternative to traditional interconnectivity, which requires costly buildouts, long lead times, complex provisioning and often truck rolls. APIs are used to automate connections and, often, reduce costs, boasts Verizon. The telco said in a press release:

SDI addresses the longstanding challenges associated with connecting premises networks to colocation data centers. To do this over traditional infrastructure requires costly build-outs, long lead times and complex provisioning. The SDI solution leverages an automated Application Program Interface (API) to quickly and simply integrate pre-provisioned Verizon Private IP bandwidth via ECX Fabric, while eliminating the need for dedicated physical connectivity. The result is to make secure colocation and interconnection faster and easier for customers to implement, often at a significantly lower cost.
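As a rough illustration of what replaces the truck roll, the sketch below assembles a provisioning request of the kind an automated interconnect API might accept. The service name, field names and payload shape are invented for illustration; Verizon and Equinix have not published this API here:

```python
# Hypothetical sketch only: the service name, fields and IBX code below are
# invented for illustration, not Verizon's or Equinix's published API. It
# shows the kind of payload an automated interconnect API might accept in
# place of a physical build-out.
import json

def build_interconnect_request(ibx_code, bandwidth_mbps, vlan_id):
    """Assemble an illustrative (made-up) provisioning payload."""
    return json.dumps({
        "service": "private-ip-interconnect",  # hypothetical service name
        "destination_ibx": ibx_code,           # target Equinix IBX, e.g. "LA4"
        "bandwidth_mbps": bandwidth_mbps,      # drawn from pre-provisioned pool
        "vlan_id": vlan_id,                    # IEEE 802.1Q tag for the circuit
    })

print(build_interconnect_request("LA4", 500, 1101))
```

The contrast with traditional interconnect is that a request like this completes in minutes against pre-provisioned bandwidth, rather than waiting weeks for a dedicated physical cross-connect.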

Note 1.  Private IP is an MPLS-based VPN service that provides a simple network designed to grow with your business and help you consolidate your applications into a single network infrastructure. It gives you dedicated, secure connectivity that helps you adapt to changing demands, so you can deliver a better experience for customers, employees and partners.

Private IP uses Layer 3 networking to connect locations virtually rather than physically. That means you can exchange data among many different sites using Permanent Virtual Connections through a single physical port. Our MPLS-based VPN solution combines the flexibility of IP with the security and reliability of proven network technologies.

……………………………………………………………………………………………………………

“SDI is an addition to our best-in-class software-defined suite of services that can deliver performance ‘at the edge’ and support real-time interactions for our customers,” said Vickie Lonker, vice president of product management and development for Verizon. “Think about how many devices are connected to data centers, the amount of data generated, and then multiply that when 5G becomes ubiquitous. Enabling enterprises to virtually connect to Verizon’s private IP services by coupling our technology with the proven ECX Fabric makes it easy to provision and manage data-intensive network traffic in real time, lifting a key barrier to digital transformation.”

Verizon’s Private IP (MPLS) network is seeing high double-digit traffic growth year over year, and adoption of colocation services continues to grow as more businesses grapple with complex cloud deployments in pursuit of greater efficiency, flexibility and additional functionality in data management.

“Verizon’s new Software Defined Interconnect addresses one of the leading issues for organizations by improving colocation access. This offer facilitates a reduction in network and connectivity costs for accessing colocation data centers, while promoting agility and innovation for enterprises. This represents a competitive advantage for Verizon as it applies SDN technology to improve interconnecting its Private IP MPLS network globally,” said Courtney Munroe, group vice president at IDC.

“With Software-Defined Interconnect, a key barrier to digital transformation has been lifted. By allowing enterprises to virtually connect to Verizon’s private IP services using the proven ECX Fabric, SDI makes secure colocation and interconnection easier – and more financially viable – to implement than ever before,” said Bill Long, vice president, interconnection services at Equinix [2].

Note 2. Equinix Internet Exchange™ enables networks, content providers and large enterprises to exchange internet traffic through the largest global peering solution across 52 markets.

………………………………………………………………………………………………………

Expert Opinion:

SDI is an incremental addition to Verizon’s overall strategy of interconnecting with other service providers to meet customer needs, as well as virtualizing its network, says Brian Washburn, an analyst at Ovum (owned by Informa, as are Light Reading and many other market research firms).

“Everything can be dynamic, everything can be made pay-as-you-go, everything can be controlled as a series of virtual resources to push them around the network as you need it, when you need it,” Washburn says.

For Equinix, the Verizon deal builds its gravitational pull. “It pulls in assets and just connects as many things to other things as possible. It is a virtuous circle. The more things they get into their data centers, the more resources they have there, that pulls in more companies to connect to the resources,” Washburn says. Equinix is standardizing its APIs to make interconnections easy.

SDI is similar to CenturyLink Dynamic Connections, which connects enterprises directly to public cloud services. And telcos are building interconnects with each other; for example, AT&T with Colt. “I expect we’ll see more of this sort of automation taking advantage of Equinix APIs,” Washburn says.

Microsoft also provides a virtual WAN service to connect enterprises to Azure. “It’s a different story, but it falls into the broader category of automation between network operators and cloud services,” Washburn said.

…………………………………………………………………………………………………………..

Verizon manages 500,000+ network, hosting, and security devices and 4,000+ networks in 150+ countries. To find out more about how Verizon’s global IP network, managed network services and Software-Defined Interconnect work, please visit:

https://enterprise.verizon.com/products/network/connectivity/private-ip/

IHS Markit: Microsoft #1 for total cloud services revenue; AWS remains leader for IaaS; Multi-clouds continue to form

Following is information and insight from the IHS Markit Cloud & Colocation Services for IT Infrastructure and Applications Market Tracker.

Highlights:

·       The global off-premises cloud service market is forecast to grow at a five-year compound annual growth rate (CAGR) of 16 percent, reaching $410 billion in 2023.

·       We expect cloud as a service (CaaS) and platform as a service (PaaS) to be tied for the largest 2018 to 2023 CAGR of 22 percent. Infrastructure as a service (IaaS) and software as a service (SaaS) will have the second and third largest CAGRs of 14 percent and 13 percent, respectively.
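The forecast figures above are easy to sanity-check: a 16 percent CAGR compounding over five years to $410 billion in 2023 implies a 2018 base of roughly $410B / 1.16^5, or about $195 billion:

```python
# Sanity-check the forecast: a 16% five-year CAGR ending at $410B in 2023
# implies a 2018 base of about 410 / 1.16**5, roughly $195B.

def cagr(start, end, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

base_2018 = 410 / 1.16 ** 5
print(f"implied 2018 market: ${base_2018:.0f}B")          # about $195B
print(f"round-trip CAGR: {cagr(base_2018, 410, 5):.0%}")  # 16%
```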

IHS Markit analysis:

Microsoft in 2018 became the market share leader for total off-premises cloud service revenue with 13.8 percent share, bumping Amazon to the #2 spot with 13.2 percent; IBM was #3 with 8.8 percent revenue share. Microsoft’s success can be attributed to its comprehensive portfolio and the growth it is experiencing from its more advanced PaaS and CaaS offerings.

Although Amazon relinquished its lead in total off-premises cloud service revenue, it remains the top IaaS provider. In this very segmented market with a small number of large, well-established providers competing for market share:

•        Amazon was #1 in IaaS in 2018 with 45 percent of IaaS revenue.

•        Microsoft was #1 for CaaS with 22 percent of CaaS revenue and #1 in PaaS with 27 percent of PaaS revenue.

•        IBM was #1 for SaaS with 17 percent of SaaS revenue.

…………………………………………………………………………………………………………………………………

“Multi-clouds [1] remain a very popular trend in the market; many enterprises are already using various services from different providers, and this is continuing as more cloud service providers (CSPs) offer services that interoperate with services from their partners and their competitors,” said Devan Adams, principal analyst, IHS Markit. Expectations of increased multi-cloud adoption were evident in our recent Cloud Service Strategies & Leadership North American Enterprise Survey – 2018, in which respondents stated that in 2018 they were using 10 different CSPs for SaaS (growing to 14 by 2020) and 10 for IT infrastructure (growing to 13 by 2020).

Note 1. Multi-cloud (also multicloud or multi cloud) is the use of multiple cloud computing and storage services in a single network architecture. This refers to the distribution of cloud assets, software, applications, and more across several cloud environments.

There have recently been numerous multi-cloud related announcements highlighting its increased availability, including:

·       Microsoft: Entered into a partnership with Adobe and SAP to create the Open Data Initiative, designed to provide customers with a complete view of their data across different platforms. The initiative allows customers to use several applications and platforms from the three companies including Adobe Experience Cloud and Experience Platform, Microsoft Dynamics 365 and Azure, and SAP C/4HANA and S/4HANA.

·       IBM: Launched Multicloud Manager, designed to help companies manage, move, and integrate apps across several cloud environments. Multicloud Manager is run from IBM’s Cloud Private and enables customers to extend workloads from public to private clouds.

·       Cisco: Introduced CloudCenter Suite, a set of software modules created to help businesses design and deploy applications on different cloud provider infrastructures. It is a Kubernetes-based multi-cloud management tool that provides workflow automation, application lifecycle management, cost optimization, governance and policy management across cloud provider data centers.

IHS Markit Cloud & Colocation Intelligence Service:

The bi-annual IHS Markit Cloud & Colocation Services Market Tracker covers worldwide and regional market size, share, five-year forecast analysis, and trends for IaaS, CaaS, PaaS, SaaS, and colocation. This tracker is a component of the IHS Markit Cloud & Colocation Intelligence Service, which also includes the Cloud & Colocation Data Center Building Tracker and Cloud and Colocation Data Center CapEx Market Tracker. Cloud service providers tracked within this service include Amazon, Alibaba, Baidu, IBM, Microsoft, Salesforce, Google, Oracle, SAP, China Telecom, Deutsche Telekom, Tencent, China Unicom and others. Colocation providers tracked include Equinix, Digital Realty, China Telecom, CyrusOne, NTT, Interxion, China Unicom, Coresite, QTS, Switch, 21Vianet, Internap and others.

DriveNets Network Cloud: Fully disaggregated software solution that runs on white boxes

by Ofer Weill, Director of Product Marketing at DriveNets; edited and augmented by Alan J Weissberger

Introduction:

Networking software startup DriveNets announced in February that it had raised $110 million in a first round (Series A) of venture capital funding. Headquartered in Ra’anana, Israel, DriveNets sells a networking software solution, called Network Cloud, that simplifies the deployment of new services for carriers at a time when many telcos are facing declining profit margins. Bessemer Venture Partners and Pitango Growth are the lead VC investors in the round, which also includes money from an undisclosed number of private angel investors.

DriveNets was founded in 2015 by telco experts Ido Susan and Hillel Kobrinsky, who are committed to building the best-performing CSP networks and improving their economics. Network Cloud was designed and built for CSPs (Communications Service Providers), addressing their strict resilience, security and QoS requirements with zero compromise.

“We believe Network Cloud will become the networking model of the future,” said DriveNets co-founder and CEO Ido Susan, in a statement. “We’ve challenged many of the assumptions behind traditional routing infrastructures and created a technology that will allow service providers to address their biggest challenges, like exponential capacity growth, 5G deployments and low-latency AI applications.”

The Solution:

Network Cloud does not use open-source code. It’s an “unbundled” networking software solution that runs over a cluster of low-cost white box routers and white box x86-based compute servers. DriveNets has developed its own Network Operating System (NOS) rather than use open source software or Cumulus’ NOS, as several other open networking software companies have done.

Fully disaggregated, its shared data plane scales out linearly with capacity demand. A single Network Cloud can encompass up to 7,680 100G ports in its largest configuration. Its control plane scales up separately, consolidating any service and routing protocol.

The Network Cloud data plane is created from just two white box building blocks, the NCP for packet forwarding and the NCF for the fabric, shrinking operational expenses by reducing the number of hardware devices, software versions and change procedures associated with building and managing the network. The two white boxes (NCP and NCF) are based on Broadcom’s Jericho2 chipset, which provides high-speed, high-density port interfaces at 100G and 400G bits/sec. A single virtual chassis maxed out on ports might be configured as 30,720 x 10G/25G, 7,680 x 100G, or 1,920 x 400G bits/sec.
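Taking the 10G/25G ports at their 25G maximum, the three quoted breakouts are mutually consistent; each multiplies out to the same 768 Tb/s of virtual-chassis capacity:

```python
# The three quoted port breakouts all describe the same fabric capacity
# (taking the 10G/25G ports at their 25G maximum).
configs = {"25G": (30720, 25), "100G": (7680, 100), "400G": (1920, 400)}
for name, (ports, gbps) in configs.items():
    print(f"{ports:>6} x {gbps}G = {ports * gbps // 1000} Tb/s")  # 768 Tb/s each
```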

Last month, DriveNets’ disaggregated router added 400G-port routing support (via white box routers using the aforementioned Broadcom chipset). The latest Network Cloud hardware and software is now being tested and certified by an undisclosed tier-1 telco customer.

“Just like hyper-scale cloud providers have disaggregated hardware and software for maximum agility, DriveNets is bringing a similar approach to the service provider router market. It is impressive to see it coming to life, taking full advantage of the strength and scale of our Jericho2 device,” said Ram Velaga, Senior Vice President and General Manager of the Switch Products Division at Broadcom.

Network Cloud control-plane runs on a separate compute server and is based on containerized microservices that run different routing services for different network functions (Core, Edge, Aggregation, etc.). Where they are co-located, service-chaining allows sharing of the same infrastructure for all router services. 

Multi-layer resiliency, with auto failure recovery, is a key feature of Network Cloud.  There is inter-router redundancy and geo-redundancy of control to select a new end to end path by routing around points of failure.

Network Cloud’s orchestration capabilities include Zero Touch Provisioning, full life cycle management and automation, as well as superior diagnostics with unmatched transparency.  These are illustrated in the figures below:

Image Courtesy of DriveNets

 

Future New Services:

Network Cloud is also a platform for new revenue generation: third-party services can be added as separate microservices, such as DDoS protection, managed LAN-to-WAN, network analytics, and core or edge network functions.

“Unlike existing offerings, Network Cloud has built a disaggregated router from scratch. We adapted the data-center switching model behind the world’s largest clouds to routing, at a carrier-grade level, to build the world’s largest Service Providers’ networks. We are proud to show how DriveNets can rapidly and reliably deploy technological innovations at that scale,” said Ido Susan CEO and Co-Founder of DriveNets in a press release.

………………………………………………………………………………………………

References:

https://www.reuters.com/article/us-tech-drivenets-fundraising/israeli-software-firm-drivenets-raises-110-million-in-first-funding-round-idUSKCN1Q32S0

https://www.drivenets.com/about-us

https://www.drivenets.com/uploads/Press/201904_dn_400g.pdf

https://www.prnewswire.com/il/news-releases/drivenets-delivers-worlds-first-400g-white-box-based-distributed-router-to-service-provider-testing-300833647.html

 

Will Hyperscale Cloud Companies (e.g. Google) Control the Internet’s Backbone?

Rob Powell reports that Google’s submarine cable empire now hooks up another corner of the world. The company’s 10,000km Curie submarine cable has officially come ashore in Valparaiso, Chile.

The Curie cable system now connects Chile with southern California. It’s a four-fiber-pair system that will add big bandwidth along the western coast of the Americas to Google’s inventory. Also part of the plans is a branching unit at about the halfway point, with potential connectivity to Panama, where Google can potentially hook up to systems in the Caribbean.

SubCom’s cable ship CS Durable brought the cable ashore on the beach of Las Torpederas, about 100 km from Santiago. In Los Angeles the cable terminates at Equinix’s LA4 facility, while in Chile the company is using its own recently built data center in Quilicura, just outside of Santiago.

Google has a variety of other projects going on around the world as well, as the company continues to invest in its infrastructure.  Google’s projects tend to happen quickly, as they don’t need to spend time finding investors to back their plans.

Curie is one of three submarine cable network projects Google unveiled in January 2018. (Source: Google)

……………………………………………………………………………………………………………………………………………………………………………………..

Powell also wrote that SoftBank’s HAPSMobile is investing $125M in Google’s Loon as the two partner for a common platform, and Loon gains an option to invest a similar sum in HAPSMobile later on.

Both companies envision automatic, unmanned, solar-powered devices in the sky above the range of commercial aircraft but not way up in orbit. From there they can reach places that fiber and towers don’t or can’t. HAPSMobile uses drones, and Loon uses balloons. The idea is to develop a ‘common gateway or ground station’ and the necessary automation to support both technologies.

It’s a natural partnership in some ways, and the two are putting real money behind it. But despite the high profile we haven’t really seen mobile operators chomping at the bit, since after all it’s more fun to cherry pick those tower-covered urban centers for 5G first and there’s plenty of work to do. And when they do get around to it, there’s the multiple near-earth-orbit satellite projects going on to compete with.

But the benefit both HAPSMobile and Loon have to their model is that they can, you know, reach it without rockets.

…………………………………………………………………………………………………………

AWS’s Backbone (explained by Sapphire):

An AWS Region is a geographic area where Amazon has chosen to deploy several data centers, generally to be close to users and free of operating restrictions. Every Region is also connected to the other Regions through private, dedicated links; Amazon runs these links because they are cheaper for inter-Region traffic and allow full capacity planning with lower latency.

What is inside a Region?

  • Minimum 2 Availability Zones
  • Separate transit centers (which peer connections out to the rest of the world)

How do transit centers work?

AWS has private links to other AWS Regions, and also private links for AWS Direct Connect: a dedicated, private, encrypted (IPsec tunnel) connection from a company’s data centers to its infrastructure in the cloud. Direct Connect uses VLANs (IEEE 802.1Q) so a customer can reach public and private resources, such as Glacier or S3 buckets and their VPC, over the same connection, typically with latency below 2 ms and usually below 1 ms. Between Availability Zones (inter-AZ), data transit averages 25 TB/sec.
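The VLAN mechanism mentioned above is plain IEEE 802.1Q: a 4-byte tag carrying the TPID 0x8100 plus a 16-bit TCI (3-bit priority, 1-bit DEI, 12-bit VLAN ID), which is how one physical port multiplexes many virtual connections. A minimal encoder as a sketch:

```python
# The VLANs mentioned above are IEEE 802.1Q tags: 4 bytes holding the TPID
# 0x8100 plus a 16-bit TCI (3-bit priority, 1-bit DEI, 12-bit VLAN ID).
# One physical port can therefore carry thousands of virtual connections.
import struct

def dot1q_tag(vlan_id, priority=0, dei=0):
    """Encode an 802.1Q tag as 4 bytes."""
    assert 0 < vlan_id < 4095, "12-bit VLAN ID; 0 and 4095 are reserved"
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)  # network byte order

print(dot1q_tag(vlan_id=1101, priority=5).hex())  # 8100a44d
```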

From AWS Multiple Region Multi-VPC Connectivity:

AWS Regions are connected to multiple Internet Service Providers (ISPs) as well as to Amazon’s private global network backbone, which provides lower cost and more consistent cross-region network latency when compared with the public internet.

…………………………………………………………………………………………………………

From Facebook Building backbone network infrastructure:

We have strengthened the long-haul fiber networks that connect our data centers to one another and to the rest of the world.

As we bring more data centers online, we will continue to partner and invest in core backbone network infrastructure. We take a pragmatic approach to investing in network infrastructure and utilize whatever method is most efficient for the task at hand. Those options include leveraging long-established partnerships to access existing fiber-optic cable infrastructure; partnering on mutually beneficial investments in new infrastructure; or, in situations where we have a specific need, leading the investment in new fiber-optic cable routes.

In particular, we invest in new fiber routes that provide much-needed resiliency and scale. As a continuation of our previous investments, we are building two new routes that exemplify this approach. We will be investing in new long-haul fiber to allow direct connectivity between our data centers in Ohio, Virginia, and North Carolina.

As with our previous builds, these new long-haul fiber routes will help us continue to provide fast, efficient access to the people using our products and services. We intend to allow third parties — including local and regional providers — to purchase excess capacity on our fiber. This capacity could provide additional network infrastructure to existing and emerging providers, helping them extend service to many parts of the country, and particularly in underserved rural areas near our long-haul fiber builds.

………………………………………………………………………………………………….

Venture Beat Assessment of what it all means:

Google’s increasing investment in submarine cables fits into a broader trend of major technology companies investing in the infrastructure their services rely on.

Besides all the data centers Amazon, Microsoft, and Google are investing in as part of their respective cloud services, we’ve seen Google plow cash into countless side projects, such as broadband infrastructure in Africa and public Wi-Fi hotspots across Asia.

Elsewhere, Facebook — while not in the cloud services business itself — requires omnipresent internet connectivity to ensure access for its billions of users. The social network behemoth is also investing in numerous satellite internet projects and had worked on an autonomous solar-powered drone project that was later canned. Earlier this year, Facebook revealed it was working with Viasat to deploy high-speed satellite-powered internet in rural areas of Mexico.

While satellites will likely play a pivotal role in powering internet in the future — particularly in hard-to-reach places — physical cables laid across ocean floors are capable of far more capacity and lower latency. This is vital for Facebook, as it continues to embrace live video and virtual reality. In addition to its subsea investments with Google, Facebook has also partnered with Microsoft for a 4,000-mile transatlantic internet cable, with Amazon and SoftBank for a 14,000 km transpacific cable connecting Asia with North America, and on myriad other cable investments around the world.

Needless to say, Google’s services — ranging from cloud computing and video-streaming to email and countless enterprise offerings — also depend on reliable infrastructure, for which subsea cables are key.

Curie’s completion this week represents not only a landmark moment for Google, but for the internet as a whole. There are currently more than 400 undersea cables in service around the world, constituting 1.1 million kilometers (700,000 miles). Google is now directly invested in around 100,000 kilometers of these cables (62,000 miles), which equates to nearly 10% of all subsea cables globally.
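The "nearly 10%" figure checks out from the numbers given:

```python
# Check the "nearly 10%" claim: Google's ~100,000 km share of the
# ~1.1 million km of subsea cable in service worldwide.
google_km = 100_000
world_km = 1_100_000
share = google_km / world_km
print(f"Google's share of in-service subsea cable: {share:.1%}")  # 9.1%
```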

The full implications of “big tech” owning the internet’s backbone have yet to be realized, but as evidenced by their investments over the past few years, these companies’ grasp will only tighten going forward.

Huawei to build Public Cloud Data Centers using OCP Open Rack and its own IT Equipment; Google Cloud and OCP?

Huawei:

On March 14th at the OCP 2019 Summit in San Jose, CA, Huawei Technologies (the world’s number one telecom/network equipment supplier) announced plans to adopt OCP Open Rack in its new public cloud data centers worldwide. The move is designed to enhance the environmental sustainability of Huawei’s new public cloud data centers by using less energy for servers, while driving operational efficiency by reducing the time it takes to install and maintain racks of IT equipment. In addition to adopting Open Rack in its cloud data centers, the company is also expanding its work with the OCP Community to extend the design of the standard, further improve time-to-market and serviceability, and reduce TCO. In answer to this author’s question, Jinshui Liu, CTO of Huawei’s IT Hardware Domain, said the company would make its own OCP-compliant compute servers and storage equipment (in addition to network switches) for use in its public cloud data centers. All that IT equipment will ALSO be sold to Huawei’s customers building cloud-resident data centers.

The Open Rack initiative, introduced by the Open Compute Project (OCP) in 2013, seeks to redefine the data center rack and is one of the most promising developments in the scale computing environment. It is the first rack standard designed specifically for data centers, integrating the rack into the data center infrastructure as part of OCP’s “grid to gates” philosophy: a holistic design process that considers the interdependence of everything from the power grid to the gates in the chips on each motherboard.

“Huawei’s engineering and business leaders recognized the efficiency and flexibility that Open Rack offers, and the support that is available from a global supplier base. Providing cloud services to a global customer base creates certain challenges. The flexibility of the Open Rack specification and the ability to adapt for liquid cooling allows Huawei to service new geographies. Huawei’s decision to choose Open Rack is a great endorsement!” stated Bill Carter, Chief Technology Officer for the Open Compute Project Foundation.

 

OCP specified Open Rack v2:

 

Last year Huawei became an OCP Platinum Member. This year, Huawei continues its investment in and commitment to OCP and the open source community. Huawei’s active involvement within the OCP Community includes ongoing participation and contributions to various OCP projects such as the Rack and Power, System Management and Server projects, with underlying contributions to the upcoming specs for the OCP Accelerator Module, Advanced Cooling Solutions and OpenRMC.

“Huawei’s strategic investment and commitment to OCP is a win-win,” said Mr. Kenneth Zhang, General Manager of FusionServer, Huawei Intelligent Computing Business Department. “Combining Huawei’s extensive experience in Telco and Cloud deployments together with the knowledge of the vast OCP community will help Huawei to provide cutting edge, flexible and open solutions to its global customers. In turn, Huawei can leverage its market leadership and global data center infrastructure to help introduce OCP to new geographies and new market segments worldwide.”

During a keynote address at the OCP Global Summit, Huawei shared more information about its Open Rack adoption plans as well as its overall OCP strategy. Huawei also showcased some of the building blocks of these solutions in its booth, including an OCP-based compute module, the Huawei Kunpeng 920 ARM CPU, the Huawei Ascend 310 AI processor and other Huawei intelligent computing products.

Huawei’s Booth at OCP 2019 Summit

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

In summary, Huawei has developed an optimized rack scale design that will become the foundation of its cloud and IT infrastructure rollout.  This extends the company’s product portfolio from telecom/networking to cloud computing and storage, and makes it an ODM for compute and storage equipment.  Hence, Huawei will now compete with Microsoft Azure as well as China CSPs Alibaba, Baidu and Tencent in using OCP-compliant IT equipment in cloud resident data centers.  Unlike those other OCP Platinum members, Huawei will design and build its own IT equipment (the other CSPs buy OCP equipment from ODMs).

There are now 124 OCP certified products available with over 60 more in the pipeline.  Most of the OCP ODMs are in Taiwan.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Google:

While Google has been an OCP Platinum member since 2015, it maintained a very low profile at this year’s OCP Summit, so it’s not clear how much OCP-compliant equipment Google uses in Google Cloud or in any of its cloud resident data centers.  Google did present two technical sessions at the conference:

“Google’s 48V Rack Adaptation and Onboard Power Technology Update” was presented at the OCP 2019 Summit early Friday morning, March 15th.  Google said that significant progress has been made in three specific applications:

1. Multi-phase 48V-to-12V voltage regulators adopting the latest hybrid switched-capacitor-buck topologies for traditional 12V workloads such as PCIEs and OTS servers;

2. Very high efficiency high density fixed ratio bus converters for 2-stage 48V-to-PoL power conversions;

3. High frequency high density voltage regulators for extremely power hungry AI accelerators.
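The appeal of 48V rack distribution in the list above comes largely from resistive (I²R) losses: for the same power, 48V carries one quarter the current of 12V, so conduction losses in the busbar fall by a factor of 16. A minimal sketch of that arithmetic (the rack power and busbar resistance are illustrative numbers, not Google’s actual figures):

```python
def bus_loss_watts(power_w: float, volts: float, resistance_ohm: float) -> float:
    """I^2 * R loss when delivering power_w at volts over a bus of given resistance."""
    current = power_w / volts
    return current ** 2 * resistance_ohm

# Illustrative: a 10 kW rack fed over 1 milliohm of busbar resistance
loss_12v = bus_loss_watts(10_000, 12, 0.001)
loss_48v = bus_loss_watts(10_000, 48, 0.001)
print(round(loss_12v, 1), round(loss_48v, 1))      # 694.4 vs 43.4 watts lost
print(round(loss_12v / loss_48v, 6))               # 16.0, i.e. the (48/12)**2 factor
```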

Google and ONF provided an update on Stratum, a next-generation thin switch OS that provides silicon and hardware independence, first introduced at the 2018 OCP Summit.  Stratum was said to enable the next generation of SDN.  It adds new SDN-ready interfaces from the P4 and OpenConfig communities to ONL (Open Network Linux), enabling programmable switching chips (ASICs, FPGAs, etc.) and traditional switching ASICs alike. The talk described how the open source community has generalized Google’s seed OVP contribution for additional whitebox targets, and demonstrated Stratum on a fabric of OCP devices controlled by an open source control plane.

I believe Google still designs all of its own IT hardware (compute servers, storage equipment, switch/routers, and data center interconnect gear other than the PHY-layer transponders). Google has announced the design of many AI processor chips that presumably go into IT equipment used internally but not sold to anyone else (just like Amazon AWS).

The Google Cloud Next 2019 conference will be held April 9-11, 2019 at the Moscone Center in San Francisco, CA.

References:

https://www.huawei.com/en/press-events/news/2019/3/huawei-ocp-open-rack-public-cloud-datacenters

https://www.globenewswire.com/news-release/2019/03/14/1754946/0/en/Huawei-to-Adopt-OCP-s-Open-Rack-across-New-Public-Cloud-Datacenters-Globally.html

 

Synergy Research: Cloud Service Provider Rankings (See Comments for Details)

Cloud services market remains top heavy, with the large providers dominant. Synergy Research

………………………………………………………………………………………………………………………………………………………………………

According to Larry Dignan of ZDNET, “the cloud computing market in 2019 will have a decidedly multi-cloud spin, as the hybrid shift by players such as IBM, which is acquiring Red Hat, could change the landscape. This year’s edition of the top cloud computing providers also features software-as-a-service giants that will increasingly run more of your enterprise’s operations via expansion.

One thing to note about the cloud in 2019 is that the market isn’t zero sum. Cloud computing is driving IT spending overall. For instance, Gartner predicts that 2019 global IT spending will increase 3.2 percent to $3.76 trillion with as-a-service models fueling everything from data center spending to enterprise software.  In fact, it’s quite possible that a large enterprise will consume cloud computing services from every vendor in this guide. The real cloud innovation may be from customers that mix and match the following public cloud vendors in unique ways. ”

Key 2019 themes to watch among the top cloud providers include:

  • Pricing power. Google recently raised prices of G Suite and the cloud space is a technology where add-ons exist for most new technologies. While compute and storage services are often a race to the bottom, tools for machine learning, artificial intelligence and serverless functions can add up. There’s a good reason that cost management is such a big theme for cloud computing customers–it’s arguably the biggest challenge. Look for cost management and concerns about lock-in to be big themes.
  • Multi-cloud. A recent survey from Kentik highlights how public cloud customers are increasingly using more than one vendor. AWS and Microsoft Azure are most often paired up. Google Cloud Platform is also in the mix. And naturally these public cloud service providers are often tied into existing data center and private cloud assets. Add it up and there’s a healthy hybrid and private cloud race underway and that’s reordered the pecking order. The multi-cloud approach is being enabled by virtual machines and containers.
  • Artificial intelligence, Internet of things and analytics are the upsell technologies for cloud vendors. Microsoft Azure, Amazon Web Services and Google Cloud Platform all have similar strategies to land customers with compute, cloud storage, serverless functions and then upsell you to the AI that’ll differentiate them. Companies like IBM are looking to manage AI and cloud services across multiple clouds.
  • The cloud computing landscape is maturing rapidly yet financial transparency backslides. It’s telling when Gartner’s Magic Quadrant for cloud infrastructure goes to 6 players from more than a dozen. In addition, transparency has become worse among cloud computing providers. For instance, Oracle used to break out infrastructure-, platform- and software-as-a-service in its financial reports. Today, Oracle’s cloud business is lumped together. Microsoft has a “commercial cloud” that is very successful, but also hard to parse. IBM has cloud revenue and “as-a-service” revenue. Google doesn’t break out cloud revenue at all. Aside from AWS, parsing cloud sales has become more difficult.

IBM is more private cloud and hybrid, with hooks into IBM Cloud as well as other cloud environments. Oracle Cloud is primarily a software- and database-as-a-service provider. Salesforce has become much more than CRM.

………………………………………………………………………

China Permits Virtual Telecom Operators vs Amazon Virtual Private Cloud (VPC)

China has granted the official go-ahead for virtual telecom operator businesses after piloting the practice for almost five years. The China Ministry of Industry and Information Technology has issued official licenses to 15 private virtual telecom operators to resell internet access, the ministry said in a statement released Monday on its website.  These virtual operators, including Chinese tech giants Alibaba and Xiaomi, do not maintain network infrastructure but rent wholesale services such as roaming and text messaging from the country’s three major telecom infrastructure operators: China Mobile, China Unicom, and China Telecom.

In a move to further open up the telecom sector, China started issuing pilot licenses in May 2013 to private companies to allow them to offer repackaged mobile services to users. It issued pilot operation licenses to eleven ‘mobile virtual network operators’, or MVNOs, at the end of 2013, a number that has gradually increased to 42 virtual telecom businesses.

Granting virtual telecom operators official licenses is aimed at encouraging mobile telecom business innovation and improving the sector’s overall service quality, the statement said.

Reference:

http://usa.chinadaily.com.cn/a/201807/23/WS5b559eb4a310796df4df82ed.html

………………………………………………………………………………………………………………………..

While Amazon is not a virtual ISP, they do offer Virtual Private Cloud (VPC) service:

To securely transfer data between an on-premises data center and Amazon Web Services (AWS), consider implementing a transit Virtual Private Cloud (VPC).  Transit VPCs not only help manage your networks more efficiently, but also add dynamic routing and secure connectivity to your cloud environment. Because transit VPCs are deployed with high availability on AWS, downtime is limited.

Amazon’s VPC lets an enterprise provision a logically isolated section of the AWS Cloud in which to launch AWS resources in a virtual network that the user defines. The user has complete control over the enterprise virtual networking environment, including selection of the IP address range, creation of subnets, and configuration of route tables and network gateways. Both IPv4 and IPv6 can be used in a VPC for secure and easy access to resources and applications.
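The address-range and subnet control described above can be illustrated offline with Python’s standard `ipaddress` module. This is only a sketch of the CIDR arithmetic (the 10.0.0.0/16 range is an arbitrary example); actual VPC and subnet creation goes through the AWS APIs.

```python
import ipaddress

# Choose a VPC-style address range, then carve it into /24 subnets
# (e.g. one per Availability Zone); CIDRs here are arbitrary examples.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc_cidr.subnets(new_prefix=24))

print(len(subnets))                      # 256 possible /24 subnets in a /16
print(subnets[0])                        # 10.0.0.0/24
print(subnets[0].subnet_of(vpc_cidr))    # True: the subnet lies inside the VPC range
```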

These AWS resource requests are implemented virtually and can be used to connect Amazon VPCs, whether they are running in different parts of the world and/or in separate AWS accounts, to a common Amazon VPC that serves as a global network transit center. This approach uses host-based Virtual Private Network (VPN) appliances in a dedicated Amazon VPC and helps simplify network management by reducing the number of connections required to connect multiple Amazon VPCs and remote networks.

Simplify network management and reduce your total number of connections by deploying a highly available, scalable, and secure transit Virtual Private Cloud (VPC) on AWS.
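The connection-count savings claimed above are easy to quantify: fully meshing N networks takes N(N-1)/2 pairwise links, while a hub-and-spoke transit VPC needs only N (one link per network). A quick illustration:

```python
def full_mesh_links(n: int) -> int:
    """Direct VPC-to-VPC links needed to fully mesh n networks."""
    return n * (n - 1) // 2

def transit_hub_links(n: int) -> int:
    """Links needed when every network connects only to a transit VPC hub."""
    return n

for n in (5, 10, 20):
    print(f"{n} networks: mesh={full_mesh_links(n)}, transit hub={transit_hub_links(n)}")
# e.g. 10 networks: 45 mesh links vs just 10 via a transit VPC
```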

Download the eBook to learn more about:

  • How to build a private network that spans two or more AWS Regions
  • Sharing connectivity between multiple Amazon VPCs and on-premises data centers
  • How transit VPCs enable you to share Amazon VPCs and AWS resources across multiple AWS accounts

For more info please refer to https://aws.amazon.com/networking/partner-solutions/featured-partner-solutions/

 

IHS Markit: CSPs accelerate high speed Ethernet adapter adoption; Mellanox doubles switch sales

by Vladimir Galabov, senior analyst, IHS Markit

Summary:

High speed Ethernet adapter ports (25GE to 100GE) grew 45% in 1Q18 and tripled compared to 1Q17, with cloud service provider (CSP) adoption accelerating the industry transition. 25GE represented a third of adapter ports shipped to CSPs in 1Q18, doubling compared to 4Q17. Telcos follow CSPs in the transition to higher networking speeds: while they are ramping 25GE adapters, they still use predominantly 10GE adapters, and enterprises continue to opt for 1GE, according to the Data Center Ethernet Adapter Equipment report from IHS Markit.

“We expect higher speeds (25GE+) to be most prevalent at CSPs out to 2022, driven by high traffic and bandwidth needs in large-scale data centers. By 2022 we expect all Ethernet adapters at CSP data centers to be 25GE and above. Tier 1 CSPs are currently opting for 100GE at the ToR with 4x25GE breakout cables for server connectivity,” said Vladimir Galabov, senior analyst, IHS Markit. “Telcos will invest more in higher speeds, including 100GE, out to 2022, driven by NFV and increased bandwidth requirements from HD video, social media, AR/VR, and expanded IoT use cases. By 2022 over two thirds of adapters shipped to telcos will be 25GE and above.”

CSP adoption of higher speeds drives data center Ethernet adapter capacity (measured in 1GE port equivalents) shipped to CSPs to 60% of total capacity by 2022 (up from 55% in 2017). Telcos will reach 23% of adapter capacity shipped by 2022 (up from 15% in 2017) and enterprises will drop to 17% (down from 35% in 2017).

 “Prices per 1GE ($/1GE) are lowest for CSPs as higher speed adapters result in better per gig economies. Large DC cloud environments with high compute utilization requirements continually tax their networking infrastructure, requiring CSPs to adopt high speeds at a fast rate,” Galabov said.
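The $/1GE metric quoted here is simply adapter price divided by port speed; a tiny illustration with hypothetical list prices (not IHS Markit’s actual data):

```python
def dollars_per_1ge(price_usd: float, speed_ge: int) -> float:
    """Normalized cost: adapter price divided by port speed in GE."""
    return price_usd / speed_ge

# Hypothetical list prices, purely for illustration
print(dollars_per_1ge(300, 10))  # 10GE adapter: 30.0 $/1GE
print(dollars_per_1ge(400, 25))  # 25GE adapter: 16.0 $/1GE, i.e. better per-gig economics
```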

Additional data center Ethernet adapter equipment market highlights:

·         Offload NIC revenue was up 6% QoQ and up 55% YoY, hitting $160M in 1Q18. Annual offload NIC revenue will grow at a 27% CAGR out to 2022.

·         Programmable NIC revenue was down 5% QoQ and up 14% YoY, hitting $26M in 1Q18. Annual programmable NIC revenue will grow at a 58% CAGR out to 2022.

·         Open compute Ethernet adapter form factor revenue was up 11% QoQ and up 56% YoY, hitting $54M in 1Q18. By 2022, 21% of all ports shipped will be open compute form factor.

·         In 1Q18, Intel was #1 in revenue market share (34%), Mellanox was #2 (23%), and Broadcom was #3 (14%) 

Data Center Compute Intelligence Service:

The quarterly IHS Markit Data Center Compute Intelligence Service provides analysis and trends for data center servers, including form factors, server profiles, market segments and servers by CPU type and co-processors. The report also includes information about Ethernet network adapters, including analysis by adapter speed, CPU offload, form factors, use cases and market segments. Other information includes analysis and trends of multi-tenant server software by type (e.g., server virtualization and container software), market segments and server attach rates. Vendors tracked in this Intelligence Service include Broadcom, Canonical, Cavium, Cisco, Cray, Dell EMC, Docker, HPE, IBM, Huawei, Inspur, Intel, Lenovo, Mellanox, Microsoft, Red Hat, Supermicro, SuSE, VMware, and White Box OEM (e.g., QCT and WiWynn).

………………………………………………………………………………………………………………………………………….

Mellanox Ethernet Switches for the Data Center:

In its Q1 2018 earnings call, Mellanox reported that its Ethernet switch product line revenue more than doubled year over year. Mellanox Spectrum Ethernet switches are gaining strong traction in the data center market. The recent inclusion in the Gartner Magic Quadrant is yet another milestone. A few underlying market trends are driving this strong adoption.

Network Disaggregation has gone mainstream

Mellanox Spectrum switches are based on the company’s highly differentiated homegrown silicon technology. Mellanox disaggregates Ethernet switches by investing heavily in open source technology, software and partnerships. In fact, Mellanox is the only switch vendor among the top 10 contributors to open source Linux. In addition to native Linux, Spectrum switches can run the Cumulus Linux or Mellanox Onyx operating systems. Network disaggregation brings transparent pricing and gives customers the choice to build their infrastructure with the best silicon and the best fit-for-purpose software for their specific needs.

25/100GbE is the new 10/40GbE

25/100GbE infrastructure provides better ROI, and the market is adopting these newer speeds at a record pace. Mellanox Spectrum silicon outperforms other 25GbE switches in the market on every standard switch attribute.

IHS Markit: 25/100GE switch port growth surges; 200/400GE in 2019 + Optical Transceiver Market Overview

Overview:

IHS Markit recently released its Data Center Network Equipment market tracker, which showed that worldwide data center Ethernet switch revenue grew 12% YoY in 3Q17, reaching $2.9B. Key segments driving demand were purpose-built switches, which grew 13% YoY, and bare metal switches, which grew 47% YoY and continue to flourish as customers transition from traditional switches to white box and branded bare metal models.

Worldwide data center Ethernet switch ports shipped grew 24% YoY in 3Q17, reaching 12.5M. 25GE and 100GE experienced significant uptake, with YoY growth of 251% and 369%, respectively; yet the two port speeds combined make up only 16% of ports shipped, while 10GE still leads with 61% of ports shipped in 3Q17. We forecast 25/100GE ports shipped to rise to a combined 46% and 10GE to decrease to 46% by 2021, as customers migrate from 10GE to 25GE server connections and declining 100GE ASPs make 100GE a more viable option for large and small cloud service providers (CSPs) to deploy.

IHS Markit expects trials for 200/400GE to begin in 2018 with production shipments occurring in 2019 and revenue to reach approximately $1B by 2021.

“We believe 200GE will be deployed first yet have a short shelf life, as 400GE is expected to follow closely behind and become the primary choice going forward. The gap in time will be determined solely by how long it takes for the higher speed to become production-ready with adequate supply. CSPs will be the main customers for 200/400GE as they transition from 100GE in an effort to satisfy increasing high-bandwidth demands,” said Cliff Grossner, Ph.D., Senior Research Director and Advisor for the Cloud and Data Center Research Practice at IHS Markit.

 

 

More Data Center Ethernet Switch Market Highlights:

·         The need for greater than 100GE speeds results in 200/400GE shipments beginning in 2019.

·         The continued adoption of 25GE between servers and ToR switches will push adopters of 25GE to upgrade to 100GE for inter-switch connectivity. This shift is now underway in the enterprise.

·         The market for 10GE/40GE has seen a shift, with ASPs falling rapidly; growth in ports shipped is also slowing, with 10GE and 40GE revenue following unit shipments downward.

·         CSPs are the earliest adopters of higher speeds and pave the way for use of higher-speed technologies; large DC cloud environments with high compute utilization requirements continually tax their networking infrastructure, requiring them to adopt high speeds at a fast rate, ultimately resulting in the lowest $/1GE ratios.

·         Vendor performance: Cisco continues to dominate and Arista is #2 in the DC Ethernet switch market; Cisco garnered 53% of DC Ethernet switch market revenue in 3Q17 with revenue up 5% YoY. Arista was #2 with 13% share and 50% YoY growth. Huawei was #3 with 7% share and 23% YoY growth.
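The 25GE-server/100GE-uplink pattern in the bullets above implies a simple oversubscription calculation at the ToR switch. A sketch with hypothetical port counts (48 server-facing 25GE ports and 6 100GE uplinks are common but assumed numbers, not from the IHS Markit report):

```python
def oversubscription(server_ports: int, server_gbps: int,
                     uplink_ports: int, uplink_gbps: int) -> float:
    """Ratio of server-facing bandwidth to uplink bandwidth at a ToR switch."""
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# Hypothetical ToR: 48 x 25GE down to servers, 6 x 100GE up to the spine
ratio = oversubscription(48, 25, 6, 100)
print(f"{ratio}:1")   # 2.0:1, a common leaf-spine design target
```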

Research Synopsis:

The IHS Markit Data Center Networks Intelligence Service provides quarterly worldwide and regional market size, vendor market share, forecasts through 2022, and analysis and trends for (1) data center Ethernet switches by category [purpose-built, bare metal, blade, and general purpose], port speed [1/10/25/40/50/100/200/400GE] and market segment [enterprise, telco and cloud service provider], (2) application delivery controllers by category [hardware-based appliance, virtual appliance], (3) software-defined WAN (SD-WAN) [appliances and control and management software], (4) FC SAN switches by type [chassis, fixed], and (5) FC SAN HBAs. Vendors tracked include A10, ALE, Arista, Array Networks, Aryaka, Barracuda, Broadcom, Cavium, Cisco, Citrix, CloudGenix, Dell, F5, FatPipe, HPE, Huawei, InfoVista, Juniper, KEMP, Radware, Riverbed, Silver Peak, Talari, TELoIP, VMware, ZTE and others.

………………………………………………………………………………………………………………………………………………………………………………………..

Global Optical Transceiver Market: Striding to 200G and 400G

Posted on February 1, 2018 by FS.COM

The demand for higher Ethernet speeds, coupled with the prevalence of cloud computing, the Internet of Things and virtual data centers, has driven the prosperity of the optical transceiver market. Optical transceivers, direct attach cables (DACs) and active optical cables (AOCs) have evolved dramatically to keep pace with leading-edge broadband network capacity. The past decade has witnessed massive adoption of optical transceivers with data rates ranging from 1G and 10/25G to 40/100G, while higher-speed 200G and even 400G for the data center are just on the horizon. Sales of optical components grow steadily and are expected to continue to do so in the years to come.

10G, 25G, 40G and 100G: Seeing Broad Adoption in the Data Center

As networks get faster and virtualization gradually becomes the norm, the data center is undergoing a major transformation. The trend emerging in the industry signifies a migration toward higher speed transceivers and better service. These high-bandwidth transceivers are driving revenue growth, which suggests a strong market. The global optical transceiver market is anticipated to reach $9.9 billion by 2020, driven by the widespread use of 10/25 Gbps, 40 Gbps and 100 Gbps, with the biggest sales forecast for 25G and 100G ports. The imminent 200 Gbps and 400 Gbps optical transceivers are also poised to hold a fraction of the market share.

optical transceiver market trend forecast

10G Transceiver: Moving to the Edge

Initially offered in the early 2000s, 10 Gigabit Ethernet has matured to become commonplace in the data center. 10G server connections became the majority of new shipments, outpacing 1G connections, in 2015. 10G Ethernet is positioned to move to 40G and 100G at the access layer, following the 10G-40G-100G upgrade path, which, however, quadruples cabling complexity, power consumption and overall cost. This is exacerbated when aggregating into a 100G (10×10G) interface.

25G Transceiver: Paving the Road to 100G

So there comes the game changer: 25G Ethernet, for better economics and efficiency. 25 Gigabit Ethernet makes the road to 100G smoother with reduced cost, lower power consumption and less cabling complexity. The SFP28 optical transceiver is designed for use in 25G Ethernet, delivering 2.5 times the speed per lane at lower power. The 25G SFP28 can be viewed as an enhanced version of the 10G SFP+ transceiver, utilizing the same form factor but running at 25 Gb/s instead of 10 Gb/s. In addition, SFP28 is backward compatible with SFP+, so it will work on SFP+ ports. By 2019, the price of a 25G SFP28 is expected to be almost the same as that of a 10G SFP+, so moving to 25G can save a significant amount of money. Some users even plan to skip 10G and deploy 25G Ethernet directly, for better scaling to 50G and 100G.
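The cabling-complexity argument above reduces to lane counts: 100G built from 10G lanes needs ten parallel lanes, while 25G lanes need only four. A minimal sketch of that arithmetic:

```python
def lanes_needed(target_gbps: int, lane_gbps: int) -> int:
    """Number of parallel lanes (fibers/pairs) to reach the target rate."""
    return -(-target_gbps // lane_gbps)  # ceiling division

print(lanes_needed(100, 10))  # 10 lanes of 10G (the 10x10G aggregation)
print(lanes_needed(100, 25))  # 4 lanes of 25G (e.g. 4x25G in a QSFP28)
print(lanes_needed(40, 10))   # 4 lanes of 10G (the 4x10G QSFP+ breakout)
```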


40G Transceiver: Affordable for Mass Deployment

Obviously, 10GbE is no longer fast enough for data centers handling large-scale applications, so 40G is designed to alleviate bottlenecks in the access layer. When operators first planned to scale to 40G, the extremely high cost made implementing 40G technology difficult. Fortunately, we’ve seen significant cost reductions in 40G optics over the past two years: the QSFP-40G-SR offered by FS.COM is only $49. The price drop accelerates 40G transceiver adoption in aggregation links, or in access links to connect servers. For scaling to a “spine-leaf” architecture, 40G switches can be used as spine switches, with the 40G QSFP+ ports breaking out into 4 10G SFP+ ports to support 10G server uplinks. 40G port revenue peaked in 2016 and will now decline in favor of 25G and 50G ports.

100G Transceiver: Ramping up in Data Center

Currently, 100G is the fastest Ethernet connection in broad adoption, and it is growing steadily. The optical transceiver market indicates that 100G QSFP28 module prices will continue to drop, making the cost difference between 40G and 100G even smaller. For example, FS.COM offers great cost reductions on 100G transceivers: only $199 for the QSFP28 100G-SR. Moreover, 100G switch port shipments will outnumber 40G switch port shipments in 2018, as 25G servers and 100G switches become commonplace in most hyperscale data centers, replacing the previous 10G servers and 40G switches. Vendors of the 100G QSFP28 transceiver will continue to grow the product line and push the limits of its versatility.

200G and 400G – New Connection Speeds Hit the Data Center

Another foreseeable trend in the interconnect market is the phase-out of low speed transceivers in the core of networks and in data centers, hence the major shift from 10G and below to 40/100G and higher. New developments in QSFP28 technology in 2018 will also pave the way for the 200G and 400G QSFP-DD: next-generation 200G and 400G data center Ethernet will begin deploying in 2018 and become mainstream by 2019-2020. On the whole, the optical transceiver market is evolving toward higher speeds, lower power consumption and smaller form factors. Let’s take a look at these future-proof optical transceivers.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………..

 
