Yuval Bachar, Principal Engineer, Global Data Center Infrastructure Architecture and Strategy, LinkedIn
Data centers are the core of everything we do today, from social media to financial transactions, but how did we get here and where are we going? If we look at the history and travel back in time about 12 years, data centers were the IT domain. Most companies maintained data centers of varying sizes, and infrastructure was managed by IT teams and OEMs, with high capex and opex costs just to build bare-metal data centers. Startups had to find a local space to place their limited number of servers and get network connectivity to the internet on their own.
Fast forward to today, where we are seeing an incredible transformation of the data centers and cloud operators. The mega data center operators have created a network of mega data centers (150 MW+ in each location) and have started spreading them across the globe, offering virtual data centers to enterprise companies of all sizes and delivering vast compute power to enable their own applications. Clearly, running such data centers is no longer in the IT domain, and in most cases the OEMs cannot solve the scaling problem at hand. The scale of these data centers no longer allows old operational methods to be used, and most of the elements in the data center cannot be monitored and managed by humans. Hence, NOCs (Network Operations Centers) are no longer needed, everything is automated, and human touch is limited only to cases where physical action is needed.
The amount of data center innovation created in the last 12 years is just incredible. However, is that innovation pace sustainable? Will the technology of the data centers' hardware and operational tools continue to evolve at the same pace as the last 12 years, or is it slowing down?
I think that the data center industry has reached a point of equilibrium and stabilization, where the players for cloud services have been defined, the large mega applications (like Facebook and Google) have become monopolies in their respective businesses, and there has been slow but sustained growth in all of these markets.
I believe in bundling the cloud services and the special applications together because, although they represent two different market segments, they have a lot of commonality in the way they build and scale data centers. The compute, network, storage, and operation solutions that have been created in the last few years will be sufficient for the near future. These solutions are mostly based on white box hardware with open source and commercial SDx (software defined everything) and can be scaled to very large footprints. All we need now is to keep repeating the same replicated build-outs in order to address market needs.
Of course, innovation will continue to happen in the mega data centers, just at a much slower pace.
Here are a few areas of development:
Networking: Pluggable 100G is the technology of choice for data center intranet (inside the data center) connectivity. The question at hand is whether we need a faster channel (like 400G) or just many more 100G links. Most of the data center operators have decided that many more 100G channels are the solution. The next cycle of data center interconnect innovation will not happen until 2020/2021, because the repetitive scale-out method is working and will continue to address data center needs.
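The trade-off behind "many more 100G links" is simple aggregate-bandwidth arithmetic, sketched below. The numbers are illustrative only, not figures from any particular operator's fabric.

```python
# Illustrative sketch: matching a faster channel with multiple slower links.
# Scaling out with 100G links trades port/fiber count for avoiding a new
# optics technology cycle. All target bandwidths below are hypothetical.

def links_needed(target_gbps: int, link_gbps: int = 100) -> int:
    """Number of equal-speed links needed to reach a target aggregate bandwidth."""
    return -(-target_gbps // link_gbps)  # ceiling division

print(links_needed(400))    # 4  -> four 100G links match one 400G channel
print(links_needed(3200))   # 32 -> a hypothetical 3.2 Tb/s uplink as 32 x 100G
```

The scale-out cost shows up in cabling, switch ports, and load-balancing complexity rather than in new silicon, which is why it can defer the next optics innovation cycle.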
Compute: Our standard compute engines have reached a CPU capability plateau with minor improvements. The amazing ecosystem of virtualization and container-based compute, on the other hand, is enabling us to do everything we need with high CPU utilization and performance. Is there a need for another VMs/containers cycle now? Not really; we just need to make them optimized, secure, and scalable. Technologies like serverless/function compute will emerge in the next few years but will not dominate for a while.
Data center operations and power: We have reached a point where we know how to efficiently operate fully automated data centers with a PUE (power usage effectiveness) below 1.1. We will keep building similar data centers with limited improvements, since there is no need for a major technology breakthrough to meet demand.
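For readers unfamiliar with the metric, PUE is the ratio of total facility power to the power delivered to IT equipment, so an ideal facility scores 1.0. The sketch below uses hypothetical power figures purely to illustrate what a "sub-1.1" facility means.

```python
# Illustrative sketch of PUE (power usage effectiveness).
# PUE = total facility power / IT equipment power; 1.0 is the theoretical ideal.
# The megawatt figures below are hypothetical, not from any real facility.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the PUE ratio for a facility."""
    return total_facility_kw / it_equipment_kw

# A hypothetical facility delivering 100 MW to IT gear, where cooling and
# power distribution add 8 MW of overhead:
print(round(pue(108_000, 100_000), 2))  # 1.08 -> a "sub-1.1" data center
```

In other words, a sub-1.1 PUE means less than 10% of the facility's power goes to anything other than the servers, switches, and storage themselves.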
How about colocation (colo) data centers? I think that colocations are in a state of transformation. While they started out offering just building facilities, power, and cooling services, they have recognized the need to reinvent themselves in order to remain successful in the future. Since most enterprise companies will either run their compute needs in the mega clouds or keep a very small footprint as a private cloud, the colocation data center business model will not be sustainable. Colocation companies will have to track the next innovation cycle in order to maintain relevancy.
We must now ask ourselves, where is the next innovation cycle coming from? Engineers in the data center technology space have not stopped being innovative. They will simply have to shift the focus of their innovation to the next cycle of infrastructure build-outs: the Edge. The Edge is the first greenfield area since the mega data center revolution about a decade ago, and just like the incredible data center innovation cycle, the Edge will bring a new innovation wave.