
To the Cloud & Beyond: A Death Knell for Private Data Centers?
Toney Flack, CIO, Wichita State University


From the mainframe era, we progressed through the minicomputer and departmental computer eras (DEC, for example), to personal computers, to RISC Unix/Linux and Microsoft Windows bare-metal servers, and finally to today's small-form-factor, virtualized, multi-processor, multi-core blades of Intel and AMD microprocessors running Windows Server and Linux under hypervisors such as VMware, Microsoft's Hyper-V, or Citrix's XenServer.
A lot has changed, and very rapidly by comparison with other industries (Moore's Law at work). Many private data centers, though, even while still fully functional, are considered dinosaurs these days: unfamiliar, and perhaps even unwelcome, eyesores to some IT millennials and to some facilities and finance folks. But are they really a dead or dying species?
Yes, and no. Mostly yes.
Many of us are saddled with oversized data centers built to hold large, water-cooled mainframes rather than today's heterogeneous mix of dense racks of computing, storage, networking, and security appliances.
Consider the maintenance and replacement costs of UPS units, generator sets, and transfer switches; HVAC cooling and conditioning equipment; monitoring equipment; and fire detection and suppression equipment (no longer Halon!). Add the fact that this equipment is purchased infrequently, in "small" quantities, by personnel inexperienced in data center design and refurbishment, and the economics of a small to medium-sized private data center simply don't make sense any longer.
Our team, and most of our peers in academia and in the private sector, are rapidly moving every application possible to "the cloud". New commercial systems are typically purchased under the SaaS model, often hosted by the vendor in its own cloud or on one of the major commercial cloud platforms such as AWS, Azure, or Google. Home-grown systems are being virtualized and then moved to one of those same commercial clouds.
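For home-grown systems, that lift-and-shift often boils down to exporting the existing virtual machine and re-creating it in the provider's environment. The following is a minimal, hypothetical sketch using Python and the boto3 AWS SDK; the bucket, file, and instance names are placeholders, and the same pattern exists on Azure and Google.

    # Hypothetical sketch only: lift-and-shift of one virtualized, home-grown server into AWS.
    # Assumes the boto3 SDK, configured AWS credentials, and a VM already exported (e.g., as an
    # OVA file) and uploaded to an S3 bucket. Bucket, key, and instance-type values are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Step 1: ask EC2 to convert the exported VM image into an AMI.
    task = ec2.import_image(
        Description="Home-grown ERP server (lift-and-shift)",
        DiskContainers=[{
            "Description": "VMware export",
            "Format": "ova",
            "UserBucket": {"S3Bucket": "example-vm-exports", "S3Key": "erp-server.ova"},
        }],
    )
    print("Import task started:", task["ImportTaskId"])

    # Step 2 (once the import task finishes and reports an AMI ID):
    # launch the migrated server as an ordinary EC2 instance.
    # ec2.run_instances(ImageId="ami-...", InstanceType="t3.large", MinCount=1, MaxCount=1)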
Obviously, the large cloud corporations have the luxury of building from scratch and of locating these new state-of-the-art data centers wherever is most appropriate, without the constraint of having to build near an existing corporate headquarters or other (manufacturing or warehousing) facility. Thus they construct SAS 70-compliant, Tier IV facilities built to withstand Category 5 storms, and they site them where land, labor, and utility costs are favorable and/or where the climate allows hybrid natural cooling for portions of the year. Most of us are mere mortals operating small to medium-sized, "average" private data centers, without the funding and siting advantages described above to construct or refurbish facilities to those levels of protection and operating-cost efficiency.
Some skeptics still have lingering concerns about network reliability and performance, about unknowns related to data storage location(s) (particularly if outside their home country's borders), and about possible additional challenges in demonstrating regulatory compliance in an "outsourced" data storage arrangement, in an era of ever-increasing electronic statutes across the globe and across industries (HIPAA, PCI, SOX, FERPA, GLBA, etc.).
I believe the fact of the matter is that commercial cloud providers, high-performance data carriers, co-location centers, and the like have now matured to the point where any residual concerns have more to do with perception and emotion than with reality. There is always a visceral reaction to releasing what is perceived to be "total control", but once you realize that you never really had that level of control in the first place, and once you factor in the economic advantages and the reduced burden of environmental administration, it is actually a cathartic feeling to "let go" and take the cloud plunge!