InsightaaS: Every week or so, I’m reminded that “social” encompasses (and connects) many different types of relationships, often in a very non-linear way. The path to today’s featured post provides one such example. In recent months, I’ve had the good fortune to spend time working with DataCenter Dynamics Intelligence (DCDi), the research division of IT event and publishing leader DataCenter Dynamics. DCDi’s LinkedIn group recently spotlighted an article that Julius Neudorfer, director, network services at North American Access Technologies, wrote for Mission Critical Magazine. The article references “Mad Men” in its title, and as yesterday’s ATN feature was about ad tech, I thought it might provide a good sequel. However (as I might have guessed from the fact that it’s a Mission Critical article featured by DCD), the content is deep in its coverage of data centre power and cooling trends, and makes only passing references to the series centred on 1960s advertising professionals.
However, I (and likely, most ATN followers) am more interested in data centre power and cooling than ad tech anyway, so I was delighted to review Neudorfer’s piece, which spans both an interesting analysis of the state of the data centre industry today and some intriguing predictions for the next year (and beyond). He begins by tracing the path that has delivered us to the threshold of “Open SDE” (software-defined everything): a future in which large enterprises may begin to join hyperscale operators in building infrastructure by combining bare metal components with open software that blurs the distinctions between servers, storage and networking. Neudorfer believes that “2015 will be the year that the generic hardware ‘genie’ comes out of the bottle,” enabled by components sourced from Alibaba and from major vendors (Intel, Juniper, even Cisco) that are providing varying types of support for Open SDE.
The second half of the article is a potpourri of observations on issues and developments that will affect data centre operators and operations. These include: burgeoning bandwidth requirements, “Really Big Data,” the trend towards green data centre operations generally and liquid cooling specifically (both particularly interesting to us at InsightaaS, as we write on sustainable IT for Bloomberg BNA), and a handful of positioning-oriented issues, such as tax subsidies for data centre builds, the trend towards very large data centre facilities, Uptime Institute’s newly affirmed ownership of the term ‘Tier’ with respect to data centres, and the incongruity of representing storage with “an icon that resembles a 3.5 in. floppy disk (which many younger users have never actually seen).” Neudorfer wraps many of these ideas together in a “Bottom Line” predicting that “the data centers of the future will be dark, vast warehouse style buildings with minimal or no mechanical cooling at all…[and moreover], if delivering low cost utility computing is the goal, TCO is the driving strategy, and bandwidth has now become a low-cost commodity, data centers will be built wherever the climate favors free cooling, power is cheapest, and the tax incentives offer the greatest benefit.”
Before writing this column, I looked back at an article that I wrote five years ago with my predictions for the beginning of the new decade. It focused on some of the new developments related to the physical facility, and on how it might be impacted by the then recently introduced concept of cloud computing. It looks like that vision of the future is here in full force. The issues and definitions of the “data center,” as well as who wants to own or operate them (especially for the enterprise), have become more complex as the related C-suite titles have expanded (CIO, CTO, CSO, etc.), and as CFOs have come to see colos and cloud services as “utility computing”: financially strategic operating expenses rather than the depreciating fixed assets of a data center facility. Needless to say, the concept of an application bound to a dedicated server is now nearly obsolete thanks to virtualization, a term and technology that now encompasses the data center itself, hence the virtual data center, or “VDC.”
Moving even further, we are now entering the age of software defined everything (SDE), which began as software defined networks (SDN) and later expanded to the software defined data center (SDDC). In this model, IT hardware is no longer purpose-built, its role (e.g., server, storage, network) is no longer clearly delineated, and the approach is perceived as the new panacea. In some cases this was motivated by large customers wanting lower-priced, generic hardware (directly sourced, bypassing the major OEMs, e.g., “no-frills” hardware a la Open Compute). In other cases, it was developed to be the next generation of universal “bare-metal” hardware, designed to run open source software that is architected to overcome the limitations and bottlenecks of moving bits between the “dedicated” and siloed categories of devices.
MEMOS TO ALL DEPARTMENTS FOR 2015
Open secrets: It is no secret that the computer hardware sold by major OEMs is essentially all made by Asian contract manufacturers. Until recently, one of the main reasons many enterprise organizations purchased the major branded product was the sales, logistical, and technical support offered by the brand. However, while Internet giants such as Google and Facebook have long had the scale and purchasing power to have custom low-cost hardware built, many smaller, and even some large, organizations did not want to try to directly source, self-support, and use “generic” hardware. Moving forward, much as Linux developed and became accepted, the Open SDE movement has begun to take hold, and generic hardware platforms for server, storage, and networking will become the underpinnings of cloud service providers and even some enterprises…