By Katie Broderick; special to InsightaaS.com from 451 Research.
InsightaaS perspective: 451 Research is one of the world’s leading sources of insight into cutting-edge technologies — especially in areas that are important to InsightaaS and our principals, including cloud, analytics, and sustainable IT.
The following piece presents a synopsis of a longer report, “Disruptive Technologies in the Datacenter,” in which a group of 18 experts from 451 Research and its Uptime Institute subsidiary rated technologies that are not yet widely adopted but have the potential to disrupt datacenter design and economics, and the product roadmaps of datacenter suppliers. A two-step process winnowed an initial list of 40 possible disruptors to the ten highest-ranked technologies; each of the ten finalists was then rated on a scale of 1-5, considering the magnitude of potential impact, the speed at which that impact is likely to occur, and the overall likelihood of the technology reaching commercial maturity. The report’s author holds that a “score above 3.6 should merit action or very close observation.” Using this as a guideline, readers are advised to consider how flash storage, cloud-level resiliency, advanced DCIM software, and prefabricated modular (PFM) datacentres might affect their future datacentre plans.
Note: if you are interested in obtaining a copy of this report, please contact 451 directly, or contact InsightaaS at email@example.com.
451 Research is regularly introduced to potentially disruptive technologies for the datacenter space. In some cases, the technology is genuinely new and has yet to be deployed commercially in any market; in others, the technology is already proven but is set to be deployed in a new way.
Recently, 451 Research’s Datacenter Technologies team undertook a project to identify the 10 most potentially disruptive technologies in the datacenter. The research and the technologies are explored in detail in our ‘Disruptive Technologies in the Datacenter’ report.
The selection criteria excluded technologies that are already widely adopted and causing disruption (such as virtualization), as well as those that are new but unlikely to disrupt datacenter design and economics, or the roadmaps of suppliers. A scoring system took into account the opinions of 18 analysts and experts from 451 Research and the Uptime Institute (an independent division of The 451 Group).
The technologies that passed the initial criteria (about 30 others were considered) and were evaluated are:
- Low-power servers
- These servers use specially designed low-power, low-cost processors that may or may not be based on the x86 architecture (e.g., ARM, Intel Atom). Particularly in hyperscale datacenters, and for the right types of workloads, servers built on these processors may be more efficient and may scale up and down more readily.
- On-site clean power generation
- Low-carbon and low-waste energy created on a datacenter’s site can be more efficient and dependable than utility power. Power sources such as solar, wind, geothermal, hydroelectric and fuel cells (Bloom Energy) continue to be evaluated in terms of cost, efficiency and reliability.
- Advanced datacenter infrastructure management software
- This form of DCIM ties into multiple pre-existing datacenter management systems to become the mega-tool or operating system for the datacenter. It potentially ties into building management systems, IT service management (IBM’s and Emerson’s partnership, for example), CRM, financial reporting, cloud management, and other datacenter management systems and tools. It may be a necessary subsystem for many other disruptive innovations in the datacenter.
- Cloud-level resiliency
- This strategy refers to a set of technologies that may use the cloud, low-latency networks or software-level planning to achieve high availability across datacenters. Generators, UPS systems, clustering, mirroring, resilient networks and RAID storage are all used today — the question is whether moving traffic or workloads between lower-resiliency datacenters is a practical way to reduce infrastructure investment.
- Silicon photonics
- Silicon photonics combines optics and silicon in a single microprocessor, using light (photons) to transmit data at high speeds and with low power consumption. This innovation, at the right cost, could disaggregate typical systems into their component parts (processors, I/O, memory and storage) and lead to new datacenter designs. One prominent photonics user is Facebook, with its Open Compute Project.
- Chiller-free datacenters
- Chiller-free datacenters are made possible by a range of technologies that use outside air for cooling, such as air-side economization, water-side economization and evaporative cooling (for example, Google’s datacenter in Belgium uses evaporative cooling and has no mechanical chillers). These cooling techniques, combined with management systems and higher operating temperatures, give datacenter managers the option — in some cases — of avoiding the deployment of expensive mechanical chillers.
- Power-proportional computing
- Power-proportional computing automatically reduces or increases the power drawn by various IT systems in the datacenter based on their levels of utilization or the work that they are doing (examples include Hewlett-Packard’s Data Center Power Control and recent GEIT Award winner TSO Logic). Such rightsizing of datacenter power resources could affect IT systems’ design, cooling distribution and energy efficiency in the datacenter — and it could create wider power swings than most datacenters have experienced to date.
- Flash storage
- Flash storage is faster and more efficient (but more expensive) than traditional disk storage; it could have implications for datacenter power use and design, as well as for IT performance. The price of flash has decreased in recent years and continues to fall.
- Prefabricated modular (PFM) datacenters
- PFM datacenters use preassembled building blocks to either create a new facility or add capacity to an existing datacenter site (IO’s IO.Anywhere; HP’s Pod, EcoPOD and Flex DC; and Colt Group’s ftec data centre, for example). These alternatives to brick-and-mortar datacenters can deliver faster time to market, lower capex and better opex throughout the lifecycle.
- Memristors
- This technology uses a new way of applying variable resistance to effectively merge main memory and storage, creating universal high-performance solid-state memory. Ongoing memory constraints in the datacenter could be addressed through the use of memristors, and this could dramatically improve the performance of some datacenter applications (databases, ‘big data’ applications and analytics). It could also lead to the long-term convergence of servers and storage at the component and system levels.
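To make the cloud-level resiliency idea above concrete, here is a minimal sketch of steering workloads to whichever datacenter is currently healthy rather than over-building every site. It is entirely illustrative — the site names, health flags and latency figures are invented, not drawn from the report:

```python
def pick_site(sites):
    """Return the healthy site with the lowest latency, or None if all are down."""
    healthy = [s for s in sites if s["healthy"]]
    if not healthy:
        return None
    return min(healthy, key=lambda s: s["latency_ms"])

# Hypothetical fleet: one site degraded, two available.
sites = [
    {"name": "dc-east", "healthy": False, "latency_ms": 12},
    {"name": "dc-west", "healthy": True, "latency_ms": 40},
    {"name": "dc-eu", "healthy": True, "latency_ms": 85},
]
print(pick_site(sites)["name"])  # dc-west
```

The point of the sketch is the trade-off the report raises: if traffic can be moved this cheaply, each individual site needs less redundant infrastructure.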
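Power-proportional computing, described above, amounts to matching active capacity to load. A toy sketch of the sizing arithmetic — the per-server throughput and headroom figures are assumptions for illustration, not vendor numbers:

```python
import math

def servers_needed(load_rps, per_server_rps=500, headroom=0.2):
    """Smallest number of powered-on servers covering the load plus headroom."""
    return max(1, math.ceil(load_rps * (1 + headroom) / per_server_rps))

print(servers_needed(10_000))  # 24 servers at peak
print(servers_needed(1_000))   # 3 servers overnight
```

The gap between the peak and off-peak counts is also the source of the power swings the report warns about: a facility whose draw tracks load this closely will see larger, faster changes than one running every server continuously.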
Each technology was evaluated by Uptime Institute experts and 451 Research analysts on a scale of one to five, based on three criteria:
- Size of potential impact.
- Speed at which the technology will have that impact.
- Likelihood that the technology will reach fruition and disrupt the market.
The technologies’ overall scores are outlined in the table below — the evaluations are discussed in much greater detail in the full report (see below). The closer the score is to five, the more attention executive management — of both datacenters and their suppliers — should pay. Any score above 3.6 should merit action or very close observation.
Datacenter Technology Disruptive Rankings
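The report does not publish its exact weighting of the three criteria; assuming a simple average of the 1-5 scores, the threshold test works out as follows (the example scores here are made up, not taken from the rankings table):

```python
THRESHOLD = 3.6  # above this: "action or very close observation"

def overall(impact, speed, likelihood):
    """Average the three 1-5 criteria scores (an assumed, unweighted combination)."""
    return round((impact + speed + likelihood) / 3, 2)

score = overall(4, 3, 4)
print(score, score > THRESHOLD)  # 3.67 True
```

Even a technology scoring modestly on speed can clear the threshold if its impact and likelihood are rated highly.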
For more on 451 Research’s analysis of the 10 most disruptive technologies in the datacenter, see our ‘Disruptive Technologies in the Datacenter’ report.
Special from 451 Research’s Market Insight Service – DCT – Datacenter Technologies.