InsightaaS: Martyn Jackson is a senior solutions architect with Dell’s Enterprise Systems Group and a 10-year Dell veteran. With over 25 years of experience supporting, building and designing IT solutions across industries as diverse as energy, finance and healthcare, he has developed an interesting perspective from which to share thoughts on key trends in infrastructure design and build. In the article below, Jackson outlines the evolution of a new enterprise market segment with hyperscale tastes, the characteristics of hyperscale computing, such as the drive towards standardization, and how converged IT architectures can address the growing enterprise need for rapid scale and infrastructure flexibility. InsightaaS is pleased to present this overview of an important market development from such an experienced guest contributor. (ed.)
It was nearly a decade ago that a new breed of customer – Internet companies building giant data centre capacity – found themselves in need of a new type of server to support their massive scale. This trend certainly didn’t go unnoticed. In fact, in 2007, a few entrepreneurial-minded engineers at Dell saw this opportunity and Dell’s Datacentre Solutions Group was born. Fast forward a few years, and a new server market segment was created that continues to attract the attention of new players today.
But web giants like Google and Amazon, as well as newer Internet companies such as HootSuite, aren’t the only ones who need simple, scalable IT. Businesses around the globe have been paying attention to the flexibility and efficiency these organizations have achieved with their IT strategies, and are curious about how to apply hyperscale design principles to their more mainstream data centre needs.
At the same time, as people change jobs – sometimes moving out of large hyperscale IT operations and into more mainstream enterprises – they bring hyperscale values with them, driving added efficiencies in the new infrastructure they work with. It’s not just about the lowest possible cost for a system, but also about building the right robust IT architecture to drive automation, improve total cost of ownership and maximize the value IT provides to stakeholders.
So while it’s true there are still sharp contrasts between hyperscale and mainstream enterprise computing, leading enterprises are starting to adopt the guiding principles of hyperscale.
A closer look
To better understand this evolution, it’s important to understand server market dynamics and how they have changed over the past few years. To simplify, four primary market segments have emerged as major forces in the server market today: first, the largest of the large global Internet companies we’ve already discussed; second, companies that aren’t quite hyperscale but are still massive in scale, such as Web Tech and HPC; third, traditional large enterprises; and finally, small and medium businesses.
These server market segments each have their own unique challenges, needs and workload requirements that must be addressed when bringing new solutions to market. But that’s not to say best practices shouldn’t be shared or cross-pollination shouldn’t occur.
In the case of hyperscale, when working with the massive Internet companies we witnessed first-hand their need to keep a common IT architecture, but tweak server, storage and networking for certain workloads like analytics or web serving. It simply didn’t make sense from a CAPEX, OPEX or efficiency standpoint for them to build different architectures for different workloads. This same principle applies to the other market segments as well.
Take large enterprises and Big Data applications as an example. Business units are increasingly demanding IT capabilities that support real-time analytics and greater intelligence in their operations. The marketing department may want to deliver sales messages to mobile phones based on a customer’s purchasing history, social media activity, location and other sensor data. But that same marketing department will also want to continue running its conventional CRM system and dozens of other applications on the same infrastructure. The bottom line is that data centres must have the flexibility to meet all these varying needs, while handling usage spikes by shifting resource utilization rather than maintaining costly excess capacity that can sit unused for significant amounts of time.
New Converged Architectures
When you’re working with data centres that comprise literally hundreds of thousands of servers, there’s a textbook of lessons waiting to be learned. That learning is now within reach: 2015 marks the year that IT leaders can apply the principles of hyperscale technologies to realize the advantages that cloud providers have achieved.
A case in point is the new converged architectures that provide – for the first time – a common, scalable platform that easily adapts to the ever-shifting business and technology landscape. By leveraging the building-block concept derived from large hyperscale IT operations, organizations can better manage, scale and tailor infrastructure to meet business needs as they change over time.
Technology to converge server, storage and networking is already here, and these new converged architectures will revolutionize how modern enterprises consume and manage IT in 2015.