451 Research: Software-defined storage – a proxy for storage transformation

InsightaaS: Time was, storage transformation was largely a story of falling prices, but as 451 Research storage VP Simon Robinson explains in the article below, storage is now having to play catch-up with compute virtualization in the data centre. If “software-defined storage” – a term Robinson believes stands in as a “proxy” for several transformational disruptions that have been underway – is over-used, 451 Research has found that user organizations are buying in nonetheless, driven by a familiar theme for storage market watchers: expectations of reduced capex and, indeed, reduced operational costs.

Has the storage discourse changed, or are we back at square one, with cost the determining attribute in storage decision-making? The top user requirements uncovered in 451’s storage survey suggest the latter: compatibility with existing storage architectures and support for third-party open storage systems ranked as the number one and number two issues in respondents’ minds – attributes that point to the IT or storage manager’s disinclination to incur the additional costs that can accompany new technologies, or the maintenance burden of supporting complex storage environments.

Robinson interprets users’ simultaneous focus on compatibility and on support for third-party open systems as a vision for SDS deployment that spans both existing and new workloads and applications – a view that has merit, assuming the two requirements really do represent opposite poles. More interesting is his view on where SDS will gain initial traction: in non-mission-critical, tier-two and test/dev applications that generate a lot of data but may not require the same availability or reliability as production environments (with cost savings a key benefit); in the “emerging ‘hyper-convergence’ space,” where a converged storage-server architecture, such as the one recently introduced in VMware’s updated VSAN, reinforces the simplicity of an integrated offering; and in enabling cloud-based storage, such as running a storage OS from a vendor like NetApp on OpenStack and other clouds. Robinson thus circles back to cloud use cases (private and public), which in this technology’s infancy were also associated with resource savings and efficiency.

 

Simon Robinson, research VP, storage, 451 Research

Enterprise storage – so long an afterthought of the IT department – is changing. An unpalatable combination of relentless data growth, capital expense, complexity, fragmentation and huge operational overhead is prompting many enterprises to rethink their entire storage infrastructure strategies, especially in an era of unprecedented budgetary scrutiny. While these challenges are not new, the changing software infrastructure higher up the stack is the catalyst for change; server and desktop virtualization, which exposed the rigid nature of many storage system designs, was merely the appetizer. Now that the enterprise wants the IT stack to look – and behave – like a cloud, the storage infrastructure is having to catch up once again.

This is prompting a fundamental rethink of storage strategies at many organizations, especially when it comes to deploying new, ‘cloud era’ applications and workloads. The modish term for this transformation – used by storage suppliers old and new – is software-defined storage (SDS). While the term itself – an offshoot of the broader ‘software-defined datacenter’ (SDDC) – is overused, it is driving recent storage investments, and it speaks to some fundamental changes that have been taking place in enterprise storage systems design for several years. In this report, we look at how end users are initially reacting to SDS, and we look ahead to initial adoption scenarios.

The 451 Take

As we noted in our Storage Preview report, the notion that storage somehow ‘has to change’ – and not just at the margins, but in some fundamental ways – is one core narrative that will dictate the storage industry’s agenda over the coming year. There are multiple threads to this, but we think the continued emergence of a number of disruptive technologies and approaches – principally software-defined storage, but also flash, convergence and cloud – offers IT organizations a number of valid strategies around storage transformation. Overall, end-user understanding and awareness of the value – and limitations – of SDS offerings will accelerate in 2015, and while overall deployments will remain largely confined to non-critical applications, SDS offerings will continue to develop and mature along multiple trajectories, creating opportunities for innovative players.

Don’t get too hung up on definitions; SDS is a proxy for transformation
Broadly, software-defined storage is a term that speaks to a number of simultaneous and somewhat related trends in storage technology that have been building in recent years. First is the shift in storage system design away from ASIC-based designs toward industry-standard x86 processors. Another is the emergence of rich storage software functions that are – or can be – divorced from the underlying hardware, essentially ‘virtualizing’ the storage beneath them. This is sometimes described as the separation of the data plane from the control plane, and it is designed to enable new levels of control, flexibility and choice. A third trend is the emergence of storage stacks that, to varying degrees, utilize open source software, often in conjunction with broader open source platforms such as OpenStack.
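To make the control-plane/data-plane separation concrete, here is a minimal sketch using the openstacksdk Python library: the caller requests capacity by policy, and the storage control plane decides which physical backend serves the data. The cloud profile name (“mycloud”) and volume type (“tier2”) are illustrative assumptions, not details from the report.

```python
import openstack

# Connect using credentials defined in a local clouds.yaml profile;
# "mycloud" is a placeholder profile name.
conn = openstack.connect(cloud="mycloud")

# Ask the control plane for 100 GB of storage with a given policy
# ("volume type"). The Cinder scheduler decides which physical backend
# (Ceph, LVM, a vendor array) actually serves the data; the caller
# never addresses the hardware directly. "tier2" is an assumed type.
volume = conn.block_storage.create_volume(
    name="demo-volume",
    size=100,
    volume_type="tier2",
)
print(volume.id, volume.status)
```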

Software-defined storage is not a single type of product or technology; rather, it’s a pervasive approach that can be applied to a wide range of use cases and technologies. Indeed, one of the promises of SDS is that it can help consolidate and simplify what is often a highly fragmented storage infrastructure. Just as virtualization and cloud are proxies for IT transformation at the server level, we think SDS is primarily a synonym for storage transformation, in terms of efficiency, agility and lowering the overall cost of managing the storage environment.

This lack of a clear and consistent definition was certainly an initial cause of confusion for IT decision-makers, and SDS and the notion of the SDDC were at first dismissed as vendor hype and marketing-speak. Although some skepticism around SDDC persists among IT decision-makers, understanding is growing to the point that more of them are starting to bake it into their strategic plans. Over the course of 2014, we saw a marked rise in the number of organizations that agreed to some extent with the statement: “We are strategically planning to move to a SDDC” (see Figure 1 below). This suggests that, although IT decision-makers may not have a crystal-clear idea of what SDDC specifically involves, the notion is increasingly appealing.

[Figure 1: Organizations agreeing with the statement “We are strategically planning to move to a SDDC”]

 

SDS: perceived benefits and requirements
So what are IT and storage decision-makers saying about the benefits of moving to a software-defined storage strategy? It’s early days, but a study 451 conducted in late 2014 with more than 100 enterprise storage managers found that, although many benefits were mentioned, respondents pressed to name the top benefit were pretty clear: reducing hardware costs and lowering overall storage capex ranked number one and two, respectively. SDS, in other words, is viewed as a path to reducing the capex of storage. Reduced opex was the third-most-cited benefit, underscoring that cost savings are the principal drivers of an SDS strategy. Nonetheless, a number of respondents cited improved agility, greater ease of management/use and improved scalability as primary benefits of SDS.

[Figure 2: Perceived benefits of software-defined storage]

 

By contrast, relatively few respondents said that eliminating reliance on proprietary storage vendors was a top benefit of SDS. This suggests that vendor lock-in, per se, is not a primary motivator for SDS, and it is consistent with what we are hearing elsewhere: IT managers accept that some degree of vendor commitment is necessary with any implementation; even open source implementations are usually backed by an ‘enterprise’ service and support agreement.

Moving to the requirements of an SDS architecture also reveals some interesting points. The top requirement cited by respondents was ‘guaranteed compatibility with my existing architecture.’ This suggests a couple of things. First, it underscores that IT and storage managers are wary of introducing yet another silo into their environment: new architectures have to be compatible, to some degree, with what they already have on the datacenter floor. Second, it suggests they are thinking about a software-defined approach to support existing workloads and applications. Meanwhile, the number two requirement for SDS was ‘better support for third-party open source platforms such as OpenStack,’ which suggests that IT managers are thinking about SDS for new workloads and applications as well. In other words, the potential applicability of an SDS-based approach looks very broad. However, we’re not quite at the point of mass adoption just yet. Respondents also highlighted the need for clearer industry standards, a clearer definition and explanation of the pros and cons of SDS, and more documented use cases, among other key requirements for adoption. In a market where many storage suppliers are jumping on the SDS bandwagon, we don’t think all of these issues will be resolved quickly; indeed, things may get more confusing before they start to improve.

[Figure 3: Top user requirements for software-defined storage]

 

Where to start?
So where will SDS approaches garner initial traction? It’s probably easier to ascertain where it won’t happen. We don’t believe SDS approaches will initially encroach into the heart of the datacenter, where mission-critical applications run, unless they are packaged as fully baked appliances (although this may seem to defeat the point of being software-defined, it also highlights that many organizations still prefer to buy IT as a fully integrated and supported hardware-based appliance).

We believe that SDS adoption will initially focus on a couple of areas. One is storage for non-mission-critical applications – tier-two and test/development applications – that often create large amounts of data that need to be stored, protected and retained, but without requiring the same level of reliability, availability and serviceability as mission-critical applications running on enterprise SANs. The primary benefit here might be capex savings through consolidation, especially where multiple islands of existing storage can be streamlined under a single point of management and a common hardware platform. It’s perhaps no coincidence that this is precisely the sort of workload where VMware was initially adopted. Similarly, we think SDS approaches have the opportunity to earn their stripes and prove their value in less critical areas, giving users the confidence to deploy them in more important areas in the future. Nexenta is one company that has been doing this successfully, while multiple players – including Red Hat and SUSE – are beginning to take Ceph into this arena.
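For a flavour of what running such a software-defined stack looks like in practice, below is a minimal sketch that stores and reads an object on a Ceph cluster via its librados Python binding. It assumes a running cluster, and the pool and object names (“tier2”, “backup-2015-01”) are illustrative only.

```python
import rados

# Connect to a running Ceph cluster using its standard config file.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# "tier2" is an assumed pool name for non-critical data; Ceph handles
# replication and placement across commodity nodes behind the scenes.
ioctx = cluster.open_ioctx("tier2")
ioctx.write_full("backup-2015-01", b"example payload")
print(ioctx.read("backup-2015-01"))

ioctx.close()
cluster.shutdown()
```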

We would place the ongoing emergence of object-based storage approaches into this category. While object storage deployments are still largely limited to big environments – typically those with more than 1PB of unstructured data – the number of such environments continues to grow. Additionally, most object-based approaches are essentially software-centric in nature: they are designed to scale massively on commodity hardware and to leverage programmatic, API-based access methods, which makes them a good option for many emerging application types. They also increasingly support more traditional file and block interfaces, making them a potentially good fit for consolidation spanning both ‘old style’ and ‘new style’ applications. Startups such as Scality, Cloudian and Caringo are heavily focused here with software-centric approaches, while efforts such as EMC’s ViPR also target this opportunity, at least in part.
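Many software-centric object stores expose an S3-compatible interface, so a minimal sketch of the programmatic access model might look like the following (Python with boto3). The endpoint, credentials, bucket and key are placeholders, and the exact API surface varies by vendor.

```python
import boto3

# Point the standard S3 client at an on-premises, S3-compatible
# object store; the endpoint and credentials here are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Objects are addressed by bucket + key over HTTP rather than by LUN
# or file path, which is what makes the model easy to program against
# and to scale out on commodity hardware.
s3.put_object(Bucket="archive", Key="2015/report.pdf", Body=b"...")
obj = s3.get_object(Bucket="archive", Key="2015/report.pdf")
print(obj["ContentLength"])
```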

The other area of SDS adoption in 2015 will be the emerging ‘hyper-convergence’ space, where we think the simplicity of a converged storage-server architecture, melded with SAN-like data resiliency and other services, has huge appeal – especially in smaller environments, including remote office/branch office sites and enterprise departments. The availability of VMware’s recently updated VSAN product has done much to validate this market, and we are seeing an explosion of offerings. While some hyper-converged approaches are still only available as self-contained hardware-based appliances – most notably Nutanix, whose popularity underscores the enduring appeal of this model – many suppliers, including Maxta, Atlantis Computing, StorPool, Stratoscale and StorMagic, are software-only, relying on hardware partners to deliver an integrated offering to customers where required.

One further developing area for SDS in 2015 will be enabling cloud-based and cloud-like storage capabilities, whether that means a service provider standing up public cloud storage services or an enterprise looking to implement an on-premises private storage cloud. Models such as OpenStack will continue to provide a relevant framework here, and we expect to see more vendors enable their storage stacks to run as a cloud-based service. For example, NetApp plans to release ‘Cloud ONTAP’ in 2015, a hardware-independent version of its core OS that will run on Amazon and other clouds. Startup Zadara takes a similar approach with its pay-as-you-go, storage-as-a-service offering, which can run either in an external/public cloud or entirely on-premises.

 
