“We are back!” Meg Whitman proclaimed from the main stage of this year’s HP Discover conference in Las Vegas. Referring obliquely to fallout from some questionable acquisitions and fierce competition in several of HP’s core markets over the past couple of years, the CEO pointed to financial recovery achieved through streamlining and restructuring efforts — but more importantly, to a recovered sense of purpose and confidence about the company’s future. HP turned 75 this year, and took advantage of the Discover 2014 event to outline some of the reasons for this optimism, articulating the company’s new direction and technology to the nearly 15,000 attendees on hand. As it has throughout its 75-year history, HP is investing in innovation, working, as Whitman explained, “with the world as it is today…while planning for the future.” (For more from Whitman, see the video at the end of this article.)
HP’s stated goal at Discover 2014 is to power “the new style of IT,” an event tagline referring to solutions that address the gap between what technology delivers today and what customers expect. According to Bill Veghte, GM of HP’s Enterprise Group, the keys to bridging this gap are better storage, networking and server virtualization; open, standards-based hybrid cloud; new software and infrastructure that lays the foundation for exploiting Big Data; right-sized solutions for the SMB; and reduced data centre complexity — areas that HP, not coincidentally, has been working to develop through its converged infrastructure portfolio. In Veghte’s view, IT delivery must be liberated from the “brutal silos that we used to live with” and managed as “a single pool of resources,” with new levels of automation and self-service to empower developers and technology users, through modern architectures that enable “business model innovation” along with advanced technical capability.
If this technology vision is not news, many of the components announced at Discover 2014 represent important milestones in HP’s progress towards its ultimate ‘new style of IT’ goal. Some examples of HP innovation that are relevant to users today include:
All-flash arrays achieve price parity with disk storage
“The all-flash revolution is here, and we at HP are stealing everyone’s thunder.” — David Scott
According to HP’s GM of storage, David Scott, the traditional (3PAR) approach to delivering flash-level performance has been to insert flash between storage tiers; however, auto-tiering between flash and high-performance disk drives has not offered an effective alternative due to the high price of flash. And while flash storage startups have flogged all-flash solutions, these have been unable to scale reliably to petabytes — an increasing requirement for enterprise storage.
To address cost and scale limitations, HP has introduced the all-flash HP 3PAR StoreServ 7450 Storage array, which features hardware-accelerated, inline deduplication and thin cloning software — compaction technologies that HP claims can reduce capacity requirements by 75% when added to existing thin provisioning and zero-block deduplication — and express indexing, allowing the system to scale to 460 TB raw and more than 1.3 PB of equivalent usable capacity. According to HP, this is six times the capacity of first-generation all-flash systems. The enhanced solution also offers high-density 1.92 TB commercial multi-level cell (cMLC) solid state drives that rely on Adaptive Sparing technology to reduce overprovisioning and extend usable capacity per drive by up to 20 percent — resulting in cost parity with high-performance spinning disk. At $2 per usable gigabyte of storage, Scott believes “the tipping point for mainstream enterprise [adoption of] all-flash arrays has arrived, and will create tremendous revolution in the industry.”
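The arithmetic behind these capacity claims is easy to check. The sketch below (our own helper functions, not an HP tool; the system price used is a hypothetical input) derives the effective compaction ratio implied by the raw and usable figures above, and shows how a per-usable-gigabyte cost is computed:

```python
def effective_compaction_ratio(usable_tb: float, raw_tb: float) -> float:
    """Usable-to-raw ratio implied by inline dedup, thin cloning, etc."""
    return usable_tb / raw_tb

def cost_per_usable_gb(system_price_usd: float, usable_tb: float) -> float:
    """Dollars per usable gigabyte — the metric Scott cites for flash/disk parity."""
    return system_price_usd / (usable_tb * 1000)  # 1 TB = 1000 GB here

# HP's figures: 460 TB raw scaling to ~1.3 PB (1300 TB) of usable capacity
ratio = effective_compaction_ratio(1300, 460)   # ≈ 2.8:1 effective compaction

# Note: a "75% reduction in capacity required" is the same claim as 4:1 compaction
reduction = 1 - 1 / 4                            # 0.75

# Hypothetical system price, chosen only to illustrate the $2/usable-GB math
price_per_gb = cost_per_usable_gb(2_600_000, 1300)   # $2.00 per usable GB
```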
The next generation of HP’s ConvergedSystem, Sharks are preconfigured, ‘boxed’ solutions that combine HP servers, 3PAR storage and management through HP’s OneView tool, optimized to run specific applications from vendors such as VMware, Citrix, Microsoft and SAP. Sharks will now come loaded with OneView v1.1, which provides automated provisioning and management through standardized templates for routine tasks, as well as integration with major management systems including Microsoft System Center, OpenStack, VMware vCenter and Red Hat open management tools. Sharks also feature a ‘consumer-inspired’ interface with a 3D view for monitoring the health of the data centre, a map that can drill down into dependencies, and an alert system that draws on machine information as well as social network posts to identify issues. According to Tom Joyce, GM of HP Converged Systems, the focus this year is on building Sharks for smaller businesses and for Citrix VDI environments. There is a Shark for SAP HANA that combines a new class of server with in-memory storage, as well as Sharks for Microsoft Analytics and HP’s own Vertica — apps that are growing in popularity due to demand for Big Data functionality.
And in related Big Data news, HP also announced an upgrade to HAVEn, a platform introduced last year that integrates Hadoop, Vertica, Autonomy, HP Enterprise Security and other apps to manage three types of data: machine data, which Robert Youngjohns, GM of HP Autonomy, described as vast and quickly growing log data; business data, an important but declining proportion of the total; and human information, which accounts for an increasing share of the data in an organization. While most tools currently focus on business data, Youngjohns outlined HP tools for applying Big Data to IT operations, security, information infrastructure, app development, marketing, and legal and compliance activities. Upgrades this year include the HAVEn Workbench, new versions of Vertica (which is experiencing the fastest growth of any product in the HP portfolio), and IDOL OnDemand, which exposes analytics as web services to inspire additional app development. Youngjohns listed the top three apps built on HAVEn as AppPulse Mobile, HP Healthcare Analytics, and HP Service Anywhere, which monitors the performance of IT app operations.
Keys to better security
According to Art Gilliland, GM of HP Enterprise Security, global spend on security last year was $46 billion, but results were less than stellar due to the evolving adversary landscape (particularly in cloud and mobile, where we are rushing ahead without due care), to government intervention and to hacker talent: “Failure is inevitable,” he claimed, “because we’re battling against the best in the world, and they only have to be right once.” And while companies devote 86% of security budgets to preventing threats from getting in, in most cases threats have already penetrated and are waiting for the right moment to strike. A good second line of defence, he claimed, is protecting information assets through encryption, a tactic that few corporations engage in because it is difficult to manage. To help with this challenge, HP introduced three new Atalla secure encryption solutions: Enterprise Secure Key Manager 4.0, a key management system for up to 20 million keys, which integrates with HP’s ProLiant Gen8 servers but keeps keys separate from the data; Atalla Cloud Encryption, a ‘split-key’ approach that provides a virtual key for the cloud provider and a master key for the user, which must be used together, in order to protect data ownership; and Atalla Information Protection and Control, software that protects data using encryption and access control at the point of creation and through the collaboration and storage phases — essentially across the data lifecycle. Together, these solutions aim to protect data at rest and in motion across cloud, mobile and on-prem environments, and from threats inside and outside the organization.
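HP has not published the internals of its split-key scheme, but the general idea — two key shares that are individually useless and must be combined to recover the data key — can be illustrated with simple XOR secret sharing. This is a sketch of the concept only, not HP’s implementation:

```python
import os

def split_key(master_key: bytes) -> tuple[bytes, bytes]:
    """Split a key into two shares; neither share alone reveals anything."""
    provider_share = os.urandom(len(master_key))   # share held by the cloud provider
    user_share = bytes(a ^ b for a, b in zip(master_key, provider_share))  # held by the user
    return provider_share, user_share

def combine_shares(share_a: bytes, share_b: bytes) -> bytes:
    """Recombine the two shares to recover the original key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = os.urandom(32)                               # a 256-bit data-encryption key
provider, user = split_key(key)
assert combine_shares(provider, user) == key       # both shares together recover the key
assert provider != key and user != key             # neither share is the key itself
```

Because the provider’s share is uniformly random, each share on its own is statistically independent of the key — which is why neither party can unilaterally decrypt the data.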
Apollo lift-off — levelling the HPC playing field
The data-driven business environment of today is creating unprecedented demand for the kind of computing capacity delivered by HPC. As Antonio Neri, GM of HP servers and networking, framed the problem, demand for HPC means we will soon need 1 million teraflops of compute, which would require 1 gigawatt of power to run — the entire output of the Hoover Dam — and a data centre footprint the size of 30 football fields. To ensure that HPC is sustainable, HP has developed, and introduced with great fanfare at the event, the HP Apollo Family, returning to a brand name associated with high-performance systems in the 1990s. The Apollo 8000 System is a supercomputer that combines high levels of processing power with a unique liquid cooling technique that relies on a network of rack coils to keep water and computing separate. According to Neri, compared to traditional systems, at 250 Tflops per rack (double the density per server tray) the Apollo 8000 can service four times the computational need, will produce $1 million in energy savings over 5 years of operation, and remove 3,800 tons of CO2 per year. Describing the deployment at NREL (the US Department of Energy’s primary lab for renewable energy research), Bobi Garrett noted that the new Apollo facility has achieved a PUE of 1.06, and has managed to repurpose 90% of its waste heat as heating for the lab’s office space. In addition to HPC support for researchers, the HP and Intel system has delivered $1 million in savings from high-efficiency design, including 20% savings in avoided energy cost.
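For readers unfamiliar with the metric, PUE (power usage effectiveness) is the ratio of total facility power to the power consumed by the IT equipment itself, so a PUE of 1.06 means only about 6% of the facility’s power goes to cooling and power distribution. A quick sketch of the arithmetic, using illustrative loads rather than NREL’s actual figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_equipment_kw

# Illustrative loads: 1,000 kW of IT gear plus 60 kW of cooling/distribution overhead
nrel_like = pue(1060.0, 1000.0)     # 1.06, matching the figure Garrett cites

# For comparison, a legacy data centre often runs closer to 2.0 —
# a full extra watt of overhead for every watt of computing
legacy = pue(2000.0, 1000.0)        # 2.0
```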
A companion system, the Apollo 6000 leverages an air-cooled server rack design and is accessible to a broader range of enterprise customers. The 6000 System features an external power shelf and advanced power management that allow users not only to manage efficiency, but also to optimize the system for specific workloads. According to HP, the Apollo 6000 packs up to 160 blade servers per rack, and is capable of delivering greater performance (Intel has seen a 35% performance increase using the Apollo 6000 for its electronic design automation workloads), while using 46% less energy than its nearest competitor.
To broaden the appeal of HPC beyond government and academia, HP is offering financing for the Apollo systems and, for organizations just embarking on HPC, has also introduced HP Helion Self-Service HPC, an integrated, private cloud solution built on the Helion OpenStack cloud platform that can be self-managed or HP-managed, and whose modular design allows incremental deployment.
This list of announcements by no means exhausts the innovation on display at Discover 2014. Summarizing these accomplishments, Whitman argued that HP is the sole IT vendor with the product breadth to deliver the range of solutions introduced at the event. There is something to that — from “Niagara,” code name for a new large-format, PageWide modular printing technology that HP believes will give it entrée to a $1.3 opportunity in the design print market, to Helion cloud product and distribution announcements and the evolution of HP’s SDN networking strategy (stay tuned for a deeper dive on cloud and networking), HP development runs the gamut.
For the future, there’s more to come: think ‘The Machine’, a new computing paradigm — so named, HP CTO Martin Fink quipped, “because HP Labs does not have a marketing department” — that will rely on electrons to compute, photons to communicate and ions to store. If this sounds futuristic, it is all of that, but The Machine is also built on existing HP technology development: Moonshot, the first instance of special-purpose cores; Bloom, an HP tool that allows management at scale of thousands of servers and 40,000 VMs from a single console; silicon photonics — optical interconnects on the chip that generate huge energy reductions, which in turn remove the design constraint in traditional server architectures requiring close proximity of processors and memory; and Memristor non-volatile memory, under development for six years, which uses ion movement in a resistive circuit element to set the binary states used for storage. With low-energy, high-speed optical interconnects and memristor memory, The Machine could solve many of the heat/density, signal distortion, speed and size vs. performance issues that have dogged computing until now — supporting what Fink called “Big Data on steroids.”
Combined with Distributed Mesh — which moves processing to where the data is created, another phrase for the distributed intelligence enabled by The Machine’s ability to scale to server, data centre or smartphone size, and which Fink called “cloud for the Internet of Things” — and a new Linux-based, open source OS, The Machine has the potential to create a revolution in traditional computing. According to Fink, all components for The Machine and Distributed Mesh are now in development, and HP Labs is evolving SoC and photonics research towards the creation of a fully integrated machine. Along the way, the company is also integrating future technology into products under development for use today — evidence, for Fink, that “HP is leading the industry once again.”