Data Storage Infrastructure 2022 Predictions

My team put on their thinking caps and contributed to a list of data storage-centric predictions for the upcoming year. Admittedly, some are based on, or projected from, analyst insights that we are especially close to because of our day-to-day work. But some are more qualitative and were the product of our own observations and related expectations. Enjoy, and let me know what you think (we’ll have to check back at the end of the year to see how we did).

HPE Storage Solutions for Microsoft workloads team conjuring 2022 predictions
  1. IT equipment supply issues continue deep into 2022 – Major IT equipment manufacturers work within an especially global value chain, with the products we design then getting produced in conjunction with developers, component suppliers and manufacturing partners on the other side of the world. Based on known logistics and component (such as processor) issues, we expect supply challenges to continue well into the new year. In addition, a major disruption in southeast Asia – whether political (e.g. the South China Sea), a pandemic resurgence, or extreme weather (e.g. flooding) – could quickly impact a majority of the world’s top 10 busiest ports, including our supply lines and inventory… as well as sadly putting an end to your dream of getting that cool Apple iPhone 13 anytime soon, as it would probably come via a Foxconn facility in Zhengzhou and the port of Shanghai. Luckily, broadening investment in real-time logistics info-tech may help supply chain players better see, optimize, and work around problems. Absent a major crisis, we should have IT trade back to ‘normal’ by the end of the year.
  2. Companies become more concerned with managing Data than Storage –  As more customers evolve to a ‘service-oriented’ model, such as via an HPE GreenLake based solution, they are becoming less concerned about the specifics of what infrastructure is being used to store the data. Whether the deployment is on something like HPE dHCI, software-defined x86 scale-out or traditional arrays, the trend is for more focus on the desired business outcomes around the data and ensuring requirements are met vs. discussions on storage deployment details.
  3. NVMe a part of every infrastructure – IDC already predicts NVMe storage will be used by 91% of companies within the next 2 years, and we are seeing NVMe drives as a key part of our storage solutions. A set of NVMe drives is a natural pairing for software-defined infrastructures and intelligent applications that maintain a software-based cache – pin the cache to these speedy drives and you ensure low latency and fantastic workload performance. On a more strategic level, we expect NVMe within more distributed architectures (e.g. NVMe-oF over RDMA, FC, or TCP) to see broader adoption due to the consolidation of standards at the system level. Manufacturers are continuing to drop Gen-Z-related development and are standardizing around CXL (Compute Express Link), a new open interconnect standard that reduces the latency of data sharing between CPU and memory within a system node. This system design consolidation for higher performance within a host and its surrounding devices is expected to have a follow-on effect of allowing more innovation in the surrounding fabric, further spurring the use of NVMe drives and related media within the system node, across the rack, the aisle, and the datacenter.
  4. Cyber-crime will continue despite government action – Cyber-crime continues to be an endemic problem requiring a governmental response. Yet there appears to be a disconnect: ransomware, trojans, and live criminal actions against enterprise servers and storage keep growing, while legislative action seems more focused on rules to protect data from a physical perspective, such as requirements to purge storage media and servers before they’re decommissioned or disposed of. While the requirement to obtain certificates of destruction may be a good revenue opportunity for professional services firms, it does little to thwart the serious threat of cyber-attacks by criminals, overseas adversaries, and terrorists.
  5. Points of Data Integration will grow – Despite the increasing threat to our data outlined in the previous prediction, we expect to see continued growth of integrations between companies, partners, customers and systems. Past IDC reports and more recent predictions have detailed how organizations are having to manage more APIs as part of doing business, and that “…mastery of APIs… [is] a price of admission to competing on the digital business battlefield.” Look for new data storage specific integrations becoming available especially between hyperscaler clouds, popular IT dashboards, and enterprise data storage platforms.  
  6. Software defined storage will continue to grow – Though there’s still an important place for traditional block storage arrays, the compelling economics and hybrid cloud features of the newest software-defined storage products will continue to gain adoption and expanded use. Products such as Azure Stack HCI especially when combined with hardware components such as NVMe storage, PMEM and GPU will increase utilization as infrastructure for VDI, enterprise virtualization, and big data.
  7. Container storage adoption will continue slowly – Containers and Kubernetes form an area that I’m sure a lot of us have been lured into, drawn by the inherent techno-coolness and social media-fueled enthusiasm. However, more and more is being written about the slow adoption of these technologies. The uptake of containers and related container-based data storage is lagging – no doubt because the technical complexity outweighs the benefits realized from the license cost savings of using free open-source software. But just as in the Monty Python sketch, container-based storage is ‘not dead yet’, and still something to continue to watch.
  8. The Battle of the Edge will intensify – The idea of Edge Computing is still fairly new, coming into common usage maybe 5-6 years ago, and tightly linked with mobile computing and Internet of Things, specific to wearables, home automation systems, sensors, RFID tags and the like. Within this limited context, the entire market opportunity for the Edge was expected by leading analysts to be less than $3 billion in 2022. Chump change vs. the forecast of all IT spending to top $4 trillion that same year. And though a growing host of companies continue to jockey for this market sliver, a few seem oriented towards a compressed world view where the Edge is almost every compute resource outside the hyperscaler cloud. Through that lens, ‘Storage at the edge’ becomes way more than just things like NAS-attached video cameras, but also includes data storage within remote offices and even enterprise storage arrays within datacenters. Expect the battle for the edge to not just be about products but also philosophies.
  9. Data Management still required despite the cloud – We’re seeing the line between on-prem and cloud blur, with more deployments being at least hybrid cloud, and most new applications starting life native to a hyperscaler environment. It’s being written that “Cloud computing has won”, and while this may have relieved IT from operational tasks centric to that application, it hasn’t eliminated responsibilities around ensuring the availability, protection and access to the data. We expect that through 2022, though IT is still on the hook for data security and locality concerns, teams will continue to lack easy-to-use tools to manage data across clouds, and that a new product market will take shape around enterprise data management, operations and mobility.
  10. The Year of Hybrid Cloud – We’ve got to add this one with a big wink, because this has been a prediction out there for at least the last decade (see Wired article from 2012). Of course, ten years ago the perspective was more theoretical and centric to mitigating periodic ‘inbound spillover’ of excessive application demand – what we’ve since taken to calling ‘cloud bursting’. Hybrid cloud has since continued growing in popularity especially over the past five years. Today we’re seeing adoption of hybrid cloud products that enable the actual mixed usage of on-prem and hyperscaler-based services together and managed within a single pane of glass (think: Azure Stack HCI managed through Windows Admin center). So maybe 2022 IS finally the year of Hybrid Cloud?

Learn How On-Prem Database Infrastructure is Evolving with New Cloud Benefits

Today, most companies have an IT strategy that includes cloud (e.g. “Cloud First”, “Cloud Best”), but according to IDC, “…most Microsoft customers still run the vast majority of production workloads on Windows in the on-prem data center”. This is especially true for business and mission-critical workloads. The challenge customers face in moving them to the cloud includes the need for maintaining consistent performance and availability, maintaining control of the data, and lack of cloud system expertise. And when it comes to protecting data in the cloud, there are numerous considerations such as backup/restore performance, security and data sovereignty.

To address these challenges, new infrastructure offerings are evolving to add the best of Cloud to these traditional platforms. Today, HPE announced new pre-defined HPE GreenLake cloud services for Microsoft SQL Server. Key benefits of these new “as a service” offerings include just paying for the infrastructure that you use, while being able to maintain the hands-on control to run and protect your production workloads using familiar HPE storage, compute and networking hardware.

The full HPE GreenLake value includes even more, such as faster time to solution with pre-configured, ready-to-ship offerings in as few as 14 days, getting the capacity you need when you need it—in minutes, not months, accessing online pricing for simplicity and financial clarity, and having web access for self-service via HPE GreenLake Central. In addition, the offloading of routine operational tasks through GreenLake Managed Services is optionally available for these offerings.

The Scoop on the New as-a-Service Offerings

HPE GreenLake platform-as-a-service solutions offer customers a faster time to value with a turnkey cloud experience on premises. The solutions, in pre-sized but adjustable configurations, have been developed to deliver levels of availability, performance, functionality, and cost, to meet a range of needs.

Microsoft SQL Server is the most common workload deployed on HPE primary storage platforms, and now, as a new HPE GreenLake service available in four pre-defined configurations, customers have the option to refresh their environment with a workload-optimized and tested configuration, ready to support their database regardless of size. All of these new configurations feature HPE Alletra 6000 all-NVMe storage, recently proven to deliver increased database performance through the benefits of an all-NVMe array. It’s expected that this pay-per-use pricing, point-and-click self-service and other cloud features will provide a 30-40% TCO hardware savings vs. the traditional upfront capex model.
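As a rough illustration of where that kind of savings can come from, consider a simple pay-per-use vs. capex comparison. Every number below is hypothetical, invented purely for the example; it is not HPE pricing.

```python
# Hypothetical pay-per-use vs. upfront-capex comparison.
# All figures here are made up for illustration; real GreenLake pricing differs.
capex_purchase = 1_000_000   # upfront buy, sized for projected peak demand
avg_utilization = 0.60       # share of purchased capacity actually consumed
service_premium = 1.10       # assumed per-unit premium for the as-a-service model

# Pay-per-use bills only the consumed share (plus the premium),
# rather than the full peak-sized purchase.
payg_cost = capex_purchase * avg_utilization * service_premium

savings_pct = (1 - payg_cost / capex_purchase) * 100
print(f"Hypothetical hardware TCO savings: {savings_pct:.0f}%")  # ~34%
```

The point of the sketch is simply that over-provisioning for peak is what pay-per-use eliminates; actual savings depend entirely on your utilization profile.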

There are also HPE GreenLake services available for Veeam and Splunk. Watch for more news regarding availability of pre-defined configurations.

Why HPE Storage and HPE GreenLake for running and protecting your business-critical databases and applications

Whether SQL Server or some other critical workload, HPE infrastructure delivers unmatched availability. HPE is the only vendor with a 100% availability guarantee on enterprise-class data storage. This is coupled with the unique HPE InfoSight that uses Predictive Analytics to ensure uptime.  In addition, HPE storage, in partnership with Veeam, provides hybrid cloud data protection and mobility features that span on-premises and cloud.

HPE makes all your business databases and applications faster with leading all-flash, Storage Class Memory, NVMe media, and now All-NVMe storage arrays. This helps all your workloads run faster, lets users work faster, and enables businesses to create value and innovate faster.

HPE GreenLake for Microsoft SQL Server delivers the familiar benefits of HPE storage and HPE server platforms, a consumption-based pay-per-use model, plus additional benefits of the HPE GreenLake service:

  • Cost savings from flexible, consumption-based model with simplified units of measure for billing
  • Risk reduction from a validated full stack solution right-sized for your implementation
  • Visibility of data, applications and infrastructure on-premises and under your direct control
  • Business agility by leveraging scalable pay-per-use architecture with cloud-based administration

Through HPE GreenLake, customers can manage, pay for, and grow their environment over time.

Next steps on the road to HPE storage solutions with HPE GreenLake

Microsoft SQL Server was already available as an HPE GreenLake service, but as of today it can be quickly ordered in pre-sized configurations using HPE storage infrastructure to serve a range of organizational sizes and application requirements. These services all provide superior uninterrupted operations for mission-critical environments while delivering cost savings by letting you pay only for the infrastructure you need, as you use it, with on-demand scalability.

Get the details on today’s HPE GreenLake news here. Learn more about Microsoft Storage solutions from HPE here.

Increase Business-critical Database Performance with All-NVMe Storage

Today’s SQL Server Situation

Microsoft SQL Server continues to be dominant within the established relational database management market. And traditional RDBMS is still a leading database approach in terms of familiarity, installed base, and spending. Closer to home, we see SQL Server as our #1 workload on both the HPE 3PAR Storage and HPE Nimble Storage installed bases after only VMware VMs. On HPE Nimble Storage, SQL Server alone occupies more storage arrays than Oracle, Citrix, DB2, SAS, MySQL and Splunk combined!

The newest version of the database system, SQL Server 2019, presents new opportunities for savings with features like ‘Big Data Clusters’ – but also new areas of complexity for some IT teams, including running within containers, Kubernetes container management, built-in Apache Spark, the PolyBase feature, and the potential to use it within a data lake or as a platform for AI/ML.

The implication for customers is that new deployments can vary widely by usage (e.g. OLTP vs. OLAP), and the underlying infrastructure needs to be optimized based on the use objective (performance, capacity, availability, cost…). Customers should seek out guidance related to the targeted solution to ensure deployment success.

What makes HPE stand out in the SQL Server Infrastructure market

First of all, HPE ensures SQL Server workload availability. It’s the only vendor with an unmatched 100% availability guarantee on enterprise-class data storage. This is coupled with the unique HPE InfoSight that uses Predictive Analytics to ensure uptime. In addition, HPE Storage provides hybrid cloud data protection and mobility features that span on-premises and cloud.

Secondly, HPE makes SQL Server faster with leading all-flash, Storage Class Memory, NVMe media, and now All-NVMe storage arrays. This helps databases and applications run faster, lets users work faster, and enables businesses to create value and innovate faster.

Finally, and maybe most importantly, HPE has a breadth of platform solutions for SQL Server. This complete line of storage solutions broadly meets customer needs from mission-critical to entry level, with gradients in performance, availability, usability, scale, and economics.

Breadth of solutions for the range of Database challenges

Don’t take my word for it – here’s the rundown of the industry’s broadest range of SQL Server infrastructure solutions – from the most mission-critical, scale-up environment to mid-market and departmental offerings:

  1. SQL Server 2019 on HPE Alletra 9000 and HPE Primera Storage – Large enterprise business-critical SQL Server. Provides 100% guaranteed availability, highest levels of performance
  2. SQL Server 2019 on HPE Alletra 6000 and HPE Nimble Storage – Enterprise/mid-market SQL Server. Provides six 9s availability guaranteed, easier data mobility and protection
  3. SQL Server 2019 Big Data Clusters on HPE Storage – Scale-out SQL Server environment, serves as platform for a data lake. Manage relational and Big Data together, from across the organization
  4. SQL Server 2019 on HPE Nimble Storage with Storage Class Memory – Acceleration for demanding online transaction processing (OLTP), powered by Intel Optane SSDs. High-performance read cache for faster queries. Testing shows more than a 50% decrease in latency
  5. SQL Server 2019 on HPE SimpliVity – Enterprise-grade hyperconverged speeds up application performance, improves efficiency, resiliency, and restores VMs in seconds.
  6. SQL Server 2019 on HPE MSA Gen6 – Entry-level/departmental SQL Server. Simplicity, speed, affordability, and enterprise-class reliability.

Testing the new All-NVMe Storage

HPE is bringing to market all new, All-NVMe storage platforms, and we were fortunate to get early access to run performance tests with our SQL Server test tools.  First, about the environment.

We conducted our own internal testing using the Microsoft Data Warehouse Fast Track (DWFT) tool in a couple separate runs during March and April, 2021 in Ft Collins, CO and Houston, TX.

For compute we had the ProLiant DL380 Gen10, and on it was Windows Server 2019 along with SQL Server 2019. The server had four 32Gb Fibre Channel connections to the storage.

The storage array was the new HPE Alletra 6070. On it was the database (10 volumes) and tempDB (4 volumes).

A little about the DWFT tool – our motivation for using it was to maintain consistency with performance tests we’ve been running for almost a decade, and it’s a familiar tool for customers and partners. However, it’s become dated in its ability to provide useful results because it’s been outpaced by the workload itself as well as the surrounding technology. A specific issue concerns the new SQL Server 2019 feature, memory-optimized tempDB. We saw the same thing happen with the JetStress tool, which had been used for many years with Microsoft Exchange, yet with the latest version of the application, the tool can’t report on the MetaCache database.

So, we ran our originally planned DWFT tests of the HPE Alletra 6070 vs. the same stack on the HPE Nimble AF80. However, based on the results I’m about to share, we recognized the need, as well as the opportunity, to re-do some testing which will better portray the benefits of the new platform.

SQL Server on HPE Alletra 6000 All-NVMe Flash Storage
Initial testing proved increased levels of enterprise database performance for the HPE Alletra 6000 All-NVMe Flash Storage system. We demonstrated greater lab-verified app performance vs. the earlier-generation all-flash system using the traditional Microsoft Data Warehouse Fast Track tool. However, as addressed earlier, you can expect even greater real-world performance gains, which we’ll show in future follow-up testing.

Proven performance gains:

  • +30.4% Measured Query throughput (Queries/Hr/TB)
  • +29.4% Relative throughput
  • +28.3% Measured Scan rate Physical (MB/Sec)

These are all Column Store measures, and indicate enhanced performance when working with large volumes of data as found in a data warehouse deployment.  We saw less compelling results in the Row Store metrics, but we feel this was more an issue with the test tool than the product being tested.  The DWFT results under-report the enhancements of the new Alletra platform, thus our plan to re-do the test with contemporary tools such as HammerDB, VDBench and TPC benchmarks. We expect to show significant performance gains with all-NVMe storage for data warehouses, transactional data processing and all types of business-critical databases.

3X Improved Database Price/Performance

Despite the muted performance data, we did get a compelling result when it comes to the Price/Performance of the new platform. We first looked at the cost of the old vs. new systems.

The original HPE Nimble AF80 system that we tested has a list price of $682,500. In contrast, the new all-NVMe HPE Alletra 6070 was only slightly more, with an estimated list price of $748,187 – and note, this includes the related Data Services Cloud Console subscription. The new system cost was only up 9.6%.

Going back to our performance testing, we saw an over 30% gain. Specifically, this came from the HPE Nimble AF80’s performance of 1,927 queries/Hour/TB vs. the HPE Alletra 6070’s performance of 2,512 queries/Hour/TB, a 30.4% improvement.

So putting that together, for only a 10% additional investment, you’ll realize 30% increased performance, or a 3X Price/Performance gain with All-NVMe!
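For anyone who wants to check the arithmetic, the deltas follow directly from the quoted list prices and DWFT throughput figures:

```python
# Recompute the cost and performance deltas from the figures quoted above.
old_price, new_price = 682_500, 748_187   # list prices, USD (AF80 vs. Alletra 6070)
old_qph, new_qph = 1_927, 2_512           # DWFT queries/Hour/TB

price_increase = (new_price / old_price - 1) * 100
perf_gain = (new_qph / old_qph - 1) * 100

print(f"Price increase:   {price_increase:.1f}%")              # 9.6%
print(f"Performance gain: {perf_gain:.1f}%")                   # 30.4%
print(f"Gain/cost ratio:  {perf_gain / price_increase:.1f}x")  # ~3.2x
```

Note that the ratio here compares relative gains – roughly three points of added performance for every point of added cost – which is the sense in which we call it a 3X price/performance gain.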

Make your move now to SQL Server on HPE Alletra 6000 Storage

HPE offers a broad range of Database Consolidation and Migration Services through HPE Pointnext professional services. They can provide the initial advice to help ensure a successful SQL Server deployment, as well as an HPE Database Migration service to deliver a smooth migration or fast database consolidation. Finally, HPE Pointnext provides support and ongoing training and readiness services to keep your Microsoft SQL Server environment operating efficiently and effectively.

Also on the way are new SQL Server “as a Service” offerings through HPE GreenLake.

Be aware of important upcoming dates:

  • SQL Server 2016 reaches the end of Mainstream support in July 2021
  • SQL Server 2012 reaches end of Extended support July 2022
  • SQL Server 2017 reaches end of Mainstream support Oct 2022

Regardless of your version of SQL Server, you will realize new levels of performance with the latest SQL Server on our newest HPE All-NVMe Storage.

Check out the new webpage for the HPE Alletra storage systems. The page is now live and loaded with information and resources on this exciting new storage platform. <Link to HPE Alletra on the web>

Welcome the all-new Azure Stack HCI

It’s a big day for hyperconverged infrastructure (HCI).  The world’s most cloud-connected HCI just got an overhaul, and Microsoft is (re)launching their Azure Stack HCI today <link to Microsoft announcement blog> .

Microsoft’s Azure Stack HCI will be a big factor in the white-hot HCI space. It’s positioned to serve Hyper-V and Azure cloud-centric customers who would otherwise have to make do with Nutanix or vSAN. Azure Stack HCI delivers modern infrastructure that simplifies the management of on-prem resources while seamlessly connecting them to the cloud.

And let’s be clear – this isn’t marketing speak. Azure Stack HCI is now a new and separate O/S, designed for real Hybrid management. Customers will use the Azure Portal as well as Windows Admin Center (WAC) for resource visibility, management, billing, troubleshooting… everything.  

Azure Stack HCI Infrastructure as a Service topology

Goodbye Azure Stack HCI, Hello Azure Stack HCI!

The all-new Azure Stack HCI, available either as a download or pre-installed on hardware, replaces the current Azure Stack HCI offering which was built around Windows Server 2019 features. And with the new software comes new functionality including stretch clustering for DR – built-in async or sync replication that you can use for local high-availability or across metro distances, with optional encryption. That’s powerful HA for free, that you may not get with other HCI products.

Other advantages of the new Azure Stack HCI include a set-up wizard within WAC, and integrated billing with Azure cloud.  Did I mention the licensing is per core/per month, and gets bundled into your existing Azure account?  No need to track separate licenses, CALs, etc.

HPE and Azure Stack HCI

This is an important time for our Azure Stack HCI on Apollo 4200 solution in that it’s gotten better than ever – thanks to newly qualified drives, it now accommodates over twice the data capacity! The Azure Stack HCI on HPE Apollo solution was already the best solution for ‘data-centric’ workloads – where you need high capacity per node to run workloads like SQL Server, Exchange, analytics, secondary storage and the like. But now, with support for even larger capacity media, it can accommodate 829 terabytes of data within its highly space-efficient 2U footprint, and almost 17 PB in a single rack.
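The rack-level figure is easy to sanity-check. Assuming roughly 20 usable 2U slots per standard 42U rack (an assumption on our part, leaving headroom for top-of-rack switches), the math works out as follows:

```python
# Back-of-envelope rack capacity for the 2U Apollo 4200 configuration.
# Assumes ~20 usable 2U slots in a 42U rack (headroom left for switching).
tb_per_node = 829      # qualified capacity per 2U node, from the text above
nodes_per_rack = 20

rack_tb = tb_per_node * nodes_per_rack
print(f"{rack_tb:,} TB ≈ {rack_tb / 1000:.1f} PB per rack")  # 16,580 TB ≈ 16.6 PB
```

Fill all 21 slots of a 42U rack and you land just above 17 PB instead; either way, "almost 17 PB per rack" is the right order of magnitude.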

If you need more Performance or flexibility in your Azure Stack HCI architecture, then look no further than a solution built on HPE ProLiant servers.  Or if you prefer the entire offering – software and hardware – all as-a-Service, then HPE has that coming soon in a GreenLake offering. 

With HPE you’ll find the same broad portfolio of solutions around this HCI product, including our own Azure Stack HCI on the HPE Apollo 4200 solution. Microsoft just updated the new program catalog so you’ll find just a few of our HPE validated solutions listed as of today, but that list will re-expand over the coming days.  

Next steps

So whether you’re with an enterprise looking for a 60-node deployment in your datacenter, or an organization that just wants to try things out with a two-node cluster, you can start by taking a look at our current portfolio of HPE solutions for Azure Stack HCI online here. Info specifically on the Azure Stack HCI on HPE Apollo solution is here. And I ask you to engage with us as we develop and evolve our portfolio based on customer needs and requests – post a related question or comment in the space below.

Data Storage Business Analyst Internship

HPE is about accelerating business outcomes with comprehensive solutions, from edge to cloud. And the Solutions team within the Storage business unit leads the company’s effort to create valuable ways to use, integrate and automate our intelligent storage products across leading business workloads and hybrid cloud use cases.

This Summer internship with the Storage Solutions team will comprise a cross-functional challenge and a tangible project outcome. The role will require applying your business, strategic and analytical thinking to develop a business case and high-level technical requirements, working in conjunction with a technically aligned intern who will manage the corresponding architecture, design, development and coding. By the end of the summer, you can expect to have developed important collaborative skills and the ability to work more effectively with engineers and those who develop technologies to extract value from data.


Responsibilities:

  • Drive the development of a Solution Dashboard, from concept to requirements.
  • Identify and remain centered on strategic goals and business-oriented project objectives.
  • Interface with key stakeholders including Solution Managers, Program Managers and Solution Engineering. Gather priorities and requirements (functionality, usability, etc.)
  • Leverage knowledge of related dashboards, past work, and relevant business systems and data sources (content management systems, online libraries, web repositories) in designing your project.
  • Deliver a web-based dashboard that tracks Solution status in terms of key activities and collateral using visualizations and metrics. It should also be dynamic in how data can be filtered by level of user, and provide the ability to drill down from high-level status views to specific details and actual resources.
  • Present the project within a company-wide end-of-the-summer Intern Showcase.


Qualifications:

  • Currently pursuing an undergraduate degree in Business or a related interdisciplinary study such as Informatics.
  • Junior year student (3rd year of a 4-year program).
  • Located in the San Francisco Bay Area (no relocation).
  • Demonstrated ability to apply an analytical approach to business, driven by the intelligent use of data.
  • Interest in conceiving of and managing technology offerings that solve business problems.
  • Strong written and verbal communication skills and the ability to excel within an open and collaborative team culture.

The internship will run from June to August, 2021.

Contact me if you’re interested: michael.harding <at>

HPE Storage Solutions let the good times roll at Microsoft Ignite with expert talks, new offerings

Though I’d rather be writing this blog over a coffee and beignet in New Orleans, I’m still thrilled to be sharing the news about our HPE Storage Solutions for Microsoft, and specifically what’s new for the Microsoft Ignite audience. As we all know, this year’s Microsoft Ignite is taking place on the Web rather than in the Crescent City, but our Solution team is still treating it like our annual “Mardi Gras”. With that in mind, here’s what’s new for our HPE Storage Solutions for Microsoft. Let the good times roll!

Throw me somethin’, Mista!
HPE is an official sponsor of this year’s Microsoft Ignite show, which runs September 22-24. Along with its presence within the online venue, HPE has created its own “Virtual Booth” web presence to supplement the show, given all the related activities and content we had available. The pandemic has accelerated digital transformation across the company’s customer base, and in response HPE is stepping up the number of solutions that help these organizations achieve new ways of working and serving their end-customers, especially in the cloud.

A key part of the HPE-at-Ignite presence will be a series of Expert Videos. Storage team experts will deliver a few of these sessions, covering topics from a new SQL Server solution built on a newly re-engineered storage platform, to hybrid cloud data expansion, to Big Data HCI.

The HPE Microsoft Ignite 2020 Virtual Booth landing page is the place to watch these video sessions, as well as get the latest Microsoft Storage Solution resources including technical whitepapers, solution briefs and other online assets.

Starting with Microsoft Ignite week and beyond, we’re also rolling out more expert video content that couldn’t fit on the Landing page. These sessions are being delivered through the HPE Storage BrightTalk channel and are spanning topics from SQL Server Big Data Clusters, to Storage Class Memory, Highly Available Windows File Services, Hybrid Cloud data mobility, and Microsoft VDI. These sessions are supported by our strategic partner, Intel.

Who Dat HPE MSA Gen 6?
The company recently broke the news on the newly updated HPE Modular Smart Array (MSA) storage, which features a new architecture, ASIC, chipsets and health monitoring capabilities. The product team shared the key details in that HPE MSA Gen6 Announcement, for this storage product that is reaching new levels of ease-of-use plus price/performance. I’d like to highlight what it means for our Microsoft storage solutions, which I covered in more detail in the SQL Server on HPE MSA solution release blog.

The higher levels of performance from the Gen6 platform will translate directly into more transactions for SQL Server databases, more apps hosted across your Hyper-V environment, and even greater Microsoft workload consolidation onto a single array. The new Tiering 2.0 algorithm alone delivers up to 45% more app acceleration than in the previous generation system.

Other enhancements will increase workload availability, such as MSA-DP+, an advanced RAID-based data protection feature which protects data and enables faster rebuilds, and the Health Check tool which makes it easy to help ensure optimal system operations.

HPE MSA storage has traditionally been used for entry-level and departmental SQL Server database environments. We expect this to continue, especially where organizations require on-premises control of their data and the hands-on ability to ensure service levels. A Solution team member’s blog also shares new insights on the benefits of a single RAID-protected storage array versus maintaining multi-node clusters of HCI systems for the same workload and data capacity.

Innovation that takes the (King) cake
The solution development hasn’t stopped at HPE since the last Microsoft Ignite event. The Storage krewe has led a parade of new offerings to meet the needs of our broad Microsoft workload customer base:

  • Nimble Storage Extender for Azure Stack Hub – Need more data capacity for your Azure Stack Hub but don’t want to buy a whole new one? The HPE Storage Extender for Azure Stack Hub brings flexibility to the tightly defined Azure Stack Hub architecture, letting you expand data capacity alone while gaining the benefits of enterprise SAN data services.

Windows Admin Center (WAC) Extensions – We’ve rolled out a number of WAC Extensions to expand the visibility and manageability of our storage products within the WAC dashboard. These include Windows and Azure Stack HCI extensions for the HPE Apollo 4200, as well as Storage Extensions for the HPE Primera and HPE 3PAR storage platforms.

  • HPE InfoSight for Hyper-V – A breakthrough HPE Storage AI capability that brings cross-stack analytics to Windows Hyper-V VMs, matching what was previously available only for VMware ESX environments.

  • Accelerated SQL Server on HPE Nimble Storage – Takes a new NVMe SSD caching approach to increase performance for OLTP and other demanding apps. Lab-verified benefits of Storage Class Memory powered by Intel Optane SSDs show back-end latency reduced by as much as 50%. A similar lab-validated solution is also available for HPE 3PAR storage.

SharePoint 2019 & Skype for Business Server 2019 on HPE Storage – Upgrade your SharePoint and Skype environments on-premises to the latest Server 2019 versions to modernize your infrastructure and take advantage of the latest in content collaboration and portal technologies.

SQL Server Big Data Clusters (BDC) – A single scale-out solution for both relational and Big Data, built on HPE Storage and the latest data center technologies including containers and Kubernetes. Manage more data from across the enterprise with your existing SQL Server tools and expertise.

In addition to these HPE Storage Solutions for Microsoft releases over the past year, there are even more in progress, including expanded testing and technical publications for the SQL Server BDC PolyBase feature, SQL Server leveraging HPE Primera storage enhancements, and SQL Server realizing new levels of performance with persistent memory (PMEM).

Get (the party) started
So if you haven’t already, definitely get registered for Microsoft Ignite. Then mix yourself a Hurricane or Sazerac, and go straight to the HPE Microsoft Ignite event Virtual Booth site to join the party with all our event activities, on-demand content and technical resources.

And don’t forget our Storage news of the show: how the new HPE MSA Gen6 Storage can be the life of your party for Microsoft workloads – with more application performance and simplified management that fits any IT budget. Learn more in the SQL Server 2019 on HPE MSA Gen6 solution blog.

New SQL Server Big Data Clusters solution takes the stage at HPE Virtual Discover Experience

The HPE Discover Virtual Experience starts Tuesday, June 23rd, when tens of thousands of people will join online to learn about new technologies to transform their businesses, such as intelligent edge, hybrid cloud, IoT, exascale computing, and much more. Our team will be part of this event, showing off our newest solution for Microsoft SQL Server Big Data Clusters running on the HPE Container Platform. Here’s the direct link for session and speaker information, or search for us once you’ve registered and joined the event — we’re session “D139“.

The inspiration for our work is that data growth is taking off like a rocket, and in that spirit the HPE Storage team staged our approach to the new enterprise database capability from Microsoft: SQL Server 2019 Big Data Clusters. We lifted off with an initial enterprise-grade solution for SQL Server Big Data Clusters (BDC), and laid-in a course for more features, capabilities and scale. As introduced in previous blogs, SQL Server BDC uses a new architecture that combines the SQL Server database engine, Spark and Hadoop Distributed File System (HDFS) into a unified data platform.

Microsoft SQL Server 2019 features new Big Data Clusters capability

This approach escapes the gravitational constraints of traditional relational databases: it can read, write, and process big data from either traditional SQL or Spark engines, letting organizations combine and analyze high-value relational data along with high-volume big data, all within their familiar SQL Server environment. Our first-stage effort includes an initial implementation guide, collateral, and a number of related activities, including a live demo in this year’s HPE Discover Virtual Experience.

Following soon will be ‘stage 2’, where we’ll publish technical guidance on deploying your own BDC that takes advantage of data virtualization, also known as the PolyBase feature. PolyBase lets you virtualize and query other data sources from within SQL Server without having to copy and convert the outside data. It eliminates the time and expense of traditional extract, transform, and load (ETL) cycles, and perhaps more importantly, lets organizations leverage existing SQL Server expertise and tools to extract the value of third-party data sources from across the organizational data estate, such as NoSQL, Oracle, and HDFS, to name just a few.
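To make the PolyBase idea concrete, here is a minimal Python sketch that generates the kind of T-SQL used to register an external data source and an external table over it. All object names below (the Oracle server address, credential, and tables) are hypothetical examples, not part of the published guidance.

```python
# Hedged sketch: generates illustrative PolyBase T-SQL.
# Every name used here (server, credential, table) is a made-up example.

def polybase_ddl(source_name, location, credential, table_name, columns, remote_object):
    """Return T-SQL that registers an external data source and an external
    table over it, so SQL Server can query the remote data in place
    (no extract-transform-load copy)."""
    return (
        f"CREATE EXTERNAL DATA SOURCE {source_name}\n"
        f"    WITH (LOCATION = '{location}', CREDENTIAL = {credential});\n\n"
        f"CREATE EXTERNAL TABLE {table_name} ({columns})\n"
        f"    WITH (LOCATION = '{remote_object}', DATA_SOURCE = {source_name});"
    )

ddl = polybase_ddl(
    source_name="OracleSales",                       # hypothetical
    location="oracle://oraclesrv.example.com:1521",  # hypothetical
    credential="OracleCred",                         # hypothetical
    table_name="dbo.RemoteOrders",
    columns="order_id INT, amount DECIMAL(10,2)",
    remote_object="SALESDB.ORDERS",
)
print(ddl)
```

Once such an external table exists, an ordinary `SELECT ... JOIN` can combine it with local relational tables, which is how PolyBase sidesteps the ETL cycle described above.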

The last stage of this mission will add HPE Apollo 4200 storage systems for a cost-effective storage pool, especially for larger BDC deployments in the petabytes.

Info on our overall SQL Server BDC solution is available online in the new solution brief.

Putting BDC boots on the moon

There are a number of key considerations for deploying your own SQL Server BDC. It’s going to be a very different environment than what you may be familiar with from traditional Microsoft SQL Server. Rather than a Windows environment, with or without VMs, BDC requires containers and runs on Linux, and the architecture will contain a number of technologies possibly new to traditional IT teams: Kubernetes, Apache Spark, Hadoop Distributed File System (HDFS), Kibana, and Grafana.

Microsoft Azure Studio showing a dashboard for a Big Data Cluster

Many companies have begun to use Kubernetes as an efficient way to deploy and scale applications. It’s often referenced as a key part of a typical Continuous Integration and Continuous Deployment (CI/CD) process, and one survey puts the number at 78% of respondents using Kubernetes in production[1]. So bringing Kubernetes to SQL Server may be a timely way to merge two areas of significant investment for companies: traditional RDBMS and the evolving DevOps space.

Another unique feature of this solution is container management. Our initial technical guidance includes the use of the HPE Container Platform, which provides a multi-tenant, multi-cluster management infrastructure for Kubernetes (K8s). Creating a highly available K8s cluster is as easy as importing the hosts into the platform and defining master/worker roles. In addition, it simplifies persistent access to data with the integration of Container Storage Interface (CSI) drivers. This makes connecting with HPE storage easy, not only providing persistent volumes, but also enabling access to valuable array-based resources such as encryption and data protection features like snapshots. The latest HPE CSI package supports HPE Primera storage, HPE Nimble storage, and HPE 3PAR storage.
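As a sketch of what that CSI integration looks like from the Kubernetes side, the following Python builds a PersistentVolumeClaim manifest that references a CSI-backed storage class. The class name `hpe-nimble-standard` is a hypothetical placeholder; the real class names come from your own CSI driver deployment.

```python
# Hedged sketch: builds a Kubernetes PVC manifest as a plain dict.
# The storage class name below is hypothetical, not an HPE-published value.

def pvc_manifest(name, storage_class, size_gi):
    """Build a PersistentVolumeClaim dict; the CSI driver behind the given
    storage class provisions the backing volume on the array."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

claim = pvc_manifest("bdc-master-data", "hpe-nimble-standard", 500)
print(claim["spec"]["resources"]["requests"]["storage"])  # 500Gi
```

In practice you would serialize this dict to YAML and apply it with `kubectl`; array-side features such as snapshots then attach to the provisioned volume through the driver.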

Key components of the initial solution include:

  • Microsoft SQL Server 2019 Big Data Clusters
  • HPE ProLiant DL380 Gen10 servers
  • CentOS Linux—a community-driven, open source Linux distribution
  • HPE Nimble Storage arrays for the master instance to provide integrated persistent storage
  • HPE Container Storage Interface (CSI) driver
  • Kubernetes to automate deployment, scaling, and operations of containers across clusters of hosts
  • HPE Container Platform for the deployment and management for Kubernetes clusters (optional)
  • HPE MapR as an integrated, persistent data store (optional)

Why HPE Storage for Big Data Clusters

HPE Nimble Storage provides high availability persistent container storage for the BDC Master Instance

The partnership of Microsoft and HPE stretches back to about the time the Hubble Space Telescope was launched, roughly 30 years ago. This heritage of testing and co-development has helped ensure optimal performance for Microsoft business software on HPE hardware. Other important reasons to choose HPE for your BDC deployment:

  • HPE developed a standards-compliant CSI driver for Kubernetes to simplify storage integration.
  • HPE developed the HPE Container Platform, providing the most advanced and secure Kubernetes-compatible container platform on the market.
  • HPE owns MapR, an established leading technology for big data management — now incorporated within the HPE Data Fabric offering — and another key part of the solution that helps span data management from on-premises to the cloud.
  • Finally, HPE has had in the market a complete continuum of SQL Server solutions based on HPE Storage – from departmental databases to consolidated application environments, and from storage class memory accelerated to the most mission-critical scale-up databases. Adding BDC provides yet another option – now for scale-out data lakes – to customers who rely on HPE as a trusted end-to-end solution partner.

Get started

The HPE Storage with Microsoft SQL Server Big Data Clusters solution is available today. An initial reference architecture delivers the benefits of scale-out SQL Server on HPE Nimble enterprise-class data storage with the newest container management capability using the HPE Container Platform.

The HPE Storage with Microsoft SQL Server Big Data Clusters solution is a safe first step for your IT team, but a giant leap forward for your organization to derive the most business value from its data estate, regardless of whether it’s relational, unstructured, on-premises, or in the cloud.

Learn more about HPE Storage solutions for Microsoft and see us live at the HPE Virtual Discover Experience.

Are you struggling to manage more data, and more types of data from across the enterprise? Start your mission to manage your entire data estate with existing SQL Server expertise.  Read the new implementation guide: How to deploy Microsoft SQL Server 2019 Big Data Clusters on Kubernetes and HPE Nimble Storage.

Announcing the new HPE Storage Extender for Azure Stack Hub

In early 2016, Microsoft announced Microsoft Azure Stack Hub, along with the new Windows Server 2016, as an extension of the Microsoft Azure public cloud. From the beginning, customers sought the benefits of Azure cloud, but deployed within their own datacenters. This new hybrid cloud solution brought them the hyperscale experience of consumption-based computing, along with the ability to run the very same applications that run on Azure.

HPE Storage Extender for Microsoft Azure Stack Hub adds enterprise-class data capacity to Azure hybrid cloud.

This on-premises version of the cloud came with constraints, however. Not all of the services in Azure are available in Azure Stack. It’s not ‘limitless’, as it is bounded by the compute and capacity contained within a particular customer’s Azure Stack deployment. And that configuration itself is rigidly defined by Microsoft, and available only through a few certified partners selling predefined bundles of servers, storage, and networking.

Until now, even if they didn’t need more compute or bandwidth, customers whose storage needs exceeded the configuration they purchased would have to buy another node (a predefined set of compute/storage/networking) or an entirely new Azure Stack offering. With Azure Stack, the promise of cloud agility came with strict boundaries.

But that has all changed.

Expand Hybrid cloud capacity with the HPE Storage Extender

The new HPE Storage Extender for Azure Stack Hub solution delivers scripts and technical guidance on how to expand just the data storage capacity of your Azure Stack Hub environment. This optimized implementation is initially available on HPE Nimble Storage, and it works in parallel with Microsoft’s published approach for expanding Azure Stack Hub capacity using any iSCSI storage resource.

HPE Nimble Storage brings intelligent, self-managing flash storage to your data center and hybrid cloud environments. It is an ideal platform for your expanded Azure Stack Hub, with high availability and data efficiency features, and guarantees both data availability of 99.9999%[1] and the ability to store more data per terabyte of flash storage than other all-flash arrays. Designed for advanced storage technologies such as NVMe, HPE Nimble Storage delivers industry-leading capacity efficiency as well as a future-proof architecture. You can find guaranteed data-reduction program details in the Store More Guarantee documentation[2].
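To put that availability guarantee in perspective, a quick back-of-the-envelope calculation (my own arithmetic, not an HPE figure) converts "six nines" into allowed downtime per year:

```python
def annual_downtime_seconds(availability_pct):
    """Seconds of downtime per year implied by an availability percentage."""
    seconds_per_year = 365.25 * 24 * 60 * 60
    return seconds_per_year * (1 - availability_pct / 100)

print(round(annual_downtime_seconds(99.9999), 1))  # six nines: about 31.6 seconds/year
print(round(annual_downtime_seconds(99.999)))      # five nines: about 316 seconds (~5 minutes)/year
```

In other words, each additional nine cuts the permitted annual downtime by a factor of ten.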

Azure Stack Hub and Data storage

Azure Stack Hub scales from 4 to 16 hybrid nodes, or from 4 to 8 all-flash nodes. Although Azure Stack Hub does not contain all of the features of the Azure public cloud, it mimics the most common ones and provides an easy transition point for data and applications moving between the cloud and on-premises. The primary method of managing an Azure Stack Hub instance is through a management portal almost identical to that of the Azure public cloud.

Microsoft Azure Stack Hub currently provisions storage utilizing internal disk from hyperconverged nodes managed by Storage Spaces Direct (S2D). Up to this point, external storage has not been supported under the Microsoft Azure Stack Hub design options; the total capacity and performance available was capped by the maximum number of nodes in the scale unit, the disk drive configurations available from each OEM vendor, and the specific characteristics of the virtual machine type deployed.

The initial HPE Storage Extender is based on HPE Nimble Storage, but the solution aligns with the Microsoft approach, which can support any iSCSI storage device.

The tightly enforced configurations have been at odds with customers’ and partners’ requests for flexibility, specifically the ability to leverage external storage arrays to support key workloads. Along with additional data capacity, external storage arrays bring the ability to migrate and replicate data, along with higher data availability. This is why HPE developed a means to connect HPE Nimble Storage arrays as an external iSCSI storage option, in parallel with Microsoft’s technical template for how to connect to iSCSI storage with Azure Stack Hub[3].

HPE brings innovation to Windows with the world’s most intelligent storage

The HPE Storage Extender for Azure Stack Hub solution provides the key elements to enable access to external data capacity, while maintaining a customer’s supported Azure Stack Hub configuration. The solution includes:

  • HPE Nimble Storage, officially certified for Windows Server 2016 and 2019
  • Windows Server 2016 Datacenter or Windows Server 2019 Datacenter (latest build recommended)
  • PowerShell DSC extension
  • Custom Script Extension
  • Solution Deployment Guide
  • HPE InfoSight

HPE InfoSight — the IT industry’s most established AI platform — is the key feature in enabling autonomous, self-managing data storage, and is an embedded part of HPE Nimble Storage, as well as other HPE Storage and Server products. HPE InfoSight has analyzed application patterns in 1,250 trillion data points over the last decade to predict and prevent disruptions across storage, servers, and virtual machines. This has resulted in savings of more than 1.5 million hours of lost productivity due to downtime. HPE InfoSight provides the intelligent foundation for all HPE storage products, creating the industry’s only end-to-end AI capability for self-managing storage.

Get started

The HPE Storage Extender for Azure Stack Hub solution is available today. It brings additional data capacity to Azure Stack Hub — without the cost of adding additional compute.

Thanks to HPE Nimble Storage, the HPE Storage Extender for Azure Stack Hub solution is an economical way to access and use more data within your Microsoft Hybrid cloud, while benefitting from improved data management, protection and availability. The solution includes technical guidance and scripts, and is supported by Microsoft as an approach aligned with Microsoft published technical templates.

Solution resources available at launch:

Launch webinar on the HPE Storage BrightTalk channel

Storage Extender solution brief

Learn more about HPE storage solutions for Microsoft @  


  1. “HPE Get 6-Nines Guarantee, HPE Nimble Storage”. Published details available online.
  2. “HPE Store More Guarantee for HPE Nimble Storage”.
  3. “Connect to iSCSI storage with Azure Stack Hub”, Oct 28, 2019.

Insights from Deploying Microsoft Exchange at Scale on Azure Stack HCI

Microsoft Azure Stack HCI has established itself as a solid hyperconverged infrastructure offering, based on the leading operating system, Microsoft Windows Server 2019. IT staff are able to efficiently consolidate traditional workloads on this familiar platform, thanks to multiple technological features including both compute virtualization with Hyper-V as well as data storage virtualization with Storage Spaces Direct. There’s also support for the use of non-volatile memory express (NVMe) SSDs and persistent memory for caching in order to speed system performance.

However, with such dynamic technology in play at the OS layer, things get interesting when you add a sophisticated workload that has its own intelligent performance-enhancing features, including storage tiering, a metacache database (MCDB), and dynamic cache. In this case we’re talking about Microsoft Exchange, and the recently introduced Microsoft Exchange Server 2019.

One Wall Street firm was a power user of Microsoft Exchange, with over 200,000 users, many having massive mailboxes ranging from dozens of GBs up to 100 GB or more. As part of their infrastructure planning, the customer wanted to compare the performance and cost of continuing to run Exchange on physical servers with external attached storage (JBOD) versus evolving to an Azure Stack HCI infrastructure.

The combination of these products and technologies required complex testing and sizing that pushed the bounds of available knowledge at the time, generating lessons useful for other companies that are also early adopters of demanding enterprise workloads on top of Azure Stack HCI.

Field experts share their insight

“This customer had an interest in deploying truly enterprise-scale Exchange, and eventually the latest server version, using their HCI infrastructure,” began Gary Ketchum, Sr. System Engineer in the Storage Technology Center at HPE.  “Like vSAN or any other software-defined datacenter product, choosing the hardware is very important in order to consistently achieve your technical objectives.”

This observation especially holds true when implementing Storage Spaces Direct solutions. As stated on the Microsoft Storage Spaces Direct hardware requirements page: “Systems, components, devices, and drivers must be Windows Server Certified per the Windows Server Catalog. In addition, we recommend that servers, drives, host bus adapters, and network adapters have the Software-Defined Data Center (SDDC) Standard and/or Software-Defined Data Center (SDDC) Premium additional qualifications (AQs). There are over 1,000 components with the SDDC AQs.”

A key challenge of the implementation was how to realize the targeted levels of improved flexibility, performance, and availability within a much more complex stack of technologies and multiple virtualization layers, including potentially competing caching mechanisms.

Anthony Ciampa, Hybrid IT Solution Architect from HPE explains key functionality of the solution. “Storage Spaces Direct allows organizing physical disks into storage pools. The pool can easily be expanded by adding disks. The Virtual Machine VHDx volumes are created from the pool capacity providing fault tolerance, scalability, and performance. The resiliency enables continuous availability protecting against hardware problems. The types of resiliency are dependent on the number of nodes in the cluster.  The solution testing used a two-node cluster with two-way mirroring. With three or more servers it is recommended to use three-way mirroring for higher fault tolerance and increased performance.” HPE has published a technical whitepaper on Exchange Server 2019 on HPE Apollo Gen10 available today online.
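The resiliency trade-off Anthony describes can be sketched with simple arithmetic. The figures below are illustrative examples of mirroring overhead, not results from the solution testing:

```python
# Hedged sketch of Storage Spaces Direct mirroring overhead:
# two-way mirror keeps 2 copies of the data; three-way mirror keeps 3.

def usable_capacity_tb(raw_pool_tb, node_count):
    """Approximate usable capacity for the mirroring scheme typically used:
    two-way mirror on a two-node cluster, three-way mirror (recommended)
    on three or more nodes."""
    if node_count < 2:
        raise ValueError("Storage Spaces Direct mirroring needs at least 2 nodes")
    copies = 2 if node_count == 2 else 3
    return raw_pool_tb / copies

print(usable_capacity_tb(100, 2))            # two-way mirror: 50.0 TB usable
print(round(usable_capacity_tb(100, 4), 1))  # three-way mirror: 33.3 TB usable
```

The extra copy in three-way mirroring is the price of tolerating two simultaneous hardware failures rather than one, which is why it is recommended once a cluster reaches three or more servers.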

Microsoft Azure Stack HCI on HPE Apollo 4200 Gen10 solution

At Microsoft Ignite 2019, HPE launched its solution for the new Microsoft HCI product: Azure Stack HCI on HPE Apollo 4200 Gen10. This new software-defined hyperconverged offering, built on the high-capacity yet dense Apollo storage server, delivered a new way to meet the needs of the emerging ‘Big Data HCI’ customer. A new deployment guide details solution components, installation, management, and related best practices.

Exchange on Azure Stack HCI Solution Stack

The new Azure Stack HCI on HPE Apollo 4200 solution combines Microsoft Windows Server 2019 hyper-converged technology with the leading storage capacity/density data platform in its class. It serves a growing class of customers who want the benefits of a simpler on-premises infrastructure while still able to run the most demanding Windows analytics and data-centric workloads.

Findings from the field

Notes from the deployment team captured some of the top findings of this Exchange on Windows HCI testing, which will help others avoid problems and confidently speed these complex implementations.

  1. More memory not required – The stated guidance for Azure Stack HCI calls for additional resources beyond a physical JBOD deployment, specifically an NVMe SSD cache tier. However, HPE’s Jetstress testing showed that similar performance was possible from JBOD alone. Thus the server hardware requirements are similar between Azure Stack HCI and JBOD, and even if the customer plans to deploy a JBOD MCDB tier with Exchange 2019, the hardware requirements are still very similar. Note that there could be other cost factors to consider, such as the overhead of additional compute and RAM within Azure Stack HCI, as well as any additional software licensing cost for running Azure Stack HCI.
  2. Size cache ahead of data growth – The cache should be sized to accommodate the working set (the data being actively read or written at any given time) of your applications and workloads. If the active working set exceeds the size of the cache, or if it drifts too quickly, read cache misses will increase and writes will need to be de-staged more aggressively, hurting overall performance.
  3. More volumes the better – Volumes in Storage Spaces Direct provide resiliency to protect against hardware problems. Microsoft recommends that the number of volumes be a multiple of the number of servers in your cluster. For example, if you have 4 servers, you will experience more consistent performance with 4 total volumes than with 3 or 5. Beyond that, testing showed that Jetstress delivered better performance with 8 volumes per server than with 1 or 2 volumes per server.
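Two of these findings reduce to checks you can script before deployment. The sketch below is my own illustration of the guidance above, not an HPE tool, and the capacity figures are made-up examples:

```python
# Hedged sketch of two pre-deployment sanity checks drawn from the findings.

def volume_count_ok(total_volumes, server_count):
    """Microsoft's guidance: the total volume count should be a multiple of
    the number of servers in the cluster for consistent performance."""
    return total_volumes % server_count == 0

def cache_headroom(cache_tb, working_set_tb):
    """Positive headroom means the active working set fits in the cache tier;
    negative means expect read-cache misses and aggressive de-staging."""
    return cache_tb - working_set_tb

assert volume_count_ok(8 * 4, server_count=4)   # 8 volumes/server on 4 servers
assert not volume_count_ok(3, server_count=4)   # inconsistent performance likely
print(cache_headroom(cache_tb=8.0, working_set_tb=5.5))  # 2.5 TB of headroom
```

Sizing the cache against the working set, rather than against total capacity, is the point of finding 2: only the actively read and written data needs to fit.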

Where to get more info

Microsoft Azure Stack HCI on HPE Apollo 4200 Gen10 server is a new solution that addresses the growing needs of the Big Data HCI customer – those who are looking for an easy-to-deploy and affordable IT infrastructure with the right balance of capacity, density, performance, and security.  Early work with this solution, especially where it’s being combined with demanding and data intensive workloads, can create non-intuitive configuration requirements, so IT teams should seek out experienced vendors and service partners.  

A new deployment guide details solution components, installation, management and related best practices. Information in that document, along with this blog, and future sizing tools expected out from HPE, will continue to provide guidance for enterprise deployments of this new HCI offering.

The deployment guide is available online today at this link: <link to Deployment Guide>

HPE Brings Big Data to Hyperconverged Infrastructure with New Apollo Solution

If you were at Microsoft Ignite last month you may still have missed the launch of HPE’s latest hyperconverged infrastructure (HCI) solution: Microsoft Azure Stack HCI on HPE Apollo 4200 storage. It would be understandable, as Ignite was a major industry event packed with technology news, especially with lots of HPE show activity, including prominent HPE mainstage appearances for both Azure Stack and the new Azure Arc.
But among the new and enhanced solutions we demonstrated at the show, our presentations about Azure Stack HCI on HPE Apollo storage were well received and timely, given the growing emphasis on HCI, hybrid cloud, and all things software-defined. The key message for this solution was that it is pioneering a new area in software-defined HCI for Windows Big Data workloads, uniquely delivering the convenience of hyperconverged infrastructure on a high-capacity platform for the most data-intensive applications.

The emergence of Big Data HCI
We’ve all heard about the explosive growth of data, and that we’re in an age of zettabytes. IDC made a specific prediction: by 2024, data created from AI, IoT, and smart devices alone will exceed 110 zettabytes (source: IDC FutureScape: Worldwide Cloud Predictions 2020).
At the same time, organizations are trying to simplify their IT infrastructures to reduce cost, complexity, and the need for specialized expertise. The conflict is that the applications required to harvest this explosion of data can be the most demanding in terms of performance and management. I’m seeing companies – even the largest, most capable enterprises – recognizing the value of easy-to-use hyperconverged infrastructure to alleviate some of the strain of delivering these demanding, data-centric workloads.
Azure Stack HCI on HPE Apollo 4200 storage is a new solution that addresses the needs of the growing “Big Data HCI” customer. Azure Stack HCI on HPE Apollo is built on the highest capacity Azure Stack HCI qualified 2U server, bringing an unmatched ability to serve big data workloads on a compact Windows software-defined HCI appliance.

HPE Apollo HCI solution key components
Azure Stack HCI is Microsoft’s software-defined HCI solution that pairs Windows Server 2019, Hyper-V, Storage Spaces Direct, and Windows Admin Center management, along with partner x86 hardware. It is used to run Windows and Linux VMs on-premises and at the edge with existing IT skills and tools.
Azure Stack HCI is a convenient way to realize the benefits of hybrid IT, because it makes it easy to leverage the cloud-based capabilities of Microsoft Azure. These cloud-based data services include Azure Site Recovery, Azure Monitor, Cloud Witness, Azure Backup, Azure Update Management, Azure Network Adapter, and Azure Security Center, to name a few.
The Azure Stack HCI solution program includes Microsoft-led validation for hardware, which ensures optimal performance and reliability for the solution. This testing extends to technologies such as NVMe drives, persistent memory, and remote-direct memory access (RDMA) networking. Customers are directed to use only Microsoft-validated hardware systems when deploying their Azure Stack HCI production environments.

HPE Apollo 4200 Gen 10 – largest capacity 2U Azure Stack HCI system

HPE Apollo 4200 Gen10 Server – leading capacity/throughput for Windows HCI
The HPE Apollo 4200 Gen10 server delivers leading scale and throughput for Azure Stack HCI. The HPE Apollo 4200 storage system can accommodate 392 TB of data capacity within just a 2U form factor, leading all other Azure Stack HCI validated 2U solutions listed in the Microsoft Azure Stack HCI catalog. In addition, the HPE Apollo storage system is a leader in bandwidth, supporting 100Gb Ethernet and 200Gb InfiniBand options. Customers are already running large-scale, data-centric applications such as Microsoft Exchange on HPE Apollo systems, and can now add Azure Stack HCI as a means to simplify the infrastructure stack while preserving performance and the space-efficient 2U footprint.
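A quick density calculation (my own arithmetic from the 392 TB / 2U figure above, before any resiliency overhead) shows what that capacity means at rack scale:

```python
# Hedged sketch: raw-capacity density arithmetic, not an HPE-published figure.

def tb_per_rack_unit(capacity_tb, rack_units):
    """Raw capacity density of a system, in TB per rack unit (U)."""
    return capacity_tb / rack_units

apollo_density = tb_per_rack_unit(392, 2)  # 196 TB per rack unit
rack_pb = 392 * (42 // 2) / 1000           # 2U systems filling a 42U rack

print(apollo_density)  # 196.0
print(rack_pb)         # 8.232, i.e. about 8.2 PB of raw capacity per rack
```

Actual usable capacity will be lower once Storage Spaces Direct mirroring or parity overhead is applied, but the density comparison across 2U systems holds.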
The HPE Apollo Gen10 system is future-proofed with Intel Cascade Lake processors for more cores and faster processing, along with memory enhancements and support for NVMe storage. The HPE Apollo systems leverage a big data and high-performance computing heritage, and have an established Global 500 customer track record.

Azure Stack HCI on HPE Apollo solution – more than just hardware
The HPE Apollo 4200 system is at the core of this Microsoft software-defined HCI solution, but there’s much more to the solution. HPE solution engineering teams perform testing on all solution designs, and publish technical whitepapers to provide guidance on implementation, administration, and performance optimization, for example the recent Microsoft Windows Server 2019 on HPE Apollo 4200 implementation guide. HPE also trains authorized reseller partners to help assure fast, successful deployments and fast time-to-solution for customers.
Windows Admin Center (WAC) is becoming the new standard interface for Windows system management. HPE is developing Extensions for WAC that will make it easier to manage HPE Apollo systems within Windows Server 2019 environments as well as specifically within Azure Stack HCI clusters.
As an HPE Storage solution, customers also enjoy high availability through HPE InfoSight predictive analytics that deliver the uptime benefits of AI to the datacenter.

Get started with HPE Apollo HCI
The Azure Stack HCI on HPE Apollo solution is available today. It’s the largest-capacity 2U Azure Stack HCI validated solution available, and it has been officially qualified for All-Flash, Hybrid SAS SSD, and NVMe configurations, providing options for affordable and high-performance data storage.
The Azure Stack HCI on HPE Apollo solution is the go-to choice for analytics and data-centric Windows workloads. Get easy-to-manage infrastructure with native Microsoft Windows administration. Published technical guidance, including whitepapers and related resources, is available with the solution, with WAC extensions on the way.
The launch webinar was recorded and is available on demand – watch it to learn more.