All treats, no tricks for Microsoft workloads on HPE infrastructure

[note: this never got posted by HPE in time for Halloween, so here it is – the original blog – enjoy] It’s a spooky time of year. But even as little ghouls and goblins fill the streets, your data center has become a little less scary – and more under your control. Organizations have consolidated business applications on storage arrays for decades. But with new decisions looming around cloud computing, there’s a worry that running multiple workloads on-premises has somehow gotten more difficult. This couldn’t be further from the truth.

Companies can still realize the “treats” of running multiple business applications on shared storage – such as high levels of performance, economic savings, and technical familiarity – without any new “tricks” to worry about.

In fact, our HPE Storage solutions for Microsoft team just published new testing showing exactly this: that multiple Microsoft business workloads can run together on the same infrastructure resources while delivering high levels of consistent performance. And thanks to ongoing technology advances, these deployments also gain simplified management, even higher availability, automated problem detection, and on-prem security with hands-on control.

Fang-tastic new applications from Microsoft

The software developers in Redmond continue to evolve and improve their leading business applications, which helps them maintain dominant market share across so many of their products. We tested a few of them to show how typical, popular applications perform together on the same infrastructure stack. Just as important, a mix of workloads – with different performance profiles and requirements – can run side by side and still deliver high, consistent output.

In the new whitepaper, the solution engineers ran Windows Server 2022, Microsoft Exchange 2019, and the latest SQL Server, with databases handling both transactional and analytical workloads. Here’s a quick review of what’s new in these applications:

  • Microsoft Exchange 2019 – Still the newest version of this leading business email package, this release of Exchange introduced the Metacache Database, which improves performance by using flash storage as a cache that speeds access to frequently used data. There are also enhanced security features, as well as specific improvements in Data Loss Prevention, archiving, retention, and eDiscovery.
  • Windows Server 2022 – This new operating system brings advances in multi-layer security, plus hybrid capabilities with Azure. Also notable are improvements in scalability, with support for up to 48 TB of RAM, 64 sockets, and 2,048 logical processors. And specific improvements in areas such as Hyper-V live migration help enhance overall system availability, usability, and economics.
  • SQL Server 2022 – The current version of SQL Server was still 2019 at the time of this testing, but it’s worth noting that a new version of this leading enterprise database, SQL Server 2022, is about to be released. The new release will be especially Azure cloud-enabled, supporting Azure analytics capabilities with on-prem data as well as bi-directional HA/DR to Azure SQL. This, plus continued performance and security innovations and new as-a-service consumption models, will give IT good reason to keep refreshing their most important transactional and analytical databases on-prem with SQL Server.

“Boo-tiful” performance on HPE infrastructure

The mixed application testing centered on consolidating a SQL Server OLTP (HammerDB TPROC-C) workload and a SQL Server OLAP (HammerDB TPROC-H) workload on the same dHCI solution, and showing how both could realize consistent performance as more workloads are added. These databases represent a company’s transactional system – say, an online shopping application – running alongside analytical databases for research and reporting. So we tested not just two different database environments, but two that are used very differently and that demand different resources from the infrastructure: one more IOPS-intensive, the other more throughput-intensive.
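
To make that distinction concrete, here is a minimal, illustrative Python sketch (our own example, not part of the published testing) contrasting the two access patterns against a SQL Server database via pyodbc. The connection string, table names, and column names are hypothetical placeholders; in the whitepaper itself, HammerDB drives the actual TPROC-C and TPROC-H workloads.

```python
# Illustrative only: contrasts an OLTP-style point transaction (many small,
# IOPS-heavy operations) with an OLAP-style scan (fewer, throughput-heavy reads).
# Connection string, table, and column names below are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=sql-host;"
    "DATABASE=demo;Trusted_Connection=yes;TrustServerCertificate=yes"
)
cur = conn.cursor()

# OLTP pattern: a short transaction touching a handful of rows by key.
# Throughput here depends on how many small I/Os the storage can service (IOPS).
def place_order(customer_id: int, item_id: int, qty: int) -> None:
    cur.execute(
        "INSERT INTO orders (customer_id, item_id, qty) VALUES (?, ?, ?)",
        customer_id, item_id, qty,
    )
    cur.execute(
        "UPDATE stock SET on_hand = on_hand - ? WHERE item_id = ?",
        qty, item_id,
    )
    conn.commit()

# OLAP pattern: a single query that scans and aggregates large ranges of data.
# Performance here depends on how fast the storage can stream data (MB/s).
def monthly_revenue_report():
    cur.execute("""
        SELECT YEAR(order_date), MONTH(order_date), SUM(qty * unit_price)
        FROM order_history
        GROUP BY YEAR(order_date), MONTH(order_date)
    """)
    return cur.fetchall()
```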

Two Exchange VMs were added to this workload mix, representing a typical email environment for the same small to midsized enterprise and supporting hundreds of mailboxes with simulated usage patterns. The chart below plots these multiple workloads in use, showing how, over time, the system is still able to accommodate the needs of all the applications without significant spikes or swings in overall performance. Other charts and data in the same report show consistent, sub-millisecond latency measured at the host over this same time frame, even as the mix of different workloads is powered up.

Caption: Multiple applications run while performance remains relatively consistent

Ghouls just want to have fun

Another dimension of the study was to illustrate how disaggregated hyperconverged infrastructure delivers superior performance and always-on availability, and makes it simple to scale both storage and compute without any downtime.

HPE Nimble Storage dHCI, the platform used for the analysis, is a converged infrastructure offering that combines HPE Nimble Storage arrays with HPE ProLiant DL servers, along with set-up and management software. This product has been shown to deliver latencies as low as 200 microseconds for applications and VMs, and it comes with a six-nines data availability guarantee that covers both the applications and the VMs – not the storage alone.

The power behind this high system availability is HPE InfoSight, an AI-enabled monitoring platform built on more than a decade’s worth of data from across the HPE Storage installed base, which enables it to identify anomalies, predict failures, and notify customers of issues within their systems.

The promise of “disaggregated hyperconverged infrastructure” is to provide the performance and flexibility of separate IT resources with the management convenience of hyperconverged infrastructure. Scaling and upgrading compute, storage, or both is simple and efficient, with no downtime; resiliency is built in through hardware redundancy and the ability to tolerate three simultaneous drive failures. This approach has been found to provide operational flexibility that benefits mission-critical databases and data warehouses.

It takes teamwork to make the scream work

IT teams can confidently refresh their traditional on-prem infrastructure environments and continue to run a diverse mix of business applications, expecting great, consistent performance across all their enterprise workloads.

Specifically, we’ve shown that a mix of virtualized Microsoft workloads running on an HPE Nimble Storage dHCI platform can deliver fast, consistent performance across different types of business workloads, with measured low latency even as additional load is added. The solution delivers operational simplicity thanks to tight integration across the layers of the infrastructure stack. And with the intelligence of HPE InfoSight and its automated monitoring, the product comes with a 99.9999% data availability guarantee.

Got treats? This blog highlights the recent mixed Microsoft workload study. Get all the details of the extensive performance testing that shows how to realize consistent, high performance for your Microsoft business workloads, running together on HPE Storage and server infrastructure. The new technical whitepaper is available for download today.

Insights from Deploying Microsoft Exchange at Scale on Azure Stack HCI

Microsoft Azure Stack HCI has established itself as a solid hyperconverged infrastructure offering, built on the leading operating system, Microsoft Windows Server 2019. IT staff can efficiently consolidate traditional workloads on this familiar platform, thanks to multiple technology features, including compute virtualization with Hyper-V and data storage virtualization with Storage Spaces Direct. There’s also support for non-volatile memory express (NVMe) SSDs and persistent memory for caching to speed system performance.

However, with such dynamic technology in play at the OS layer, things get interesting when you add a sophisticated workload that brings its own intelligent performance-enhancing features, including storage tiering, a metacache database (MCDB), and dynamic cache. In this case, that workload is Microsoft Exchange email, whose latest release is Microsoft Exchange Server 2019.

One Wall Street firm was a power user of Microsoft Exchange, with over 200,000 users, many of them having massive mailboxes ranging from dozens of GB up to 100 GB or more. As part of its infrastructure planning, the customer wanted to compare the performance and cost of continuing to run Exchange on physical servers with external attached storage (JBOD) versus evolving to an Azure Stack HCI infrastructure.

The combination of these products and technologies required complex testing and sizing that pushed the bounds of available knowledge at the time, generating lessons useful for other companies that are also early in adopting demanding enterprise workloads on top of Azure Stack HCI.

Field experts share their insight

“This customer had an interest in deploying truly enterprise-scale Exchange, and eventually the latest server version, using their HCI infrastructure,” began Gary Ketchum, Sr. System Engineer in the Storage Technology Center at HPE.  “Like vSAN or any other software-defined datacenter product, choosing the hardware is very important in order to consistently achieve your technical objectives.”

This observation especially holds true when implementing Storage Spaces Direct solutions. As stated on the Microsoft Storage Spaces Direct hardware requirements page, “Systems, components, devices, and drivers must be Windows Server Certified per the Windows Server Catalog. In addition, we recommend that servers, drives, host bus adapters, and network adapters have the Software-Defined Data Center (SDDC) Standard and/or Software-Defined Data Center (SDDC) Premium additional qualifications (AQs). There are over 1,000 components with the SDDC AQs.”

A key challenge of the implementation was how to realize the targeted levels of improved flexibility, performance, and availability within a much more complex stack of technologies and multiple virtualization layers, including potentially competing caching mechanisms.

Anthony Ciampa, Hybrid IT Solution Architect from HPE, explains key functionality of the solution. “Storage Spaces Direct allows organizing physical disks into storage pools. The pool can easily be expanded by adding disks. The Virtual Machine VHDx volumes are created from the pool capacity providing fault tolerance, scalability, and performance. The resiliency enables continuous availability protecting against hardware problems. The types of resiliency are dependent on the number of nodes in the cluster. The solution testing used a two-node cluster with two-way mirroring. With three or more servers it is recommended to use three-way mirroring for higher fault tolerance and increased performance.” HPE has published a technical whitepaper on Exchange Server 2019 on HPE Apollo Gen10, available online today.
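
As a rough illustration of the trade-off Ciampa describes, here is a back-of-the-envelope Python sketch (our own illustration, not HPE or Microsoft tooling) comparing usable capacity and drive-failure tolerance for two-way versus three-way mirroring over a hypothetical raw pool size.

```python
# Back-of-the-envelope sketch: usable capacity and failure tolerance for
# Storage Spaces Direct mirror resiliency. The pool size is hypothetical.

def mirror_summary(raw_tb: float, copies: int) -> dict:
    """copies=2 -> two-way mirror, copies=3 -> three-way mirror."""
    return {
        "copies": copies,
        "usable_tb": raw_tb / copies,       # each write is stored 'copies' times
        "failures_tolerated": copies - 1,   # simultaneous failures survived
    }

raw_pool_tb = 100.0  # hypothetical raw pool capacity
for copies in (2, 3):
    s = mirror_summary(raw_pool_tb, copies)
    print(f"{copies}-way mirror: ~{s['usable_tb']:.0f} TB usable, "
          f"tolerates {s['failures_tolerated']} failure(s)")
```

The sketch shows why two-way mirroring fits a two-node cluster (half the raw capacity is usable, one failure tolerated), while three-way mirroring trades more capacity for the ability to survive two simultaneous failures.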

Microsoft Azure Stack HCI on HPE Apollo 4200 Gen10 solution

At Microsoft Ignite 2019, HPE launched its solution for the new Microsoft HCI product: Azure Stack HCI on HPE Apollo 4200 Gen10. This new software-defined hyperconverged offering, built on the high-capacity yet dense Apollo storage server, delivered a new way to meet the needs of the emerging ‘Big Data HCI’ customer.

Exchange on Azure Stack HCI Solution Stack

The new Azure Stack HCI on HPE Apollo 4200 solution combines Microsoft Windows Server 2019 hyperconverged technology with the leading storage capacity/density data platform in its class. It serves a growing class of customers who want the benefits of a simpler on-premises infrastructure while still being able to run the most demanding Windows analytics and data-centric workloads.

Findings from the field

Notes from the deployment team captured some of the top findings of this Exchange on Azure Stack HCI testing – findings that will help others avoid problems and confidently speed these complex implementations.

  1. More memory not required – The stated guidance for Azure Stack HCI calls for additional resources beyond a physical JBOD deployment, specifically an NVMe SSD cache tier. However, HPE’s Jetstress testing showed that similar performance was also possible from JBOD alone. Thus the server hardware requirements are similar between Azure Stack HCI and JBOD, and even if the customer plans to deploy an MCDB tier with Exchange 2019, the hardware requirements remain very similar. Note that there can be other cost factors to consider, such as the overhead of additional compute and RAM within Azure Stack HCI, as well as any additional software licensing costs for running Azure Stack HCI.
  2. Size cache ahead of data growth – The cache should be sized to accommodate the working set (the data being actively read or written at any given time) of your applications and workloads. If the active working set exceeds the size of the cache, or if the active working set drifts too quickly, read cache misses will increase and writes will need to be de-staged more aggressively, hurting overall performance. A minimal sizing sketch follows this list.
  3. The more volumes, the better – Volumes in Storage Spaces Direct provide resiliency to protect against hardware problems. Microsoft recommends that the number of volumes be a multiple of the number of servers in your cluster. For example, if you have 4 servers, you will experience more consistent performance with 4 total volumes than with 3 or 5. That said, testing showed that Jetstress delivered better performance with 8 volumes per server than with 1 or 2 volumes per server.
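
To show what “sizing the cache ahead of data growth” might look like in practice, here is a minimal Python planning sketch (our own illustration, not from the deployment guide); the working-set size, growth rate, and cache capacity are hypothetical figures.

```python
# Illustrative cache-sizing check: does the planned NVMe cache tier still cover
# the active working set after projected growth? All figures are hypothetical.

def cache_fits(working_set_gb: float, annual_growth_rate: float,
               years: int, cache_gb: float, headroom: float = 0.8) -> bool:
    """Return True if the projected working set stays within the usable cache.
    'headroom' reserves a fraction of the cache to absorb working-set drift."""
    projected = working_set_gb * (1 + annual_growth_rate) ** years
    return projected <= cache_gb * headroom

# Example: a 1.5 TB active working set growing 20% per year, with 4 TB of NVMe cache.
print(cache_fits(working_set_gb=1500, annual_growth_rate=0.20,
                 years=3, cache_gb=4000))   # True -> cache still covers the working set
```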

Where to get more info

Microsoft Azure Stack HCI on the HPE Apollo 4200 Gen10 server is a new solution that addresses the growing needs of the Big Data HCI customer – those who are looking for easy-to-deploy, affordable IT infrastructure with the right balance of capacity, density, performance, and security. Early work with this solution, especially where it’s combined with demanding, data-intensive workloads, can surface non-intuitive configuration requirements, so IT teams should seek out experienced vendors and service partners.

A new deployment guide details the solution components, installation, management, and related best practices. Information in that document, along with this blog and future sizing tools expected from HPE, will continue to provide guidance for enterprise deployments of this new HCI offering.

The deployment guide is available online today at this link: <link to Deployment Guide>