All treats, no tricks for Microsoft workloads on HPE infrastructure

[note: this never got posted by HPE in time for Halloween, so here it is – the original blog – enjoy] It’s a spooky time of year. But even as little ghouls and goblins fill the streets, your data center has become a little less scary – and more under your control. Organizations have consolidated business applications on storage arrays for decades. But amid new decisions about cloud computing, there’s a worry that running multiple workloads on-premises has somehow become more difficult. This couldn’t be further from the truth.

Companies can still realize the “treats” of running multiple business applications on shared storage – such as high levels of performance, economic savings, and technical familiarity – without any new “tricks” to worry about.

In fact, our HPE Storage solutions for Microsoft team just published new testing that shows exactly this: multiple Microsoft business workloads can run together on the same infrastructure resources while delivering high levels of consistent performance. And thanks to ongoing technology advances, they also gain simplified management, even higher availability, automated problem detection, and on-prem security with hands-on control.

Fang-tastic new applications from Microsoft

The software developers in Redmond continue to evolve and improve their leading business applications, which helps them maintain dominant market share across so many of their products. We tested a few of them to show how typical, popular applications perform together on the same infrastructure stack. Just as important, a mix of workloads – with different performance profiles and requirements – can run side by side and still deliver high, consistent output.

In the new whitepaper, the solution engineers ran Windows Server 2022, Microsoft Exchange 2019, and the latest SQL Server, with databases running both transactional and analytical workloads. Here’s a quick review of what’s new in these applications:

  • Microsoft Exchange 2019 – Still the newest version of this leading business email package, this version of Exchange introduced the Metacache Database, which improves performance by using flash storage as a cache that speeds access to frequently used data. There are also enhanced security features, as well as specific improvements in Data Loss Prevention, archiving, retention, and eDiscovery.
  • Windows Server 2022 – This new operating system brings advances in multi-layer security, plus hybrid capabilities with Azure. Also notable are improvements in scalability, with support for up to 48 TB of RAM, 64 sockets, and 2,048 logical processors. And specific improvements in areas such as Hyper-V live migration help enhance overall system availability, usability, and economics.
  • SQL Server 2022 – The current version of SQL Server was still 2019 at the time of this testing, but it’s worth noting that a new release of this leading enterprise database, SQL Server 2022, is about to ship. The new release is especially Azure-enabled, supporting Azure analytics capabilities over on-prem data as well as bi-directional HA/DR to Azure SQL. This, plus continued performance and security innovations and new as-a-service consumption models, will keep IT refreshing their most important transactional and analytical databases on-prem with SQL Server.

“Boo-tiful” performance on HPE infrastructure

The mixed-application testing centered on consolidating a SQL Server OLTP workload (HammerDB TPROC-C) and a SQL Server OLAP workload (HammerDB TPROC-H) on the same dHCI solution, and showing how both could maintain consistent performance as more workloads are added. These databases represent a typical company’s mix: a transactional system, say an online shopping application, running alongside analytical databases for research and reporting. So we tested not just two different database environments, but two that are used very differently and that demand different resources from the infrastructure – one more IOPS-intensive, the other more throughput-intensive.
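To make that distinction concrete (an illustrative calculation, not a figure from the report): a transactional database issuing small 8 KB random I/Os at 50,000 IOPS moves only about 400 MB/s, while an analytical scan reading 512 KB blocks needs only about 800 IOPS to sustain that same 400 MB/s. The OLTP system stresses the array’s ability to complete many small operations quickly; the OLAP system stresses its ability to stream large volumes of data.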

To this workload mix were added two Exchange VMs that represented a typical email environment for this same small-to-midsized enterprise, supporting hundreds of mailboxes with simulated usage patterns. The chart below plots these multiple workloads in use, showing how, over time, the system accommodates the needs of all the applications without significant spikes or swings in overall performance. Other charts and data in the same report show consistent, sub-millisecond latency measured at the host over this same time frame, as the mix of different workloads is powered up.

Caption: Multiple applications run while performance remains relatively consistent
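If you want a quick, do-it-yourself sanity check of host-side latency against your own SQL Server instance, a minimal Python sketch using pyodbc might look like the following. To be clear, this is not how the whitepaper’s numbers were produced (those came from HammerDB), and the server, database, and table names are hypothetical placeholders.

# Minimal host-side latency probe (illustrative sketch only; the whitepaper's
# results were produced with HammerDB TPROC-C / TPROC-H, not this script).
# Assumes the pyodbc package and a SQL Server ODBC driver are installed.
# Server, database, and table names are hypothetical placeholders.
import statistics
import time

import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql-dhci-01;DATABASE=tpcc;"          # placeholder server/database
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)

def probe_latency(samples: int = 1000) -> None:
    """Time a lightweight query repeatedly and report latency percentiles."""
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    cursor = conn.cursor()
    timings_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        cursor.execute("SELECT TOP (1) w_id FROM dbo.warehouse;")  # placeholder table
        cursor.fetchall()
        timings_ms.append((time.perf_counter() - start) * 1000.0)
    timings_ms.sort()
    print(f"median : {statistics.median(timings_ms):.3f} ms")
    print(f"p95    : {timings_ms[int(samples * 0.95) - 1]:.3f} ms")
    print(f"p99    : {timings_ms[int(samples * 0.99) - 1]:.3f} ms")

if __name__ == "__main__":
    probe_latency()

Note that this measures end-to-end query latency at the application, which includes network and database processing time, so it is a rough proxy rather than the host- or array-level latency reported in the whitepaper.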

Ghouls just want to have fun

Another dimension of the study was to illustrate how disaggregated hyperconverged infrastructure (dHCI) delivers superior performance and always-on availability, and makes it simple to scale both storage and compute without any downtime.

HPE Nimble Storage dHCI, the platform used for the analysis, is a converged infrastructure offering that combines HPE Nimble Storage arrays with HPE ProLiant DL servers, along with setup and management software. This product has been shown to deliver latencies as low as 200 microseconds for applications and VMs, and it comes with a six-nines data availability guarantee that covers both the applications and the VMs – not the storage alone.

The power behind this high system availability is HPE InfoSight, an AI-enabled monitoring platform with more than a decade’s worth of data from across the HPE Storage installed base, enabling it to identify anomalies and predicted failures within customers’ systems and notify them.

The promise of “disaggregated hyperconverged infrastructure” is to provide the performance and flexibility of separate IT resources with the management convenience of hyperconverged infrastructure. Scaling or upgrading compute, storage, or both is simple and efficient with no downtime; resiliency is built in through hardware redundancy and the ability to tolerate three simultaneous drive failures. This approach has been found to provide operational flexibility that benefits mission-critical databases and data warehouses.

It takes teamwork to make the scream work

IT teams can confidently refresh their traditional infrastructure environments on-prem and continue to run a diverse mix of business applications, expecting great, consistent performance across all their enterprise workloads.

Specifically, we’ve shown that a mix of virtualized Microsoft workloads running on an HPE Nimble Storage dHCI platform can deliver fast, consistent performance across different types of business workloads, with low measured latency even as additional load is added. The solution delivers operational simplicity thanks to tight integration across the layers of the infrastructure stack. And with the intelligence of HPE InfoSight and its automated monitoring, the product comes with a 99.9999% data availability guarantee.

Got treats? This blog highlights the recent mixed Microsoft workload study. Get all the details of the extensive performance testing that shows how to realize consistent, high performance for your Microsoft business workloads, running together on HPE Storage and server infrastructure. The new technical whitepaper is available for download today.
