Navigate data challenges and seize new industry opportunities with HPE Storage solutions

HPE Storage is extending and expanding its commitment to industry storage solutions with more publications, event presence, and new solution development. HPE Storage products have been part of industry-focused solutions across all commercial sectors for decades, and this new level of investment will enhance what we deliver and how we promote our unique and valuable vertically optimized data storage solutions.

Applied innovations happening across industries

Amazing enterprise technology opportunities are being realized today within specific industries. Many innovative, market-leading organizations have been tracking new technology trends but have not been able to implement them until they became more complete offerings. We believe we’re at a stage of applied innovation in tech, evidenced by what we’re now seeing: metaverse technologies for digital twins in manufacturing, video surveillance as part of smart cities, and virtual desktop infrastructure (VDI) for specific financial services usage.

The Storage team expects to build upon investments made in its HPE GreenLake for Data Storage as-a-service platform, enabling new hybrid-cloud data solutions and industry-wide data management offerings. Initial industry solution work is planned for Financial Services, Manufacturing, Public Sector & Education, Healthcare and Life Sciences, and Telecommunications.

Not new solutions but a new focus

The approach the Storage team is taking will include more promotion, such as blogs that better show the role of data storage within industry vertical solutions; more support for the channel; and strengthened partnerships that are core to these joint industry offerings.

A short list of some of these featured partners includes SAP, recognized as a leading ISV in the manufacturing space; VAST Data, which enables solutions across a range of industry use cases; Epic and Meditech, known in the medical Electronic Health Records market; Tookitaki, for its anti-money laundering (AML) software; and data protection partners like Veeam and Commvault, which protect important data stores such as those comprising private clouds for government.

Making an initial deposit into the Financial Services industry

The financial services industry stands at a key juncture of a technological revolution. The adoption of complex technologies has become a pivotal factor in shaping the landscape of the financial sector, with a few specific challenges rising above the others:

  • Cybersecurity – With the increasing digitization of financial transactions, the risk of cyber threats has soared to unprecedented levels. Institutions must invest in robust security measures, such as advanced authentication protocols, encryption, and real-time threat detection systems, to safeguard sensitive data and protect their customers from potential breaches.
  • Regulatory Compliance – The financial services industry is subject to a labyrinth of regulatory frameworks, such as GDPR and PSD2, designed to ensure data privacy and fair competition. Financial institutions must navigate these complex regulations while simultaneously embracing new technologies, striking a delicate balance between innovation and compliance.
  • Legacy Systems – Many established financial institutions grapple with outdated legacy systems that hinder their ability to adapt and respond swiftly to market demands. Upgrading and modernizing these systems is imperative to enhance operational efficiency, streamline processes, and enable seamless integration with emerging technologies.

At the same time, there are important technology opportunities that are allowing bold institutions to get ahead of their competition:

  • Artificial Intelligence (AI) and Machine Learning (ML) – AI and ML have the potential to revolutionize the financial services industry. Intelligent algorithms can automate manual tasks, improve risk assessment models, and enhance fraud detection mechanisms, leading to enhanced customer experiences, reduced costs, and increased operational efficiency.
  • Scale-out file storage – Demanding big data applications like Monte Carlo simulation, statistics, and analytics are being used across the industry (see the sketch after this list). They require high-performance, petabyte-scale file storage using all-NVMe media, and they’re now available as cloud-based, pay-per-use platforms, but with on-premises data control and protection.
  • Trade from anywhere – The pandemic forced trading firms to support virtualized desktops that met changing workplace needs and reduced space and IT cost, while maintaining high availability and data security. These solutions leverage the latest NVIDIA graphics acceleration, vGPU, data protection, and disaster recovery capabilities.
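To make the Monte Carlo point concrete, here’s a minimal, illustrative Python sketch: a toy European call option pricer, not any firm’s production model. Run at scale across millions of paths and thousands of instruments, simulations of this kind generate the petabyte-scale file data these platforms are built to serve.

```python
import numpy as np

def monte_carlo_call_price(s0, k, r, sigma, t, n_paths=1_000_000, seed=42):
    """Toy European call pricer using geometric Brownian motion paths."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal stock price under risk-neutral GBM dynamics
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)        # call payoff at expiry
    return np.exp(-r * t) * payoff.mean()   # discount back to today

# Hypothetical inputs: $100 stock, $105 strike, 5% rate, 20% vol, 1 year
print(f"Estimated price: {monte_carlo_call_price(100, 105, 0.05, 0.2, 1.0):.4f}")
```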

We began our latest industry work based on these challenges and opportunities facing financial organizations.

HPE Storage Solutions for Financial Services

HPE Storage solutions for financial services are offerings for institutions performing tasks such as securities trading, money management, and digital banking. The solutions comprise cloud-connected data storage management for financial services applications, and what makes them stand out is the use of market-leading HPE infrastructure products.

Three data storage solutions address trader workplaces, anti-money laundering, and quantitative trading:

  • Trader workplace — This solution lets trading firms evolve from “servers under the desk” to “Desktop as a Service,” meeting new delivery requirements while being available as a monthly HPE GreenLake subscription. It uses a performance-oriented VDI built on HPE Alletra dHCI with HPE Alletra 6000 all-NVMe storage.
  • Anti-money laundering — This solution helps financial institutions prevent losses and meet compliance requirements, using Tookitaki’s award-winning AML software delivered in an HPE GreenLake consumption model. The data layer is built on HPE Apollo 4200 storage servers.
  • Quantitative trading — This solution addresses the need for large file data storage for demanding big data uses such as Monte Carlo simulation, statistics and other analytics. HPE GreenLake for File Storage, built using technology from VAST Data, is our as-a-service offering that handles these big data file challenges.

A fundamental element of all these industry solutions will be HPE GreenLake for Data Storage. It’s the industry-pioneering storage-as-a-service platform that provides a cloud operating experience for every workload it runs. Customers enjoy simplified operations and consume enterprise-class data storage resources as a service, transforming the old image of enterprise IT products into a vibrant, cloud-connected service.

Cash in on the latest Financial Services solutions using HPE Data Storage

The financial services industry stands at a critical juncture, where the integration of complex technologies offers unprecedented opportunities for growth, efficiency, and customer experience enhancement. And HPE Storage Solutions for Financial Services deliver unique value across the sector. Read more about our specific data storage solutions for the financial services industry, available today, in the new solution brief: https://www.hpe.com/psnow/doc/a00131254enw


Storage Solution Predictions for 2023

Last year our team had some fun with a Top 10 list of predictions, and we actually did pretty well. So for this year we wanted to push ourselves a little, with more wide-ranging prognostications. The common thread, though, remains topics relevant to those of us delivering tech offerings across a global market, subject to all the economic, technological, and political vagaries entailed therein. Here’s what our crystal ball tells us for 2023…

  1. Azure tops AWS – Microsoft continues to drive its software installed base to Azure. SQL Server 2022 is the latest to be updated, but not at parity with fuller-featured Azure-based alternatives. This deprecation process has been underway with Office, Skype, SharePoint, Exchange, etc. Microsoft Cloud revenues should already be passing $100B, which would seem to top AWS. Then add maybe another $60B worth of annualized installed-base software ‘lifting & shifting’ to SaaS, and we should see a pronounced crossover sooner than expected.
  2. AWS acknowledged as the Fidelity of IT infrastructure – Fidelity Investments didn’t invent the mutual fund (MFS did, in 1924), but despite starting more than 20 years later, Fidelity rocketed to the top of that business by proliferating a broad family of actively managed funds, some led by the investment superstars of their day, growing to 40 million investors and $10T in assets. Similarly, AWS changed the infrastructure game not just by delivering cloud-based infrastructure, but by a proliferation of offerings – over 200 total, including 60+ different EC2 instance types alone. (The question now – who will surpass AWS as the Vanguard of infrastructure? Who’s the next BlackRock?)
  3. War fears subside – US-China relations will warm, with attention on Taiwan waning, China Covid lockdowns ending, and Asian supply chains freeing up. All involved will get back to the business of business. Similarly in Europe, the Ukrainian conflict will move towards a negotiated peace or stalemate. As the continent proves it can survive with less Russian energy, life there will stabilize and impacted markets will regain equilibrium. 
  4. Chip war heats up – Despite a US pledge of billions for domestic chip capacity, the reality of new design and fab lead-times measured in years will keep the US dependent on non-US processors, which is a positive for global trade & relations, but a continued concern for those worried about US reliance on foreign semiconductors.
  5. AI innovation jolts a domestic industry – We’ve become familiar with Google maps, Alexa, Roombas, and the occasional glimpse of a self-driving (though still not yet driverless) vehicle. But we’re due for some new ‘killer app’ (and I’m not talking about kamikaze drones, I hope) finding its way into one or more major US industries. With supply chain and geo-political strains not yet completely behind us, the stage is set for a significant disruption to a large, legacy business, due to a game-changing intelligent automation in 2023.
  6. Workers return – Whether it was the social isolation, threats from the boss, or the tug of ‘cake day’, workers young and old return to their cube farms. Some productivity increases, but probably at the expense of innovation. However, hybrid work becomes the norm, and companies continue a facilities evolution – hoteling cubes, storage cubbies, more transient/social space (fewer dedicated offices) – and invest heavily in related tech: cloud-based office apps, VDI, endpoint security, and related networking upgrades. Also expect adoption of the latest video, telepresence, and ‘metaverse’ tech to get the most out of meetings that will now often mix local and remote attendees.
  7. Teams meta-disrupted – Microsoft Teams is getting a lot more use in a post-pandemic world, but it’s got the design appeal of a 1970s bathroom. Yet we know more usable tools are possible, such as Slack. There’s a pent-up supply of ‘metaverse’-enabling tech (AR, VR, gesture, voice, wearables…) – and those related vendors are itching to find valuable use cases. Either as a Teams add-on or replacement, there’s the opportunity for someone to create the ‘Apple watch’ of desktop video collaboration.
  8. IT mega deal – There have been a lot of ‘tuck-in’ acquisitions done by IT leaders over the past few years, but this current wave of recession fear and stock dips will be enough to make the numbers work for at least one enterprise IT mega deal in 2023. Look for a marriage of convenience between a couple companies who have not been able to evolve their biz models to be sufficiently cloud computing-centric.
  9. Edge app pwned – As more apps and data capacity move to ‘the edge’ (e.g. IoT devices, Point-of-Sale/Service systems, telco network locations) we can expect at least one significant hack in the coming year that will proliferate across a compromised edge and significantly impact a regional or maybe even global user base.
  10. Data mining becomes a resource biz – We’ve heard the expression that “data is the new oil”, but we have yet to see the first “Standard Oil” of data. There are data mining and list companies out there like Sisense and Acxiom. But they are still relatively small, and there haven’t been any rapacious moves to roll up competitors, vertically integrate, or take other aggressive actions to build a dominant data monopoly. But given the growing value of data, especially to train hungry ML apps, I’m expecting we’ll see an ambitious actor make a move in 2023.

Any comments – positive, negative or otherwise – are appreciated. Or let’s check back again this time next year to see how we did.

All treats, no tricks for Microsoft workloads on HPE infrastructure

[note: this never got posted by HPE in time for Halloween, so here it is – the original blog – enjoy] It’s a spooky time of year. But even as little ghouls and goblins fill the streets, your data center has become a little less scary – and more under your control. Organizations have consolidated business applications on storage arrays for decades. But with new decisions regarding cloud computing, there’s worry that running multiple workloads on-premises has somehow gotten more difficult.  This couldn’t be further from the truth.

Companies can still realize the “treats” of running multiple business applications on shared storage – such as high levels of performance, economic savings, and technical familiarity – without really any new “tricks” to worry about.

In fact, our HPE Storage solutions for Microsoft team just published new testing showing exactly this: that multiple Microsoft business workloads can run together on the same infrastructure resources while realizing high levels of consistent performance. And thanks to ongoing technology advances, they also gain simplified management, even higher levels of availability, and automated problem detection, along with on-prem security and hands-on control.

Fang-tastic new applications from Microsoft

The software developers in Redmond continue to evolve and improve upon their leading business applications, which helps them maintain dominant market shares across so many of their products. We tested a few of them, in order to show how typical, popular applications perform together on the same infrastructure stack. But just as important is that a mix of workloads – with different performance profiles and requirements – can run side-by-side and still deliver high, consistent output.

In the new whitepaper, the solution engineers ran Windows Server 2022, Microsoft Exchange 2019, and the latest SQL Server, with databases performing both transactional and analytical workloads. Here’s a quick review of what’s new in these applications:

  • Microsoft Exchange 2019 – Still the newest version of this leading business email package, Exchange 2019 introduced the Metacache Database, which improves performance by leveraging flash storage for a cache that speeds access to frequently used data. There are also enhanced security features, as well as specific improvements in data loss prevention, archiving, retention, and eDiscovery.
  • Windows Server 2022 – This new operating system brings advances in multi-layer security, plus hybrid capabilities with Azure. Also notable are improvements in scalability, with support for up to 48 TB of RAM, 64 sockets, and 2048 logical processors. And specific improvements in areas such as Hyper-V live migration help enhance overall system availability, usability, and economics.
  • SQL Server 2022 – The current version of SQL Server was still 2019 at the time of this testing, but it’s useful to note that a new version of this leading enterprise database, SQL Server 2022, is about to be released. The new release will be especially Azure-enabled, supporting Azure analytics capabilities with on-prem data, as well as bi-directional HA/DR to Azure SQL. This, plus continued performance and security innovations and new as-a-service consumption models, will drive IT to continue to refresh their most important transactional and analytical data on-prem with SQL Server.

“Boo-tiful” performance on HPE infrastructure

The mixed application testing centered on consolidating a SQL Server OLTP (HammerDB TPROC-C) workload and a SQL Server OLAP (HammerDB TPROC-H) workload in the same dHCI solution and showing how both could realize consistent performance as multiple workloads are added. These databases would represent a company’s transactional system, say an online shopping system, running alongside analytical databases for research and reporting. So we tested not just two different database environments, but two that are used very differently and that demand different resources from the infrastructure – one more IOPS-intensive, the other more throughput-intensive.

To this workload mix we added two Exchange VMs that represented a typical email environment for this same small to midsized enterprise, supporting hundreds of mailboxes with simulated usage patterns. The chart below plots these multiple workloads in use, showing how, over time, the system is still able to accommodate the needs of all the applications without showing much in the way of spikes or changes in overall performance. Other charts and data in this same report show consistent, sub-millisecond latency measured at the host over this same time frame, while a mix of different workloads is powered up.

Caption: Multiple applications run while performance remains relatively consistent
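The report’s charts make the case visually; to quantify “consistent” in your own environment, a simple percentile summary of host-side latency samples works well, since consistency shows up as a tight spread between the median and the tail. A minimal sketch follows; the CSV layout and file name are hypothetical, so adapt them to however your monitoring tool exports data.

```python
import csv
import statistics

def latency_summary(csv_path):
    """Summarize host-side latency samples (assumes a 'latency_ms' column)."""
    with open(csv_path, newline="") as f:
        samples = [float(row["latency_ms"]) for row in csv.DictReader(f)]
    samples.sort()
    pct = lambda p: samples[min(len(samples) - 1, int(p / 100 * len(samples)))]
    return {
        "mean_ms": statistics.fmean(samples),
        "p50_ms": pct(50),
        "p99_ms": pct(99),   # a tight p50-to-p99 spread indicates consistency
        "max_ms": samples[-1],
    }

print(latency_summary("host_latency.csv"))
```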

Ghouls just want to have fun

Another dimension of the study was to illustrate how disaggregated hyperconverged infrastructure delivers superior performance and always-on availability, and makes it simple to scale both storage and compute without any downtime.

HPE Nimble Storage dHCI, the platform used for the analysis, is a disaggregated hyperconverged infrastructure offering that combines HPE Nimble Storage arrays with HPE ProLiant DL servers, along with set-up and management software. This product has been shown to deliver latencies as low as 200 microseconds for applications and VMs, and it comes with a six-nines data availability guarantee that covers both the applications and the VMs – not the storage alone.

The power behind this high system availability is HPE InfoSight, an AI-enabled monitoring program with more than a decade’s worth of data from across the HPE Storage installed base, enabling it to identify and give notice of anomalies and predicted failures within customers’ systems.

The promise of “disaggregated hyperconverged infrastructure” is to provide the performance and flexibility of separate IT resources with the management convenience of hyperconverged infrastructure. Scaling and upgrading of either compute, storage, or both is simple and efficient without downtime; resiliency is built in with hardware redundancy and the ability to tolerate three simultaneous drive failures. This approach has been found to provide operational flexibility beneficial for mission-critical databases and data warehouses.

It takes teamwork to make the scream work

IT teams can confidently refresh their traditional infrastructure environments on-prem, and continue to run a diverse mix of business applications, expecting great, consistent performance across all their enterprise workloads.

Specifically, we’ve proven that a mix of virtualized Microsoft mixed workloads running on an HPE Nimble dHCI platform, can ensure fast, consistent performance across different types of business workloads, with measured low latency even as additional load is added.  The solution delivers operational simplicity thanks to tight integration across the layers of the infrastructure stack.  And with the intelligence of HPE InfoSight and its automated monitoring, the product comes with 99.9999% data availability guaranteed.

Got treats? This blog highlights the recent mixed Microsoft workload study. Get all the details of the extensive performance testing that shows how to realize consistent, high performance for your Microsoft business workloads, running together on HPE Storage and server infrastructure. The new technical whitepaper is available for download today.

Data Storage Infrastructure 2022 Predictions

My team put on their thinking caps and contributed to a list of data storage-centric predictions for the upcoming year. Admittedly, some are based on, or projected from, analyst insights that we are especially close to because of our day-to-day work. But some are more qualitative and were the product of our own observations and related expectations. Enjoy, and let me know what you think (we’ll have to check back at the end of the year to see how we did).

Caption: HPE Storage Solutions for Microsoft workloads team conjuring 2022 predictions
  1. IT equipment supply issues continue deep into 2022 – Major IT equipment manufacturers work within an especially global value chain, with the products we design getting produced in conjunction with developers, component suppliers, and manufacturing partners on the other side of the world. And just based on our known logistics and component (such as processors) issues, we expect supply challenges to continue well into the new year. In addition, some major disruption in southeast Asia, whether political (e.g., the South China Sea), a pandemic resurgence, or an extreme weather event (e.g., flooding), could quickly impact a majority of the world’s top 10 busiest ports, including our supply lines and inventory… as well as sadly putting an end to your dream of getting that cool Apple iPhone 13 anytime soon, as it would probably come via a Foxconn facility in Zhengzhou and the port of Shanghai. Luckily, the broadening investment in real-time logistics info-tech may help supply chain players better see, optimize, and work around problems. Absent a major crisis, we should have IT trade back to ‘normal’ by the end of the year.
  2. Companies become more concerned with managing data than storage – As more customers evolve to a ‘service-oriented’ model, such as via an HPE GreenLake-based solution, they are becoming less concerned about the specifics of what infrastructure is being used to store the data. Whether the deployment is on something like HPE dHCI, software-defined x86 scale-out, or traditional arrays, the trend is toward more focus on the desired business outcomes around the data and ensuring requirements are met, versus discussions on storage deployment details.
  3. NVMe a part of every infrastructure – IDC already predicts NVMe storage will be used by 91% of companies within the next 2 years, and we are seeing NVMe drives as a key part of our storage solutions. A set of NVMe drives is a natural pairing for software-defined infrastructures and intelligent applications that maintain a software-based cache – pin the cache to these speedy drives and you ensure low latency and fantastic workload performance. On a more strategic level, we also see that NVMe as part of more distributed architectures (e.g., NVMe-oF using RDMA, FC, or TCP) is going to realize more adoption due to the consolidation of standards at the system level. Manufacturers are continuing to drop Gen-Z-related development and are standardizing around CXL (Compute Express Link), a new open interconnect standard that reduces the latency of data sharing between CPU and memory within a system node. This system design consolidation for higher performance within a host and surrounding devices is expected to have a follow-on effect of allowing more innovation in the surrounding fabric, which in turn is expected to further spur the use of NVMe drives and related media within the system node, across the rack, the aisle, and the datacenter.
  4. Cyber-crime will continue despite government action – Cyber-crime continues to be an endemic problem requiring governmental response. Yet there appears to be a disconnect between the growth in the occurrence of ransomware, trojans, and live criminal actions against enterprise servers and storage, while legislative actions seem more focused on rules to protect data, mostly from a physical perspective, and requirements to purge storage media and servers before they’re decommissioned or disposed of. Whereas the requirement to get certificates of destruction may be a good opportunity for professional services firms to generate a new source of income, it does little to thwart the serious threat of cyber-attacks by criminals, overseas adversaries, and terrorists.
  5. Points of Data Integration will grow – Despite the increasing threats to our data outlined in the previous prediction, we expect to see continued growth of integrations between companies, partners, customers, and systems. Past IDC reports and more recent predictions have detailed how organizations are having to manage more APIs as part of doing business, and that “…mastery of APIs… [is] a price of admission to competing on the digital business battlefield.” Look for new data storage-specific integrations becoming available, especially between hyperscaler clouds, popular IT dashboards, and enterprise data storage platforms.
  6. Software-defined storage will continue to grow – Though there’s still an important place for traditional block storage arrays, the compelling economics and hybrid cloud features of the newest software-defined storage products will continue to gain adoption and expanded use. Products such as Azure Stack HCI, especially when combined with hardware components such as NVMe storage, PMEM, and GPUs, will see increased use as infrastructure for VDI, enterprise virtualization, and big data.
  7. Container storage adoption will continue slowly – This whole area of containers and Kubernetes is one that I’m sure a lot of us have been lured into, by the inherent techno-coolness and social media-fueled enthusiasm. However, more and more is being written about the slow adoption of these technologies. The uptake of containers and related container-based data storage is lagging – no doubt a function of the technical complexity outweighing the benefits being realized from the license cost savings of using free open-source software. But just as in the Monty Python sketch, container-based storage is not dead yet, and it’s still something to continue to watch.
  8. The Battle of the Edge will intensify – The idea of Edge Computing is still fairly new, coming into common usage maybe 5-6 years ago, and tightly linked with mobile computing and Internet of Things, specific to wearables, home automation systems, sensors, RFID tags and the like. Within this limited context, the entire market opportunity for the Edge was expected by leading analysts to be less than $3 billion in 2022. Chump change vs. the forecast of all IT spending to top $4 trillion that same year. And though a growing host of companies continue to jockey for this market sliver, a few seem oriented towards a compressed world view where the Edge is almost every compute resource outside the hyperscaler cloud. Through that lens, ‘Storage at the edge’ becomes way more than just things like NAS-attached video cameras, but also includes data storage within remote offices and even enterprise storage arrays within datacenters. Expect the battle for the edge to not just be about products but also philosophies.
  9. Data management still required despite the cloud – We’re seeing the line between on-prem and cloud blur, with more deployments being at least hybrid cloud, and most new applications starting life native to a hyperscaler environment. It’s been written that “cloud computing has won”, and while this may have relieved IT of operational tasks centric to a given application, it hasn’t eliminated responsibilities around ensuring the availability of, protection of, and access to the data. We expect that through 2022, though IT is still on the hook for data security and locality concerns, teams will continue to lack easy-to-use tools to manage data across clouds, and a new product market will take shape around enterprise data management, operations, and mobility.
  10. The Year of Hybrid Cloud – We’ve got to add this one with a big wink, because this has been a prediction out there for at least the last decade (see Wired article from 2012). Of course, ten years ago the perspective was more theoretical and centric to mitigating periodic ‘inbound spillover’ of excessive application demand – what we’ve since taken to calling ‘cloud bursting’. Hybrid cloud has since continued growing in popularity especially over the past five years. Today we’re seeing adoption of hybrid cloud products that enable the actual mixed usage of on-prem and hyperscaler-based services together and managed within a single pane of glass (think: Azure Stack HCI managed through Windows Admin center). So maybe 2022 IS finally the year of Hybrid Cloud?

Learn How On-Prem Database Infrastructure is Evolving with New Cloud Benefits

Today, most companies have an IT strategy that includes cloud (e.g., “Cloud First”, “Cloud Best”), but according to IDC, “…most Microsoft customers still run the vast majority of production workloads on Windows in the on-prem data center.” This is especially true for business- and mission-critical workloads. The challenges customers face in moving them to the cloud include maintaining consistent performance and availability, maintaining control of the data, and a lack of cloud system expertise. And when it comes to protecting data in the cloud, there are numerous considerations, such as backup/restore performance, security, and data sovereignty.

To address these challenges, new infrastructure offerings are evolving to add the best of cloud to these traditional platforms. Today, HPE announced new pre-defined HPE GreenLake cloud services for Microsoft SQL Server. Key benefits of these new “as a service” offerings include paying only for the infrastructure that you use, while maintaining the hands-on control to run and protect your production workloads using familiar HPE storage, compute, and networking hardware.

The full HPE GreenLake value includes even more: faster time to solution, with pre-configured, ready-to-ship offerings in as few as 14 days; getting the capacity you need when you need it – in minutes, not months; online pricing for simplicity and financial clarity; and web access for self-service via HPE GreenLake Central. In addition, the offloading of routine operational tasks through HPE GreenLake Managed Services is optionally available for these offerings.
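To illustrate the pay-per-use idea, here’s a deliberately simplified toy model of consumption billing: a reserved baseline plus metered usage above it. The function name, rates, and structure are all hypothetical, for illustration only, and are not HPE’s actual GreenLake rate card.

```python
def monthly_bill(used_tb, reserved_tb, rate_per_tb):
    """Toy consumption bill: pay for a reserved baseline, plus any actual
    usage above it. Illustrative only -- not HPE GreenLake's real pricing."""
    billable = max(used_tb, reserved_tb)  # never billed below the reserved floor
    return billable * rate_per_tb

# Hypothetical numbers: 100 TB reserved at $20/TB/month
for used in (80, 100, 140):
    print(f"{used} TB used -> ${monthly_bill(used, 100, 20):,.2f}")
```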

The Scoop on the New as-a-Service Offerings

HPE GreenLake platform-as-a-service solutions offer customers a faster time to value with a turnkey cloud experience on premises. The solutions, in pre-sized but adjustable configurations, have been developed to deliver the levels of availability, performance, functionality, and cost that meet a range of needs.

Microsoft SQL Server is the most common workload deployed on HPE primary storage platforms, and now, as a new HPE GreenLake service available in four pre-defined configurations, customers have the option to refresh their environment with a workload-optimized and tested configuration, ready to support their database regardless of size. All of these new configurations feature HPE Alletra 6000 all-NVMe storage, recently proven to deliver increased database performance through the benefits of an all-NVMe array. It’s expected that this pay-per-use pricing, point-and-click self-service, and other cloud features will provide a 30-40% hardware TCO savings vs. the traditional upfront capex model.

There are also HPE GreenLake services available for Veeam and Splunk. Watch for more news regarding availability of pre-defined configurations.

Why HPE Storage and HPE GreenLake for running and protecting your business-critical databases and applications

Whether SQL Server or some other critical workload, HPE infrastructure delivers unmatched availability. HPE is the only vendor with a 100% availability guarantee on enterprise-class data storage. This is coupled with the unique HPE InfoSight that uses Predictive Analytics to ensure uptime.  In addition, HPE storage, in partnership with Veeam, provides hybrid cloud data protection and mobility features that span on-premises and cloud.

HPE makes all your business databases and applications faster with leading all-flash, Storage Class Memory, NVMe media, and now All-NVMe storage arrays. This helps all your workloads run faster, lets users work faster, and enables businesses to create value and innovate faster.

HPE GreenLake for Microsoft SQL Server delivers the familiar benefits of HPE storage and HPE server platforms, a consumption-based pay-per-use model, plus additional benefits of the HPE GreenLake service:

  • Cost savings from flexible, consumption-based model with simplified units of measure for billing
  • Risk reduction from a validated full stack solution right-sized for your implementation
  • Visibility of data, applications and infrastructure on-premises and under your direct control
  • Business agility by leveraging scalable pay-per-use architecture with cloud-based administration

Through HPE GreenLake, customers can manage, pay for, and grow their environment over time.

Next steps on the road to HPE storage solutions with HPE GreenLake

Microsoft SQL Server was already available as an HPE GreenLake service, but as of today it can be quickly ordered in pre-sized configurations using HPE storage infrastructure to serve a range of organizational sizes and application requirements. These services all provide superior uninterrupted operations for mission-critical environments while delivering cost savings from paying only for the infrastructure you need, as you use it, with on-demand scalability.

Get the details on today’s HPE GreenLake news here. Learn more about Microsoft Storage solutions from HPE here.

Increase Business-critical Database Performance with All-NVMe Storage

Today’s SQL Server Situation

Microsoft SQL Server continues to be dominant within the established relational database management market. And the traditional RDBMS is still a leading database approach in terms of familiarity, installed base, and spending. Closer to home, we see SQL Server as our #1 workload on both the HPE 3PAR Storage and HPE Nimble Storage installed bases, after only VMware VMs. On HPE Nimble Storage, SQL Server alone occupies more storage arrays than Oracle, Citrix, DB2, SAS, MySQL, and Splunk combined!

The newest version of the database system, SQL Server 2019, presents new opportunities for savings with features like Big Data Clusters. But it also introduces new areas of complexity for some IT teams, including running within containers, Kubernetes container management, built-in Apache Spark, the PolyBase feature, and the potential to use it within a data lake or as a platform for AI/ML.

The implication for customers is that new deployments can vary widely by usage (e.g., OLTP vs. OLAP), and the underlying infrastructure needs to be optimized based on the use objective (performance, capacity, availability, cost…). Customers should seek out guidance related to the targeted solution to ensure deployment success.

What makes HPE stand out in the SQL Server Infrastructure market

First of all, HPE ensures SQL Server workload availability. It’s the only vendor with an unmatched 100% availability guarantee on enterprise-class data storage. This is coupled with the unique HPE InfoSight that uses Predictive Analytics to ensure uptime. In addition, HPE Storage provides hybrid cloud data protection and mobility features that span on-premises and cloud.

Secondly, HPE makes SQL Server faster with leading all-flash, Storage Class Memory, NVMe media, and now All-NVMe storage arrays. This helps databases and applications run faster, lets users work faster, and enables businesses to create value and innovate faster.

Finally, and maybe most importantly, HPE has a breadth of platform solutions for SQL Server. This complete line of storage solutions for SQL Server broadly meets customer needs from mission-critical to entry level, with gradients for levels of performance, availability, usability, scale, and economics.

Breadth of solutions for the range of Database challenges

Don’t take my word for it – here’s the rundown of the industry’s broadest range of SQL Server infrastructure solutions – from the most mission-critical, scale-up environment to mid-market and departmental offerings:

  1. SQL Server 2019 on HPE Alletra 9000 and HPE Primera Storage – Large enterprise business-critical SQL Server. Provides 100% guaranteed availability, highest levels of performance
  2. SQL Server 2019 on HPE Alletra 6000 and HPE Nimble Storage – Enterprise/mid-market SQL Server. Provides six 9s availability guaranteed, easier data mobility and protection
  3. SQL Server 2019 Big Data Clusters on HPE Storage – Scale-out SQL Server environment, serves as platform for a data lake. Manage relational and Big Data together, from across the organization
  4. SQL Server 2019 on HPE Nimble Storage with Storage Class Memory – Acceleration for demanding online transaction processing (OLTP), powered by Intel Optane SSDs. High-performance read cache for faster queries. Testing shows more than a 50% decrease in latency
  5. SQL Server 2019 on HPE SimpliVity – Enterprise-grade hyperconverged speeds up application performance, improves efficiency, resiliency, and restores VMs in seconds.
  6. SQL Server 2019 on HPE MSA Gen6 – Entry-level/departmental SQL Server. Simplicity, speed, affordability, and enterprise-class reliability.

Testing the new All-NVMe Storage

HPE is bringing to market all-new, all-NVMe storage platforms, and we were fortunate to get early access to run performance tests with our SQL Server test tools. First, about the environment.

We conducted our own internal testing using the Microsoft Data Warehouse Fast Track (DWFT) tool in a couple of separate runs during March and April 2021, in Fort Collins, CO, and Houston, TX.

For compute we used an HPE ProLiant DL380 Gen10 server running Windows Server 2019 and SQL Server 2019. The server had four 32Gb Fibre Channel connections to the storage.

The storage array was the new HPE Alletra 6070. On it were the database (10 volumes) and tempDB (4 volumes).

A little about the DWFT tool: our motivation for using it was to maintain consistency with performance tests we’ve been running for almost the last decade. It’s a familiar tool for customers and partners. However, we’re seeing how it’s become dated in its ability to provide useful results, because it’s been outpaced by the workload itself as well as the surrounding technology. A specific issue concerns the new SQL Server 2019 feature, memory-optimized tempDB metadata. We saw the same thing happen with the JetStress tool, which had been used for many years with Microsoft Exchange, yet with the latest version of the application the tool can’t report on the Metacache Database.
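For readers who want to check whether the feature that trips up DWFT is enabled on their own SQL Server 2019 instance, a small pyodbc query can do it. This is a sketch; the connection-string details are placeholders to substitute with your own server name and driver version.

```python
import pyodbc  # assumes the Microsoft ODBC Driver for SQL Server is installed

# Connection details are placeholders -- substitute your own.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=master;Trusted_Connection=yes;"
)
row = conn.execute(
    "SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized')"
).fetchone()
# Returns 1/0 on SQL Server 2019+, NULL on older versions.
print("Memory-optimized tempDB metadata enabled:", row[0] == 1)
```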

So, we ran our originally planned DWFT tests of the HPE Alletra 6070 vs. the same stack on the HPE Nimble AF80. However, based on the results I’m about to share, we recognized the need, as well as the opportunity, to redo some testing in a way that will better portray the benefits of the new platform.

SQL Server on HPE Alletra 6000 All-NVMe Flash Storage

Initial testing proved increased levels of enterprise database performance for the HPE Alletra 6000 all-NVMe storage system. We demonstrated greater lab-verified application performance vs. the earlier-generation all-flash system using the traditional Microsoft Data Warehouse Fast Track tool. However, as addressed earlier, you can expect even greater real-world performance gains, which we’ll show in future follow-up testing.

Proven performance gains:

  • +30.4% Measured Query throughput (Queries/Hr/TB)
  • +29.4% Relative throughput
  • +28.3% Measured Scan rate Physical (MB/Sec)

These are all Column Store measures, and they indicate enhanced performance when working with large volumes of data, as found in a data warehouse deployment. We saw less compelling results in the Row Store metrics, but we feel this was more an issue with the test tool than with the product being tested. The DWFT results under-report the enhancements of the new Alletra platform, thus our plan to redo the testing with contemporary tools such as HammerDB, Vdbench, and TPC benchmarks. We expect to show significant performance gains with all-NVMe storage for data warehouses, transactional data processing, and all types of business-critical databases.

3X Improved Database Price/Performance

Despite the muted performance data, we did get a compelling result when it comes to the Price/Performance of the new platform. We first looked at the cost of the old vs. new systems.

The original HPE Nimble AF80 system that we tested has a list price of $682,500. In contrast, the new all-NVMe HPE Alletra 6070 was only slightly more, with an estimated list price of $748,187 – and note, this includes the related Data Services Cloud Console subscription. The new system cost was only up 9.6%.

Going back to our performance testing, we saw an over-30% gain. Specifically, this came from the HPE Nimble AF80’s performance of 1,927 queries/Hour/TB vs. the HPE Alletra 6070’s performance of 2,512 queries/Hour/TB, a 30.4% improvement.

So putting that together: for only about a 10% additional investment, you’ll realize 30% increased performance – a 3X ratio of performance gain to added cost with all-NVMe!
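A quick back-of-the-envelope check of that arithmetic, using the figures above:

```python
old_price, new_price = 682_500, 748_187   # list prices from the text
old_qph, new_qph = 1_927, 2_512           # measured queries/Hour/TB

price_delta = new_price / old_price - 1   # ~9.6% more expensive
perf_delta = new_qph / old_qph - 1        # ~30.4% faster
print(f"Price increase:     {price_delta:6.1%}")
print(f"Performance gain:   {perf_delta:6.1%}")
print(f"Gain-to-cost ratio: {perf_delta / price_delta:.1f}x")  # ~3.2x
```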

Make your move now to SQL Server on HPE Alletra 6000 Storage

HPE offers a broad range of database consolidation and migration services through HPE Pointnext professional services. They can provide the initial advice to help ensure a successful SQL Server deployment, as well as an HPE Database Migration service to deliver a smooth migration or fast database consolidation. Finally, HPE Pointnext provides support and ongoing training and readiness services to keep your Microsoft SQL Server environment operating efficiently and effectively.

Also on the way are new SQL Server “as a Service” offerings through HPE GreenLake.

Be aware of important upcoming dates:

  • SQL Server 2016 reaches the end of Mainstream support in July 2021
  • SQL Server 2012 reaches end of Extended support July 2022
  • SQL Server 2017 reaches end of Mainstream support Oct 2022

Regardless of your version of SQL Server, you will realize new levels of performance with the latest SQL Server on our newest HPE All-NVMe Storage.

Check out the new webpage for the HPE Alletra storage systems. The page is now live and loaded with information and resources on this exciting new storage platform. <Link to HPE Alletra on the web>

Welcome the all-new Azure Stack HCI

It’s a big day for hyperconverged infrastructure (HCI). The world’s most cloud-connected HCI just got an overhaul, and Microsoft is (re)launching its Azure Stack HCI today <link to Microsoft announcement blog>.

Microsoft’s Azure Stack HCI will be a big factor in the white-hot HCI space. It’s positioned to serve Hyper-V and Azure cloud-centric customers who would otherwise have to make do with Nutanix or vSAN. Azure Stack HCI delivers modern infrastructure that simplifies the management of on-prem resources while seamlessly connecting them to the cloud.

And let’s be clear – this isn’t marketing speak. Azure Stack HCI is now a new and separate OS, designed for real hybrid management. Customers will use the Azure Portal as well as Windows Admin Center (WAC) for resource visibility, management, billing, troubleshooting… everything.

Caption: Azure Stack HCI Infrastructure-as-a-Service topology

Goodbye Azure Stack HCI, Hello Azure Stack HCI!

The all-new Azure Stack HCI, available either as a download or pre-installed on hardware, replaces the current Azure Stack HCI offering, which was built around Windows Server 2019 features. And with the new software comes new functionality, including stretch clustering for DR – built-in async or sync replication that you can use for local high availability or across metro distances, with optional encryption. That’s powerful HA for free that you may not get with other HCI products.

Other advantages of the new Azure Stack HCI include a set-up wizard within WAC and integrated billing with Azure cloud. Did I mention the licensing is per core, per month, and gets bundled into your existing Azure account? No need to track separate licenses, CALs, etc.

HPE and Azure Stack HCI

This is an important time for our Azure Stack HCI on HPE Apollo 4200 solution in that it’s gotten better than ever – thanks to newly qualified drives, it now accommodates over twice the data capacity! The Azure Stack HCI on HPE Apollo solution was already the best solution for ‘data-centric’ workloads – where you need high capacity per node to run workloads like SQL Server, Exchange, analytics, secondary storage, and the like. But now, with support for even larger-capacity media, it can accommodate 829 terabytes of data within its highly space-efficient 2U footprint, and almost 17 PB in a single rack.
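The per-rack figure is easy to sanity-check, assuming a standard 42U rack with every slot filled (reserving a few U for networking gear brings the total back toward the “almost 17 PB” figure):

```python
tb_per_node = 829          # data capacity per 2U node (from the text)
node_height_u = 2
rack_height_u = 42         # assumes a standard 42U rack, fully populated

nodes_per_rack = rack_height_u // node_height_u           # 21 nodes
rack_capacity_pb = nodes_per_rack * tb_per_node / 1000    # ~17.4 PB
print(f"{nodes_per_rack} nodes per rack -> {rack_capacity_pb:.1f} PB")
```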

If you need more performance or flexibility in your Azure Stack HCI architecture, then look no further than a solution built on HPE ProLiant servers. Or if you prefer the entire offering – software and hardware – as a service, then HPE has that coming soon in an HPE GreenLake offering.

With HPE you’ll find the same broad portfolio of solutions around this HCI product, including our own Azure Stack HCI on HPE Apollo 4200 solution. Microsoft just updated the new program catalog, so you’ll find only a few of our HPE validated solutions listed as of today, but that list will re-expand over the coming days.

Next steps

So whether you’re with an enterprise looking for a 60-node deployment in your datacenter, or an organization that just wants to try things out with a 2-node cluster, you can start by taking a look at our current portfolio of HPE solutions for Azure Stack HCI online here. Info specifically on the Azure Stack HCI on HPE Apollo solution is here. And I ask you to engage with us as we develop and evolve our portfolio based on customer needs and requests – post a related question or comment in the space below.

Data Storage Business Analyst Internship

HPE is about accelerating business outcomes with comprehensive solutions, from edge to cloud. And the Solutions team within the Storage business unit leads the company’s effort to create valuable ways to use, integrate and automate our intelligent storage products across leading business workloads and hybrid cloud use cases.

This summer internship with the Storage Solutions team will comprise a cross-functional challenge and a tangible project outcome. The role will require applying your business, strategic, and analytical thinking to develop a business case and high-level technical requirements, working in conjunction with a technically aligned intern who will manage the corresponding architecture, design, development, and coding. By the end of the summer, you can expect to have developed important collaboration skills and the ability to work more effectively with engineers and those who develop technologies to extract value from data.

Responsibilities

· Drive the development of a Solution Dashboard, from concept to requirements.

· Identify and remain centered on strategic goals and business-oriented project objectives.

· Interface with key stakeholders including Solution Managers, Program Managers and Solution Engineering. Gather priorities and requirements (functionality, usability, etc.)

· Leverage the knowledge of what exists for related dashboards, what has been done in the past, as well as related business systems and data sources (content management systems, online libraries, web repositories), in designing your project.

· The outcome should be a web-based dashboard that tracks Solution status in terms of key activities and collateral by using visualizations and metrics. It should also be dynamic in how data can be filtered by level of user, as well as providing the ability to drill down from high level status views to specific details and actual resources.

· Project includes a presentation within a company-wide end-of the-summer Intern Showcase.


Qualifications

· Currently pursuing an undergraduate degree in Business or related interdisciplinary study such as Informatics.

· Junior-year student (3rd year of a 4-year program).

· Located in the San Francisco Bay Area (no relocation).

·  Demonstrated ability to apply an analytical approach to business, driven by the intelligent use of data.

· Interest in conceiving of and managing technology offerings that solve business problems.

· Strong written and verbal communication skills and ability to excel within an open and collaborative team culture.

The internship will run from June to August, 2021.

Contact me if you’re interested: michael.harding <at> hpe.com

HPE Storage Solutions let the good times roll at Microsoft Ignite with expert talks, new offerings

Though I’d rather be writing this blog over a coffee and beignet in New Orleans, I’m still thrilled to be sharing the news about our HPE Storage Solutions for Microsoft, and specifically what’s new for the Microsoft Ignite audience. As we all know, this year’s Microsoft Ignite is taking place on the Web rather than in the Crescent City, but our Solution team is still treating it like it’s our annual “Mardi Gras”. With that in mind, here’s what’s new for our HPE Storage Solutions for Microsoft. Let the good times roll!

Throw me somethin’, Mista!
HPE is an official sponsor of this year’s Microsoft Ignite show, which runs September 22-24. Along with the presence within the online venue, HPE has created its own web presence, our “Virtual Booth”, to supplement things and host all the related activities and content we have available. This pandemic has accelerated digital transformation across the company’s customer base, and in response HPE is stepping up the number of solutions to help these organizations achieve new ways of working and serving their end customers, especially in the cloud.

A key part of the HPE-at-Ignite presence will be a series of Expert Videos. Storage team experts will deliver a few of these sessions, covering topics from a new SQL Server solution built on a newly re-engineered storage platform, to hybrid cloud data expansion, to Big Data HCI.

The HPE Microsoft Ignite 2020 Virtual Booth landing page is the place to watch these video sessions, as well as get the latest Microsoft Storage Solution resources including technical whitepapers, solution briefs and other online assets.

Starting with Microsoft Ignite week and beyond, we’re also rolling out more expert video content that couldn’t fit on the landing page. These sessions are being delivered through the HPE Storage BrightTALK channel and span topics from SQL Server Big Data Clusters to Storage Class Memory, highly available Windows File Services, hybrid cloud data mobility, and Microsoft VDI. These sessions are supported by our strategic partner, Intel.

Who Dat HPE MSA Gen 6?
The company recently broke the news on the newly updated HPE Modular Smart Array (MSA) storage, which features a new architecture, ASIC, chipsets, and health monitoring capabilities. The product team shared the key details in the HPE MSA Gen6 announcement for this storage product, which reaches new levels of ease of use plus price/performance. I’d like to highlight what it means for our Microsoft storage solutions, which I covered in more detail in the SQL Server on HPE MSA solution release blog.

The higher levels of performance from the Gen6 platform will translate directly into more transactions for SQL Server databases, more apps hosted across your Hyper-V environment, and even greater Microsoft workload consolidation onto a single array. The new Tiering 2.0 algorithm alone delivers up to 45% more app acceleration than in the previous generation system.

Other enhancements will increase workload availability, such as MSA-DP+, an advanced RAID-based data protection feature which protects data and enables faster rebuilds, and the Health Check tool which makes it easy to help ensure optimal system operations.

HPE MSA storage has traditionally been used for entry-level and departmental SQL Server database environments. We expect this to continue, especially where organizations require on-premises control of their data and the hands-on ability to ensure service levels. There are also new realizations shared on a Solution team member’s blog regarding the benefits of a single RAID-protected storage array versus having to maintain multi-node clusters of HCI systems for the same workload and data capacity.

Innovation that takes the (King) cake
The solution development hasn’t stopped at HPE since the last Microsoft Ignite event. The Storage krewe has led a parade of new offerings to meet the needs of our broad Microsoft workload customer base:

Nimble Storage Extender for Azure Stack Hub – Need more data capacity for your Azure Stack Hub but don’t want to buy a whole new one? The HPE Storage Extender for Azure Stack Hub brings flexibility to the tightly defined Azure Stack Hub architecture, letting you just expand data capacity plus get the benefits of enterprise SAN data services.

Windows Admin Center (WAC) Extensions – We’ve rolled out a number of WAC Extensions to expand the visibility and manageability of our storage products within the WAC dashboard. These include Windows and Azure Stack HCI extensions for the HPE Apollo 4200, as well as Storage Extensions for the HPE Primera and HPE 3PAR storage platforms.

HPE InfoSight for Hyper-V – A breakthrough new HPE Storage AI capability that brings Windows Hyper-V VMs to parity with the cross-stack analytics previously available only for VMware ESX environments.

Accelerated SQL Server on HPE Nimble Storage – Takes a new NVMe SSD caching approach to increase performance for OLTP and other demanding apps. Lab-verified benefits of Storage Class Memory powered by Intel Optane SSDs show back-end latency reduced by as much as 50%. A similar lab-validated solution is also available for HPE 3PAR storage.

SharePoint 2019 & Skype for Business Server 2019 on HPE Storage – Upgrade your SharePoint and Skype environments on-premises to the latest Server 2019 versions to modernize your infrastructure and take advantage of the latest in content collaboration and portal technologies.

SQL Server Big Data Clusters (BDC) – A single scale-out solution for both relational and Big Data, built on HPE Storage and the latest data center technologies including containers and Kubernetes. Manage more data from across the enterprise with your existing SQL Server tools and expertise.

In addition to these HPE Storage Solutions for Microsoft releases over the past year, there are even more in progress, including expanded testing and technical publications for the SQL Server BDC PolyBase feature, SQL Server leveraging HPE Primera storage enhancements, and SQL Server realizing new levels of performance with persistent memory (PMEM).

Get (the party) started
So if you haven’t already, definitely get registered for Microsoft Ignite. Then mix yourself a Hurricane or Sazerac, and go straight to the HPE Microsoft Ignite event Virtual Booth site to join the party with all our event activities, on-demand content and technical resources.

And don’t forget our Storage news of the show: how the new HPE MSA Gen6 storage can be the life of your party for Microsoft workloads – with more application performance, yet simplified management that fits any IT budget. Learn more at the SQL Server 2019 on the new HPE MSA Gen6 solution blog.

New SQL Server Big Data Clusters solution takes the stage at HPE Virtual Discover Experience

The HPE Discover Virtual Experience starts Tuesday, June 23rd, when tens of thousands of people will join online to learn about new technologies to transform their businesses, such as intelligent edge, hybrid cloud, IoT, exascale computing, and much more. Our team will be part of this event, showing off our newest solution for Microsoft SQL Server Big Data Clusters running in the HPE Container Platform. Here’s the direct link for session and speaker information, or search for us once you’ve registered and join the event – we’re session “D139”.

The inspiration for our work is that data growth is taking off like a rocket, and in that spirit the HPE Storage team staged our approach to the new enterprise database capability from Microsoft: SQL Server 2019 Big Data Clusters. We lifted off with an initial enterprise-grade solution for SQL Server Big Data Clusters (BDC), and laid in a course for more features, capabilities, and scale. As introduced in previous blogs, SQL Server BDC uses a new architecture that combines the SQL Server database engine, Spark, and Hadoop Distributed File System (HDFS) into a unified data platform.

Caption: Microsoft SQL Server 2019 features the new Big Data Clusters capability

This approach escapes the gravitational constraints of traditional relational databases, with the ability to read, write, and process big data from traditional SQL or Spark engines, letting organizations combine and analyze high-value relational data along with high-volume big data, all within their familiar SQL Server environment. Our first-stage effort includes an initial implementation guide, collateral, and a number of related activities, including a live demo in this year’s HPE Discover Virtual Experience.

Following soon will be ‘stage 2’, where we’ll publish technical guidance on deploying your own BDC that takes advantage of data virtualization, also known as the PolyBase feature. PolyBase lets you virtualize and query other data sources from within SQL Server without having to copy and convert the outside data. It eliminates the time and expense of traditional extract, transform, and load (ETL) cycles and, perhaps more importantly, lets organizations leverage existing SQL Server expertise and tools to extract the value of third-party data sources from across the organizational data estate, such as NoSQL, Oracle, and HDFS, to name just a few.
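To give a flavor of the pattern, here’s a hedged sketch of PolyBase-style data virtualization issued through Python’s pyodbc: an external table is defined over a remote Oracle source, then joined against local SQL Server rows with no ETL copy step. All object names, the connection details, and the join columns are hypothetical, and the sketch assumes an external data source (here called OracleSales) and its credential were already created on the instance.

```python
import pyodbc  # connection details below are placeholders

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=sales;Trusted_Connection=yes;", autocommit=True,
)
# Define an external table over the remote Oracle source (names hypothetical;
# assumes CREATE EXTERNAL DATA SOURCE OracleSales was run beforehand).
conn.execute("""
    CREATE EXTERNAL TABLE dbo.RemoteOrders (
        order_id INT, amount DECIMAL(10, 2)
    ) WITH (LOCATION = 'XE.SALES.ORDERS', DATA_SOURCE = OracleSales)
""")
# Join remote Oracle rows with local SQL Server data -- no ETL copy step.
for row in conn.execute("""
    SELECT TOP 5 c.name, r.amount
    FROM dbo.Customers AS c
    JOIN dbo.RemoteOrders AS r ON r.order_id = c.last_order_id
"""):
    print(row)
```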

The last stage of this mission will add HPE Apollo 4200 storage systems for a cost-effective storage pool, especially for larger BDC deployments in the petabytes.

Info on our overall SQL Server BDC solution is available online in the new solution brief.

Putting BDC boots on the moon

There are a number of key considerations for deploying your own SQL Server BDC. It’s going to be a very different environment than what you may be familiar with for traditional Microsoft SQL Server. Rather than a Windows environment, with or without VMs, BDC requires the use of containers and runs on Linux, and the architecture will contain a number of possibly new technologies for traditional IT teams: Kubernetes, Apache Spark, Hadoop Distributed File System (HDFS), Kibana, and Grafana.

Caption: Azure Data Studio showing a dashboard for a Big Data Cluster

Many companies have begun to use Kubernetes as an efficient way to deploy and scale applications. It’s often referenced as a key part of a typical Continuous Integration and Continuous Deployment (CI/CD) process. And one survey puts the number at 78% of respondents using Kubernetes in production[1]. So bringing Kubernetes to SQL Server may be a timely way to merge a couple areas of significant investment for companies: traditional RDBMS and the evolving DevOps space.

Another unique feature of this solution is container management. Our initial technical guidance includes the use of the HPE Container Platform, which provides a multi-tenant, multi-cluster management infrastructure for Kubernetes (K8s). Creating a highly available K8s cluster is as easy as importing the hosts into the platform and defining master/worker roles. In addition, the platform simplifies persistent access to data with the integration of Container Storage Interface (CSI) drivers. This makes connecting with HPE storage easy, not only providing persistent volumes, but also enabling access to valuable array-based resources such as encryption and data protection features like snapshots. The latest HPE CSI package supports HPE Primera storage, HPE Nimble Storage, and HPE 3PAR storage.
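As an illustration of what “persistent volumes via CSI” looks like from the Kubernetes side, here’s a minimal sketch using the official Python Kubernetes client to request a claim against a CSI-backed storage class. The storage class name, namespace, and claim name are hypothetical; use whatever your HPE CSI driver installation actually defines.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig

# Request a persistent volume from a CSI-backed storage class.
# 'hpe-standard' and 'mssql-cluster' are hypothetical names.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="bdc-master-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="hpe-standard",
        resources=client.V1ResourceRequirements(
            requests={"storage": "500Gi"}
        ),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="mssql-cluster", body=pvc
)
```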

Key components of the initial solution include:

  • Microsoft SQL Server 2019 Big Data Clusters
  • HPE ProLiant DL380 Gen10 servers
  • CentOS Linux—a community-driven, open source Linux distribution
  • HPE Nimble Storage arrays for the master instance to provide integrated persistent storage
  • HPE Container Storage Interface (CSI) driver
  • Kubernetes to automate deployment, scaling, and operations of containers across clusters of hosts
  • HPE Container Platform for the deployment and management for Kubernetes clusters (optional)
  • HPE MapR as an integrated, persistent data store (optional)

Why HPE Storage for Big Data Clusters

Caption: HPE Nimble Storage provides highly available persistent container storage for the BDC Master Instance

The partnership of Microsoft and HPE stretches back to the same time that the Hubble Space Telescope was launched, about 30 years ago. This heritage of testing and co-development has helped ensure optimal performance for Microsoft business software on HPE hardware. Other important reasons to choose HPE for your BDC deployment:

  • HPE developed a standards-compliant CSI driver for Kubernetes to simplify storage integration.
  • HPE developed the HPE Container platform, providing the most advanced and secure Kubernetes-compatible container platform on the market.
  • HPE owns MapR, an established leading technology for big data management — now incorporated within the HPE Data Fabric offering — and another key part of the solution that helps span data management from on-premises to the cloud.
  • Finally, HPE has a complete continuum of SQL Server solutions in the market based on HPE Storage – from departmental databases to consolidated application environments, and from storage class memory-accelerated databases to the most mission-critical scale-up environments. Adding BDC provides yet another option – now for scale-out data lakes – to customers who rely on HPE as a trusted end-to-end solution partner.

Get started

The HPE Storage with Microsoft SQL Server Big Data Clusters solution is available today. An initial reference architecture delivers the benefits of scale-out SQL Server on HPE Nimble enterprise-class data storage with the newest container management capability using the HPE Container Platform.

The HPE Storage with Microsoft SQL Server Big Data Clusters solution is a safe first step for your IT team, but a giant leap forward for your organization to derive the most business value from its data estate, regardless of whether it’s relational, unstructured, on-premises, or in the cloud.

Learn more about HPE Storage solutions for Microsoft and see us live at the HPE Virtual Discover Experience.

Are you struggling to manage more data, and more types of data from across the enterprise? Start your mission to manage your entire data estate with existing SQL Server expertise.  Read the new implementation guide: How to deploy Microsoft SQL Server 2019 Big Data Clusters on Kubernetes and HPE Nimble Storage.