Storage Solution Predictions for 2023

Last year our team had some fun with a Top 10 list of predictions, and we actually did pretty well. So this year we wanted to push ourselves a little, with more wide-ranging prognostications. The common thread, though, is still topics relevant to those of us delivering tech offerings across a global market, subject to all the economic, technological and political vagaries entailed therein. Here’s what our crystal ball tells us for 2023…

  1. Azure tops AWS – Microsoft continues to drive their software installed base to Azure. SQL Server 2022 is the latest to be updated, but not to parity with fuller-featured Azure-based alternatives. This deprecation process has been underway with Office, Skype, SharePoint, Exchange, etc. Microsoft Cloud revenues should already be passing $100B, which would seem to top AWS. Then add maybe another $60B worth of annualized installed-base software ‘lifting & shifting’ to SaaS, and we should see a pronounced crossover sooner than expected.
  2. AWS acknowledged as the Fidelity of IT infrastructure – Fidelity Investments didn’t invent the mutual fund (MFS did, in 1924), but despite starting more than 20 years later, Fidelity rocketed to the top of that business by proliferating a broad family of actively managed funds, some led by the investment superstars of their day, growing to 40 million investors and $10T in assets. Similarly, AWS changed the infrastructure game, not just by delivering cloud-based infrastructure, but by a proliferation of offerings – over 200 total, including 60+ different EC2 instance types alone. (The question now – who will surpass AWS as the Vanguard of infra? Who’s the next BlackRock?)
  3. War fears subside – US-China relations will warm, with attention on Taiwan waning, China Covid lockdowns ending, and Asian supply chains freeing up. All involved will get back to the business of business. Similarly in Europe, the Ukrainian conflict will move towards a negotiated peace or stalemate. As the continent proves it can survive with less Russian energy, life there will stabilize and impacted markets will regain equilibrium. 
  4. Chip war heats up – Despite a US pledge of billions for domestic chip capacity, the reality of new design and fab lead-times measured in years will keep the US dependent on non-US processors – a positive for global trade and relations, but a continued concern for those worried about US reliance on foreign semiconductors.
  5. AI innovation jolts a domestic industry – We’ve become familiar with Google maps, Alexa, Roombas, and the occasional glimpse of a self-driving (though still not yet driverless) vehicle. But we’re due for some new ‘killer app’ (and I’m not talking about kamikaze drones, I hope) finding its way into one or more major US industries. With supply chain and geo-political strains not yet completely behind us, the stage is set for a significant disruption to a large, legacy business, due to a game-changing intelligent automation in 2023.
  6. Workers return – Whether it was the social isolation, threats from the boss, or the tug of ‘cake day’, workers young and old will return to their cube farms. Some productivity will increase, but probably at the expense of innovation. Hybrid work becomes the norm, however, and companies continue a facilities evolution – hoteling cubes, storage cubbies, more transient/social space (fewer dedicated offices) – along with heavy investment in related tech: cloud-based office apps, VDI, endpoint security, and related networking upgrades. Also expect adoption of the latest video, telepresence and ‘metaverse’ tech to get the most out of meetings that will now often be a mix of local and remote attendees.
  7. Teams meta-disrupted – Microsoft Teams is getting a lot more use in a post-pandemic world, but it’s got the design appeal of a 1970s bathroom. Yet we know more usable tools are possible, such as Slack. There’s a pent-up supply of ‘metaverse’-enabling tech (AR, VR, gesture, voice, wearables…) – and those related vendors are itching to find valuable use cases. Either as a Teams add-on or replacement, there’s the opportunity for someone to create the ‘Apple watch’ of desktop video collaboration.
  8. IT mega deal – There have been a lot of ‘tuck-in’ acquisitions by IT leaders over the past few years, but the current wave of recession fear and stock dips will be enough to make the numbers work for at least one enterprise IT mega deal in 2023. Look for a marriage of convenience between a couple of companies that have not been able to evolve their business models to be sufficiently cloud-centric.
  9. Edge app pwned – As more apps and data capacity move to ‘the edge’ (e.g. IoT devices, Point-of-Sale/Service systems, telco network locations) we can expect at least one significant hack in the coming year that will proliferate across a compromised edge and significantly impact a regional or maybe even global user base.
  10. Data mining becomes a resource biz – We’ve heard the expression that “data is the new oil”, but we have yet to see the first “Standard Oil” of data. There are data mining and list companies out there, like Sisense and Acxiom. But they are still relatively small, and there haven’t been any rapacious moves to roll up competitors, vertically integrate, or otherwise aggressively build a dominant data monopoly. But given the growing value of data, especially for training hungry ML apps, I’m expecting an ambitious actor to make a move in 2023.

Any comments – positive, negative or otherwise – are appreciated. Or let’s check back again this time next year to see how we did.


New SQL Server Big Data Clusters solution takes the stage at HPE Virtual Discover Experience

The HPE Discover Virtual Experience starts Tuesday, June 23rd, when tens of thousands of people will join online to learn about new technologies to transform their businesses, such as intelligent edge, hybrid cloud, IoT, exascale computing and much more. Our team will be part of this event, showing off our newest solution for Microsoft SQL Server Big Data Clusters running on the HPE Container Platform. Here’s the direct link for session and speaker information, or look us up once you’ve registered and join the event — we’re session “D139”.

The inspiration for our work is that data growth is taking off like a rocket, and in that spirit the HPE Storage team staged our approach to the new enterprise database capability from Microsoft: SQL Server 2019 Big Data Clusters. We lifted off with an initial enterprise-grade solution for SQL Server Big Data Clusters (BDC), and laid in a course for more features, capabilities and scale. As introduced in previous blogs, SQL Server BDC uses a new architecture that combines the SQL Server database engine, Spark and the Hadoop Distributed File System (HDFS) into a unified data platform.

Microsoft SQL Server 2019 features new Big Data Clusters capability

This approach escapes the gravitational constraints of traditional relational databases: it can read, write, and process big data from traditional SQL or Spark engines, letting organizations combine and analyze high-value relational data along with high-volume big data, all within their familiar SQL Server environment. Our first-stage effort includes an initial implementation guide, collateral, and a number of related activities, including a live demo in this year’s HPE Discover Virtual Experience.

Following soon will be ‘stage 2’, where we’ll publish technical guidance on deploying your own BDC that takes advantage of data virtualization, also known as the PolyBase feature. PolyBase lets you virtualize and query other data sources from within SQL Server without having to copy and convert that outside data. It eliminates the time and expense of traditional extract, transform, and load (ETL) cycles and, perhaps more importantly, lets organizations leverage existing SQL Server expertise and tools to extract the value of third-party data sources from across the organizational data estate, such as NoSQL, Oracle, and HDFS, to name just a few.
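To give a flavor of what this looks like in practice — as an illustrative sketch only, with all object names, hosts and columns hypothetical rather than taken from our guide — PolyBase exposes an outside source (here, an Oracle system) as an external table that can then be joined like any local table:

```sql
-- Illustrative sketch only. Assumes a database scoped credential
-- (OracleCredential) has already been created for the remote system.
CREATE EXTERNAL DATA SOURCE OracleSales
WITH (LOCATION = 'oracle://oraclehost:1521', CREDENTIAL = OracleCredential);

-- Map a remote Oracle table into SQL Server; no data is copied.
CREATE EXTERNAL TABLE dbo.OrdersExternal
(
    OrderID     INT,
    CustomerID  INT,
    Amount      DECIMAL(10, 2)
)
WITH (LOCATION = 'XE.SALES.ORDERS', DATA_SOURCE = OracleSales);

-- Join remote Oracle rows with local SQL Server data; no ETL cycle needed.
SELECT c.CustomerName, SUM(o.Amount) AS Total
FROM dbo.Customers AS c
JOIN dbo.OrdersExternal AS o ON o.CustomerID = c.CustomerID
GROUP BY c.CustomerName;
```

The point is that existing T-SQL skills and tooling carry over unchanged; the external table behaves like any other table in the query.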

The last stage of this mission will add HPE Apollo 4200 storage systems for a cost-effective storage pool, especially for larger BDC deployments in the petabytes.

Info on our overall SQL Server BDC solution is available online in the new solution brief.

Putting BDC boots on the moon

There are a number of key considerations for deploying your own SQL Server BDC. It’s going to be a very different environment than what you may be familiar with for traditional Microsoft SQL Server. Rather than a Windows environment, with or without VMs, BDC requires the use of containers and runs on Linux, and the architecture will contain a number of technologies that may be new to traditional IT teams: Kubernetes, Apache Spark, the Hadoop Distributed File System (HDFS), Kibana and Grafana.

Azure Data Studio showing a dashboard for a Big Data Cluster

Many companies have begun to use Kubernetes as an efficient way to deploy and scale applications. It’s often referenced as a key part of a typical Continuous Integration and Continuous Deployment (CI/CD) process, and one survey puts the number at 78% of respondents using Kubernetes in production[1]. So bringing Kubernetes to SQL Server may be a timely way to merge a couple of areas of significant investment for companies: the traditional RDBMS and the evolving DevOps space.
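As a minimal sketch of what "SQL Server on Kubernetes" means in practice — all names, the image tag and sizes below are illustrative, not from our reference architecture — a database instance is declared as a StatefulSet with a persistent volume claim, and Kubernetes takes care of scheduling it and keeping it running:

```yaml
# Hypothetical example: one SQL Server container managed by Kubernetes.
# In a real deployment an SA password secret and a Service are also required.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql
spec:
  serviceName: mssql
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2019-latest
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        volumeMounts:
        - name: mssqldata
          mountPath: /var/opt/mssql   # database files live on the persistent volume
  volumeClaimTemplates:
  - metadata:
      name: mssqldata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
```

A full BDC deployment is driven by Microsoft’s tooling rather than hand-written manifests, but the underlying building blocks are declarations like this one.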

Another unique feature of this solution is container management. Our initial technical guidance includes the use of the HPE Container Platform, which provides a multi-tenant, multi-cluster management infrastructure for Kubernetes (K8s). Creating a highly available K8s cluster is as easy as importing the hosts into the platform and defining master/worker roles. In addition, it simplifies persistent access to data with the integration of Container Storage Interface (CSI) drivers. This makes connecting with HPE storage easy, not only providing persistent volumes, but enabling access to valuable array-based resources such as encryption and data protection features like snapshots. The latest HPE CSI package supports HPE Primera storage, HPE Nimble storage and HPE 3PAR storage.
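To illustrate that CSI integration — a sketch only, where the class and claim names are hypothetical (the provisioner name `csi.hpe.com` is the driver’s published identifier, but check the HPE CSI documentation for current parameters) — a StorageClass points at the CSI driver, and workloads then request array-backed volumes through ordinary claims:

```yaml
# Hypothetical sketch: a StorageClass backed by the HPE CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-nimble
provisioner: csi.hpe.com
reclaimPolicy: Delete
allowVolumeExpansion: true
---
# A claim that Kubernetes satisfies from the class above; the driver
# carves a persistent volume out of the array automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bdc-master-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: hpe-nimble
```

From the application side nothing is HPE-specific: pods simply mount the claim, while snapshots, encryption and other array features are handled behind the driver.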

Key components of the initial solution include:

  • Microsoft SQL Server 2019 Big Data Clusters
  • HPE ProLiant DL380 Gen10 servers
  • CentOS Linux—a community-driven, open source Linux distribution
  • HPE Nimble Storage arrays for the master instance to provide integrated persistent storage
  • HPE Container Storage Interface (CSI) driver
  • Kubernetes to automate deployment, scaling, and operations of containers across clusters of hosts
  • HPE Container Platform for the deployment and management of Kubernetes clusters (optional)
  • HPE MapR as an integrated, persistent data store (optional)

Why HPE Storage for Big Data Clusters

HPE Nimble Storage provides high availability persistent container storage for the BDC Master Instance

The partnership of Microsoft and HPE stretches back to the same time that the Hubble Space Telescope was launched, about 30 years ago. This heritage of testing and co-development has helped ensure optimal performance for Microsoft business software on HPE hardware. Other important reasons to choose HPE for your BDC deployment:

  • HPE developed a standards-compliant CSI driver for Kubernetes to simplify storage integration.
  • HPE developed the HPE Container Platform, providing the most advanced and secure Kubernetes-compatible container platform on the market.
  • HPE owns MapR, an established leading technology for big data management — now incorporated within the HPE Data Fabric offering — and another key part of the solution that helps span data management from on-premises to the cloud.
  • Finally, HPE has had in the market a complete continuum of SQL Server solutions based on HPE Storage – from departmental databases to consolidated application environments, and from storage class memory accelerated to the most mission-critical scale-up databases. Adding BDC provides yet another option – now for scale-out data lakes – to customers who rely on HPE as a trusted end-to-end solution partner.

Get started

The HPE Storage with Microsoft SQL Server Big Data Clusters solution is available today. An initial reference architecture delivers the benefits of scale-out SQL Server on HPE Nimble enterprise-class data storage with the newest container management capability using the HPE Container Platform.

The HPE Storage with Microsoft SQL Server Big Data Clusters solution is a safe, first step for your IT team, but a giant leap forward for your organization to derive the most business value from its data estate, regardless of whether it’s relational, unstructured, on-premises or in the cloud.

Learn more about HPE Storage solutions for Microsoft and see us live at the HPE Virtual Discover Experience.

Are you struggling to manage more data, and more types of data from across the enterprise? Start your mission to manage your entire data estate with existing SQL Server expertise.  Read the new implementation guide: How to deploy Microsoft SQL Server 2019 Big Data Clusters on Kubernetes and HPE Nimble Storage.

HPE Brings Big Data to Hyperconverged Infrastructure with New Apollo Solution

If you were at Microsoft Ignite last month you may still have missed the launch of HPE’s latest hyperconverged infrastructure (HCI) solution: Microsoft Azure Stack HCI on HPE Apollo 4200 storage. It would be understandable, as Ignite was a major industry event packed with technology news, especially with lots of HPE show activity, including prominent HPE mainstage appearances for both Azure Stack and the new Azure Arc.
But among the new and enhanced solutions we demonstrated at the show, our presentations about Azure Stack HCI on HPE Apollo storage were well-received and timely, given the growing emphasis on HCI, hybrid cloud and all things software-defined. The key message for this solution is that it is pioneering a new area in software-defined HCI for Windows big data workloads, uniquely delivering the convenience of hyperconverged infrastructure on a high-capacity platform for the most data-intensive applications.

The emergence of Big Data HCI
We’ve all heard about the explosive growth of data, and that we’re in an age of zettabytes. IDC made a specific prediction, that by 2024, just data created from AI, IoT and smart devices will exceed 110 zettabytes (source: IDC FutureScape: Worldwide Cloud Predictions 2020).
At the same time, organizations are trying to simplify their IT infrastructures to reduce cost, complexity and the need for specialized expertise. The conflict is that the applications required to harvest this explosion of data can be among the most demanding in terms of performance and management. I’m seeing that companies – even the largest, most capable enterprises – are recognizing the value of easy-to-use hyperconverged infrastructure to alleviate some of the strain of delivering these demanding, data-centric workloads.
Azure Stack HCI on HPE Apollo 4200 storage is a new solution that addresses the needs of the growing “Big Data HCI” customer. Azure Stack HCI on HPE Apollo is built on the highest capacity Azure Stack HCI qualified 2U server, bringing an unmatched ability to serve big data workloads on a compact Windows software-defined HCI appliance.

HPE Apollo HCI solution key components
Azure Stack HCI is Microsoft’s software-defined HCI solution that pairs Windows Server 2019, Hyper-V, Storage Spaces Direct, and Windows Admin Center management, along with partner x86 hardware. It is used to run Windows and Linux VMs on-premises and at the edge with existing IT skills and tools.
Azure Stack HCI is a convenient way to realize benefits of Hybrid IT, because it makes it easy to leverage the cloud-based capabilities of the Microsoft Azure cloud. These cloud-based data services include: Azure Site Recovery, Azure Monitor, Cloud Witness, Azure Backup, Azure Update Management, Azure Network Adapter, and Azure Security Center to name a few.
The Azure Stack HCI solution program includes Microsoft-led validation for hardware, which ensures optimal performance and reliability for the solution. This testing extends to technologies such as NVMe drives, persistent memory, and remote-direct memory access (RDMA) networking. Customers are directed to use only Microsoft-validated hardware systems when deploying their Azure Stack HCI production environments.

HPE Apollo 4200 Gen 10 – largest capacity 2U Azure Stack HCI system

HPE Apollo 4200 Gen10 Server – leading capacity/throughput for Windows HCI
The HPE Apollo 4200 Gen10 server delivers leading scale and throughput for Azure Stack HCI. The HPE Apollo 4200 storage system can accommodate 392 TB of data capacity within just a 2U form-factor. This leads all other Azure Stack HCI validated 2U solutions, as seen in the Microsoft Azure Stack HCI catalog (Microsoft.com/HCI). In addition, the HPE Apollo storage system is a leader in bandwidth, supporting 100Gb Ethernet and 200Gb InfiniBand options. Customers are already running large-scale, data-centric applications such as Microsoft Exchange on HPE Apollo systems, and can now add Azure Stack HCI as a means to simplify the infrastructure stack, while preserving performance and the space-efficient 2U footprint.
The HPE Apollo Gen10 system is future-proofed with Intel Cascade Lake processors for more cores and faster processing, along with memory enhancements and support for NVMe storage. The HPE Apollo systems leverage a big data and high-performance computing heritage, and have an established Global 500 customer track record.

Azure Stack HCI on HPE Apollo solution – more than just hardware
The HPE Apollo 4200 system is at the core of this Microsoft software-defined HCI solution, but there’s much more to the solution. HPE solution engineering teams perform testing on all solution designs, and publish technical whitepapers to provide guidance on implementation, administration, and performance optimization, for example the recent Microsoft Windows Server 2019 on HPE Apollo 4200 implementation guide. HPE also trains authorized reseller partners to help assure fast, successful deployments and fast time-to-solution for customers.
Windows Admin Center (WAC) is becoming the new standard interface for Windows system management. HPE is developing Extensions for WAC that will make it easier to manage HPE Apollo systems within Windows Server 2019 environments as well as specifically within Azure Stack HCI clusters.
As an HPE Storage solution, customers also enjoy high availability through HPE InfoSight predictive analytics that deliver the uptime benefits of AI to the datacenter.

Get started with HPE Apollo HCI
The Azure Stack HCI on HPE Apollo solution is available today. It’s the largest-capacity 2U Azure Stack HCI validated solution available, and has been officially qualified for all-flash, hybrid SAS SSD, and NVMe configurations, providing options for affordable and high-performance data storage.
The Azure Stack HCI on HPE Apollo solution is the go-to choice for analytics and data-centric Windows workloads. Get easy-to-manage infrastructure with native Microsoft Windows administration. Published technical guidance, including whitepapers and related resources, is available with the solution, with WAC extensions on the way.
The launch webinar was recorded and is available on demand – watch it to learn more:
https://www.brighttalk.com/webcast/16289/374384/simplify-your-big-data-infrastructure-with-azure-stack-hci-on-hpe-apollo