Deeper Dive inside Exchange on Apollo

Our solution engineers are constantly configuring, testing, benchmarking, and documenting our Microsoft solutions, providing vital guidance to our customers and resellers so that they can deploy quickly and confidently. I had a moment to go back over some recent work we did on our Microsoft Exchange solution, and I wanted to share some highlights that are easy to miss if you haven't had time to read through these detailed technical publications.

The widening orbit of Apollo

First off, the HPE Apollo 4200 Gen10 has been our go-to storage server for some time. A key offering in the HPE Storage product line, it serves as the company's lead platform for software-defined storage solutions. It shares a heritage with the Apollo HPC compute systems (e.g. the model 2000s and 6000s) around HPC storage and big-data usage. Storage use has since evolved to include object storage, scale-out file systems, backup, archive, and other data-intensive workloads. Of course, Microsoft Exchange is the granddaddy of business email and a classic data-intensive application, especially if you're regularly sharing multi-megabyte PowerPoint decks.

As we continue to build out our Microsoft solution business, we've hit upon a powerful use case for Apollo as a leading platform for Azure Stack HCI. You'll see more news and innovation from us in this area in the coming months.

Innovative design provides some of the highest storage density in a 2U system

HPE Apollo 4200 Gen10 server with Exchange 2016 ESRP
Our team undertook testing based on Microsoft's benchmark and certification program known as the Exchange Solution Reviewed Program (ESRP). The program combines a storage testing harness (Jetstress) with solution publishing guidelines, and has been used by storage OEMs for over a decade as a standard testing framework. Customers and resellers have been eager consumers of the publications as a way to compare results across vendors and to help design their Exchange storage architectures and deployments.

This specific testing used the HPE Apollo 4200 Gen10 storage system, which is a 2U server available either in a 24 large form-factor (LFF) drive configuration or a 48 small form-factor (SFF) drive configuration.

The remarkable feature of the 4200 is that the drive cages use two trays; the second tray is accessed by simply pulling the front tray forward on internal rails, which can be done while the server is operational. This unique chassis design produces probably the highest storage density available in a 2U server in the industry.

The bigger brains of the Apollo 4200 Gen10
The HPE Apollo 4200 Gen10 contains 16 DIMM slots, which support up to 1TB of RDIMM (registered) or 2TB of LRDIMM (load-reduced) memory. Individual DIMM capacity has doubled in this latest platform update, going from 32GB to 64GB RDIMMs and from 64GB to 128GB LRDIMMs.

The CPUs have been upgraded to next-gen Intel® Xeon® Scalable processors, the same ones designed for AI/deep learning workloads, and future-proofed with support for Intel Optane PMEM. The upgraded CPU enables an increase from 24 to 28 cores, with the clock speed increasing from 2.1 to 2.3GHz.
There are 5 PCIe slots accessible from the rear of the unit. The x16 slots support 100GbE Ethernet connections, and the x24 slots support up to 6 NVMe SSDs.

Get flexible with the Apollo
Another feature of the HPE Apollo 4200 Gen10 is its flexibility. Multiple configurations are possible, depending on capacity and performance needs, making this a valuable platform for Exchange deployments of any size.

The rear of the chassis alone can be reconfigured in a number of ways:

  1. Five low-profile PCIe slots with two processors
  2. Four LFF rear drive cages, along with the five low-profile PCIe slots and two processors
  3. Two SFF rear drive cages plus two full-height, half-length risers, along with the five low-profile PCIe slots and two processors
  4. Six SFF rear NVMe cages, along with the five low-profile PCIe slots and two processors

The Ultimate Exchange Building Block
This HPE solution for Microsoft Exchange Server 2016 was designed using a building block approach, with a multi-copy database design using Exchange Database Availability Groups (DAGs). A DAG is a group of up to 16 mailbox servers that host a set of databases, providing automatic recovery from failures that affect databases or individual servers.

Designed as a building block for various size enterprise email needs

This solution used a single DAG across two building blocks, with two servers in the primary site and two servers in the secondary site, to support 4,000 mailboxes per building block with a 25 GB mailbox capacity and a messaging profile of 200 messages sent and received per user, per day. Using the building block approach, customers can scale their Exchange environment to a size that fits their needs.

The Microsoft Jetstress testing validated that the storage subsystem was capable of the IOPS needed to support the configured number of mailboxes as well as providing additional headroom for growth.
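The sizing arithmetic behind a building block is easy to sketch. The following Python snippet is a back-of-the-envelope illustration only: the per-mailbox IOPS figure and the 20% headroom are assumptions for a 200-message/day profile, not numbers taken from the whitepaper.

```python
# Rough Exchange building-block sizing sketch (illustrative assumptions,
# not the whitepaper's exact figures).
MAILBOXES_PER_BLOCK = 4_000   # from the tested building block
MAILBOX_QUOTA_GB = 25         # capacity per mailbox
IOPS_PER_MAILBOX = 0.12       # assumed for a 200 msg/day profile
HEADROOM = 0.20               # assumed 20% growth headroom

def block_requirements(blocks: int = 1):
    """Return (mailboxes, capacity in TB, IOPS with headroom)."""
    mailboxes = blocks * MAILBOXES_PER_BLOCK
    capacity_tb = mailboxes * MAILBOX_QUOTA_GB / 1024
    iops = mailboxes * IOPS_PER_MAILBOX * (1 + HEADROOM)
    return mailboxes, round(capacity_tb, 1), round(iops)

print(block_requirements(1))   # one building block
print(block_requirements(4))   # scaled out to four blocks
```

Scaling is then just a matter of adding building blocks until the mailbox count, capacity, and validated IOPS ceiling are all satisfied.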

Where to get the rest of the story
The HPE Apollo 4200 Gen10 with Exchange 2016 ESRP document is available online today. Entitled "HPE Apollo 4200 Gen10 Server 4000 mailbox resiliency Microsoft Exchange 2016 using 8 TB 7.2K large form factor drives", it is based on Microsoft's ESRP 4.0 testing framework. The technical whitepaper contains details on the tested performance results, configuration best practices, and product photos, and includes extensive Jetstress output.
The document can be downloaded here.

Office Space 2019 revisited

I was going to write a blog on the new office space my group is moving into, and my thoughts on it, and realized someone had already written about it. It's interesting how universal opinions on modern corporate open office design are. In our case the move was a bit more stark: we were exiting a start-up environment of camaraderie over free food, the intensity of a shared mission, tight collaboration across organizations, and even a garden with beehives(!), which has been reengineered into an open office design, with piped-in ambient water sounds, as part of a far-flung global organization.

“An open plan design will move the goalpost and give us low hanging fruit we can leverage 110%”

Granted, much has been written about open plan offices being better for collaborative work, and they may be better aligned with the habits of a younger workforce. The approach is certainly more space efficient, taking the traditional average space per worker of around 250 square feet down to 100 square feet or less with these high-density designs.

My take-away? Clearly, office design needs to map to business objectives the same way compensation programs, supply chains, financial strategy, and corporate development do. Being within the vortex of this change makes it challenging to be completely objective, but being a participant provides the credibility of directly experiencing the pros and cons. My view is that the open plan approach should be applied primarily to realize more efficient collaboration for integrative work-teams, not in cases where the workgroup process requires quiet attention to detail, thoughtful strategic thinking, or significant private or sensitive communications, to name a few.

Recent Developments in SQL Server Performance and Data Protection

Customers, partners, and members of the HPE Community converged on Las Vegas this past week for Discover 2019. It's the company's largest customer event, providing hands-on learning and training on HPE products as well as direct contact with members of the teams who make them.

One of the topics covered at the show was hybrid cloud, and how to keep important data protected and available. New performance statistics were released from recent testing, one of them relating to Microsoft SQL Server. This comes at a time when the company has released an update to Recovery Manager Central for SQL Server (RMC-S), which has been a popular means of ensuring application-consistent database copies.

HPE Storage solutions for MS SQL Server
HPE has been investing to deliver outstanding levels of both performance and manageability for SQL Server. First off, HPE data storage makes SQL Server faster. HPE has multiple lines of all-flash storage arrays: HPE 3PAR StoreServ, HPE Nimble Storage, and now the all-new HPE Primera, which let you run Microsoft SQL Server in production at the highest levels of performance and availability. With HPE flash storage you get reliably fast performance: up to 3.8 million IOPS (I/Os per second) on 3PAR, and about 2 million IOPS on a cluster of Nimble Storage all-flash arrays, all at consistent sub-millisecond latency.

Recently we added to this performance boost with the addition of Memory-Driven Flash. This is an all-flash array with an NVMe Storage Class Memory (SCM) cache. Recent testing with a SQL Server database on Memory-Driven Flash has shown a 59% decrease in latency (sec/read) compared to when SCM is disabled. Decreased latency translates to faster database query responses and a better user experience.

We've also invested to make SQL Server administration better. HPE delivers the most consistent availability with Intelligent Storage and HPE InfoSight, delivering 99.9999% guaranteed uptime. This means your data remains available, and in the rare event that there is an issue, the storage proactively alerts you to potential problems. HPE Storage comes with copy data management capabilities and extends them across your hybrid cloud. It's easy to create SQL-consistent snapshots or archival copies to Azure. HPE Storage arrays support data workflows and tasks that can manage live copies on- and off-premises for DR, test/dev, reporting, analytics, patching, and upgrading. And HPE Storage enhances SQL Server DevOps: we are the only vendor to offer rich data services for both Windows and Linux containers, and we interoperate with all the leading container platforms, including Docker, Kubernetes, and Mesos. HPE Storage was the first to provide persistent volumes for containers.

How Recovery Manager Central improves SQL Server data protection
HPE Recovery Manager Central software, and specifically the RMC-S plug-in, allows SQL Server administrators to protect SQL Server instances and databases with application-consistent recovery points. The snapshots are managed on HPE 3PAR storage, and you can use RMC-S with HPE StoreOnce, HPE Data Protector, Symantec NetBackup, or Symantec Backup Exec to protect and restore these snapshots.
Recent testing has shown that creating a database clone with RMC-S takes just 1 minute and 22 seconds, compared to the traditional and very manual process of creating a clone with SQL Server, which can take 45 minutes. The automated RMC process vastly outperforms the manual steps of creating a new database, copying the schema, and then inserting and verifying data within every individual table. Compared to the traditional approach, RMC-S lets you copy a database in one step, roughly 32 times faster.
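The quoted speedup is easy to sanity-check; this small Python snippet simply reproduces the arithmetic from the figures above:

```python
# Verify the quoted speedup: RMC-S clone at 1 min 22 s versus a
# manual ~45-minute SQL Server copy (both times are from the article).
rmc_seconds = 1 * 60 + 22          # 82 s
manual_seconds = 45 * 60           # 2,700 s
speedup = manual_seconds / rmc_seconds
print(f"~{speedup:.1f}x faster")   # prints "~32.9x faster"
```

Rounding down gives the "32 times faster" figure cited in the testing.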

Where to get more info
There’s much more information available on the advantages of HPE storage for Microsoft SQL Server, as well as for all the important Microsoft business workloads such as Windows Server, Exchange and newer solutions like Azure Stack and Azure Stack HCI. Here are some key resources below:

Webinar: Get more from your Data with HPE Microsoft Storage solutions
Reference Architecture: Microsoft SQL Server with HPE 3PAR Memory-Driven Flash
Blog: Cut SQL Server Latency in Half with Memory-Driven Flash Storage
HPE Storage Microsoft Solutions webpage:

Converged Graffiti

There’s a very wicked ’55 Chevy lookin’ for you

I was nostalgic when I read the story about VxBlock turning 10. I've been working in converged computing for some time, and helped grow the FlexPod business. So to acknowledge this passage of time is also to recognize just how old this space is, especially in technology terms (i.e. 'Internet dog years'). This product started as an entire freestanding business: the Acadia, then VCE, joint venture back in 2009. The article cites how EMC once had a full line of Vblocks, which evolved into VxRack, VxRail, and VxFlex among other offerings, but today this whole converged product line has been trimmed to one single model, the VxBlock 1000, because, as they say themselves, "they're outdated".

File that under C.S.

The converged market is in the throes of a metamorphic change. What was once the big story, popular certified reference and integrated systems like VxBlock and FlexPod revolutionizing the datacenter, is now shrinking and in decline, at about a third of the market and dropping over 6% annually. The overall space is still growing, but the data shows that the growth is all due to hyperconverged infrastructure (HCI).

FlexPod birthday cake

Then why celebrate? The same reason we celebrated on the FlexPod team. It's still an important milestone: it marks the passage of time and the accomplishment of another year of business, it's certainly about the camaraderie of those in the business ecosystem, and it's an appreciation that this offering continues to pay the bills for workers at Dell/EMC.

But is it a time to celebrate for IT customers?

If brains were dynamite you couldn’t blow your nose

Like the ending of American Graffiti, with the main character staring out the window thinking about what might have been, the folks at Dell must have been thinking longingly about the good ol' days as they cut into their VxBlock birthday cake. The converged market for them is a slow glide, riding out the themes of a bygone era of IT. Their website promotes "turnkey converged", "power", and "mission critical" like it's a '60s muscle car. But we're now solidly within the jet age of hyperconverged, software-defined infrastructure and cloud-based computing.

I love it when guys peel out

HCI is the new hot rod in town. It's overtaking all things converged, integrated, and "Stack/Block". HCI is the fastest growing segment in the infrastructure space, and has surpassed the legacy reference architecture and converged segments. It's currently on a 57% annual ascent and now accounts for almost half of an over $16B market. The customer has spoken: the convenience of a more compact form factor combining storage and compute is preferred over the proclaimed benefits of separate "best in class" components or the ability to scale storage resources separately from compute.

For info on an ideal x86 platform for software-defined HCI look no further than the HPE Apollo 4200.  It’s built for demanding, data-centric workloads, and currently hosts Windows-based HCI and vSAN environments.  Continue to watch this space for more news on new offerings in the area coming from Microsoft and HPE.

Cut SQL Server Latency in Half with Memory-Driven Flash Storage

Memory-Driven Flash is a new class of enterprise storage that combines the benefits of memory-speed storage media with the traditional economics and manageability of a hybrid flash architecture.

HPE Memory-Driven Flash is being used to enable business databases and applications to achieve new levels of performance while adding virtually no additional administration. Specific studies are showing Microsoft SQL Server database reads are up to 50% faster, which in turn speeds transactions and improves end-user experience.

This architectural approach uses new Storage Class Memory (SCM) as a caching tier to store frequently used blocks of data and enable access to them at near-memory speed, rather than requiring the system to read the data from the storage media tier. HPE's productization tiers the back-end storage media with a layer of Non-Volatile Memory Express (NVMe) solid state drives (SSDs), which adds persistent, near-memory performance on top of all-flash SSD. SCM represents the industry's latest innovation: a low-latency persistent storage media that bridges the gap between DRAM system memory and NAND SSD storage, priced roughly 10X below DRAM while performing as much as 100X faster than NAND.

Cache in on the MDF opportunity

In some ways MDF is the modern-day evolution of traditional hybrid or 'adaptive flash' tiering. Whereas in the past the tiering was between SSDs and spinning hard drives, this new instantiation is SCM over SSD. And just as with the previous generation of systems, the unique value of a manufacturer's product lay not only in superior mechanical design but also in the intelligent algorithms that identify the 'hot', frequently accessed data to cache and optimally manage the precious cache capacity. The image below illustrates how, with MDF, the SCM layer can serve up reads directly from this higher-performance media rather than accessing SSD. The higher the SCM hit rate, the lower the latency. This approach primarily serves to reduce latency rather than increase system IOPS or some other measure of performance.

NVMe SCM as a cache to speed access to random read data.

NVMe is an open logical device interface protocol for accessing nonvolatile storage over the PCIe interface. The low latency NVMe protocol bypasses and eliminates the overhead of other standard storage protocols such as SAS or SCSI. The NVMe SCM device used in HPE 3PAR storage systems has the NVMe PCIe controller interface as part of the unit, so the NVMe SCM is attached directly to the PCIe bus, and uses the NVMe protocol for faster communication with the HPE 3PAR controller.

Cold, hard cache

The math behind MDF caching is to maintain hot data on a higher-performing tier of Storage Class Memory that can deliver reads at an average of just 10 microseconds. This saves considerable time by preventing the need to access a data tier of solid state storage that may deliver reads at 90 microseconds (and compare that to a hard drive tier, which would take 10 thousand microseconds to perform the same read operation).

The results of MDF can be seen both from the perspective of the array itself or the host system accessing the data. In either case, when the Storage Class Memory feature is engaged and populated with hot data, the benefit of reading data from the cache has been shown to reduce system latency by 50% on average.
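The effect of cache hit rate on average read latency can be modeled as a simple weighted average, using the illustrative service times above (10 µs from SCM, 90 µs from SSD). This sketch is not HPE's caching algorithm, just the underlying arithmetic; under this model, a hit rate a bit above 50% is what yields the cited 50% average latency reduction.

```python
# Effective average read latency for a two-tier cache, using the
# article's illustrative service times (assumed round numbers).
SCM_US, SSD_US = 10, 90   # microseconds per read

def effective_read_latency(hit_rate: float) -> float:
    """Weighted average of cache hits (SCM) and misses (SSD), in µs."""
    return hit_rate * SCM_US + (1 - hit_rate) * SSD_US

for hr in (0.0, 0.5, 0.9):
    print(f"hit rate {hr:.0%}: {effective_read_latency(hr):.0f} µs")
```

Every read served from SCM saves 80 µs in this model, so latency falls linearly as the hit rate climbs.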

The chart below shows the results achieved in the lab with Microsoft SQL Server 2017 running on a ProLiant Gen10 server and an HPE 3PAR 9450 array with Storage Class Memory.

Perfmon Avg. Disk sec/Read response time for the SQL Server database disk using Memory-Driven Flash

From the host perspective, in this test there was shown to be a 59% decrease in latency (sec/Read) with Memory-Driven Flash compared to when the NVMe SCM is disabled.  Decreased latency translates to faster database query responses.

From the admin and user perspective, Memory-Driven Flash is simple to use.  It’s an embedded component of the system – another storage device that serves as a selectable level of cache in the storage system.

How to Cache in on MDF

Memory-Driven Flash will provide a substantial reduction in I/O response times for small block OLTP type read intensive workloads. However, it will not be beneficial for write intensive or large sequential read or write workloads. Here are some specific guidelines to keep in mind for MDF:

  • Memory-Driven Flash does not require any special SQL Server configuration
  • Memory-Driven Flash works with one or more groups of volumes – for best results only enable volume sets that need the boost in performance
  • Memory-Driven Flash benefits random read data sizes of 4 KiB to 64 KiB, such as read intensive OLTP type workloads – SQL Server workloads have pre-fetch I/O sizes that are typically 8 KB
  • Memory-Driven Flash does not benefit write I/O response times
  • Memory-Driven Flash can benefit multiple applications and virtual volume sets concurrently

Get the details on MDF

A new technical whitepaper was just published that details the testing performed on SQL Server running on HPE MDF.  <Here’s a link to the paper>

Track the latest news and happenings with HPE Microsoft Storage solutions on twitter at @mhardi01.


5 Strategies to address Microsoft Business Applications End of Support

There are important Microsoft applications reaching end of support as soon as next month, and organizations are scrambling to make sure they will not only be compliant with up-to-date software, but more importantly have the systems and infrastructure in place that will carry them and their business successfully through to the next refresh cycle, years from now.

A recent blog on Supportageddon and other things you didn't know about HPE Microsoft Solutions called out the applications that need to be considered, such as Windows Server 2008, SQL Server 2008, and Exchange Server 2010. It also provided valuable related solutions available from HPE. But it didn't go into much detail on modernization strategies or specific upgrade approaches. New findings are being published on these topics, and the following are top strategies to consider in addressing your own Supportageddon challenge.

  1. Think through your right mix

For your overall portfolio to achieve the agility and speed you are seeking, you'll want to evaluate a mix of workload modernization options. This 'right mix' means striking a balance between on-premises, cloud-based, and hybrid cloud IT deployments. In a recent analyst conference, IDC shared that by 2023, 30% of IT systems in enterprise datacenters and edge locations will be running public cloud-sourced services. And this is the average: we know of many organizations that have embraced a 'Cloud First' policy to the extent that all new applications reside off-premises. But each company needs to find the right balance for itself. A study by the 451 Research Advisory team took the results of a survey of 1,800 IT decision-makers and developed a "Right Mix" tool, enabling organizations to get data-driven guidance on on- and off-premises cloud mix decisions. Results of this study are available via consultative engagements.

  2. Modernize & consolidate your on-premises infrastructure

Streamlining your IT operations starts with making your infrastructure more streamlined – with storage, servers and networking that can deliver more effective capacity, performance and throughput in the same space and energy footprint, and ideally at a lower cost per output.  Along with new, more efficient traditional hardware, other options include Hyperconverged products, as well as hybrid solutions that leverage cloud-based volumes, compute and Backup/DR capabilities.  There are numerous permutations of Microsoft workloads and infrastructure platforms to consider, with the right one available to match your right mix objectives.

  3. Microsoft Azure Stack hybrid strategies

Azure Stack is an integrated hardware/software solution that allows organizations to deploy a reduced set of Azure Cloud services in their own data center. A key benefit of Azure Stack is that if you write to the Azure API, you have a “write-once/deploy on-prem or cloud” option for apps in the Azure Public Cloud or on-premises, without having to change a single line of code. Azure Stack is a relatively new offering with a limited installed base, but a Piper Jaffray survey indicated that 72% of Azure Public Cloud customers intend to deploy on Azure Stack over the next 3 years. If you’re contemplating this solution in your datacenter, there are related Data Protection and Storage Networking solutions that will help safeguard and accelerate your data.

  4. Cloud first strategies

As mentioned earlier, many enterprises are taking a ‘cloud first’ approach, made popular by the U.S. government mandate a decade ago.  This sea change can be seen in the numbers, with 35% of all production apps expected to be cloud-native by 2022, totaling 500M new cloud-native apps (source: IDC).  With enterprises moving to Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) solutions, a top cloud service provider to consider is Microsoft Azure. What makes Azure stand out is the ability to use products with which you’re familiar, such as SQL Server and Exchange, but as online versions, or in a hybrid mode that delivers the best of on-premises control with cloud-based ‘limitless’ resources.

  5. Database modernization

Databases are so central to many organizations that they earn their own strategy considerations. Companies should explore ways to accelerate business with a modern data platform while achieving better economics, more performance, improved security, and greater agility. Speeding time to insight has been the central driver for new big data analytics and visualization projects. Options exist today, either on-premises or in the cloud, that will deliver faster transactions and queries, as well as potentially simplified management, while still accommodating more data and data types. Consider new PolyBase-style architectures that can layer data and graphical management on top of traditional RDBMS and unstructured data, as a means to achieve a data lake without creating siloed resources and expertise.

Solve your EOS problems with HPE Microsoft solutions

A new webinar on how to get more from your data with Microsoft on HPE Storage was just recorded and made available online. In this webinar, featuring yours truly, we share how to plan major Microsoft upgrades, improve performance with flash memory, ensure data availability for Azure Stack, and how HPE InfoSight brings AI intelligence to data center infrastructure.

Access the recorded webinar here.

Supportageddon and other things you didn’t know about HPE Microsoft Solutions

HPE Storage does more than SQL server

When I first took on the role of Product Manager for Microsoft Storage solutions at HPE, it really seemed that all we did was store MS SQL Server data on HPE Nimble and HPE 3PAR storage. It turns out that HPE delivers a whole line of Microsoft Storage Solutions to customers across many of our storage products: Nimble, 3PAR StoreServ, MSA, Apollo, and related networking and services.

In fact, our biggest solution area so far this year is Microsoft Exchange on Apollo. Traditionally we’ve served many very large organizations with this solution, and we expect that to continue with the upgrade to Exchange 2019.  To note – this solution typically uses the Apollo platform, which you may be more familiar with in relation to Big Data analytics.

But the HPE Apollo 4200 Gen10 is an ideal platform for Exchange, especially when you match it up against the new Exchange Server 2019 Preferred Architecture (PA): a 2U, dual-socket x86 server with up to 48 total physical processor cores. It offers up to 512GB of memory, exceeding the 256GB in the PA. We accommodate up to 54 hot-pluggable drives within the server chassis, well beyond the 12 needed in the PA. Apollo meets the requirement to mix HDDs and SAS or SATA SSD solid-state storage within the same chassis, and we added a rear drive cage for NVMe options as future-proofing. The 4200 also comes with a 96W battery-backed Smart Array write cache controller.


Supportageddon is coming

So what exactly do I mean by 'Supportageddon', and does this have something to do with a big snow storm? Many of you depend on Microsoft apps for business-critical systems, and many of these popular apps are going EOS as soon as this April! By EOS I really mean "End of Extended Support". This impacts a number of areas very important to enterprise IT:

  • Security: There will be no access to critical security updates, opening the potential for business interruptions
  • Compliance: As support ends, your organization may fail to meet compliance standards and industry regulations
  • Maintenance: Maintaining legacy servers, firewalls, intrusion systems, etc. gets expensive quickly

Specific versions going EOS that we see as especially critical to our customer base are Windows Server 2008 (and R2), SQL Server 2008, and Exchange Server 2010. In the case of Exchange, for instance, which goes EOS in January 2020, the software will continue to run, but Microsoft is telling customers to migrate as soon as possible. There will be no support extensions.

Don’t think “Cloud First” – think “Hybrid Cloud First”

Hang on, "Cloud First" was the new way to think about IT, right? Well, if by new you mean 2010, then sure. That was when US federal government agencies received the mandate to start using cloud computing in their IT operations. Since then, a lot of design and engineering has gone into IT products, which have evolved from completely on-prem equipment, to cloud-compatible, to cloud-ready, with many nowadays 'built for cloud' and even for multi-cloud use.

Having said that, there are still a large number of companies who will happily choose a refresh of what they already have in place. A great example is the Burkhalter group, which did a considerable study of how best to meet the needs of its growing business, cloud vs. on-premises, and decided to keep Exchange on-prem, upgrading to HPE 3PAR Storage and Synergy composable compute infrastructure. They reduced admin costs 20% with the cloud-like scale and automation they got with the HPE solution in their own datacenter.

With the products available today, smart customers are thinking "Hybrid Cloud first" and are looking for infrastructure and app solutions that bridge the on-prem/cloud divide for them. For instance, back to Exchange: the latest version can natively support both on-prem and Office 365-based users together in the same instance. Similarly, there is data storage infrastructure available with service extensions like HPE Cloud Volumes and HPE Cloud Bank Storage, so that at the infrastructure level, data volumes can reside on-prem or in the cloud and retain the mobility to move back or across cloud providers, whether for production data, secondary data, or backup/DR.

Hardware still makes a difference

Who would have thought this could still be true today, in the age of all things "software-defined" and "cloud"? But the hardware you choose actually makes a big difference: more applications are requiring specific hardware features and capabilities, such as embedded AI intelligence.

Here's a recent blog specifically about "How HPE hardware brings out the best in Microsoft Exchange Server 2019". Enjoy.

Track the latest news and happenings with HPE Microsoft Storage solutions on twitter at @mhardi01.

Windows Server 2019 is officially Available!

I'm thrilled to welcome the official release of Windows Server 2019, after an extended launch period. This operating system, which has been preeminent in the datacenter for so long, continues to deliver new areas of innovation, and I see it as being especially relevant and impactful to what I'll be working on as part of the Microsoft Storage Solution business at HPE in the years to come.


Our team has been hustling to complete certifications and planning around this release, and our hats are off to our partner for getting it out the door.

Introduction to Windows Server 2019

Microsoft Windows Server 2019 is the most recent operating system release in the Windows NT family. Microsoft has traditionally led the operating system market, accounting for 83% of desktop OS market share (source: Statista) as well as 88% of on-premises server share (source: Spiceworks).

Key themes of the Windows Server 2019 release include hybrid IT, security, application platform, and hyper-converged infrastructure.  Key new features of this version include: Windows Subsystem for Linux (WSL), Support for Kubernetes (Beta), Storage Spaces Direct, Storage Migration Service, Storage Replica, System Insights, and an improved Windows Defender.

HPE and Windows

HPE and Microsoft share a 30 year partnership of deep technical collaboration, significant multi-year mutual investments, and exciting global co-promotions.


The latest version of Windows Server is a key enabling technology for a number of new and enhanced Microsoft server products, including Microsoft Exchange Server 2019 (which runs only on Windows Server 2019), along with important Microsoft business server products such as Microsoft SQL Server 2019, SharePoint Server 2019, and a number of others that have recently been released or are anticipated in calendar year 2019.

The Storage solution team at HPE is focused on ensuring continued smooth infrastructure operations as our customers upgrade to Windows Server 2019. In addition, we work to help make sure our customers get enterprise-class performance and availability with the product and its related business apps, and we’ll deliver on this with continued efforts around testing, reference architectures, integrations, and training.

Windows Server 2019 and HPE Storage

As of this writing, we have key Storage platforms certified for Windows Server 2019, including HPE 3PAR StoreServ Storage, HPE Nimble Storage, and HPE XP Storage.  You’ll also find HPE Smart Array storage controllers among the list of our Windows Server 2019 Certified products. Note that up-to-date Microsoft Windows certification status is available at the Windows Server Catalog site:

Congrats again to our partner, Microsoft. To learn more about the breadth of HPE Microsoft Storage solutions available today, visit

Changing Composition of Solutions in the Cloud Era

There’s been a traditional view of ‘Solutions’ in technology.  It’s often attributed to IBM, whose heritage in this area goes back a century, to when Thomas Watson was instructing salespeople to sell large-scale ‘tabulating solutions’ and to leave small office equipment sales to other companies.

Often in Tech, Solutions management and marketing has been an exercise in providing the remaining 15% of an offering beyond what the core product provides. The development and marketing of a solution can be a function of a time-to-market crunch for a new product, or a means to address new uses or verticals for established products.

For instance, a relational database is a product that generically meets the needs of customers who want to store structured data. But those customers probably also want to protect that data, so rather than wait until that feature is built, it’s faster and often better to partner with a third party, in this case a backup software company, to provide a more complete solution.  Traditional solutions therefore start with a core product offering, add one or more third-party products and often some related service, and voilà, you have a solution.  Create a solution brief, do some training and supporting communications, and you’re in business.

Because solutions have their roots in a selling environment, it’s not surprising that formalized sales approaches have grown up around them. These include variations on ‘solution selling’ delivered internally by businesses and externally by consulting services.  One course I took in this area was “SVS,” short for “Strategic Value Selling,” which I studied while at EDS back in the mid-90s.  A related book I enjoyed around that time was “Selling to VITO.”  All of it is still relevant today: identify the customer’s problem, and focus your pitch, and your solution, around it.

Key factors driving change in Solutions

But there has been a big change in the IT world since the 90s: cloud computing.  Gartner sees public cloud as a $300B market by 2021. The vast majority of today’s organizations, and even the federal government, are thinking ‘cloud first’.  Trying to craft a traditional value proposition within a product category that’s radically evolving from strictly on-premises to in-the-cloud (or some combination of the two) can be a frustrating experience.  Central to this challenge is the changing nature of the solution itself, because the shifting context of the IT deployment fundamentally changes what problem is being solved.

I’ve identified at least three key changes in the technology marketspace that are leading us towards a shift in how we think about Solution Management and Marketing:

  1. Change in the Sales-Marketing relationship – A recent Boston Consulting Group article, “Building an Integrated Marketing and Sales Engine for B2B”, details how the prevalence of web content and online marketing, along with buyers’ growing familiarity with online product research and purchasing, has created a dynamic where more of the technology ‘sale’ happens before there is any contact with a sales rep. Today’s B2B buyer “is younger, digitally engaged, and doing more and more business online and on a smartphone.” And according to recent research from Google, the average B2B buyer is two-thirds of the way through the buying journey before talking to sales.
  2. Change in the IT product form-factor – What we produce and sell today is drastically different from a decade ago. IT spend is shifting from tangible products bought and operated on-premises to ‘everything as a service’.  According to IDC, by 2018 at least 40% of IT spending will be cloud-based, growing to over 50% of all IT infrastructure, software, services, and technology spending by 2020.  The impact is that creating a solution around a tangible product is a different exercise than creating one around a cloud-based service.  We are seeing a growth business in cloud consulting services: some evolved from traditional IT consultants (Avanade) and off-shore outsourcers (Wipro, Tata), some sprang from tech manufacturers (IBM iX), some from VARs and SIs, and some were truly cloud-native boutiques. Some have grown, merged, died, or gotten exits. Services are a more prominent component of modern IT solutions, whether hosted and delivered at scale or provided bespoke.
  3. The rise of the Platform – I’m not sure when I first picked up on the idea of ‘product as a platform’, but management consultants have apparently been writing about it since at least 2016. Companies like Microsoft have shown how successful a platform approach can be, as in the case of the personal computer: launching an open system that can support a broad range of third-party applications and hardware. This stands in contrast to the Mac, with its sealed cases, which I don’t believe ever got to be more than 10% of the PC market. More recently, Salesforce is a key example of a Software as a Service offering that serves as a platform for a huge ecosystem of add-ons and related services (‘AppExchange’). Amazon AWS is the ascendant example of a platform that has come to dominate the cloud; this offering is at the heart of the sea change in how IT is being consumed, and therefore sold.

Collectively, these three changes are driving an evolution in how tech solutions need to be conceived and delivered.  The following diagram shows the shift in the composition of a solution: from product to platform, from complementary third-party offerings to integration, and a greater mix of services representing a larger part of the vendor’s value-add.


Here’s an example: A traditional data storage-related solution could involve an external storage array (e.g. HPE 3PAR or HPE Nimble), plus a complementary product such as backup software from Veeam, Veritas or Zerto, and maybe some planning or deployment consulting service.  Package it up with a solution brief, Sales and channel training, a promotional offer or discount from that one tech partner, and some related Marketing and PR activity, and you’ve launched a solution.

Contrast that traditional solution with what may be the 2019 incarnation: the array as a ‘platform’ for on-prem data capacity, coupled with a broader ecosystem of partners including Azure and AWS who supply cloud-based capacity for colder data storage, or disaster protection. This solution is enabled with Integration in the form of a related web interface and/or REST API development.
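That integration layer is typically just REST plumbing. As a hedged illustration, the endpoint, field names, and token below are entirely hypothetical, not any real HPE or cloud-provider API; a request asking an on-prem array to tier a volume’s cold data to a cloud target might be assembled like this:

```python
import json
import urllib.request

def build_tiering_request(base_url: str, volume: str,
                          target: str, token: str) -> urllib.request.Request:
    """Assemble a (hypothetical) REST call asking an array to tier a
    volume's cold data out to a cloud target. Illustrative only."""
    body = json.dumps({"volume": volume, "cloudTarget": target}).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/v1/tiering-jobs",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )

# Build (but don't send) the request, just to show the shape of the integration
req = build_tiering_request("https://array.example.local",
                            "vol-archive-01", "cloud-cool-tier", "demo-token")
print(req.get_method(), req.full_url)
```

The point isn’t the specific call, it’s that the “solution” now includes this kind of glue code and the API surface to support it.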

Another key aspect is the much broader set of related services.  Besides capacity-based pricing and financing of the capital expense, there can be differentiated service levels and support, monitoring, management, initial deployment consulting, and follow-on management consulting; the entire device itself could even be hosted in a co-lo and sold as a service, priced per GB/month. This on-demand delivery and buy-only-what-you-need pricing, along with product positioning more as an enabling device, makes the old IT world look more like the mobile phone space: the customer still wants powerful devices they control, but the business model becomes more about the service, as measured in minutes and degree of use.
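The per-GB/month arithmetic behind that business model is simple; here’s a minimal sketch of a consumption-based bill, with all rates and commitment figures purely hypothetical:

```python
def monthly_storage_cost(used_gb: float, rate_per_gb: float,
                         committed_gb: float = 0.0) -> float:
    """Consumption-based bill: pay for what you use,
    subject to any committed minimum capacity."""
    billable_gb = max(used_gb, committed_gb)
    return billable_gb * rate_per_gb

# Hypothetical: 40 TB in use at $0.03 per GB/month, 20 TB committed minimum
bill = monthly_storage_cost(used_gb=40_000, rate_per_gb=0.03, committed_gb=20_000)
print(f"${bill:,.2f}")  # $1,200.00
```

The committed-minimum term is what lets the vendor finance the hardware while the customer still sees an opex-style, usage-metered bill.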

In summary, not all traditional IT infrastructure products will go the way of Tintri tomorrow.  And just taking more of an appliance approach isn’t a guarantee for success – just ask HTC or Cirtas.  But thinking more flexibly about offering format, as well as business model, will help ensure a more successful evolution as the IT space, and the role of Solutions, change.

Are you observing similar changes in how you buy products, or bring them to market?  Share your experiences.

Party like it’s (almost) 1999

I was listening to Sirius XM the other day, and often when I’m rolling through the “70’s on 7” and “80’s on 8” stations I find myself apologizing to my offspring for how bad the music was back then. But then you hear a classic like Prince’s “Let’s Go Crazy” from 1984, or so many of Aerosmith’s songs from the 70’s, and you remember there were great songs back then.  Another big hit from Prince was “1999”, also recorded in the 80’s.  And just as the song directs us to party like it’s 1999, we should heed that advice.

I remind myself of this, because I’ve experienced muted returns from having been a little too defensive in my portfolio, including a whipsaw in gold which has really dropped off over the summer; though there does seem to be a slight recovery in the yellow metal with news of inflation in a rising CPI, some currency fluctuations, and the ongoing threat of global instability thanks to the current administration.

In other words, I stopped letting my portfolio party a little too soon. 

The indicators I had been wary of: high valuations, with the S&P 500 P/E ratio over 25, putting it near the 4th-highest point since tracking began in the 1800’s, and the Shiller version of the same ratio at its 2nd-highest point ever (the last time it was higher was in 2000); a protectionist administration that looks a lot like Hoover’s; and a flattening yield curve.

A lot has been written recently on the topic of the yield curve. The stories typically underline the descriptive nature of this measure, the cost of government debt versus maturity, in relation to the economic outlook: upward-sloping is healthy, flat is not, and inverted is recessionary. There’s little contention over the truthfulness of the indicator, only debate about aspects of its predictive ability. I’d contend that whether the fashionable threshold is the 2-10 spread hitting negative 0.25 points or the 1-10 year spread inverting for 4 weeks, the bigger takeaway for most of us is that the smarter guys in the market – the debt guys, who need to worry about the value of money 30 years from now – are saying that the economy is cooling, and that the longer-term prospects for an invested dollar are becoming relatively less attractive.

When the 10-2 spread goes negative, sell-offs and recessions follow

In 1999, the ‘2-10’ spread, or the yield of the 10-year T-Note less the yield of the 2-year T-Note, was going negative, which meant we had about a year before the equity party was over.  As you can see in this chart from the Federal Reserve Bank of St. Louis, there’s historically a year’s lag between the indicator hitting zero and the stock market tanking (‘Black Monday’ 1989, the Dot Com crash of 2000, the 2008 meltdown), and a little longer before the economy enters a recession.

As of this writing, the yield curve shows a 33 basis point spread between the 2- and 10-year yields, so we’re close to a flat curve and that important crossover, but not there yet.
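The spread itself is plain arithmetic; here’s a small sketch, with hypothetical yield figures chosen to match the roughly 33 bps quoted above:

```python
def spread_bps(ten_year_yield: float, two_year_yield: float) -> float:
    """2-10 spread in basis points (1 bp = 0.01 percentage point)."""
    return (ten_year_yield - two_year_yield) * 100

# Hypothetical yields, in percent
s = spread_bps(ten_year_yield=2.88, two_year_yield=2.55)
print(f"2-10 spread: {s:.0f} bps")  # 2-10 spread: 33 bps
print("inverted" if s < 0 else "flat-ish, but not inverted")
```

When that number crosses below zero, the historical pattern described above suggests the clock starts ticking.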

Timing is key.  In 1999, the market ran up almost another 26% to its peak in 2000. 

So for your investment portfolio, don’t get too defensive, too soon.  And party like it’s almost 1999.