SFD24: Studies In Autonomy & ES[G]

I’ve finally had a chance to catch my breath from Gestalt IT’s Storage Field Day #24 (SFD24) last week in Santa Clara, CA. It was a great opportunity to catch up with Stephen Foskett’s team at Gestalt IT and with many of my fellow delegates from past Tech Field Day events. Best of all, we got to hear from four key vendors who focus specifically on the most often ignored aspect of modern computing environments: where we keep our organization’s data to ensure its maximum availability, accessibility, and security. From my perspective, two major themes dominated our vendors’ messages: the expansion of autonomous resources to monitor and manage complex storage resources, and the implicit benefits of SSDs for meeting IT organizations’ environmental, social, and governance (ESG) goals.

Dell: The Big Dog In the Room

So the big dog at SFD24 – Dell – talked to us about three of their key offerings – PowerMax, PowerStore and PowerFlex – and the innovations they’re introducing in upcoming releases.

PowerStore offered up some new machine-learning-assisted volume configuration tools that autonomously anticipate typical storage requirements, hopefully easing the everyday duties of overwhelmed storage admins; meanwhile, their PowerFlex product is aimed at providing some pretty serious enterprise storage on public clouds like AWS (which also presented at SFD24 – go figure! – but more on that in a bit).

What I found most interesting was Dell’s relatively new CloudIQ offering, presented as part of their PowerMax line. It uses autonomous anomaly detection to warn against potential ransomware attacks and other security perturbations by identifying asymmetric encryption attempts within file systems – a typical sign that something is amiss within ever-more-complex storage arrays. CloudIQ also provides health risk reports that classify business continuity problems so that an IT organization’s harried storage admin – or, these days, whichever DBA or DevOps developer has been relegated to that role as the number of qualified and experienced admins continues to decline – can quickly triage any threats and act on them with appropriate force.
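
Dell didn’t walk us through CloudIQ’s internals, but the intuition behind this whole class of detection is easy to sketch: encrypted data looks statistically random, so files that suddenly exhibit near-maximum entropy are worth flagging. Here’s a minimal Python illustration of that idea – emphatically not CloudIQ’s actual algorithm, and the threshold and mount point below are placeholders of my own:

```python
import math
from collections import Counter
from pathlib import Path

# Crude rule of thumb: well-encrypted (or compressed) data approaches 8 bits of
# entropy per byte, while most "normal" documents sit well below that.
ENTROPY_THRESHOLD = 7.5

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def suspicious_files(root: str):
    """Yield files under root whose contents look statistically random."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        with path.open("rb") as f:
            sample = f.read(1_000_000)   # the first 1 MB is plenty for a spot check
        if shannon_entropy(sample) > ENTROPY_THRESHOLD:
            yield path

if __name__ == "__main__":
    for p in suspicious_files("/mnt/shared"):   # hypothetical mount point
        print(f"high-entropy file (possible encryption): {p}")
```

(Real products have to be far smarter than this, of course – compressed formats like ZIP archives and JPEGs look just as random as ciphertext, so raw entropy alone throws plenty of false positives.)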

AWS: Then Reality Set In.

From my perspective, our presenters from AWS focused less on product offerings and more on the current state of reality in so many IT organizations: enterprise storage customers are no longer just DBAs and DevOps teams; rather, it’s the actual consumers of the data, especially data scientists, data engineers, and business analysts, who are driving the crucial needs of their organizations.

That means a lot of time and energy is consumed by having to move data quickly and reliably, often between different public clouds like Oracle Cloud Infrastructure (OCI), Microsoft Azure, and of course AWS. That effort comprises moving huge volumes of data in both file and block format – perhaps even complete RDBMS instances’ data! – to take advantage of particular cloud offerings. It’s not entirely unusual these days to see an Oracle RAC database running on AWS storage, and just as likely to see it placed within a Microsoft Azure stack.

What really caught my attention was their Storage Lens offering. It provides methods to observe and analyze exactly how storage is being used through roughly 30 storage-specific metrics, at least a dozen of the most pertinent of which cost nothing to access. These services are already available autonomously, and if you don’t like the way the data is presented, you can download the metrics and process them within your own chosen infrastructure. Having played the part-time storage administrator in a past life, I know how frustrating it can be to figure out exactly who is using what storage and how they’re using it – JSON documents? PDFs? movies and images? – especially when doing double-duty as a part-time DBA, so anything that demystifies those questions and the related costs they incur is welcome.
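
To make that last point concrete: Storage Lens can export its metrics to a bucket you own, at which point they’re just tabular data. Here’s a rough pandas sketch of the kind of slicing I have in mind – the file name and column names are my assumptions about the export format, so verify them against your own reports before trusting the numbers:

```python
import pandas as pd

# Assumes a daily Storage Lens export (CSV format) has already been copied down,
# e.g. with:  aws s3 cp s3://my-lens-export-bucket/.../report.csv .
# The column and metric names below reflect my reading of the export schema;
# double-check them against an actual export.
df = pd.read_csv("report.csv")

# Total bytes stored, broken out by bucket, largest first
storage_by_bucket = (
    df[df["metric_name"] == "StorageBytes"]
      .groupby("bucket_name")["metric_value"]
      .sum()
      .sort_values(ascending=False)
)

print(storage_by_bucket.head(10))                       # the ten largest buckets
print(f"Total: {storage_by_bucket.sum() / 1e12:.2f} TB")
```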

Pure Storage: SSDs As Paths to ES(G)

I love it when salespeople make gutsy moves, and the team from Pure Storage did just that: They kicked off their presentations by discussing how their Pure1 SSDs and storage arrays help accomplish ESG goals. (While I can’t extrapolate that SSDs will directly lead to better corporate governance, like hiring more diverse workforces and ensuring pay equity, I’ll cede them the first two letters.) What really impressed me is that Pure Storage follows an “evergreen” manufacturing strategy for their SSDs and arrays – essentially, every new SSD they build will fit into current arrays, and vice versa – which eliminates the need to constantly install new storage racks, controllers, and storage devices in data centers. Pure Storage’s research claims that their product line reduces power usage by as much as 80% over other manufacturers’ SSD arrays, and even more when compared to HDDs.
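
That 80% claim is easier to appreciate with some rough numbers attached. The wattage below is a purely hypothetical placeholder rather than anything Pure Storage published, but the back-of-the-envelope arithmetic shows why the reduction matters at data-center scale:

```python
# Back-of-the-envelope energy math. The 2,000 W draw is a hypothetical
# placeholder for a legacy array, NOT a figure from Pure Storage or anyone else.
legacy_watts   = 2_000
reduction      = 0.80                  # the claimed power reduction
hours_per_year = 24 * 365

legacy_kwh  = legacy_watts * hours_per_year / 1_000
reduced_kwh = legacy_kwh * (1 - reduction)

print(f"Legacy array:   {legacy_kwh:>8,.0f} kWh/yr")
print(f"Reduced array:  {reduced_kwh:>8,.0f} kWh/yr")
print(f"Annual savings: {legacy_kwh - reduced_kwh:>8,.0f} kWh per array")
```

Multiply that across dozens or hundreds of arrays and the “E” in ESG starts to look like real money and real carbon.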

And though they spent less time on it, the theme of autonomous and/or assisted storage management came to the fore when they talked a bit about their Purity upgrade strategy for their Pure1 offerings. Again, overburdened storage administrators can potentially benefit from self-service, guided upgrades of SSD storage and arrays without worrying about the complexities of the upgrade process itself.

{Full disclosure: I’m currently engaged with a separate sales team at Pure Storage to promote some of their other storage offerings, but I’m playing the role of a crusty old-school DBA in our discussions and taking nothing at face value.}

Solidigm: Wait, You Built an SSD How Big?!?

Closing out our final day at SFD24, the team from Solidigm presented on their SSD solutions aimed at ever-larger data storage requirements, as well as the need to access large datasets at maximum speed and efficiency. Though they spent a little too much time telling us about use cases they’d encountered, their story-telling was solid and even a bit retro. (Let’s just say I never expected to hear an allusion to that venerable prophet of anime, Speed Racer, which I grew up on as a kid in the before times.)

Solidigm also announced that their latest SSD will clock in at 64TB using their quad-level cell (QLC) technology, and they talked about the next level of density – penta-level cell (PLC) SSDs, which pack five bits into each cell instead of QLC’s four – a direction they unveiled just a few months ago.

As someone who remembers hearing at a conference just ten years ago that HDDs would soon be found only in museums and we’d be using SSDs exclusively, I find these new storage capacities and densities mind-blowing. HDDs are still here, of course, but they’re not well suited to another niche we discussed with the Solidigm folks: retasking “old” SSDs for a new life. Even an older SLC, MLC, or TLC device isn’t necessarily completely worn out, and it could be useful for storing a few TB of valuable data in an edge computing use case – say, mated with a Raspberry Pi or Arduino board to keep more data close to the edge for immediate analytics. That reusability is unique to SSDs, and it bodes well for a greener future of computing.
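
Whether an “old” drive is actually worth retasking comes down to how much of its rated endurance remains, and that’s easy to check before mating it to an edge node. Here’s a rough sketch built around smartctl’s NVMe health output – the device path and the 80% cut-off are placeholder choices of my own:

```python
import re
import subprocess

DEVICE = "/dev/nvme0"      # placeholder device path; adjust for the drive being retired
MAX_WEAR_PCT = 80          # arbitrary reuse cut-off; pick your own comfort level

def nvme_wear_percentage(device: str) -> int:
    """Return the 'Percentage Used' endurance figure from smartctl's NVMe health log.

    (SATA SSDs report wear through different SMART attributes, so this check
    only applies to NVMe devices.)
    """
    out = subprocess.run(["smartctl", "-a", device],
                         capture_output=True, text=True).stdout
    match = re.search(r"Percentage Used:\s+(\d+)%", out)
    if not match:
        raise RuntimeError(f"No 'Percentage Used' line found for {device}")
    return int(match.group(1))

if __name__ == "__main__":
    wear = nvme_wear_percentage(DEVICE)
    verdict = "good candidate for edge reuse" if wear < MAX_WEAR_PCT else "probably too worn"
    print(f"{DEVICE}: {wear}% of rated endurance used – {verdict}")
```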

Wrapping Up: What Comes Next?

Since I worked at Hitachi Data Systems for two years in a past life and still count many friends and colleagues from that venture, I intensely enjoyed SFD24. It was exciting to see just how much SSD technology has expanded and improved in the 10 years since I left HDS, but equally interesting that “rotating rust” (and yes, even venerable magnetic tape!) still has a place in most enterprise storage environments. The next five years are likely to prove even more fascinating as SSD capacity and resilience continue to improve apace, especially as ESG concerns factor ever more heavily into IT organizations’ future plans.