Keeping a Reborn Technology Advocacy Program Robust? That’s Up To Us Oracle ACEs.

Winter Is Here.

As I wrap up my technology-advocacy-related travels for 2022 and plan out my (hopeful!) schedule for 2023, it’s a perfect time to reflect on the successful rebirth of the Oracle ACE Program this past summer. The new team that Jennifer Nicholson, our program’s key liaison, has put together in just a few months has been astounding, and that achievement calls for some well-deserved acknowledgment.

First, A Bit of [Personal] History

I’ve been part of the ACE Program since early 2014. I believe I’m one of the very last ACE Directors to have been awarded that status without progression through the ranks of ACE Associate and ACE Pro (as we call those contribution levels today). In retrospect, that change was certainly warranted.

Tunis, 2014: First ever MENA tour

I still remember how excited I was at my first-ever ACE dinner at the Venetian that year, and how another ACE Director almost immediately asked me to participate in the first-ever OTN Middle East tour that summer.

Wow, I remember thinking, I’m going to places I’ve never been before – Tunisia! Saudi Arabia! Dubai! – and best of all, I get a chance to speak to a diverse crowd of people from a completely different culture. I was hooked.

6 Continents. Still Counting.

Over the last few years I got a chance to visit Tokyo, Japan during an APAC Tour and most of South America – Colombia, Ecuador, Argentina, Paraguay, Uruguay, Chile, Brazil – during LAOUC tours.

Of course, there were shorter trips “across the pond” to EMEA: Finland, Sweden, Norway, Denmark (Nordic OTN) as well as UKOUG (Liverpool, UK), DOAG (Nuremberg, DE), POUG (Poland), and ILOUG (Israel).

And I kept up my speaking schedule within North America too, at a plethora of regional conferences: UTOUG, COUG/MOUG, RMOUG, BCOUG, NYOUG, NEOOUG, COLLABORATE, and Kscope.

Challenging? Heck, yeah!

I had to develop new presentations every year, learn Oracle Cloud and Autonomous Database, and even delve back into application development – APEX, Machine Learning & Analytics, JSON, even edge computing.

The key thing here: I couldn’t have done any of this without the constant support from the ACE Program and the community of other ACEs. They uplifted me, encouraged me, helped me understand how important it was to connect with Product Managers at Oracle.

Most of all, they gave me the opportunity to provide learning opportunities for folks coming to sessions to learn, connect, kibitz, and maybe even be entertained through my lame attempts at humor.

And then, suddenly without warning, everything changed.

Hey, Who Turned the Lights Off?

In the spring of 2021, the ACE Program suddenly … changed. I’m not sure what the ultimate cause was, but I suspect a major shift in how the program was viewed within Oracle. And this isn’t that uncommon in huge organizations: A new player comes to the fore, different ideas are proposed, budget constraints shift suddenly, those in favor are no longer favored.

But suddenly, as if someone had reached into the breaker box and pulled the main switch down to OFF, the direction and future of the program was in a constant state of flux, and its very reason for existence seemed to be called into question.

Acknowledge me!

Needless to say, this was extremely disturbing to our community of ACEs. We’re egotistical, opinionated, driven, and as easy to herd towards a goal as a cargo container full of angry wet cats.

That’s what makes us great advocates for Oracle tech, by the way: We’re not afraid to tell a PM that their product absolutely sucks, or that their use case documentation is nonsensical, or that they’re not understanding what their customers out in the field really want – right now! – and why that demand is actually important and reasonable.

We’re sort of like secret shoppers: We’re completely happy to tell you what your customer is really nervous to say to your face. Acknowledge us!

Wait … What Just Happened?

For whatever reason, that acknowledgment suddenly disappeared, replaced by an aggressive marketing orientation towards capturing the hearts and minds of thousands of younger developers – the kind of folks I hung out with at JavaOne at OCW 2022 while demonstrating and explaining the Raspberry Pi Supercluster.

To be clear, I’m not pointing fingers here: We desperately need to show younger folks that Oracle’s converged database philosophy makes sense in today’s world, and that you don’t necessarily need to download and install yet another open source database to do what you can already do within Oracle 19c.

What frustrated us? We ACEs already knew this – in fact, many of us were already telling that story as part of our messaging.

ACE Program: Reborn!

Thankfully, in mid-2022 the ACE Program was moved under the aegis of the Oracle Database team. This couldn’t have happened without Jenny Tsai-Smith, Gerald Venzl, and other key PMs at Oracle realizing there was still huge potential bottled up within our ACE community.

With the reopening of the economy post-COVID, we could again bring significant value to the new messaging around Oracle 23c. Jen Nicholson’s new team includes two deeply motivated, special people – Oana-Aurelia Bonu and Sapna Banga – whom I’ve gotten a chance to know better through recent OUG events.

Developer-Forward Orientation

I’m absolutely in favor of the new developer-forward orientation we’ve all seen as of OCW 2022. We’re focusing on making it even easier for DevOps folks to use the power of the database and features already included within it – spatial, graph, machine learning, analytics, and non-standard data formats like JSON and HIVE, no matter where the data lives.

Developers can build out new applications with tools like APEX and Visual Builder at light speed, and take advantage of microservice architectures within OCI.

Juan Loiza at OCW22 ACE Dinner

Our ACE dinner at OCW 2022 celebrated the return of our program. It was serendipitous to gather in the very same restaurant where I met many of my now-venerable colleagues back in 2014, and even more exciting to have EVP Juan Loiza make a brief speech and share dinner with us, along with so many key Oracle PMs I’d not seen since before COVID times.

But that’s not where this story ends.

This Baby Still Needs Feeding

Even though our program has been reborn, it’s still an infant in some ways: We have a new team of players within Oracle helping us maintain it, but without us ACEs letting Oracle know how much we appreciate that support, there’s always the chance it could become malnourished again, suffer sickness, and slowly fade away.

Feed the baby!

So, my fellow Oracle ACEs, as you sit down for family get-togethers during this season of light and joy, please take the time to send a message back to the ACE Program’s leadership (and even better, to the PMs and powers that be at EVP level and above, if you have that reach!) to let them know just how much we appreciate the effort and funding that went into restoring our beloved tech advocacy program.

Remember: It takes a lot of energy and devotion to herd us angry wet cats!

Farewell, Twitter. Mastodon Is My New Social Media Overlord.

A new adventure begins!

Just a quick blog post to let everyone who’s followed me on Twitter as @JimTheWhyGuy know: I’ve now shifted over to Mastodon, and you can follow me here.

Fear not – you can absorb, critique, chuckle at, or throw shade at my usual wit and wisdom on all things related to technology – especially Oracle as usual, but lately more focused on Oracle APEX, Machine Learning & Analytics, and Graph & Spatial – on Mastodon instead of Twitter.

So … what happened? Well, to be perfectly honest, I simply do not know yet. Apparently a recent Tweet must have tripped some new algorithm: right in mid-posting about the happenings at UKOUG Breakthrough22, I found my handle had been permanently suspended. I’ve asked for clarification as to exactly which Twitter rules were violated, and I’ve filed numerous appeals daily, but no one has responded to explain precisely what the root cause of the suspension was.

To be 100% clear: I heartily approve of content moderation, and I hope to eventually find out what I’d tweeted that broke the rules so that I can speedily remove that content – I’m sure the folks at Twitter must be overwhelmed lately with millions of similar requests, and my heart goes out to them! – but there’s just too much to talk about these days to wait any longer.

The most interesting side effect? I’ve suddenly found an extra hour or three on my hands daily. I’m going to leverage that “found time” to focus on providing quality content to those who deserve to read it, instead of descending into social media maelstroms every few hours. Come along for the ride, my friends and colleagues – it’s a brave new world out here!

SFD24: Studies In Autonomy & ES[G]

I’ve finally had a chance to catch my breath from Gestalt IT’s Storage Field Day #24 (SFD24) last week in Santa Clara, CA. It was a great chance to catch up with Stephen Foskett’s team at GestaltIT and with many of my fellow delegates from past Tech Field Day events. Best of all, we got to hear from four key vendors who focus specifically on the most-often-ignored aspect of modern computing environments: where we keep our organization’s data to ensure its maximum availability, accessibility, and security. From my perspective, two major themes dominated our vendors’ messages: the expansion of autonomous resources to monitor and manage complex storage resources, and the implicit benefits of SSDs for meeting IT organizations’ environmental, social, and governance (ESG) goals.

Dell: The Big Dog In the Room

So the big dog at SFD24 – Dell – talked to us about three of their key offerings – PowerMax, PowerStore and PowerFlex – and the innovations they’re introducing in upcoming releases.

PowerStore offered up some new machine-learning-assisted volume configuration tools that autonomously anticipated typical storage requirements, thus hopefully helping overwhelmed storage admins in everyday duties; meanwhile, their PowerFlex product is aimed at providing some pretty serious cloud-based enterprise storage for clients like AWS (who also presented at SFD24 – go figure! – but more on that in a bit).

What I found most interesting was Dell’s relatively new CloudIQ offering, part of their PowerMax line. It uses autonomous anomaly detection to warn against potential ransomware attacks and other security perturbations by identifying asymmetric encryption attempts within file systems – a typical sign that something is amiss within ever-more-complex storage arrays. CloudIQ also provides health risk reports that classify business continuity problems, so that an IT organization’s harried storage admin – or, these days, whichever DBA or DevOps developer has been relegated to that role, as the number of qualified and experienced admins continues to decline – can quickly classify any threats and act on them with appropriate force.
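Dell hasn’t published CloudIQ’s detection internals, of course, but the core signal – file contents that suddenly look encrypted – is easy to sketch. Here’s a toy Python version using Shannon entropy; the function names and the 7.5-bit threshold are my own illustrative choices, not anything from Dell:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: near 8.0 for encrypted or compressed
    content, much lower for typical text or structured files."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag content whose entropy approaches the 8-bits-per-byte maximum
    -- a crude stand-in for 'possible encryption attempt detected'."""
    return shannon_entropy(data) >= threshold

# A plain-text "file" scores low; pseudo-random bytes score near 8.
assert not looks_encrypted(b"quarterly sales report, region: EMEA\n" * 100)
assert looks_encrypted(os.urandom(4096))
```

A real product watches these scores change over time across an entire array, of course – a sudden jump in the fraction of high-entropy files is the tell-tale sign.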

AWS: Then Reality Set In.

From my perspective, our presenters from AWS focused less on product offerings and more on the current state of reality in so many IT organizations today: Enterprise storage customers are really not just limited to DBAs and DevOps teams; rather, it’s the actual consumers of the data, especially data scientists, data engineers, and business analysts, that are driving the crucial needs of their organizations.

That means a lot of time and energy is consumed by having to move data quickly and reliably, often between different public clouds like Oracle Cloud Infrastructure (OCI), Microsoft Azure, and of course AWS. That effort comprises moving huge volumes of data in both file and block format – perhaps even complete RDBMS instances’ data! – to take advantage of particular cloud offerings. It’s not entirely unusual these days to see an Oracle RAC database running on AWS storage, but just as likely to see it placed within a Microsoft Azure stack.

What really caught my attention was their Storage Lens offering. It offers methods to observe and analyze exactly how storage is being used through about 30 storage-specific metrics, of which at least a dozen of the most pertinent ones cost nothing to access. These services are already available autonomously, and if you don’t like the way the data is presented, you can download the metrics and process them within your own chosen infrastructure. Having played the role of part-time storage administrator in my past life, I know how frustrating it can be to figure out exactly who is using what storage and how they’re using it – JSON documents? PDFs? movies and images? – especially while doing double-duty as a part-time DBA, so anything that demystifies those questions and the related costs they incur is welcome.
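To illustrate that “download the metrics and crunch them yourself” workflow, here’s a toy Python sketch. The CSV columns and prefix names are hypothetical stand-ins I made up, not Storage Lens’s actual schema:

```python
import csv
import io

# Hypothetical export: one row per bucket prefix with bytes stored and
# object count -- illustrative columns only, not the real metric names.
raw = """prefix,storage_bytes,object_count
team-analytics/json,52428800000,1200000
team-media/video,914748364800,85000
team-finance/pdf,10737418240,430000
"""

def top_consumers(csv_text: str, n: int = 2):
    """Rank prefixes by bytes stored, largest first, reported in GB."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: int(r["storage_bytes"]), reverse=True)
    return [(r["prefix"], int(r["storage_bytes"]) / 1e9) for r in rows[:n]]

for prefix, gb in top_consumers(raw):
    print(f"{prefix}: {gb:,.1f} GB")
```

Five minutes of this kind of analysis answers the “who is storing what, and how much is it costing us?” question that used to eat entire afternoons of my part-time-storage-admin life.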

Pure Storage: SSDs As Paths to ES(G)

I love it when salespeople make gutsy moves, and the team from Pure Storage did just that: They kicked off their presentations by discussing how their Pure1 SSDs and storage arrays accomplished ESG goals. (While I can’t extrapolate that SSDs will directly lead to better corporate governance like hiring more diverse workforces and ensuring pay equity, I’ll cede them the first two letters.) What really impressed me is that Pure Storage focuses on an “evergreen” manufacturing strategy to produce their SSDs and arrays – essentially, every new SSD they build will fit in current arrays, and vice versa – which definitely overcomes the need to constantly install new storage racks, controllers, and storage devices in data centers. Pure Storage’s research claims that their product line reduces power usage by as much as 80% over other manufacturers’ SSD arrays and even more when compared to HDDs.

And though they spent less time on it, the theme of autonomous and/or assisted storage management came to the fore when they talked a bit about their Purity upgrade strategy for their Pure1 offerings. Again, overburdened storage administrators can potentially benefit from self-service, guided upgrades of SSD storage and arrays without worrying about the complexities of the upgrade process itself.

{Full disclosure: I’m currently engaged with a separate sales team at Pure Storage right now to promote some of their other storage offerings, but I’m playing the role of a crusty old school DBA in our discussions and taking nothing at face value.}

Solidigm: Wait, You Built an SSD How Big?!?

Closing out our final day at SFD24, the team from Solidigm presented on their SSD solutions aimed at ever-larger data storage requirements, as well as the need to access large datasets at maximum speed and efficiency. Though they spent a little too much time telling us about use cases they’d encountered, their story-telling was solid and even a bit retro. (Let’s just say I never expected to hear an allusion to that venerable prophet of anime, Speed Racer, which I grew up on as a kid in the before times.)

Solidigm also announced their latest SSD would clock in at 64TB using their quad-level cell (QLC) technology and talked about the next level of density – penta-level cell (PLC) SSDs – which they just released a few months ago.

As someone who remembers hearing at a conference just ten years ago that one day soon, HDDs would only be found in a museum and we’d be using SSDs exclusively, these new storage capacities and densities are mind-blowing. HDDs are still here, of course, but they can’t fill another future niche we discussed with the Solidigm folks: retasking “old” SSDs for new life. Even an older SLC, MLC or TLC device isn’t completely worn out, and it could be useful for storing a few TB of valuable data for an edge computing use case – say, mated with a Raspberry Pi / Arduino board to store more data closer to the edge for immediate analytics. That reusability is unique to SSDs, and it bodes well for the greener future of computing.

Wrapping Up: What Comes Next?

Since in my past life I’d worked for Hitachi Data Systems for two years and still count many friends and colleagues from that venture, I intensely enjoyed SFD24. It was exciting to see just how much SSD technology has expanded and improved in the last 10 years since I left HDS, but equally interesting that “rotating rust” (and yes, even venerable magnetic tape!) still has a place in most enterprise storage environments. The next five years are likely to prove even more fascinating as SSD capacity and resilience continues apace, especially as ESG concerns continue to factor into IT organizations’ future plans.

Big Guns Attract Attention, But Not Always Interest

It always surprises me when the largest IT vendors offer up less-than-compelling stories about their latest product offerings – you’d think they’d have the most creative storytellers on staff, after all! – but more often than not, I’ve found myself drifting a bit while listening to their sessions and waiting for the prized “Oh – one more thing” moment. In my previous blog post I discussed four much smaller vendors that presented their stories at Tech Field Day #25 that struck me as driving real value in the current multi-cloud marketplace, but it would be unfair to the “big guns” that also attended our conference to ignore what they brought to that space.

VMWare: Migrating Apps to the Cloud Ain’t Beanbag

I’ve been using VMWare technology for at least 15 years. I used their robust VMs to build out one of the first working models of Oracle RAC database technology for the Oracle University courses I taught to over 2,000 students from 2005 to 2009.

VMWare’s vRealize family of products offers DevOps teams, SREs, and PMs the ability to search for applications that an IT organization wants to move from a typical on-premises / in-house environment and migrate them to a multi-cloud environment. They presented several examples of how their tools facilitate the various phases of complex migration strategies, including even identifying which applications might be hiding “under the covers” but still need migration to the cloud, based on network activity and other metrics.

The final presenters from VMWare had the unfortunate task of talking about a stultifyingly boring topic – making intelligent sense of application activity logs – and actually made it (slightly) less boring. Their vRealize Log Insight Cloud product offers DevOps teams some interesting tools to take a closer look at how applications are performing in real time based on application logs and help focus DevOps activity towards meaningful interventions – say, moving an app to different cloud network endpoints to shorten user response times.

MinIO: SDS Is Where It’s At, Because Cloud Demands It

OK, so VMWare had the virtualized cloud side covered pretty well, especially from the network, CPU and memory perspective. MinIO discussed their software-defined storage (SDS) that’s currently compatible with all of the Big 3 public cloud providers, with some key customers using it for multi-PB-sized deployments.

The bottom line from the MinIO folks? We’re absolutely headed towards a multi-cloud future, and that means IT organizations’ storage needs are best handled through software-defined storage instead of traditional spinning-rust / SSD combinations housed internally. The big advantage their approach touts is that it becomes possible to keep storage demands completely separate from demand for compute resources.

Intel: Optane Increases Everything’s Octane

Intel talked about their latest innovations regarding their Optane family of products, which includes CPU, persistent memory (PMEM), and SSD storage technologies.

They also talked at length about their Compute Express Link (CXL) technology that allows their Intel-based CPU, memory, and storage devices to more effectively share resources for data exchange to overcome I/O bound workloads while also lowering the overall complexity of the software stack.

And finally it was nice to see some mention of the technology near and dear to my heart: Oracle Databases, Machine Learning, and Analytics! The X8M release of the Oracle Exadata Database Machine actually incorporates Intel’s PMEM technology into its storage cells and leverages that as an extended cache for columnar storage for database operations; Oracle DBs based on Exadata – including Oracle’s Autonomous Database available in its public cloud – thus absolutely scream performance-wise as compared to non-columnar databases.

Conclusions

In today’s tech news, Broadcom announced their proposed acquisition of VMWare by mid-week (umm … whaa?). In recent discussions with my delegate colleagues, even Oracle Corporation has been bounced around as a suggestion for a buyout parent … but quite honestly, it seems to me an Intel-VMWare pairing might make a lot more sense. The robust tools that VMWare has built for cloud migration would team nicely with Intel’s continual improvements to the CPUs, memory, and storage that underpin much of the “Big 3” public cloud infrastructure.

Oh, and one other thing: These “big guns” should probably consider taking our GestaltIT hosts up on pre-reviewing their presentations before we delegates actually get our first look. There are some serious benefits to a professional once-over of their sessions before we ever see them, especially if it lets us focus on the benefits of each offering instead of having to suffer through a dozen or so slides filled with shameless self-promotional marketing pitches.

Small & Fierce Tech Upstarts Dominate

I enjoyed Cloud Field Day 13 (CFD13) in Santa Clara, CA back in February 2022 so much that I jumped at the chance to team up with a whole new set of colleagues at yet another Tech Field Day event in early April – Tech Field Day 25 – to review some amazing technology from seven different vendors. I have to admit that though the “big guns” at the event (more in my next blog post) brought their A-teams to present on their newest offerings, it seemed to me that the smaller vendors were more than willing to try punching above their weight and present their stories with a lot more verve and nuance.

Nasuni: Not Our Parent’s File Storage

As a past technology advocate for Hitachi Data Systems and as an Oracle DBA consultant, I’ve always been keenly interested in file storage technology. Just how much it’s progressed in the last decade was evident when Nasuni showed off their Nasuni File Services platform and its capabilities built to handle the demands of today’s cloud computing environments, with its UniFS file system underpinning storage for AWS, GCP, and Azure.

Ten years ago, we worried about losing critical files or chunks of databases; today’s key challenges include rapid restoration of files that’ve been compromised due to ransomware attacks. One especially innovative idea that they talked about is their Nasuni Labs portal. It’s a free, public, GitHub-based repository of open-source projects that they and their customer base have built over the past few years. The idea is to speed adoption and foster collaboration among storage solution professionals solving specific problems within their environments – an excellent way to show support for your client base.

Apica: Let’s Break This App, Oh, a Few Bazillion Times

I started out my IT career as an application developer, and I still have a few broken keyboards to prove my frustration when I missed testing out a key (or not-so-key!) feature with adequate workloads and complexity. So I was intrigued when Apica presented their Active Monitoring Platform that offers full lifecycle testing that focuses on a user’s journey through an application, rather than just simulating random tests or recorded keystrokes. Apica’s toolset makes it possible to test applications at scale as well by quickly building reusable, scripted test cases to be used to ramp up thousands of simultaneous users running millions of test points – including purposely introducing invalid data! – against a pre-production application. Apica also offers application monitoring that’s integratable with popular support tools like Grafana or Splunk.
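Apica’s own scripting is proprietary, but the shape of such a test – lots of concurrent simulated users with a deliberate slice of invalid payloads mixed in – can be sketched in a few lines of Python. The endpoint and payload fields below are invented purely for illustration:

```python
import concurrent.futures
import random

def submit_order(payload: dict) -> str:
    """Stand-in for the application under test -- a real tool would drive
    the actual app; this toy endpoint just validates its input."""
    if not isinstance(payload.get("qty"), int) or payload["qty"] <= 0:
        raise ValueError("invalid quantity")
    return "accepted"

def run_load_test(n_users: int = 1000, invalid_ratio: float = 0.1):
    """Fire n_users concurrent requests, deliberately mixing in bad data,
    and tally outcomes the way a scripted test case might."""
    rng = random.Random(42)  # fixed seed for a repeatable test run
    payloads = [
        {"qty": -1 if rng.random() < invalid_ratio else rng.randint(1, 5)}
        for _ in range(n_users)
    ]
    ok = rejected = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
        for fut in [pool.submit(submit_order, p) for p in payloads]:
            try:
                fut.result()
                ok += 1
            except ValueError:
                rejected += 1
    return ok, rejected

ok, rejected = run_load_test()
assert ok + rejected == 1000 and rejected > 0
```

Scale n_users up by a few orders of magnitude, swap the stub for real HTTP calls, and you have the skeleton of exactly the kind of workload I wish I’d hammered my own applications with back in the day.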

Keysight: So … How Many Users Can Our Network Really Handle?

Even with reliable cloud storage and the capability to hammer an application with millions of tests, one key component of a reliable user experience is still lacking: how well the actual network itself can handle extreme user demands. Keysight – an offspring of the venerable Hewlett-Packard, a company well-known for its passion to innovate – covered that need with a demo of their new CyPerf product. In today’s world of distributed zero trust networks composed of complex virtualized components – VPNs, VCNs, edge computing devices, and of course, the switches hooked to my Oracle database servers! – CyPerf can simulate actual network traffic across environments so administrators and testers can evaluate just how well current, expected workload levels will perform, as well as the impact of upscaling that traffic as demand increases seasonally.

Fortinet: Detecting & Deflecting Virtual Knives

I’d first encountered the Fortinet team at CFD13 and was blown away by their offerings at that event. I’m the first person to admit that I’m no internet security expert, but I do keep a close eye on trends in security penetration hijinks and techniques that bad actors typically use. Fortinet’s team showed off their FortiWeb product with live demos of how three different and increasingly sophisticated attack vectors – bot-based user simulation, data scraping, and missing user input sanitization – were detected, analyzed, and contained within seconds through Fortinet’s dual layers of machine learning, all with virtually no human intervention.

It was the most interesting presentation at the event – just the right bit of marketing with plenty of pertinent demonstration of how their security toolset just worked. My only infinitesimal critique: Their technical expert playing the security admin role could be a bit more enthusiastic when she blocked the final rather sophisticated attempt, perhaps just smiling wickedly and saying sweetly, “Well, this is what happens when you bring a virtual knife to a security gunfight.” 😉

Elvis Is Everywhere. So Is Kubernetes.

Elvis Is Everywhere.

As I attended my first-ever Tech Field Day event – Cloud Field Day 13 (CFD13) – in Santa Clara, CA last week, the gritty cult music video Elvis Is Everywhere from the late 1980s kept running through my head. (In this grainy clip, Mojo Nixon and Skid Roper contend that the real secret of life, the universe and everything is that we all have a little Elvis Presley inside us and we’re all moving towards a perfect state of Elvis-ness because of Elvis-lution.)

And if you replace “Elvis” with “Kubernetes,” then you’ve got a glimmer of how I see the current state of DevOps and advanced computing today after attending CFD13: Kubernetes is everywhere, and everybody is using it – even when it may not necessarily make perfect sense to do so. It was amazingly enlightening to sit down with 11 other professionals from across the globe in a hybrid three-day event as we heard from eight different vendors – some huge, some a bit smaller – about the challenges of managing Kubernetes (usually abbreviated to K8s, if you’ve been trapped under a virtual rock like me and didn’t already know that) in hybrid cloud, mono-cloud, and on-premises computing environments.

BTC Is Ubiquitous. (No, Not That BTC)

For those of you who know my background already, you probably realize why I was like a fish out of water for the first few hours of CFD13. Every vendor presenting their solutions focused on the Big Three Clouds – Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP) – or BTC, as I term them. And no, we didn’t talk about Bitcoin ($BTC) at all, unless you consider our post-event podcast on the evils of cryptocurrency and its relation to Web 3.0 as relevant.

It wasn’t until the last day that our final vendor presentation from Fortinet briefly acknowledged that yeah, Oracle Cloud Infrastructure actually existed and that their tools could help manage K8s apps in that public cloud environment just as well as any of the BTCs. Yes, Oracle does indeed have a public cloud, it’s pretty damn robust, and in many cases it’s significantly cheaper to operate than the BTCs, especially when there are considerable data egress volumes, the oft-unspoken-of slayer of DevOps budgets when a developer or QA tester accidentally issues a query that quietly pulls several decades of your company’s sales history across the network. (Stepping off my mandatory ACE Director soapbox for now.)
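If you want to see why egress matters, a back-of-envelope calculator makes the point. The per-GB prices and free tiers below are illustrative placeholders I picked for the example, not any provider’s actual rates:

```python
def egress_cost_usd(gb: float, price_per_gb: float,
                    free_tier_gb: float = 0.0) -> float:
    """Back-of-envelope egress charge: billable GB times list price."""
    billable = max(gb - free_tier_gb, 0.0)
    return billable * price_per_gb

# Illustrative numbers only -- check each provider's current pricing
# page; these are assumptions for the example, not quotes.
decades_of_sales_gb = 5_000  # a hypothetical 5 TB accidental pull
for cloud, price, free in [("Cloud A", 0.09, 100),
                           ("Cloud B", 0.0085, 10_240)]:
    print(f"{cloud}: ${egress_cost_usd(decades_of_sales_gb, price, free):,.2f}")
```

With those made-up numbers, the same hypothetical 5 TB accidental pull is billed in full on “Cloud A” but falls entirely within “Cloud B’s” free tier – exactly the kind of difference that quietly slays a DevOps budget.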

Like Your New Summer Intern, K8s Demands Care & Feeding

How much care and feeding? Generally, that depends. The vendors that presented their wares to us aligned their approaches to effective K8s management across three magisteria: storage, networking and virtualization, and infrastructure and governance.

Storage. NetApp talked about their ONTAP offering that provides cloud-based storage for K8s applications, Pure discussed their PureFusion product that provides storage-as-code, and KastenIO showed off their K10 offering for policy-based data management.

Networking + Virtualization. VMWare talked up their Tanzu application platform as their centralized solution for handling all aspects of K8s security, networking, and connectivity, and Metallic IO demonstrated their Data Management as a Service (DMaaS) offering as a solution for monitoring the security of K8s environments at multiple levels.

Again With the Whiteboard

Infrastructure & Governance. I present frequently at Oracle User Group events every few weeks, and I’ve found that relevant use cases resonate the most with my audiences. The folks from RackN impressed me the most of any vendor as they highlighted features of the latest release of Digital Rebar. Their demo focused on a not-uncommon conflict: the grizzled insider who’d already built their K8s infrastructure versus the newcomer CTO with an attitude of “I know I’m the new guy, but I’ve got this great vision for our computing infrastructure you are gonna love!” with his prized whiteboard always at the ready. Their interaction reminded me of an aging Captain Kirk’s complaint in The Simpsons’ Star Trek XII: So Very Tired parody: “Again with the whiteboard.” Check out their video on the CFD YouTube channel for the play-by-play of that use case.

Another crucial aspect of K8s management is testing out exactly how K8s environments can handle expected workloads, and the folks at StormForge showed us how their Optimize Pro and Optimize Live toolsets helped predict expected performance vs. live application performance. Finally, Fortinet demoed in real time how their offerings fared against some real-world security incursions against their (I am not making this up) Damn Vulnerable Web Application.

But Can the Plumbing Take It?

What really surprised me was how little discussion there was about the underlying infrastructure – the databases that power all these K8s clusters, the physical network components and firmware that facilitate flexible virtual networking, and the SSDs, storage arrays, and storage networks that provide retention for massive data sources. During my four decades in the trenches as a DBA / application developer, I’ve been keenly aware of all that infrastructure and how my SQL code, physical and logical data models, and even storage I/O rates affected my applications’ responsiveness.

Our vendors’ presentations seemed to focus completely on enabling K8s DevOps / MLOps activities with scant regard for those “old-school” concerns. Look, I get it: K8s is completely OSS, and our CFD13 presenters are providing some desperately-needed governance tools. But there’s still a part of me wondering whether this K8s craze, so focused on enabling massive scale-out and extremely rapid development cycles, might benefit from a bit more focus on what seasoned IT professionals already know works: writing efficient code, designing well-formed data models, and giving at least a passing thought to the pounding those physical infrastructure layers are likely to take when we ignore proven IT development methodologies.

2021: Magic 8 Ball, Broken.

Looking at the predictions I made for 2021 at the end of last year, my technical foresight seemed to be on target. However, I did not foresee a violent insurrection at my country’s Capitol, a divided electorate (and more than a few politicians!) refusing to accept the results of an election, and an ever-growing disdain for painfully-obtained expertise, single sources of truth, and critical thinking skills.

But a new administration in Washington DC has also brought a glimmer of hope for our rapidly-changing world, especially with new government programs to address climate change and a deteriorating infrastructure – both physical and digital! – and that means incredible opportunities for all of us in the world of IT. So here are my predictions for 2022 and beyond:

Decision By Digital Accelerates

The uptick in IT organizations and companies that want to accelerate their digital decision-making is utterly amazing, and it doesn’t look like it’ll slow down anytime soon. For example, I recently sat in on a briefing from Intel on how they are enabling the use of AI and ML throughout their organization, the problems they’re facing, and how they’re judging which business use cases are proper candidates for automation.

Electrification 2.0

The extreme weather events of 2021 have made it obvious that our civilization needs to combat climate change now by shifting away from fossil fuels and towards alternative energy. The good news is that the goals of COP26 appear to be reinforced by several elements of the Biden administration’s infrastructure plans. For example, $7.5B has been allocated just to improve the electrical charging grid the USA will need to support electric vehicles (EVs) – a topic I’ll be expounding on more in 2022 as a real-world use case for IT projects during my sessions at upcoming conferences.

More Often Than Not, It’s Still a People Problem

As my friend and colleague Liron Amitzi and I have discovered during conversations with our guests in this year’s podcast episodes for Beyond Tech Skills, when you finally delve into what technical problems are slowing down an IT project team, it’s almost always a people problem. IT organizations are struggling to deal with diversity, equity, and inclusion (DEI) issues, to make new hybrid workplaces work for everyone, and most of all, to retain mission-critical talent. With COVID-19 hopefully receding into merely endemic status in 2022, IT teams will continue to be hard-pressed to provide business solutions no matter where an employee or contractor lives or what time zone she works in.

The Great Reshuffle

Whether you call it the Great Resignation or the Great Reshuffle, millions of people are simply deciding to throw in the towel on their jobs and call it quits. Conversely, many folks are taking advantage of a premium market for their prized technical skills, so 2021 has been a hectic year for employers, employees, and gig workers. 2022 may finally see experienced old-timers leaving the workforce permanently in droves to take early retirement, and that means IT organizations will need to focus on knowledge transfer and perhaps offer consulting opportunities so those veterans can mentor their younger counterparts as the transition continues.

A New New Normal

Finally, some key reasons for the Great Reshuffle have mercifully come to the forefront. Many professionals are stressed out beyond their capacity to cope, and many organizations have finally recognized that their employees’ mental health has been ignored for much too long. Thankfully, IT has stepped up creatively, including smartphone applications that let us reach out to a professional advisor to help us cope with those stresses. The phrase It’s OK Not To Be OK is evidence we’ve acknowledged at last that a person’s mental health is just as important to their well-being as their physical health, and I’ll be podcasting, writing, and presenting about that a lot in the coming year.

Time For a (Re)Branding …

I first went public with my JimTheWhyGuy brand in early 2018, just after getting inspired during a user conference I was attending in San Antonio, TX. The realization struck me like Thor’s thunderbolt: no one was really following me on Twitter because my handle was so hard to locate. You’ve seen my last name: It has hardly any vowels, and when someone asks me, “How do you pronounce that?” my reply is typically, “With extreme difficulty.”

Even worse, I was squandering my followers’ interest on several platforms, especially LinkedIn, and it was time for a change. (You can read more about that thunderbolt and its implications here.)

Things have changed a lot since then, and I’m not just talking about the onset of COVID-19 as well as my decision mid-pandemic to take a sabbatical from working full-time. I’ve finally achieved a personal goal of 10,000 connections across the globe on LinkedIn; I’ve been appointed to serve on the board of ODTUG (and will hopefully get elected to a second term); and I’ve even started a podcast with my friend and colleague Liron Amitzi.

So I’ve concluded it’s the perfect time to finally rebrand myself officially as JimTheWhyGuy. I’ve built this new portal as a one-stop-shop for my long-time followers to locate my most recent presentations, check out my observations via this blog, and even catch a laugh or ten from some of my recent videos. Take a look around, tell me if you like what you see, and don’t hesitate to recommend me as a speaker / presenter / humorist / futurist to your friends, your colleagues, and the organizations you frequent.

IoT: A Brave, Not So New Frontier

The Internet of Things (IoT) is at the center of the transformation of manufacturing, public utilities, transportation, logistics, and Smart Home technology. It’s something I foresee as key to the New Electrification wave that’s certainly coming to the USA as we transition away from fossil fuels towards technologies like Green Hydrogen, new (and far safer!) nuclear power, and improved batteries for storing alternative energy from solar panels and wind turbines.

One of my more popular sessions in 2021 explored how Oracle Database technology – specifically, the Fast Ingest and Fast Lookup capabilities of Oracle 19c and beyond – can accommodate the potentially enormous throughput that collecting, storing, and retrieving IoT data will require.
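For those curious what those features look like in practice, here’s a minimal sketch of both. The table name and columns are hypothetical (not from my session), and Fast Lookup assumes the instance’s MEMOPTIMIZE_POOL_SIZE initialization parameter has already been sized appropriately:

```sql
-- Hypothetical IoT sensor table enabled for both features
CREATE TABLE sensor_readings (
  sensor_id    NUMBER PRIMARY KEY,   -- primary key is required for Fast Lookup
  reading      NUMBER,
  captured_at  TIMESTAMP
)
SEGMENT CREATION IMMEDIATE           -- also required for MEMOPTIMIZE FOR READ
MEMOPTIMIZE FOR READ                 -- Fast Lookup: key-based reads via in-memory hash index
MEMOPTIMIZE FOR WRITE;               -- Fast Ingest: inserts buffered in the large pool

-- Fast Ingest rows are deferred via the hint: buffered first, then
-- drained to disk by a background process rather than written immediately
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO sensor_readings
VALUES (101, 98.6, SYSTIMESTAMP);
COMMIT;
```

The trade-off to keep in mind with Fast Ingest is durability: buffered rows aren’t persisted (or even visible to queries) until that background drain completes, which is precisely why it suits high-volume, loss-tolerant IoT telemetry rather than transactional data.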

If you’d like to check out my presentation in slide show format, feel free to grab it here from slideshare.net, and be sure to take a deeper look at the code examples in my two-part article series on those features at ODTUG’s TechCeleration portal here. Prepare to have your mind expanded!

At Long Last, Our Podcast Is Launched!

Six months ago, it was just a simple idea based on a few brief conversations. Today, it’s finally a reality: the Beyond Tech Skills podcast.

Some background is in order. My good friend, colleague, and fellow Oracle ACE Director Liron Amitzi and I had been talking in person over an adult beverage or three (pre-COVID, of course!) and then texting and chatting for the past few years about the state of the IT industry. We found that even with our cultural differences – he’s originally from Israel but now lives in Vancouver, BC, and he’s 20 years younger than me – we were remarkably like-minded about the fantastic opportunities IT has to offer to so many diverse folks around the world.

But we also saw there were enormous gaps:

  • Folks often focused so much on the technology itself or their coding skills that they ignored the other two-thirds of what makes a truly great professional: the soft skills they needed to be successful, including the importance of gaining detailed business knowledge as well as how to communicate clearly and work together as a team.
  • IT organizations were spending incredible amounts of time trying to find qualified candidates for positions at all levels, mainly because they had no idea how to interview people properly. Even worse, many great candidates didn’t get connected with great companies simply because they didn’t know how to handle the interview process.
  • Finally, as we looked back over our combined 60 years of experience, we were disturbed by the ongoing lack of diversity in many IT organizations. Both of us remember a time just a few decades back before the “white bro developer” culture existed, and we remember the advantage that different backgrounds and viewpoints yielded across the entire software development lifecycle.

We decided it was time to take action. We’re leading off our podcast with a series of episodes on the interview process itself: how to find the right candidates for your IT organization – the ones with the right fit and finish to match your teams’ goals.

But we’re not stopping there! We’ve got a great series of interviews that will tackle all of the issues we’re most concerned about, including DEI (diversity, equity, and inclusion) and how we can all make a difference. We’ll be talking to many of our colleagues across multiple technologies and industries over the next several months.

We’re planning to publish a new episode every second Wednesday starting on 11 February, but for now, please check out our introductory episode – just point your favorite podcast app at Beyond Tech Skills – and you can always stay up to date on our entire list of episodes at BeyondTechSkills.com.