SFD24: Studies In Autonomy & ES[G]

I’ve finally had a chance to catch my breath from Gestalt IT’s Storage Field Day #24 (SFD24) last week in Santa Clara, CA. It was a great chance to catch up with Stephen Foskett’s team at GestaltIT and with many of my fellow delegates from past Tech Field Day events. Best of all, we got to hear from four key vendors who focus specifically on the most often ignored aspect of modern computing environments: where we keep our organization’s data to ensure its maximum availability, accessibility, and security. From my perspective, two major themes dominated our vendors’ messages: the expansion of autonomous resources to monitor and manage complex storage resources, and the implicit benefits of SSDs for meeting IT organizations’ environmental, social, and governance (ESG) goals.

Dell: The Big Dog In the Room

So the big dog at SFD24 – Dell – talked to us about three of their key offerings – PowerMax, PowerStore, and PowerFlex – and the innovations they’re introducing in upcoming releases.

PowerStore offered up some new machine-learning-assisted volume configuration tools that autonomously anticipate typical storage requirements, hopefully helping overwhelmed storage admins with their everyday duties; meanwhile, their PowerFlex product is aimed at providing some pretty serious cloud-based enterprise storage for clients like AWS (who also presented at SFD24 – go figure! – but more on that in a bit).

What I found most interesting was Dell’s relatively new CloudIQ offering, part of their PowerMax line. It uses autonomous anomaly detection to warn against potential ransomware attacks and other security perturbations by identifying asymmetric encryption attempts within file systems – a typical sign that something is amiss within ever-more-complex storage arrays. CloudIQ also provides health risk reports that classify business continuity problems so that an IT organization’s harried storage admin – or, these days, whichever DBA or DevOps developer has been relegated to that role, as the number of qualified and experienced admins continues to decline – can quickly classify any threats and act on them with appropriate force.
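Dell didn’t walk us through CloudIQ’s detection internals, but one common way to spot encryption-in-progress – and a reasonable guess at the general technique, not a description of Dell’s actual implementation – is byte entropy: encrypted or well-compressed data looks nearly random, so a sudden jump in a file’s entropy is a classic ransomware tell. A minimal sketch (the 7.5-bits-per-byte threshold is my own arbitrary choice):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; 8.0 means perfectly random."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag buffers whose entropy approaches that of random data."""
    return shannon_entropy(data) >= threshold

# Ordinary text sits far below the threshold; random bytes sit near 8 bits/byte.
print(looks_encrypted(b"hello world, just ordinary log text" * 100))  # False
print(looks_encrypted(os.urandom(4096)))  # True (with overwhelming probability)
```

A production detector would of course track entropy deltas per file over time rather than a single absolute threshold, but the principle is the same.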

AWS: Then Reality Set In.

From my perspective, our presenters from AWS focused less on product offerings and more on the current reality in so many IT organizations: enterprise storage customers are no longer just DBAs and DevOps teams; rather, it’s the actual consumers of the data – especially data scientists, data engineers, and business analysts – who are driving their organizations’ crucial needs.

That means a lot of time and energy is consumed by having to move data quickly and reliably, often between different public clouds like Oracle Cloud Infrastructure (OCI), Microsoft Azure, and of course AWS. That effort comprises moving huge volumes of data in both file and block format – perhaps even complete RDBMS instances’ data! – to take advantage of particular cloud offerings. It’s not entirely unusual these days to see an Oracle RAC database running on AWS storage, but just as likely to see it placed within a Microsoft Azure stack.

What really caught my attention was their Storage Lens offering. It offers methods to observe and analyze exactly how storage is being used through about 30 storage-specific metrics, of which at least a dozen of the most pertinent ones cost nothing to access. These services are already available autonomously, and if you don’t like the way the data is presented, you can download the metrics and process them within your own chosen infrastructure. Having played the role of part-time storage administrator in my past life, I know how frustrating it can be to figure out exactly who is using what storage and how they’re using it – JSON documents? PDFs? movies and images? – especially while doing double-duty as a part-time DBA, so anything that demystifies those questions and the costs they incur is welcome.
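That “process them yourself” path is refreshingly low-tech: Storage Lens can export its metrics as CSV (or Parquet) files, and from there the analysis is whatever you want it to be. A hedged sketch of the idea – the column names here are illustrative stand-ins, not the exact export schema:

```python
import csv
import io
from collections import defaultdict

def storage_by_bucket(csv_text: str) -> dict:
    """Sum a bytes metric per bucket from a Storage Lens-style CSV export.

    Column names (bucket_name, metric_name, metric_value) are assumed
    for illustration; check your actual export's header row.
    """
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["metric_name"] == "StorageBytes":
            totals[row["bucket_name"]] += int(row["metric_value"])
    return dict(totals)

sample = """bucket_name,metric_name,metric_value
analytics-raw,StorageBytes,9000000
analytics-raw,ObjectCount,1200
media-archive,StorageBytes,42000000
"""
print(storage_by_bucket(sample))
```

Swap in your BI tool of choice, of course; the point is that the raw numbers are yours to slice however your chargeback model demands.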

Pure Storage: SSDs As Paths to ES(G)

I love it when salespeople make gutsy moves, and the team from Pure Storage did just that: They kicked off their presentations by discussing how their Pure1 SSDs and storage arrays accomplished ESG goals. (While I can’t extrapolate that SSDs will directly lead to better corporate governance like hiring more diverse workforces and ensuring pay equity, I’ll cede them the first two letters.) What really impressed me is that Pure Storage focuses on an “evergreen” manufacturing strategy to produce their SSDs and arrays – essentially, every new SSD they build will fit in current arrays, and vice versa – which definitely overcomes the need to constantly install new storage racks, controllers, and storage devices in data centers. Pure Storage’s research claims that their product line reduces power usage by as much as 80% over other manufacturers’ SSD arrays and even more when compared to HDDs.

And though they spent less time on it, the theme of autonomous and/or assisted storage management came to the fore when they talked a bit about their Purity upgrade strategy for their Pure1 offerings. Again, overburdened storage administrators can potentially benefit from self-service, guided upgrades of SSD storage and arrays without worrying about the complexities of the upgrade process itself.

{Full disclosure: I’m currently engaged with a separate sales team at Pure Storage right now to promote some of their other storage offerings, but I’m playing the role of a crusty old school DBA in our discussions and taking nothing at face value.}

Solidigm: Wait, You Built an SSD How Big?!?

Closing out our final day at SFD24, the team from Solidigm presented on their SSD solutions aimed at ever-larger data storage requirements, as well as the need to access large datasets at maximum speed and efficiency. Though they spent a little too much time telling us about use cases they’d encountered, their story-telling was solid and even a bit retro. (Let’s just say I never expected to hear an allusion to that venerable prophet of anime, Speed Racer, which I grew up on as a kid in the before times.)

Solidigm also announced their latest SSD would clock in at 64TB using their quad-level cell (QLC) technology and talked about the next level of density – penta-level cell (PLC) SSDs – which they just released a few months ago.

As someone who remembers hearing at a conference just ten years ago that one day soon, HDDs would only be found in a museum and we’d be using SSDs exclusively, these new storage capacities and densities are mind-blowing. HDDs are still here, of course, but they’re not adept at filling another niche we discussed with the Solidigm folks: retasking “old” SSDs for a new life. Even an older SLC, MLC, or TLC device isn’t completely worn out, and it could be useful for storing a few TB of valuable data in an edge computing use case – say, mated with a Raspberry Pi or Arduino board to store more data closer to the edge for immediate analytics. That reusability is unique to SSDs, and it bodes well for the greener future of computing.

Wrapping Up: What Comes Next?

Since in my past life I’d worked for Hitachi Data Systems for two years and still count many friends and colleagues from that venture, I intensely enjoyed SFD24. It was exciting to see just how much SSD technology has expanded and improved in the 10 years since I left HDS, but equally interesting that “rotating rust” (and yes, even venerable magnetic tape!) still has a place in most enterprise storage environments. The next five years are likely to prove even more fascinating as SSD capacity and resilience continue apace, especially as ESG concerns continue to factor into IT organizations’ future plans.

Big Guns Attract Attention, But Not Always Interest

It always surprises me when the largest IT vendors offer up less-than-compelling stories about their latest product offerings – you’d think they’d have the most creative storytellers on staff, after all! – but more often than not, I’ve found myself drifting a bit while listening to their sessions and waiting for the prized “Oh – one more thing” moment. In my previous blog post I discussed four much smaller vendors whose stories at Tech Field Day #25 struck me as driving real value in the current multi-cloud marketplace, but it would be unfair to the “big guns” that also attended our conference to ignore what they brought to that space.

VMWare: Migrating Apps to the Cloud Ain’t Beanbag

I’ve been using VMWare technology for at least 15 years. I used their robust VMs to build out one of the first working models of Oracle RAC database technology while teaching courses for Oracle University to over 2,000 students from 2005 to 2009.

VMWare’s vRealize family of products offers DevOps teams, SREs, and PMs the ability to identify applications that an IT organization wants to move from a typical on-premises / in-house environment and migrate them to a multi-cloud environment. They presented several examples of how their tools facilitate the various phases of complex migration strategies, including identifying applications that might be hiding “under the covers” but still need migration to the cloud, based on network activity and other metrics.

The final presenters from VMWare had the unfortunate task of talking about a stultifyingly boring topic – making intelligent sense of application activity logs – and actually made it (slightly) less boring. Their vRealize Log Insight Cloud product offers DevOps teams some interesting tools to take a closer look at how applications are performing in real time based on application logs and help focus DevOps activity towards meaningful interventions – say, moving an app to different cloud network endpoints to shorten user response times.

MinIO: SDS Is Where It’s At, Because Cloud Demands It

OK, so VMWare had the virtualized cloud side covered pretty well, especially from the network, CPU and memory perspective. MinIO discussed their software-defined storage (SDS) that’s currently compatible with all of the Big 3 public cloud providers, with some key customers using it for multi-PB-sized deployments.

The bottom line from the MinIO folks? We’re absolutely headed towards a multi-cloud future, which means IT organizations’ storage needs are best handled through software-defined storage instead of traditional internally housed spinning-rust / SSD combinations. The big advantage their approach touts is that it becomes possible to keep storage demands completely separate from demand for compute resources.

Intel: Optane Increases Everything’s Octane

Intel talked about their latest innovations regarding their Optane family of products, which includes CPU, persistent memory (PMEM), and SSD storage technologies.

They also talked at length about their Compute Express Link (CXL) technology that allows their Intel-based CPU, memory, and storage devices to more effectively share resources for data exchange to overcome I/O bound workloads while also lowering the overall complexity of the software stack.

And finally it was nice to see some mention of the technology near and dear to my heart: Oracle Databases, Machine Learning, and Analytics! The X8M release of the Oracle Exadata Database Machine actually incorporates Intel’s PMEM technology into its storage cells and leverages that as an extended cache for columnar storage for database operations; Oracle DBs based on Exadata – including Oracle’s Autonomous Database available in its public cloud – thus absolutely scream performance-wise as compared to non-columnar databases.

Conclusions

In today’s tech news, Broadcom announced their proposed acquisition of VMWare by mid-week (umm … whaa?). In recent discussions with my delegate colleagues, even Oracle Corporation has been bounced around as a suggestion for a buyout parent … but quite honestly, it seems to me an Intel-VMWare pairing might make a lot more sense. The robust tools that VMWare has built for cloud migration would team nicely with Intel’s continual improvements to the CPUs, memory, and storage that underpin much of the hardware stack hosting the “Big 3” public cloud infrastructure.

Oh, and one other thing: These “big guns” should probably consider taking our GestaltIT hosts up on pre-reviewing their presentations before we delegates actually get our first look. There are serious benefits to professional review of their sessions before we ever see them, especially if it lets us focus on the benefits of each offering instead of having to suffer through a dozen or so slides of shameless self-promotion marketing pitches.

Small & Fierce Tech Upstarts Dominate

I enjoyed Cloud Field Day 13 (CFD13) in Santa Clara, CA back in February 2022 so much that I jumped at the chance to team up with a whole new set of colleagues at yet another Tech Field Day event in early April – Tech Field Day 25 – to review some amazing technology from seven different vendors. I have to admit that though the “big guns” at the event (more in my next blog post) brought their A-teams to present on their newest offerings, it seemed to me that the smaller vendors were more than willing to try punching above their weight and present their stories with a lot more verve and nuance.

Nasuni: Not Our Parent’s File Storage

As a past technology advocate for Hitachi Data Systems and as an Oracle DBA consultant, I’ve always been keenly interested in file storage technology. How much it has progressed in the last decade was evident when Nasuni showed off their Nasuni File Services platform and the capabilities it has built to handle the demands of today’s cloud computing environments, with its UniFS file system underpinning storage for AWS, GCP, and Azure.

Ten years ago, we worried about losing critical files or chunks of databases; today’s key challenges include rapid restoration of files that’ve been compromised due to ransomware attacks. One especially innovative idea that they talked about is their Nasuni Labs portal. It’s a free, public, GitHub-based repository of open-source projects that they and their customer base have built over the past few years. The idea is to speed adoption and foster collaboration among storage solution professionals solving specific problems within their environments – an excellent way to show support for your client base.

Apica: Let’s Break This App, Oh, a Few Bazillion Times

I started out my IT career as an application developer, and I still have a few broken keyboards to prove my frustration whenever I missed testing a key (or not-so-key!) feature with adequate workloads and complexity. So I was intrigued when Apica presented their Active Monitoring Platform, which offers full lifecycle testing focused on a user’s journey through an application rather than just simulating random tests or recorded keystrokes. Apica’s toolset also makes it possible to test applications at scale by quickly building reusable, scripted test cases that ramp up thousands of simultaneous users running millions of test points – including purposely introducing invalid data! – against a pre-production application. Apica also offers application monitoring that integrates with popular support tools like Grafana and Splunk.
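Apica’s platform is proprietary, but the ramp-up pattern they described – a reusable scripted journey, many concurrent simulated users, deliberately invalid inputs mixed in – can be sketched in a few lines. The `submit_order` stub here is a stand-in I invented for a real pre-production endpoint:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def submit_order(payload: dict) -> str:
    """Stub for a pre-production endpoint: reject malformed input."""
    if not isinstance(payload.get("qty"), int) or payload["qty"] <= 0:
        return "rejected"
    return "ok"

def user_journey(user_id: int) -> str:
    """One scripted test case; every tenth user purposely sends invalid data."""
    payload = {"qty": -1 if user_id % 10 == 0 else 3}
    return submit_order(payload)

def run_ramp(n_users: int = 1000, workers: int = 50) -> Counter:
    """Fan the scripted journey out across many concurrent simulated users."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return Counter(pool.map(user_journey, range(n_users)))

print(run_ramp())  # tallies of "ok" vs. "rejected" outcomes across 1000 users
```

A real harness would ramp `n_users` up in stages and record latencies per journey step, but the shape of the test is the same.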

Keysight: So … How Many Users Can Our Network Really Handle?

Even with reliable cloud storage and the capability to hammer an application with millions of tests, one key component of a reliable user experience is still lacking: how well the actual network itself can handle extreme user demands. Keysight – an offspring of the venerable Hewlett-Packard, a company well-known for its passion to innovate – covered that need with a demo of their new CyPerf product. In today’s world of distributed zero trust networks composed of complex virtualized components – VPNs, VCNs, edge computing devices, and of course, the switches hooked to my Oracle database servers! – CyPerf can simulate actual network traffic across environments so administrators and testers can evaluate just how well current, expected workload levels will perform, as well as the impact of upscaling that traffic as demand increases seasonally.

Fortinet: Detecting & Deflecting Virtual Knives

I’d first encountered the Fortinet team at CFD13 and was blown away by their offerings at that event. I’m the first person to admit that I’m no internet security expert, but I do keep a close eye on trends in security penetration hijinks and the techniques that bad actors typically use. Fortinet’s team showed off their FortiWeb product with live demos of how three increasingly sophisticated attack vectors – bot-based user simulation, data scraping, and missing input sanitization – were detected, analyzed, and contained within seconds by Fortinet’s dual layers of machine learning, all with virtually no human intervention.
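That third vector – missing input sanitization – is worth a concrete illustration. The classic mistake is interpolating user input directly into a query; the durable fix isn’t clever filtering but parameterized queries. A minimal sqlite3 sketch of the underlying vulnerability (not Fortinet’s tooling, just what it guards against):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

malicious = "x' OR '1'='1"

# Vulnerable: string interpolation lets the injected predicate match every row.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()
print(unsafe)  # [('alice',), ('bob',)] -- injection succeeded

# Safe: the driver binds the value, so the payload is just an odd-looking name.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe)  # [] -- no user is literally named "x' OR '1'='1"
```

Tools like FortiWeb exist precisely because codebases keep shipping the first pattern instead of the second.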

It was the most interesting presentation at the event – just the right bit of marketing with plenty of pertinent demonstration of how their security toolset just worked. My only infinitesimal critique: Their technical expert playing the security admin role could be a bit more enthusiastic when she blocked the final rather sophisticated attempt, perhaps just smiling wickedly and saying sweetly, “Well, this is what happens when you bring a virtual knife to a security gunfight.” 😉

Elvis Is Everywhere. So Is Kubernetes.

Elvis Is Everywhere.

As I attended my first-ever Tech Field Day event – Cloud Field Day 13 (CFD13) – in Santa Clara, CA last week, the gritty cult music video Elvis Is Everywhere from the late 1980s kept running through my head. (In this grainy clip, Mojo Nixon and Skid Roper contend that the real secret of life, the universe and everything is that we all have a little Elvis Presley inside us and we’re all moving towards a perfect state of Elvis-ness because of Elvis-lution.)

And if you replace “Elvis” with “Kubernetes,” then you’ve got a glimmer of how I see the current state of DevOps and advanced computing today after attending CFD13: Kubernetes is everywhere, and everybody is using it – even when it may not necessarily make perfect sense to do so. It was amazingly enlightening to sit down with 11 other professionals from across the globe in a hybrid three-day event as we heard from eight different vendors – some huge, some a bit smaller – about the challenges of managing Kubernetes (usually abbreviated to K8s, if you’ve been trapped under a virtual rock like me and didn’t already know that) in hybrid cloud, mono-cloud, and on-premises computing environments.

BTC Is Ubiquitous. (No, Not That BTC)

For those of you who know my background already, you probably realize why I was like a fish out of water for the first few hours of CFD13. Every vendor presenting their solutions focused on the Big Three Clouds – Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP) – or BTC, as I term them. And no, we didn’t talk about Bitcoin ($BTC) at all, unless you consider our post-event podcast on the evils of cryptocurrency and its relation to Web 3.0 as relevant.

It wasn’t until the last day that our final vendor presentation from Fortinet briefly acknowledged that yeah, Oracle Cloud Infrastructure actually existed and that their tools could help manage K8s apps in that public cloud environment just as well as in any of the BTCs. Yes, Oracle does indeed have a public cloud, it’s pretty damn robust, and in many cases it’s significantly cheaper to operate than the BTCs, especially when there are considerable data egress volumes – the oft-unspoken-of slayer of DevOps budgets when a developer or QA tester accidentally issues a query that quietly pulls several decades of your company’s sales history across the network. (Stepping off my mandatory ACE Director soapbox for now.)

Like Your New Summer Intern, K8s Demands Care & Feeding

How much care and feeding? Generally, that depends. The vendors that presented their wares to us aligned their approaches to effective K8s management across three magisteria: storage, networking and virtualization, and infrastructure and governance.

Storage. NetApp talked about their ONTAP offering that provides cloud-based storage for K8s applications, Pure discussed their PureFusion product that provides storage-as-code, and KastenIO showed off their K10 offering for policy-based data management.

Networking + Virtualization. VMWare talked up their Tanzu application platform as their centralized solution for handling all aspects of K8s security, networking, and connectivity, and Metallic IO demonstrated their Data Management as a Service (DMaaS) offering as a solution for monitoring the security of K8s environments at multiple levels.

Again With the Whiteboard

Infrastructure & Governance. I present frequently at Oracle User Group events every few weeks, and I’ve found that relevant use cases resonate the most with my audiences. The folks from RackN impressed me the most of any vendor as they highlighted features of the latest release of Digital Rebar. Their demo focused on a not-uncommon conflict: the grizzled insider who’d already built their K8s infrastructure versus the newcomer CTO with an attitude of “I know I’m the new guy, but I’ve got this great vision for our computing infrastructure you are gonna love!” with his prized whiteboard always at the ready. Their interaction reminded me of an aging Captain Kirk’s complaint in The Simpsons’ Star Trek XII: So Very Tired parody: “Again with the whiteboard.” Check out their video on the CFD YouTube channel for the play-by-play of that use case.

Another crucial aspect of K8s management is testing exactly how K8s environments can handle expected workloads, and the folks at StormForge showed us how their Optimize Pro and Optimize Live toolsets help predict expected performance vs. live application performance. Finally, Fortinet demoed in real time how their offerings fared against some real-world security incursions against their (I am not making this up) Damn Vulnerable Web Application.

But Can the Plumbing Take It?

What really surprised me was how little discussion there was about the underlying infrastructure – the databases that power all these K8s clusters, the physical network components and firmware that facilitate flexible virtual networking, and the SSDs, storage arrays, and storage networks that provide retention for massive data sources. During my four decades in the trenches as a DBA / application developer, I’ve been keenly aware of all that infrastructure and how my SQL code, physical and logical data models, and even storage I/O rates affected my applications’ responsiveness.

Our vendors’ presentations seemed to focus completely on enabling K8s DevOps / MLOps activities with scant regard for those “old-school” concerns. Look, I get it: K8s is completely OSS, and our CFD13 presenters are providing some desperately-needed governance tools. But there’s still a part of me wondering whether this K8s craze is too focused on enabling massive scale-out and extremely rapid development cycles when it might pay a bit more attention to what seasoned IT professionals already know works: writing efficient code, designing well-formed data models, and giving at least a passing thought to the pounding those physical infrastructure layers are likely to take when we ignore proven IT development methodologies.

2021: Magic 8 Ball, Broken.

Looking at the predictions I made for 2021 at the end of last year, my technical foresight seemed to be on target. However, I did not foresee a violent insurrection at my country’s Capitol, a divided electorate (and more than a few politicians!) refusing to accept the results of an election, and an ever-growing disdain for painfully-obtained expertise, single sources of truth, and critical thinking skills.

But a new administration in Washington DC has also brought a glimmer of hope for our rapidly-changing world, especially with new government programs to address climate change and a deteriorating infrastructure – both physical and digital! – and that means incredible opportunities for all of us in the world of IT. So here are my predictions for 2022 and beyond:

Decision By Digital Accelerates

The uptick in IT organizations and companies that want to accelerate their digital decision-making is utterly amazing, and it doesn’t look like it’ll slow down anytime soon. For example, I recently sat in on a briefing from Intel on how they are enabling the use of AI and ML throughout their organization, the problems they’re facing, and how they’re judging which business use cases are proper candidates for automation.

Electrification 2.0

The extreme weather events of 2021 have made it obvious that our civilization needs to combat climate change now, and it’s become obvious that we need to shift away from fossil fuels and towards alternative energy. The good news is that the goals of COP26 appear to be reinforced by several elements of the Biden administration’s infrastructure plans. For example, $7.5B has been allocated just to improve the electrical charging grid we’ll need in the USA to support electric vehicles (EVs) – a topic I’ll be expounding more on in 2022 as a real-world use case for IT projects during my sessions at upcoming conferences.

More Often Than Not, It’s Still a People Problem

As my friend and colleague Liron Amitzi and I have discovered during conversations with our guests in this year’s podcast episodes for Beyond Tech Skills, when you finally delve into what technical problems are slowing down an IT project team, it’s almost always a people problem. IT organizations are struggling to deal with diversity, equity, and inclusion (DEI) issues, to make new hybrid workplaces work for everyone, and, most of all, to retain mission-critical talent. With COVID-19 hopefully receding into merely endemic status in 2022, IT teams will continue to be hard-pressed to provide business solutions no matter where an employee or contractor lives or works or what time zone she works in.

The Great Reshuffle

Whether you call it the Great Resignation or the Great Reshuffle, there are millions of people simply deciding to throw in the towel on their jobs and call it quits. Conversely, many folks are simply taking advantage of a premium market for their prized technical skills, so 2021 has been a hectic year for employers, employees, and gig workers. 2022 may finally see experienced old-timers leaving the workforce permanently to take early retirement in droves, and that means IT organizations will need to focus on knowledge transfer and perhaps offer consulting opportunities for mentoring their younger counterparts as the transition continues.

A New New Normal

Finally, some key reasons for the Great Reshuffle have mercifully come to the forefront. Many professionals are stressed out beyond their capacity to cope, and many organizations have finally recognized that their employees’ mental health has been ignored for much too long. Thankfully, IT has stepped up creatively, including smartphone applications that offer us the ability to reach out to a professional advisor to help us cope with those stresses. The phrase It’s OK Not To Be OK is evidence that we’ve acknowledged at last that a person’s mental health is just as important to their well-being as their physical health, and I’ll be podcasting, writing, and presenting about that a lot in the coming year.

Time For a (Re)Branding …

I first went public with my JimTheWhyGuy brand in early 2018, just after getting inspired during a user conference I was attending in San Antonio, TX. Like Thor’s thunderbolt, I realized no one was really following me on Twitter because my handle was so hard to locate. You’ve seen my last name: It has hardly any vowels, and when someone asks me, “How do you pronounce that?” my reply is typically, “With extreme difficulty.”

Even worse, I was squandering my followers’ interest on several platforms, especially LinkedIn, and it was time for a change. (You can read more about that thunderbolt and its implications here.)

Things have changed a lot since then, and I’m not just talking about the onset of COVID-19 or my decision mid-pandemic to take a sabbatical from working full-time. I’ve finally achieved a personal goal of 10,000 connections across the globe on LinkedIn; I’ve been appointed to serve on the board of ODTUG (and will hopefully get elected to a second term); and I’ve even started a podcast with my friend and colleague Liron Amitzi.

So I’ve concluded it’s the perfect time to finally rebrand myself officially as JimTheWhyGuy. I’ve built this new portal as a one-stop-shop for my long-time followers to locate my most recent presentations, check out my observations via this blog, and even catch a laugh or ten from some of my recent videos. Take a look around, tell me if you like what you see, and don’t hesitate to recommend me as a speaker / presenter / humorist / futurist to your friends, your colleagues, and the organizations you frequent.

IoT: A Brave, Not So New Frontier

The Internet of Things (IoT) is at the center of the transformation of manufacturing, public utilities, transportation, logistics, and Smart Home technology. It’s something I foresee as key to the New Electrification wave that’s certainly coming to the USA as we transition away from fossil fuels towards technology like Green Hydrogen, new (and incredibly safer!) nuclear technology, and improved batteries for storage of alternative energy resources from solar panels and wind turbines.

One of my more popular sessions in 2021 has explored how Oracle Database technology – specifically, the Fast Ingest and Fast Lookup capabilities of Oracle 19c and beyond – provides valuable features for accommodating the potentially enormous throughput that collecting, storing, and retrieving IoT data will require.

If you’d like to check out my presentation in slide show format, feel free to grab it here from slideshare.net, and be sure to take a deeper look at the code examples in my two-part article series on those features at ODTUG’s TechCeleration portal here. Prepare to have your mind expanded!

At Long Last, Our Podcast Is Launched!

Six months ago, it was just a simple idea based on a few brief conversations. Today, it’s finally a reality: the Beyond Tech Skills podcast.

Some background is in order. My good friend, colleague, and fellow Oracle ACE Director Liron Amitzi and I had been talking in person over an adult beverage or three (pre-COVID, of course!) and then texting and chatting for the past few years about the state of the IT industry. We found that even with our cultural differences – he’s originally from Israel but now lives in Vancouver, BC, and 20 years younger than me – we were remarkably like-minded about the fantastic opportunities IT has to offer to so many diverse folks around the world.

But we also saw there were enormous gaps:

  • Folks often focused so much on the technology itself or their coding skills that they ignored the other two-thirds of what makes a truly great professional: the soft skills they needed to be successful, including the importance of gaining detailed business knowledge as well as how to communicate clearly and work together as a team.
  • IT organizations were spending incredible amounts of time trying to find qualified candidates for positions at all levels, mainly because they had no idea how to interview people properly. Even worse, many great candidates didn’t get connected with great companies simply because they didn’t know how to handle the interview process.
  • Finally, as we looked back over our combined 60 years of experience, we were disturbed by the ongoing lack of diversity in many IT organizations. Both of us remember a time just a few decades back when the “white bro developer” culture didn’t yet exist, and we remember the advantage that different backgrounds and viewpoints yielded across the entire software development lifecycle.

We decided it was time to take action. We’re leading off our podcast with a series of episodes on the interview process itself: how to find the right candidates for your IT organization – the ones with the right fit and finish to match your teams’ goals.

But we’re not stopping there! We’ve got a great series of interviews that will tackle all of the issues we’re most concerned about, including DEI (diversity, equity, and inclusion) and how we can all make a difference. We’ll be talking to many of our colleagues across multiple technologies and industries over the next several months.

We’re planning to publish a new episode every second Wednesday starting on 11 February, but for now, please check out our introductory episode – just point your favorite podcast app at Beyond Tech Skills – and you can always stay up to date on our entire list of episodes at BeyondTechSkills.com.

2020: Meteor Averted. Bring on 2021!

In a year that started with only the third impeachment of a sitting US President in history, a global pandemic that has killed hundreds of thousands of people, major unrest and justifiable demands for social justice and an end to police violence in so many of our towns and cities, and an unprecedented general election campaign that saw incredible voter turnout and post-election turmoil across my fine nation, I found myself drawing dark comfort from one of the most appropriate bumper stickers I’ve seen this year: Giant Meteor 2020. Just End It Already.

So what will 2021 bring? Here are my viewpoints on technology, human civilization, and hope for an amazing year and decade.

Everywhere, the Internet of Things (IoT). IoT shows no signs of stopping its expansion into our daily lives, and that’s not a bad thing, either. Wearable devices, real-time contact tracing, and even cybernetic implants have made news in 2020. Remember that a key motivation for IPv6’s vastly expanded address space was to accommodate the trillions of individual networked devices some forecasts expect by 2050, and I’ll be keeping an eye on that trend in this decade.

What’s also fascinating to me is the incredibly low-cost capabilities of IoT, made possible by cheap and reliable Raspberry Pi and Arduino computing nodes, sensors, and hubs. For example, today I’m putting the finishing touches on what I believe is a pretty sophisticated home security system, all built on IoT components. I’ve been able to build that with open-source software and a lot of help from online support communities. It’s not been without frustration, but I’ve learned so much from the experience that I’m ready to move on to more challenging tasks.
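At its heart, my home security project boils down to a simple pattern: sensors publish events, and a central hub routes each event to whatever alert handlers care about it. Here’s a minimal Python sketch of that plumbing – all of the names (`SensorEvent`, `AlertHub`) are my own hypothetical inventions for illustration, not from any particular IoT framework:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Dict, List

# Hypothetical names for illustration -- not from any specific IoT library.

@dataclass
class SensorEvent:
    sensor_id: str   # e.g. "front-door-pir"
    kind: str        # e.g. "motion" or "door_open"
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class AlertHub:
    """Routes incoming sensor events to handlers registered for that event kind."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[SensorEvent], None]]] = {}

    def on(self, kind: str, handler: Callable[[SensorEvent], None]) -> None:
        # Register a handler (e.g. send a text, flash a light) for one event kind.
        self._handlers.setdefault(kind, []).append(handler)

    def publish(self, event: SensorEvent) -> int:
        # Deliver the event to every matching handler; return how many fired.
        handlers = self._handlers.get(event.kind, [])
        for handler in handlers:
            handler(event)
        return len(handlers)

# Wiring it up: record motion alerts the way a real system might text or email.
hub = AlertHub()
alerts: List[str] = []
hub.on("motion", lambda e: alerts.append(f"ALERT: motion at {e.sensor_id}"))

notified = hub.publish(SensorEvent("front-door-pir", "motion"))
```

In a real deployment, the lambda would hand off to a notification service, and the events would arrive from GPIO pins or an MQTT broker rather than being constructed by hand – but the publish-and-route core stays this simple.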

Machine Learning, AI, and Decision By Digital. All these IoT data sources offer our human civilization an incredible set of amazing opportunities, including more efficient agriculture, intelligent electric vehicles, “smart cities,” closed-loop recycling, cleaner air and water, and especially electric power infrastructure. But to separate the wheat from the chaff, we’ll need ever-better machine learning algorithms, artificial intelligence, and digitally-driven decision making to take advantage of these exabytes of information.

Convergence, in Databases and Applications. 2020 clearly demonstrated that just-in-time supply chains have some weaknesses during times of stress from global events like the COVID-19 pandemic as well as local catastrophes like the Australian and Californian wildfires and a horrendous series of tropical storms and hurricanes. That means we’ll need to consider where our critical databases, applications, and infrastructure are located, too, so I’ll be watching trends toward converged solutions like Oracle’s Converged Database strategy and its Application Express (APEX) platform.

Emphasis on Objective Truth. Finally, if this past year has shown us anything – from the USA’s incredibly incompetent response to COVID-19 to the dramatically divergent political extremes over the reliability of the results of the USA’s general election – it’s the value of objective, verifiable, trustworthy facts and information. My country’s information infrastructure has been badly damaged, not just by recent nation-state actors’ hacking attempts, but by a deliberate rejection of the expertise and knowledge provided by trustworthy professionals about everything from basic principles of public health to how elections actually work.

It’s going to take at least a few years to restore that faith and trust in public institutions so that we can move on to tackle the enormous problems facing our human civilization in everything from climate change to clean energy transformation. But I’m optimistic that we can still get there because, as Jeff Bridges says in Starman, “When things are at their very worst, humans are at their very best.”

Bring it on, 2021! We are ready to be at our very best.

My Hat’s In the Ring for ODTUG Board

As you’ve probably heard, I’ve tossed my hat into the proverbial ring for ODTUG Board membership.

This is the first time I’ve ever run for any position of this stature in my life, and hopefully my platform and vision for the future of ODTUG will help you decide if I’m worthy of this honor. (My fellow candidates’ positions are also there, so please do take the time to read through our credentials and proposals.)

One thing is for sure: We have a most excellent roster of ODTUG candidates to choose from! I’m amazed at the talent that we have attracted to ODTUG, and that’s why I firmly believe we’re the premier Oracle User Group in North America bar none. So no matter who you vote for, please do vote! It’s crucial that every member of ODTUG has their voice heard, especially in these challenging times for our families, our careers, and our industry.