After yet another year of data breaches (2,013 in 2019 alone, by one estimate), it is no surprise that researchers widely acknowledge human error as a top source of cyber insecurity. C-suite executives and policy makers called "human error" their top cybersecurity risk in Oracle's Security in the Age of AI report, released last May.
Without change, the scale and frequency of data breaches will continue to increase. To reverse the trend, we must better understand exactly how people contribute to cyber insecurity.
How human error creates problems
In December 2019, Oracle surveyed 300 enterprise-level IT decision makers nationwide to examine concerns around cybersecurity posture and potential solutions. The survey evaluated the state of cybersecurity preparedness, and trust in employing AI to address security gaps, at organizations with 500+ employees. We saw key red flags, from patching protocols to workforce constraints, that create a cycle of vulnerability and error.
The results demonstrate that the human contribution to IT vulnerability remains undeniable. Most respondents (58%) cited human error as a leading reason for IT vulnerabilities. Yet what is often characterized as a "technical" factor, insufficient patching, was judged just as important as human error, with 57% of those surveyed citing it as a leading reason for vulnerability. But insufficient patching, meaning either that patches are not applied frequently enough or that they are not applied consistently enough, is a factor firmly rooted in human behavior.
The problem of human error may be even bigger than these statistics suggest. Digging deeper, the survey found that nearly half (49%) of respondents cited available workforce as a constraint on patching; 62% said they would apply a wider variety of patches more frequently if they had additional personnel to do so. When asked about major contributors to failed patches, 84% of respondents pointed to human error.
Not surprisingly, a separate question found that people were less trusted to keep their organization's IT systems safe (16%) than were machines (31%).
A role for AI
Enterprises will never be able to hire enough humans to fix problems created by human error. Existing methods that companies and agencies employ leave considerable room for error: 25% of all security patches deployed by organizations fail to reach every device. When U.S.-based CEOs cite cybersecurity as the number one external concern for their business in 2019, the simple act of patching (and failing at it) is a huge unresolved risk. Why should humans have to fix this problem alone, when autonomous capabilities are available to improve IT security?
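The patch-coverage gap described above is easy to make concrete. A minimal sketch, using hypothetical device names and data (not any particular vendor's tooling): compare the set of devices that reported a patch as installed against the full device inventory to find machines the rollout missed.

```python
# Illustrative patch-coverage audit. Both sets below are hypothetical
# example data; in practice they would come from an asset inventory
# and a patch-management system's reporting feed.

inventory = {"laptop-01", "laptop-02", "server-01", "server-02"}
reported_patched = {"laptop-01", "server-01", "server-02"}

# Devices the patch rollout failed to reach.
unpatched = inventory - reported_patched

# Fraction of the inventory confirmed patched.
coverage = len(reported_patched & inventory) / len(inventory)

print(sorted(unpatched))   # devices needing manual follow-up
print(f"{coverage:.0%}")   # prints "75%" for this example data
```

Automating this kind of reconciliation, rather than relying on staff to spot gaps, is exactly the sort of routine check that autonomous tooling can take off a constrained workforce's plate.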
Some vendors may place blame on customers for incorrectly configuring computing resources. Oracle understands that customers deserve more, and should have the option to do less, which is why we use autonomous technologies to secure customer systems. With the cybersecurity workforce gap growing, deploying AI is the best way to supplement human abilities and protect systems more efficiently.
Oracle's $6 billion annual investment in areas such as automated patching and security configuration translates into real action to protect our customers. Oracle created the world's first autonomous database, a defining technology for this next generation of AI-led software and cloud services. Our self-repairing, self-tuning, self-patching autonomous technology helps protect businesses and governments from threats wherever they originate.
By taking on this responsibility, Oracle empowers enterprises to redeploy the scarce human talent of their cybersecurity workforce to address new priorities, not basic security hygiene. Enterprises that adopt autonomous technologies broadly and quickly will help eliminate an entire category of cyber risk (patching) and stop repeating the pattern of human-error-induced insecurity in the future.
By Greg Jensen, Senior Director of Cloud Security