- New research from cybersecurity and observability leader Splunk suggests that, while the adoption of artificial intelligence (AI) continues to increase, trust remains a significant barrier to future implementation.
- Organizations voiced concerns about building trust in AI-enabled systems and processes, ensuring data privacy and security, maintaining system reliability, and managing data quality.
- 80% of respondents reported that they are already addressing cybersecurity priorities with AI, including AI-enabled monitoring, risk assessment, and threat data analysis.
- Most survey respondents believe global ethical principles should regulate AI, rather than relying on individual nation-states.
According to the study, every respondent reported plans to implement AI technologies, whether they are currently using, testing, planning, or investigating them. Despite this universal interest, organizations recognize the need for a comprehensive planning framework to overcome obstacles and achieve positive business outcomes from AI. Trust and reliability in AI-enabled systems, especially AI-powered cybersecurity tools, remain top concerns for decision-makers.
The research gives a detailed picture of AI adoption in the public and private sectors. The rate of AI adoption among federal agencies (79%) is comparable to that across the private sector (83%), and both sectors report similar AI goals and challenges. Half of public sector respondents identified continuous monitoring as a top tactic for defending AI-enabled systems against cyberattacks, followed by threat intelligence solutions (45%) and developing an incident response plan (43%).
Respondents also cited AI's role in promoting innovation, enhancing goods and services, and improving citizen and customer experiences as the main drivers of their AI strategies. Furthermore, 44% of private sector respondents and 53% of public sector respondents are keen to use, or are already using, AI-driven automation to increase productivity across their organizations. Meanwhile, 78% of respondents said that global ethical principles should guide AI regulation, indicating that national boundaries should not determine the rules for AI technology use and adoption.
“The push and pull between eagerness to innovate and hesitancy to venture blindly into the unknown will continue to hinder AI innovation until we have a clear body of general principles and rules for AI technology use and adoption,” commented Bill Rowan, VP of Public Sector at Splunk.