Most public and private sector organizations are using artificial intelligence (AI) in production, but trust remains a major obstacle to its future adoption, according to Splunk Inc. The cybersecurity and observability company's research reveals that concerns center on building trust in AI-enabled systems and processes, ensuring data privacy and security, and maintaining system reliability and data quality.
All survey respondents reported plans to adopt AI, whether already in use, in testing, in planning, or under investigation. However, despite this widespread momentum, the results suggest that organizations must set priorities and confront obstacles with a thorough planning framework to achieve positive business results from AI.
- Main concerns for decision-makers include trust and reliability in AI-enabled systems
- Cybersecurity tools that use AI remain a major concern
- Federal agencies’ AI adoption rate (79%) is similar to the private sector’s (83%)
- 78% of survey respondents indicated that global ethical principles should guide the regulation of AI
- 44% of the private sector and 53% of the public sector are eager to use or are already using AI for automation to help increase productivity
Emerging technologies such as AI have often faced missteps and roadblocks in their rapid rise to adoption. Trust and reliability issues, particularly around cybersecurity tools leveraging AI, have been major concerns for decision-makers (48% public, 53% private). These concerns highlight the pivotal role of early AI policy decisions in shaping an organization’s long-term AI strategy. Both the private and public sectors are looking to use AI to improve resiliency, but hesitation stemming from a lack of clarity around rules and principles for AI use and adoption is hindering innovation.
The research also highlighted similarities and differences between public and private sector use of AI. One key finding is that federal agencies’ rate of AI adoption (79%) is similar to that of the private sector (83%), and both sectors report similar AI goals and challenges. The study also reveals that a majority of respondents (80%) were actively addressing cybersecurity concerns with AI, including AI-enabled monitoring, threat risk assessment, and analysis of threat data.
However, obstacles remain: 78% of respondents said that the regulation of AI should be guided by global ethical principles rather than left to individual nations, underscoring the need for broader consensus and guidelines on AI adoption and use.
Another notable finding is the high demand for AI-driven automation: 44% of private sector and 53% of public sector respondents are eager to use, or are already using, AI for this purpose to increase productivity across their organizations.
“For both public and private sectors, purpose-built AI solutions can help improve an organization’s resilience,” says Bill Rowan, VP of Splunk’s Public Sector. “However, the balance between eagerness to innovate and hesitancy to venture into the unknown will continue to hinder AI innovation until there is a clear set of general principles and rules for AI technology use and adoption.”
For more insights and recommendations from the Splunk 2023 “AI Priorities, Obstacles, and Impact on Cybersecurity” survey, please visit the Splunk website.