Britain’s News
Mon 22 Dec 2025 • 08:44

AI Adoption in Organizations Highlights Need for Strong Governance and Security Measures


# The Perils of Unregulated AI Autonomy in SRE

## AI agents are reshaping the landscape, but responsible use is crucial.

João Freitas, General Manager and Vice President of Engineering for AI and Automation at PagerDuty, highlights the growing adoption of AI agents within large organizations. As AI technology evolves, leaders are eager to tap into its potential for significant return on investment. However, the introduction of AI agents necessitates a careful approach to ensure both speed and security.

Currently, over half of all organizations have implemented AI agents to some degree, and many more plan to do so in the coming years. Early adopters are now reassessing their strategies: 40% of tech leaders express regret over not establishing a firmer governance framework from the outset. This pattern suggests that adoption has outpaced the policies and best practices essential for responsible, ethical, and lawful AI deployment.

As the pace of AI adoption accelerates, organizations must balance moving quickly against implementing the guardrails necessary for secure usage.

### Identifying AI Agent Risks

Three primary areas warrant attention when considering the safe adoption of AI agents:

1. **Shadow AI**: Employees often adopt AI tools without authorization, bypassing established tools and processes. The phenomenon of shadow AI is not new, but the autonomy of AI agents makes it easier for these unapproved tools to operate outside IT’s supervision, introducing new security vulnerabilities.

2. **Accountability and Ownership Deficiencies**: The inherent autonomy of AI agents raises questions about accountability when things go awry. Organizations need to clarify ownership to effectively address issues stemming from unexpected agent behavior.

3. **Lack of Explainability**: AI agents operate based on defined goals, yet the mechanisms behind their actions may remain opaque. Organizations must ensure these agents are equipped with explainable logic to enable engineers to trace their actions and, if necessary, reverse any unwanted outcomes.

These risks should not deter adoption but instead guide organizations towards more secure practices.

### Guidelines for Responsible AI Agent Use

To navigate the potential pitfalls associated with AI agents, organizations should implement robust guidelines aimed at ensuring safe usage. The following steps can significantly reduce risks:

1. **Prioritize Human Oversight**: While AI technology advances swiftly, human oversight remains essential. Any AI agent with the capacity to act and make decisions affecting critical systems should have a human overseeing its actions. This oversight should be standard, especially for business-critical tasks. Teams using AI need clarity on potential actions and the points at which they must intervene. Start with limited agency and gradually increase it as confidence grows.

Each AI agent should be assigned a specific human owner to establish accountability. Moreover, anyone in the organization should have the authority to flag or override an AI agent’s actions if they result in negative outcomes.
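The oversight pattern described above can be sketched as a simple approval gate: critical actions are blocked until a human explicitly signs off, and every agent carries a named owner. This is an illustrative sketch only; the names (`Action`, `ApprovalGate`, the `owner` field) are hypothetical, not any particular product's API.

```python
# Minimal human-in-the-loop gate sketch. Hypothetical names throughout.
from dataclasses import dataclass, field


@dataclass
class Action:
    name: str
    target: str
    critical: bool  # does this touch a business-critical system?


@dataclass
class ApprovalGate:
    owner: str  # the named human accountable for this agent
    approved: list = field(default_factory=list)
    blocked: list = field(default_factory=list)

    def submit(self, action: Action, human_approved: bool = False) -> bool:
        # Critical actions always require explicit human sign-off.
        if action.critical and not human_approved:
            self.blocked.append(action)
            return False
        self.approved.append(action)
        return True


gate = ApprovalGate(owner="alice@example.com")
gate.submit(Action("restart-service", "payments-db", critical=True))  # held for review
gate.submit(Action("post-summary", "chatops", critical=False))        # proceeds
```

Starting with a gate like this (and loosening it per-action as confidence grows) mirrors the "limited agency first" advice above.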

2. **Integrate Security Measures**: The introduction of AI agents should not compromise security. Organizations must consider platforms that adhere to stringent security protocols, validated through enterprise-grade certifications like SOC2 or FedRAMP. AI agents’ permissions should align strictly with their designated roles, avoiding excessive access across systems. Maintaining comprehensive logs of AI agents' actions will help troubleshoot incidents and provide insights into any issues arising from their operations.
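Two of the controls above, role-scoped permissions and comprehensive action logs, can be combined in one check: deny anything outside the agent's designated role, and record every attempt either way. The role names and permission strings below are invented examples, not a real policy schema.

```python
# Illustrative least-privilege check with an audit trail. Hypothetical roles.
import datetime

# Each agent role gets only the permissions its job requires.
ROLE_PERMISSIONS = {
    "incident-summarizer": {"read:incidents", "write:summaries"},
    "runbook-executor": {"read:runbooks", "execute:runbook"},
}

audit_log = []


def perform(agent_role: str, permission: str, detail: str) -> bool:
    """Check a permission against the role and log the attempt, allowed or not."""
    allowed = permission in ROLE_PERMISSIONS.get(agent_role, set())
    audit_log.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "role": agent_role,
        "permission": permission,
        "detail": detail,
        "allowed": allowed,
    })
    return allowed


perform("incident-summarizer", "write:summaries", "posted incident summary")
perform("incident-summarizer", "execute:runbook", "attempted runbook run")  # denied
```

Because denied attempts are logged too, the audit trail supports exactly the troubleshooting described above.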

3. **Ensure Explainability of Outputs**: AI actions should not be a mystery. Organizations must document the reasoning behind each decision made by an AI agent, making this information accessible to engineers seeking to understand the context leading to those actions. Keeping records of all inputs and outputs provides valuable insights, particularly if something goes awry.
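The record-keeping described above can be as simple as appending a structured decision record, inputs, output, and stated rationale, to a log that engineers can query later. A minimal sketch, with hypothetical field names:

```python
# Sketch of a per-decision explainability record. Field names are illustrative.
import datetime
import uuid

decision_log = []


def record_decision(agent: str, inputs: dict, output: str, rationale: str) -> dict:
    """Store what the agent saw, what it did, and why, for later tracing."""
    record = {
        "id": str(uuid.uuid4()),
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "inputs": inputs,
        "output": output,
        "rationale": rationale,
    }
    decision_log.append(record)
    return record


rec = record_decision(
    agent="triage-agent",
    inputs={"alert": "CPU > 95% on web-01"},
    output="paged on-call engineer",
    rationale="Sustained CPU saturation matched the escalation threshold.",
)
```

With inputs and outputs captured per decision, an engineer can reconstruct the context behind an action and, if needed, reverse it.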

The successful integration of AI agents hinges on a strong emphasis on security. They hold the potential to transform operational efficiency, but a lack of focus on governance can expose organizations to new risks. As AI agents become increasingly prevalent, organizations must be equipped to evaluate their performance and respond effectively to any complications that arise.