
Joao Freitas is GM and VP of Engineering for AI and Automation at PagerDuty.
As AI use increases in large organizations, leaders are increasingly looking for the next evolution that will drive major ROI. The latest wave of this trend is the adoption of AI agents. However, as with any new technology, organizations must ensure they adopt AI agents responsibly, in a way that delivers both speed and security.
More than half of organizations have already deployed AI agents to some degree, and more are expected to do so in the next two years. But many early adopters are now reevaluating their approach. Four in 10 tech leaders regret not establishing a strong governance foundation from the start: they adopted AI rapidly, but there is still room to improve the policies, regulations and best practices that ensure AI is developed and used responsibly, ethically and legally.
As AI adoption accelerates, organizations must find the right balance between their risk exposure and the implementation of guardrails to ensure AI use is safe.
Where do AI agents pose potential risks?
There are three key areas of consideration for safe AI adoption.
The first is shadow AI: employees using unauthorized AI tools without explicit permission, bypassing approved tools and processes. IT must create the processes needed for experimentation and innovation so that teams can adopt more efficient ways of working with AI through sanctioned channels. While shadow AI has existed for as long as AI tools themselves, the autonomy of AI agents makes it easier for unsanctioned tools to operate outside the scope of IT, which can introduce new security risks.
Second, organizations must bridge the gap in AI ownership and accountability to prepare for events or processes that go wrong. The strength of AI agents lies in their autonomy. However, if agents act in unexpected ways, teams must be able to determine who is responsible for resolving any issues.
The third risk arises when there is a lack of explanation for the actions taken by AI agents. AI agents are goal-oriented, but how they accomplish their goals may not be obvious. AI agents must have explainable reasoning behind their actions so that engineers can detect and, if necessary, reverse actions that may cause problems with existing systems.
Addressing these risks should not delay adoption; tackling them up front will help organizations adopt AI agents more securely.
Three guidelines for adopting AI agents responsibly
Once organizations have identified the risks posed by AI agents, they should implement guidelines and guardrails to ensure safe use. By following these three steps, organizations can reduce these risks.
1: Make human inspection the default
Agentic AI is developing at a fast pace. However, human oversight is still needed when AI agents are given the ability to act, make decisions and pursue goals that could impact major systems. A human should be in the loop by default, especially for business-critical use cases and systems. Teams using AI must understand what actions it can take and where they may need to intervene. Start conservatively and, over time, increase the level of agency you give AI agents.
At the same time, operations teams, engineers, and security professionals must understand their roles in monitoring the workflows of AI agents. Each agent should be assigned a specific human owner for clearly defined oversight and accountability. Organizations should allow any human to flag or override an AI agent’s behavior when an action has a negative consequence.
When considering tasks for AI agents, organizations should understand that, while traditional automation is good at handling repetitive, rule-based processes with structured data inputs, AI agents can handle more complex tasks and adapt to new information more autonomously. This makes them an attractive option for a much broader range of tasks. But as AI agents are deployed, organizations must control what actions the agents can take, especially in the early stages of a project. Teams working with AI agents should therefore have an approval path for high-impact actions, ensuring that the agent’s scope does not extend beyond the expected use cases and reducing risk to the broader system.
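As a concrete illustration, here is a minimal Python sketch of such an approval path. Everything in it (the ActionRequest structure, the impact levels, the approval callback) is hypothetical and shows only the pattern of gating high-impact agent actions behind a human decision, not any particular platform's API.

```python
from dataclasses import dataclass
from enum import Enum


class Impact(Enum):
    LOW = 1
    HIGH = 2


@dataclass
class ActionRequest:
    agent_id: str
    owner: str          # the human accountable for this agent
    description: str
    impact: Impact


def execute(action: ActionRequest, approve_fn) -> bool:
    """Run low-impact actions automatically; route high-impact ones to a human."""
    if action.impact is Impact.HIGH:
        # Human in the loop: the agent's owner must explicitly approve.
        if not approve_fn(action):
            print(f"[blocked] {action.description} (owner: {action.owner})")
            return False
    print(f"[executed] {action.description} by agent {action.agent_id}")
    return True


# Example approval callback; in practice this might be a ticket,
# a chat prompt, or a paging workflow rather than a console prompt.
def ask_owner(action: ActionRequest) -> bool:
    answer = input(f"Approve '{action.description}' for {action.owner}? [y/N] ")
    return answer.strip().lower() == "y"


execute(
    ActionRequest("agent-42", "ops-lead", "restart production database", Impact.HIGH),
    ask_owner,
)
```

The key design choice is that the agent never decides for itself whether an action is high impact; that classification and the approval route are set by the humans who own the system.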
2: Bake in safety
The introduction of new tools and agents should not introduce new security risks to a system.
Organizations should consider agentic platforms that comply with high security standards and are validated by enterprise-grade certifications such as SOC 2, FedRAMP or equivalent. Furthermore, AI agents should not be given free rein over an organization’s systems. At a minimum, an agent’s permissions and security scope should align with its owner’s scope, and any tools added to the agent should not grant extended permissions. Limiting system access based on the agent’s role also helps deployments run smoothly. Keeping a complete log of every action an AI agent takes helps engineers understand what happened in the event of an incident and pinpoint the problem.
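Below is a minimal sketch of what that scoping and audit logging might look like in practice, assuming a simple role-to-permission mapping. The permission sets, agent ID and owner name are illustrative only; the point is that the agent's scope is a subset of its owner's, and every attempt is recorded whether or not it is allowed.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="agent_audit.log", level=logging.INFO)

# Hypothetical mappings: the agent may only do things its human owner may do
# (least privilege), and usually fewer.
OWNER_PERMISSIONS = {"ops-lead": {"read_metrics", "restart_service", "scale_service"}}
AGENT_PERMISSIONS = {"agent-42": {"read_metrics", "restart_service"}}


def authorize_and_log(agent_id: str, owner: str, action: str, target: str) -> bool:
    """Allow an action only if both the agent and its owner are scoped for it,
    and record every attempt in an append-only audit log."""
    allowed = (
        action in AGENT_PERMISSIONS.get(agent_id, set())
        and action in OWNER_PERMISSIONS.get(owner, set())
    )
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "owner": owner,
        "action": action,
        "target": target,
        "allowed": allowed,
    }))
    return allowed


# The agent may read metrics and restart services, but not scale them.
print(authorize_and_log("agent-42", "ops-lead", "restart_service", "checkout-api"))  # True
print(authorize_and_log("agent-42", "ops-lead", "scale_service", "checkout-api"))    # False
```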
3: Make the output understandable
The use of AI in an organization should never be a black box. The logic behind any action must be made clear so that any engineer reviewing it can understand the context the agent used to make decisions and can access the traces that led to those actions.
Inputs and outputs for each action should be logged and accessible. This gives organizations a solid overview of the logic underlying an AI agent’s actions, which is invaluable if anything goes wrong.
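One lightweight way to capture this is sketched below, assuming a JSON-lines file is an acceptable trace store: each agent decision is recorded with the inputs it saw, its intermediate reasoning steps, and the action it ultimately took. The field names and example values are hypothetical.

```python
import json
from datetime import datetime, timezone


def record_trace(agent_id: str, goal: str, inputs: dict, reasoning: list,
                 outputs: dict, path: str = "agent_traces.jsonl") -> None:
    """Append one structured trace per agent decision so engineers can later
    reconstruct what context the agent used and why it acted."""
    trace = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "goal": goal,
        "inputs": inputs,         # the context the agent was given
        "reasoning": reasoning,   # intermediate steps, in order
        "outputs": outputs,       # the action actually taken and its result
    }
    with open(path, "a") as f:
        f.write(json.dumps(trace) + "\n")


record_trace(
    agent_id="agent-42",
    goal="reduce error rate on checkout-api",
    inputs={"alert": "5xx spike", "recent_deploy": "v2.3.1"},
    reasoning=[
        "error spike began shortly after deploy v2.3.1",
        "rollback is the lowest-risk remediation",
    ],
    outputs={"action": "rollback", "service": "checkout-api", "to_version": "v2.3.0"},
)
```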
Security underpins the success of AI agents
AI agents provide a huge opportunity for organizations to accelerate and improve their existing processes. However, if they do not prioritize security and strong governance, they could expose themselves to new risks.
As AI agents become more common, organizations need systems in place to measure how agents perform and the ability to take action if problems arise.