Best Practices for Adopting a Secure Agentic Process Automation Setup

By Gurdeep Singh, Lead Architect for AI Ops+ & Observability at Tryg

Gurdeep Singh works at the insurance firm Tryg and is an active member of our automation and robotics community.

Have you ever watched “Minority Report”?

This post draws future-facing lessons from that memorable movie and applies them to business automation. In an age where automation is becoming increasingly autonomous, Agentic Process Automation (APA) represents the next frontier: intelligent agents that don’t just execute but reason, collaborate, and decide. And as always: with great power comes even greater security responsibility.

This future is not unlike the one portrayed in Minority Report, where a pre-crime system powered by predictive agents governs law enforcement decisions. While futuristic and efficient, the system's failure under ethical scrutiny reveals the critical need for secure, transparent, and accountable agentic operations.

This is where adopting Agentic Process Automation within a COATS model, with a focus on the “S” that stands for Security agents, plays a pivotal role. The science fiction movie Minority Report offers fitting anecdotes to illustrate why.

Before we start: Let’s consider the COATS framework

COATS is a hierarchical framework for an APA setup. It segregates roles and responsibilities between APA agents and provides an organizational blueprint for Agentic Process Automation in which each agent is classified by its role and capabilities. Role-based access control, human-to-agent and agent-to-agent interaction, performance management, and decommissioning are all governed by and refer back to this setup.

An illustration of the COATS model with Collaborators at the top of the pyramid

COATS can be expanded as follows:

Collaborators – The Bridge Between Humans & AI
Role: Ensuring human-in-the-loop decision-making.
Function:

  • Collaborate with humans and other AI agents

  • Flag abnormal agent behavior and escalate when needed

  • Maintain human oversight where necessary (e.g., risk assessment, compliance)

Orchestrators – The Workflow Managers
Role: Overseeing task assignments and agent performance.
Function:

  • Ensure Collaborators and other agents are properly onboarded

  • Maintain operational flow and resource allocation

  • Support performance monitoring and escalation protocols

Automators – The Cross-Functional Executors
Role: Handling end-to-end business transactions across systems.
Function:

  • Work with Orchestrators to ensure smooth automation processes

  • Assign tasks to Taskers for execution

  • Provide insights on automation performance and roadblocks

Taskers – The Specialized Executors
Role: Performing specific, well-defined tasks.
Function:

  • Execute individual automation tasks as reusable components

  • Operate independently while integrating into larger workflows

  • Improve efficiency through modular automation components

Security – The Gatekeepers of AI Trustworthiness
Role: Ensuring security, fairness, and transparency in APA.
Function:

  • Monitor AI decisions for bias, hallucinations, jailbreaks, and security threats

  • Maintain traceability, explainability, and governance of AI actions

  • Ensure compliance with AI security and ethical standards
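
To make the role segregation concrete, here is a minimal Python sketch of how a COATS hierarchy could be represented in code, including the role-based access control and governed agent identity described above. The names (CoatsRole, ROLE_PERMISSIONS, Agent) and the permission sets are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass
from enum import Enum, auto


class CoatsRole(Enum):
    """The five COATS roles, from human-facing bridge to security gatekeeper."""
    COLLABORATOR = auto()
    ORCHESTRATOR = auto()
    AUTOMATOR = auto()
    TASKER = auto()
    SECURITY = auto()


# Hypothetical permission sets per role, illustrating role-based access control.
ROLE_PERMISSIONS = {
    CoatsRole.COLLABORATOR: {"escalate_to_human", "flag_anomaly"},
    CoatsRole.ORCHESTRATOR: {"onboard_agent", "assign_work", "monitor_performance"},
    CoatsRole.AUTOMATOR: {"run_transaction", "delegate_to_tasker"},
    CoatsRole.TASKER: {"execute_task"},
    CoatsRole.SECURITY: {"audit_decision", "revoke_agent", "read_all_logs"},
}


@dataclass
class Agent:
    """An agent with a governed identity and version-controlled logic."""
    agent_id: str
    role: CoatsRole
    version: str = "1.0.0"  # supports lifecycle governance and decommissioning

    def can(self, action: str) -> bool:
        # Least-privilege check: only actions granted to the agent's role pass.
        return action in ROLE_PERMISSIONS[self.role]


tasker = Agent("invoice-bot-07", CoatsRole.TASKER)
print(tasker.can("execute_task"))   # True
print(tasker.can("onboard_agent"))  # False: reserved for Orchestrators
```

In a real deployment, each `can` check would of course be enforced by an external policy service rather than inside the agent itself, so that a compromised agent cannot grant itself new permissions.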

Hollywood Parallel: “Minority Report” and the COATS Framework

In Minority Report, the PreCrime Division relies on Precogs—oracles that predict crimes before they happen. They are supported by a system of agents who execute tasks based on these predictions.

Let’s map these to the COATS model:

  • Collaborators = Precogs/Human-Analyst Interfaces – offering insights and predictions.

  • Orchestrators = Chief John Anderton’s team – managing and coordinating pre-crime operations.

  • Automators = Autonomous tools and predictive algorithms that generate alerts.

  • Taskers = Field agents and drones that perform physical tasks (e.g., apprehending suspects).

  • Security = The internal affairs unit and eventual auditors who question the system’s infallibility.

The downfall of PreCrime wasn’t the lack of capability—it was the lack of secure oversight, bias mitigation, and human override mechanisms. The lesson: even the most intelligent agentic system must be secure, ethical, and explainable.

So, what were the major security issues that led to the downfall?

  • Lack of transparency in how predictions are made

  • No accountability or auditing for edge cases (e.g., “minority reports”)

  • Total automation with minimal human oversight

  • System vulnerable to insider manipulation

Had the COATS model been properly applied, especially with a mature Security (S) role, many of these problems could have been mitigated or outright avoided.

Conclusion: Build for Trust, Not Just Speed

In today’s AI-driven era, enterprises are accelerating toward hyper-autonomous operations—where intelligent agents not only execute tasks but learn, adapt, and collaborate across complex environments. While this speed and autonomy can unlock massive efficiency gains, it also introduces profound risks if not anchored in security, transparency, and oversight.

As Minority Report vividly illustrates, predictive power and autonomous control—when unchecked—can quickly become liabilities rather than assets. The system’s downfall wasn’t technological—it was the absence of ethical grounding, human override, and resilient safeguards.

To avoid similar pitfalls in the real world, organizations must design Agentic Process Automation (APA) systems with embedded trust and systemic resilience. This begins with adopting the COATS framework to clearly define agent roles and responsibilities, and integrating security throughout the APA lifecycle.

Key Security Pillars for a Resilient APA Setup:

  • Autonomy ≠ Immunity 
    Agents should be treated as dynamic, evolving systems with version-controlled logic, identity management, and lifecycle governance—not static scripts.

  • Transparency is Non-Negotiable
    As agents make decisions, explainability becomes critical. Every action must be logged, auditable, and aligned with clear reasoning to foster accountability and trust (illustrated in the sketch after this list).

  • Zero Trust by Default
    With agents accessing critical data and APIs, apply strict least-privilege access controls, enforce data masking, and validate all inter-agent communications (see the sketch after this list).

  • Model the Bad Before It Happens
    Design for resilience, not just performance. Use red teaming and scenario-based simulations to uncover blind spots, rogue behavior, and failure paths before they emerge; the sketch after this list replays a few such scenarios.
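
To ground the transparency, zero-trust, and red-teaming pillars, here is a minimal Python sketch of an inter-agent message gate that enforces least-privilege sending policies, masks sensitive fields, writes an append-only audit log, and is then exercised by a handful of adversarial scenarios. Every name here (SEND_POLICY, validate_message, the agent IDs, the scenario list) is hypothetical and exists only for illustration:

```python
import json
import time

# Hypothetical least-privilege policy: which message types each agent may send.
SEND_POLICY = {
    "orchestrator-01": {"assign_task", "request_status"},
    "tasker-07": {"report_result"},
}

# Fields that must never travel between agents in the clear (data masking).
MASKED_FIELDS = {"ssn", "policy_holder_name"}


def mask(payload: dict) -> dict:
    """Replace sensitive fields with a placeholder before forwarding."""
    return {k: ("***" if k in MASKED_FIELDS else v) for k, v in payload.items()}


def audit(entry: dict) -> None:
    """Append-only audit trail: every decision is logged with a timestamp."""
    entry["ts"] = time.time()
    with open("apa_audit.log", "a") as f:
        f.write(json.dumps(entry) + "\n")


def validate_message(sender: str, msg_type: str, payload: dict) -> dict | None:
    """Zero-trust gate: reject anything the sender's policy does not permit."""
    allowed = SEND_POLICY.get(sender, set())
    decision = "allow" if msg_type in allowed else "deny"
    audit({"sender": sender, "msg_type": msg_type, "decision": decision})
    return mask(payload) if decision == "allow" else None


# Red teaming in miniature: replay adversarial scenarios and flag blind spots.
# Each tuple is (sender, msg_type, expected_decision).
SCENARIOS = [
    ("tasker-07", "assign_task", "deny"),         # privilege escalation attempt
    ("unknown-agent", "report_result", "deny"),   # unregistered (rogue) agent
    ("orchestrator-01", "assign_task", "allow"),  # baseline legitimate traffic
]

for sender, msg_type, expected in SCENARIOS:
    actual = "deny" if validate_message(sender, msg_type, {}) is None else "allow"
    status = "OK" if actual == expected else "BLIND SPOT"
    print(f"{status}: {sender} -> {msg_type} ({actual}, expected {expected})")
```

Note the default-deny stance: an unknown sender gets an empty permission set, so a rogue agent is rejected without any special-case code, and the denial itself still lands in the audit log.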

One closing thought: There are no absolute guarantees in the realm of autonomous systems. But by embedding security, governance, and ethical foresight into your APA architecture, you significantly reduce the blast radius of failure and build systems that earn the trust of both internal and external stakeholders.

In the race for intelligent automation, trust must be the true measure of progress.

Learn more about automation and robotics

The conversation naturally continues in our automation and robotics community. Why not join us and be a part of it?

You can also learn more in some of Gurdeep’s previous posts.