
Agentic AI: Expectations, Key Use Cases and Risk Mitigation Steps

Prompt Security Team
February 25, 2025
Discover agentic AI's transformative potential, key enterprise use cases, and essential risk mitigation strategies for next-level software automation.

AI agents are autonomous or semi-autonomous software entities that use AI techniques to perceive, make decisions, take actions, and achieve goals in their digital or physical environments. Unlike standard GenAI-powered chatbots, whose outputs may instruct users on how to complete objectives, AI agents can fulfill objectives themselves by performing tasks with minimal human input.

Because they account for and adapt to extensive, granular context, AI agents’ outputs resemble deliberate courses of action rather than static responses. They still rely on initial instructions, but once a goal has been defined for them, they can fulfill requests proactively, drawing on relevant information a human user never explicitly provided.

The enterprise shift towards agentic AI

Today, fewer than 1% of enterprise software applications include agentic AI, but adoption and usage are picking up steam. Capable of solving complex challenges on their own, AI agents are already playing key roles in numerous fields, powered by Microsoft Copilot Studio, Amazon Bedrock, Azure AI Studio, and other platforms from notable tech giants.

Gartner predicts that by 2028, a third of all enterprise software applications will include agentic AI. In 2025, we should expect organizations to shift significant resources from single-interaction LLM workflows to agentic AI’s multi-step approach.

As Mandy Andress, Chief Information Security Officer at Elastic, told us:

“2025 will be the year of agentic AI hype. Similar to the GenAI cycle, agentic AI will be evaluated to help solve many different challenges and we will better learn about its current limitations as we all become more educated.” 

Noteworthy use cases of agentic AI

Below are some of the most prominent use cases for agentic AI:

Software engineering

AI agents are already handling maintenance and migration work as legacy code is converted to more modern languages. Automated code writing stands to change software development, CI/CD pipeline authoring, mainframe programming, and more. As code generation and completion become increasingly automated, developers can devote more time and energy to fewer, higher-stakes challenges and opportunities.

Customer service

AI agents automate communications, enhancing self-service capabilities, shortening response times, and raising customer satisfaction. Unlike chatbots limited to a narrow range of actions, agentic systems can determine courses of action based on real-time customer behavior. And as they conduct more interactions, they improve, refining their courses of action based on previous experience.

Healthcare

For doctors and nurses currently inundated with large amounts of patient data, AI agents can filter for applicable information quickly and efficiently. They can automate administrative tasks themselves (e.g., scheduling appointments and organizing clinical summaries) as well as the process by which these tasks contribute to overall patient care (i.e., the healthcare workflow at large). 

AI agents can also contribute to major patient care decisions in ways that until recently were not possible. For years, AI has been used to analyze the statistical likelihood of success across treatment options, such as surgery versus chemotherapy for cancer, but it could not account for patients’ fears or emotional states – variables that are vital for final judgment calls. With its growing capacity to interpret such human factors, agentic AI is poised to play a key role in these decisions.

Supply chain

Agentic AI systems can predict supply chain disruptions by accounting for real-time weather data, sensor data from cargo ships, and other signals. Accurate predictions will help import and export managers sync better with touchpoints across their respective (and interconnected) value chains, including retailers selling to end customers.

The risks of agentic AI

AI agents are not immune to risks, and because they operate with less human supervision, threats extend beyond surface-level sources such as inputs and orchestration layers. With agentic AI, every action and interaction that flows from an agent’s decisions becomes a potential vector for harm.

Here are three of the most noteworthy threats associated with agentic AI:

  • Autonomous actions: Unlike standard AI tools, AI agents can make changes to systems and data without human intervention. This autonomy increases the likelihood of malicious misconfigurations, bugs, and manipulations.
  • Expanded attack surface: With broader access privileges, AI agents integrated into internal systems present a wider target. Adversarial inputs could trick them into performing unauthorized actions, potentially leading to system infiltration, privilege escalation, and data exfiltration.
  • Real-time decision loops: Agentic AI can sense changes, decide on next steps, and execute those steps repeatedly. The ongoing nature of this execution amplifies the risk of cascading errors: a single early misstep can compound into far greater cumulative damage, as the sketch after this list illustrates.
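
To make the cascading-error risk concrete, here is a minimal sketch of a guarded sense-decide-act loop with a hard step budget and a confidence floor. The sense, decide, act, and escalate callables, along with the specific threshold values, are illustrative assumptions for this example, not any particular framework’s API.

```python
# Minimal sketch of a guarded agent decision loop (illustrative only).
# sense(), decide(), act(), and escalate() are hypothetical stand-ins
# for an agent's perception, planning, execution, and human-escalation steps.

MAX_STEPS = 20          # hard budget: caps how far a cascading error can run
CONFIDENCE_FLOOR = 0.7  # below this, escalate to a human instead of acting

def run_agent(goal, sense, decide, act, escalate):
    for step in range(MAX_STEPS):
        observation = sense()
        action, confidence = decide(goal, observation)
        if action is None:                    # goal satisfied, loop ends
            return "done"
        if confidence < CONFIDENCE_FLOOR:     # human-in-the-loop checkpoint
            return escalate(step, action)
        act(action)
    # Budget exhausted: stop rather than keep executing indefinitely.
    return escalate(MAX_STEPS, "step budget exhausted")
```

The design point is that every iteration passes through the same two checks, so a runaway loop is stopped after a bounded number of actions instead of compounding indefinitely.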

Risk mitigation tips for CISOs

To effectively mitigate risks associated with AI agents, CISOs should implement stringent access controls and ensure thorough oversight throughout the AI lifecycle. Role-based access control (RBAC) or attribute-based access control (ABAC) should be enforced to limit the AI agent’s privileges to the minimum required for its tasks. This reduces the risk of an agent gaining unauthorized access to sensitive systems. 
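
As one way to picture least-privilege enforcement, the sketch below gates every tool call against a per-role allowlist and denies by default. The role names, tool names, and map structure are assumptions made for the example, not a prescribed schema.

```python
# Sketch: least-privilege gate for agent tool calls (RBAC-style).
# The role-to-tool map is an illustrative policy, not a standard schema.

ROLE_PERMISSIONS = {
    "support-agent": {"search_kb", "draft_reply"},
    "billing-agent": {"read_invoice"},
}

class AgentSession:
    def __init__(self, agent_id: str, role: str):
        self.agent_id = agent_id
        self.role = role

    def call_tool(self, tool_name: str, tool_fn, *args, **kwargs):
        allowed = ROLE_PERMISSIONS.get(self.role, set())
        if tool_name not in allowed:
            # Deny by default and surface the attempt for auditing.
            raise PermissionError(
                f"{self.agent_id} (role={self.role}) may not call {tool_name}"
            )
        return tool_fn(*args, **kwargs)

# Usage: a billing agent is blocked from drafting customer replies.
session = AgentSession("agent-7", "billing-agent")
# session.call_tool("draft_reply", lambda text: text)  # raises PermissionError
```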

Comprehensive audit trails are also essential, as they help track every action the AI takes. These logs should be immutable to prevent tampering, and forensic procedures should be in place to trace decisions back to their origins. Additionally, AI agents should be treated as privileged users within Security Information and Event Management (SIEM) tools, with continuous monitoring and anomaly detection tools employed to flag suspicious behavior or out-of-pattern actions.
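
One common technique for making an audit trail tamper-evident is hash chaining: each entry embeds the hash of its predecessor, so any after-the-fact edit invalidates every hash that follows. The sketch below is a minimal illustration; the field names and in-memory list are simplifying assumptions (a production trail would write to append-only storage).

```python
import hashlib
import json
import time

# Sketch: hash-chained audit trail for agent actions.
# Each record embeds the hash of its predecessor, so any later
# modification of an entry breaks the chain from that point on.

def append_entry(log: list, agent_id: str, action: str, detail: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "ts": time.time(),
        "agent_id": agent_id,
        "action": action,
        "detail": detail,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)

def verify_chain(log: list) -> bool:
    for i, record in enumerate(log):
        expected_prev = log[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in record.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if record["prev_hash"] != expected_prev:
            return False
        if record["hash"] != hashlib.sha256(payload).hexdigest():
            return False
    return True
```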

Robust testing and validation protocols are crucial for ensuring that AI agents function securely. Extensive testing in sandbox environments before deployment can help uncover potential vulnerabilities, and ongoing red-team exercises or adversarial testing can identify weaknesses in the agent’s decision-making loops or prompts. CISOs should also establish clear oversight mechanisms, including kill switches that can halt the agent’s actions if something goes wrong. 
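
A kill switch can be as simple as a shared flag the agent must consult before every action. The sketch below uses Python’s threading.Event so that a human operator or an automated monitor can halt the loop mid-run; the looks_anomalous predicate is a placeholder assumption for whatever detection logic an organization deploys.

```python
import threading

# Sketch: a kill switch the agent checks before every action.
# kill_switch can be set by a human operator or an automated monitor.

kill_switch = threading.Event()

def guarded_execute(actions, execute, looks_anomalous):
    for action in actions:
        if kill_switch.is_set():
            return "halted by kill switch"
        if looks_anomalous(action):      # automated monitor tripwire
            kill_switch.set()
            return f"halted: anomalous action {action!r}"
        execute(action)
    return "completed"

# Usage: an operator thread (or a SIEM hook) can call kill_switch.set()
# at any time to stop the agent before its next action.
```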

It’s equally important to regularly update organizational policies to reflect the evolving nature of AI risks. These updates should align with established AI governance frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework (AI RMF), ensuring that agentic AI’s responsibilities, approvals, and accountability are clearly defined and adhered to.

Proprietary and third-party tools that achieve sufficient visibility into agentic AI processes will be better positioned to detect unusual executions and interactions and to rectify those deemed problematic (or simply unwanted). Given agentic AI’s speed, such rectification will need to be automated.

If you want to talk about agentic AI and how to mitigate the risks associated with it, book a time to speak with us.
