No-Code AI Agents Vulnerable to Data Theft, Fraud: Tenable
No-code agentic AI platforms, such as Microsoft Copilot Studio, can be exploited through prompt injection to exfiltrate sensitive data and execute unauthorized financial transactions, reports Tenable Research. This manipulation highlights the severe operational and compliance risks organizations face when implementing AI without robust security governance.
Tenable Research successfully hijacked an AI agent's workflow, leading to the unauthorized extraction of sensitive payment data and the approval of services at zero cost. This demonstrates the immediate necessity for organizations to establish stringent security protocols when adopting AI-powered automation tools.
The root cause of this risk lies in the accessibility and functional power of these new tools, features which are not inherently balanced with integrated security mechanisms. "AI agent creators, like Copilot Studio, democratize the ability to create powerful tools, but they also democratize the ability to execute financial fraud, creating significant security risks without even knowing it," says Keren Katz, Senior Group Director of AI Security Product and Research, Tenable.
The analysis indicates that the efficiency-driving power of these agents can quickly transform into a real, tangible security risk.
Organizations are rapidly adopting no-code AI agent development platforms. The primary objective is to increase operational efficiency by allowing business users to build automation tools without requiring specialized developer expertise. This pursuit of frictionless automation, however, introduces a critical risk variable: the absence of rigorous security governance and the inadvertent assignment of excessive, non-obvious permissions to the AI agents.
This phenomenon is situated within the broader context of AI democratization, where the capacity to construct powerful tools extends to personnel lacking formal backgrounds in software development or cybersecurity. When an AI agent is configured to interface with mission-critical enterprise systems, such as customer management or financial platforms, a successful manipulation of its internal logic represents a direct threat of data exposure and economic loss. The research supports the hypothesis that automation lacking strict security enforcement can result in catastrophic business failures, even if the agent's initial design is well-intentioned.
To empirically demonstrate the vulnerability, Tenable Research constructed a travel agent using Microsoft Copilot Studio. This agent was designed to manage customer bookings, including the creation and modification of existing reservations. The agent was provisioned with fictitious customer data, including names, contact information, and credit card details, and instructed to verify customer identity prior to any data disclosure or modification.
The exploitation was achieved using a prompt injection technique. This methodology allowed the researchers to bypass the agent's identity verification controls and successfully hijack its workflow. Tenable explored two high-impact business risk scenarios:
-
Data Breaches and Regulatory Exposure: By compelling the agent to bypass identity checks, Tenable Research successfully exfiltrated payment information and complete records belonging to other customers. This action carries immediate consequences for compliance with data protection regulations, such as the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), leading to significant financial penalties for the organization.
-
Revenue Loss and Financial Fraud: Because the agent possessed extensive "edit" permissions, intended only for updating travel dates, it was manipulated to alter critical financial fields. Specifically, the agent was directed to change the price of a trip to zero dollars, resulting in the unauthorized issuance of complimentary services. This finding illustrates a direct financial fraud vector and the subsequent loss of corporate revenue.
Tenable concludes that a foundation of governance and security enforcement is non-negotiable for the safe deployment of agentic AI. Corporations must recognize that AI agents, often built by personnel without security expertise, may inadvertently possess operational permissions that exceed the actual requirements of the use case, argues the company.
To mitigate these established risks, Tenable Research recommends adopting several proactive security measures. Organizations must explicitly determine which systems and data stores an agent can interact with before its deployment into a production environment. The agent's write and update capabilities must also be minimized to only those functions absolutely necessary for performing its core use case. Finally, continuous tracking of agent actions is required to identify any indications of data exfiltration or deviations from the agent’s expected business logic.









