Zenity Labs Exposes Widespread “AgentFlayer” Vulnerabilities Allowing Silent Hijacking of Major Enterprise AI Agents Circumventing Human Oversight

Groundbreaking research reveals working 0click compromises of OpenAI's ChatGPT, Microsoft Copilot Studio, Salesforce Einstein, Cursor, and more, exposing widespread vulnerabilities across production AI environments

At Black Hat USA 2025, Zenity Labs revealed AgentFlayer, a comprehensive set of 0click exploit chains that allow attackers to silently compromise enterprise AI agents and assistants without requiring any user action. The research, presented by Zenity co-founder and CTO Michael Bargury and threat researcher Tamir Ishay Sharbat in their session, “AI Enterprise Compromise: 0Click Exploit Methods,” demonstrates how widely deployed AI agents from major vendors can be hijacked to exfiltrate data, manipulate workflows, and act autonomously across enterprise systems-all while users remain completely unaware.

https://mma.prnewswire.com/media/2403212/Acct_2434697_Zenity_logo.jpg

The findings represent a fundamental shift in the AI security landscape to attacks that can be fully automated and require zero interaction from users. Zenity Labs successfully demonstrated working exploits against OpenAI ChatGPT, Microsoft Copilot Studio, Salesforce Einstein, Google Gemini, Microsoft 365 Copilot, and developer tools like Cursor with Jira MCP.

“These aren't theoretical vulnerabilities, they're working exploits with immediate, real-world consequences,” said Michael Bargury, CTO and co-founder, Zenity. “We demonstrated memory persistence and how attackers can silently hijack AI agents to exfiltrate sensitive data, impersonate users, manipulate critical workflows, and move across enterprise systems, bypassing the human entirely. Attackers can compromise your agent instead of targeting you, with similar consequences.”

Key Research Findings:

— OpenAI ChatGPT was compromised via email-triggered prompt injection, granting attackers access to connected Google Drive accounts and the ability to implant malicious memories, compromise every future session, and transform ChatGPT into a malicious agent

— A Microsoft Copilot Studio customer support agent, showcased by Microsoft on stage, was shown to leak entire CRM databases. Additionally, we found over 3,000 of these agents in the wild that can reveal their internal tools, making them susceptible to exploitation

— Salesforce Einstein was manipulated through malicious case creation to reroute all customer communications to attacker-controlled email addresses

— Google Gemini and Microsoft 365 Copilot were turned into malicious insiders, social engineering users and exfiltrating sensitive conversations through booby-trapped emails and calendar invites

— Cursor with Jira MCP was exploited to harvest developer credentials through weaponized ticket workflows

“The rapid adoption of AI agents has created an attack surface that most organizations don't even know exists,” said Ben Kilger, CEO, Zenity. “Our research demonstrates that current security approaches are fundamentally misaligned with how AI agents actually operate. While vendors promise AI safety, attackers are already exploiting these systems in production. This is why Zenity has built the industry's first agent-centric security platform-to give enterprises the visibility and control they desperately need.”

Industry Response and Implications

Some vendors, including OpenAI and Microsoft Copilot Studio, issued patches following responsible disclosure. However, multiple vendors declined to address the vulnerabilities, citing them as intended functionality. This mixed response underscores a critical gap in how the industry approaches AI agent security.

The research arrives at a pivotal moment for enterprise AI adoption. With ChatGPT reaching 800 million weekly active users and Microsoft 365 Copilot seats growing 10x in just 17 months, organizations are rapidly deploying AI agents without adequate security controls. Zenity Labs' findings suggest that enterprises relying solely on vendor mitigations or traditional security tools are leaving themselves exposed to an entirely new class of attacks.

Moving from Research to Defense

As a research-driven security company, Zenity Labs conducts this threat intelligence on behalf of the wider AI community, ensuring defenders have the same insights as attackers. The complete research, including technical breakdowns and defense recommendations, will be available at labs.zenity.io following the presentation.

See the Research in Action

Attendees at Black Hat USA 2025 can visit Zenity at booth #5108 for live demonstrations of the exploits, in-depth technical discussions, and practical guidance on securing AI agents in production environments.

For those unable to attend Black Hat, Zenity will host deeper discussions at the AI Agent Security Summit 2025 on October 8 at the Commonwealth Club in San Francisco. Reserve your spot now.

About Zenity

Zenity is the agent-centric security and governance platform that gives enterprises visibility and control over AI agent behavior-what they access, what they do, and the tools they invoke-with full-lifecycle protection across SaaS, custom agent platforms, and end-user devices. Founded by security researchers and engineers from Microsoft, Meta, and Unit 8200, Zenity enables organizations to embrace AI innovation without compromising security. Learn more at zenity.io.

About Zenity Labs

Zenity Labs is the threat research arm of Zenity, dedicated to uncovering and responsibly disclosing vulnerabilities in AI systems. Through cutting-edge research and real-world attack simulations, Zenity Labs helps organizations understand and defend against emerging AI threats. Subscribe to research updates at labs.zenity.io.

Media Contact:Diana DiazForce4 Technology Communicationsdiana.diaz@force4.co

https://mma.prnewswire.com/media/2745636/AgentFlayer_Flyer.jpg

https://c212.net/c/img/favicon.png?sn=LA45902&sd=2025-08-06

View original content to download multimedia:https://www.prnewswire.com/news-releases/zenity-labs-exposes-widespread-agentflayer-vulnerabilities-allowing-silent-hijacking-of-major-enterprise-ai-agents-circumventing-human-oversight-302523580.html

SOURCE Zenity

https://rt.newswire.ca/rt.gif?NewsItemId=LA45902&Transmission_Id=202508061930PR_NEWS_USPR_____LA45902&DateId=20250806

Scroll to Top