GCP Vertex AI - Uncovering Security Vulnerabilities

Think of GCP Vertex AI as a smart assistant that can do a lot of things for you. But if it holds too many keys to important doors (permissions), someone could sneak in and take your stuff. Researchers found exactly that kind of over-keyed assistant, and Google is now telling everyone to be more careful with those keys.
New vulnerabilities in GCP Vertex AI expose critical data and internal source code, prompting urgent security measures.
The Flaw
Artificial intelligence (AI) agents are quickly advancing into powerful autonomous systems that can perform complex tasks. These agents can be integrated into enterprise workflows, interact with various services, and make decisions with a degree of independence. Google Cloud Platform's Vertex AI, with its Agent Engine and Agent Development Kit (ADK), provides a comprehensive platform for developers to build and deploy these sophisticated agents. However, a recent investigation by Palo Alto Networks' Unit 42 has uncovered significant security vulnerabilities within Vertex AI that could allow attackers to exploit AI agents as 'double agents'—tools that appear to serve their intended purpose while secretly exfiltrating sensitive data and compromising infrastructure.
What's at Risk
The research highlights that the Per-Product, Per-Project Service Account (P4SA) associated with deployed AI agents is granted excessive permissions by default. This misconfiguration enables attackers to extract service agent credentials and conduct unauthorized actions on behalf of the agent. Once deployed, any interaction with the AI agent invokes Google's metadata service, exposing critical information such as the GCP project hosting the agent and the identity of the AI agent itself.
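The metadata exposure can be illustrated with a short sketch. The metadata server hostname and its required `Metadata-Flavor: Google` header are standard across Google Cloud runtimes, and the paths below are the documented defaults for project ID and service-account identity; whether agent-executed code can actually reach them depends on the deployment. The sketch only constructs the requests, it does not send them:

```python
import urllib.request

# Standard Google Cloud metadata server, reachable from inside the workload.
METADATA_BASE = "http://metadata.google.internal/computeMetadata/v1"

def metadata_request(path: str) -> urllib.request.Request:
    """Build (but do not send) a metadata-server request.

    Any code running inside the agent's environment -- including code an
    attacker coaxes the agent into executing -- can issue these requests.
    """
    return urllib.request.Request(
        f"{METADATA_BASE}/{path}",
        headers={"Metadata-Flavor": "Google"},  # header required by the server
    )

# The project hosting the agent, and the agent's own identity and token:
project_req = metadata_request("project/project-id")
email_req = metadata_request("instance/service-accounts/default/email")
token_req = metadata_request("instance/service-accounts/default/token")
```

The token endpoint is the critical one: it returns a short-lived OAuth access token for the agent's service account, which is exactly the credential Unit 42 describes extracting.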
Unit 42 successfully leveraged these vulnerabilities to gain unrestricted access to all Google Cloud Storage buckets within the consumer project, effectively undermining isolation guarantees. This level of access poses a significant security risk, transforming the AI agent from a helpful tool into a potential insider threat. Furthermore, the compromised P4SA credentials also granted access to restricted Google-owned Artifact Registry repositories, allowing attackers to download proprietary container images that form the core of the Vertex AI Reasoning Engine. This breach not only exposes Google's intellectual property but also provides attackers with insights into further vulnerabilities, including the potential exposure of internal source code.
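A defensive counterpart to this finding is auditing what the agent's service account can actually do. The sketch below is a minimal, illustrative check over a project IAM policy (in the JSON shape returned by `gcloud projects get-iam-policy --format=json`); the set of "broad" roles and the account names are examples, not an exhaustive or official list:

```python
# Illustrative roles that are broader than a typical agent workload needs.
BROAD_ROLES = {
    "roles/owner",
    "roles/editor",
    "roles/storage.admin",           # full control of all Cloud Storage buckets
    "roles/artifactregistry.admin",  # full control of Artifact Registry repos
}

def flag_broad_bindings(policy: dict, member: str) -> list[str]:
    """Return the broad roles an IAM policy grants to `member`."""
    return sorted(
        binding["role"]
        for binding in policy.get("bindings", [])
        if member in binding.get("members", []) and binding["role"] in BROAD_ROLES
    )

# Hypothetical policy excerpt for an agent's service account:
policy = {
    "bindings": [
        {"role": "roles/storage.admin",
         "members": ["serviceAccount:agent@example-project.iam.gserviceaccount.com"]},
        {"role": "roles/logging.logWriter",
         "members": ["serviceAccount:agent@example-project.iam.gserviceaccount.com"]},
    ]
}
flagged = flag_broad_bindings(
    policy, "serviceAccount:agent@example-project.iam.gserviceaccount.com")
print(flagged)  # -> ['roles/storage.admin']
```

A hit like `roles/storage.admin` is precisely the kind of grant that turned the compromised P4SA into project-wide bucket access in Unit 42's research.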
Patch Status
In response to these findings, Google has updated its official documentation to clarify how Vertex AI uses resources, accounts, and agents. The company recommends that customers adopt a Bring Your Own Service Account (BYOSA) approach, replacing the default service agent with a customer-managed account that enforces the principle of least privilege (PoLP). This matters because granting agents broad permissions by default violates PoLP and amounts to a dangerous security flaw by design.
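In practice, BYOSA means the deployment names a customer-managed, least-privilege service account instead of inheriting the default service agent. The helper and field names below are a hypothetical sketch of what such a deployment spec might look like, not the exact Vertex AI Agent Engine API; consult Google's current documentation for the real field layout:

```python
def agent_engine_spec(display_name: str, service_account: str) -> dict:
    """Sketch of a deployment spec supplying a customer-managed service
    account (BYOSA) in place of the default P4SA.

    Field names are illustrative only; verify against the current
    Vertex AI Agent Engine API before use.
    """
    if not service_account.endswith(".iam.gserviceaccount.com"):
        raise ValueError("expected a service account email address")
    return {
        "displayName": display_name,
        # Least-privilege identity the deployed agent will run as:
        "spec": {"serviceAccount": service_account},
    }

spec = agent_engine_spec(
    "support-agent",
    "agent-runtime@example-project.iam.gserviceaccount.com",
)
```

The key design point is that `agent-runtime@...` is an account you create and grant roles to explicitly, so its permissions are visible, auditable, and revocable, unlike an opaque default grant.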
Immediate Actions
Organizations are urged to treat AI agent deployments with the same rigor as new production code: validate permission boundaries, restrict OAuth scopes, review source integrity, and conduct controlled security testing before rollout. And because cloud attacks increasingly target running applications, organizations should also consider adopting Cloud Application Detection and Response (CADR) strategies to strengthen their security posture against threats stemming from these vulnerabilities.