AI & Security · HIGH

LLMs Breaking Access Control - Hidden Risks Uncovered

SecurityWeek · Reporting by Kevin Townsend
Summary by CyberPings Editorial · AI-assisted · Reviewed by Rohit Rana
🎯 Basically, AI can create security rules that might let too many people in by mistake.

Quick Summary

AI-generated access control policies can introduce serious security flaws. Organizations may unknowingly grant excessive permissions, risking their security. It's crucial to validate these policies before deployment.

What Happened

In recent research, Vatsal Gupta, a senior security engineer at Apple, highlighted a critical issue with using Large Language Models (LLMs) to generate organizational access control policies. As businesses increasingly adopt policy as code, LLMs are used to write policies in purpose-built languages such as Rego and Cedar. The shift promises efficiency, but it introduces significant risk: LLMs can produce policies that appear valid yet contain hidden flaws, potentially undermining the organization's security model.

The problem lies in the semantic correctness of the generated policies. While they may compile successfully, a single missing condition or a misinterpreted attribute can redefine access boundaries, leading to unintended permissions. This subtlety poses a serious threat, as these flaws often go unnoticed, allowing access to sensitive resources that should be restricted.
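This failure mode can be sketched in plain Python rather than Rego or Cedar (a hypothetical illustration: the policy is modeled as a predicate, and all attribute names are invented):

```python
# Intent: only EU-based engineers may read project documents.

def allow_intended(user, action, resource):
    """Policy as the author intended it."""
    return (
        user["department"] == "engineering"
        and user["region"] == "eu"
        and action == "read"
        and resource["type"] == "project_doc"
    )

def allow_generated(user, action, resource):
    """LLM-generated variant: the region condition was silently dropped."""
    return (
        user["department"] == "engineering"
        and action == "read"
        and resource["type"] == "project_doc"
    )

eu_engineer = {"department": "engineering", "region": "eu"}
us_engineer = {"department": "engineering", "region": "us"}
doc = {"type": "project_doc"}

# Both versions "compile" and pass the obvious happy-path check...
assert allow_intended(eu_engineer, "read", doc)
assert allow_generated(eu_engineer, "read", doc)

# ...but the generated policy quietly grants access globally.
assert not allow_intended(us_engineer, "read", doc)
assert allow_generated(us_engineer, "read", doc)  # unintended grant
```

Nothing here fails to parse or raises an error; only a test that probes the denied case exposes the redrawn access boundary.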

Who's Being Targeted

Organizations leveraging AI for policy generation are at risk. As LLMs become integrated into engineering workflows, developers rely on them to automate the creation of security rules and access control policies. This reliance can lead to a false sense of security, as the generated policies may not align with the intended access restrictions. The continuous deployment of these flawed policies can result in a drift towards over-permissioned environments, where employees have access to more data than necessary.

Gupta's research indicates that many organizations may believe they are enforcing a least privilege model, while in reality, they are expanding their attack surface due to these unnoticed flaws. The risk compounds as more policies are generated, creating a complex web of security issues that are difficult to manage.

Tactics & Techniques

The recurring failure patterns identified in LLM-generated policies include:

  • Missing contextual constraints: Policies intended to limit access based on specific criteria, like region or department, may lack these conditions entirely, leading to global access.
  • Absence of deny logic: Many policies rely on a baseline deny posture, but LLMs may only capture exceptions without enforcing the underlying restrictions, resulting in broader access than intended.
  • Hallucination of attributes: LLMs can introduce non-existent attributes, causing unpredictable behavior at runtime.
  • Dropped temporal conditions: Policies that should control access based on time or session context may be simplified into static rules, leading to always-on access.
  • Action misclassification: Intended restrictions on sensitive actions may be misinterpreted, allowing broader operations than intended.

These failures stem from the model's tendency to simplify natural-language intent, which can carry significant security implications. Over time, even minor deviations accumulate into a large attack surface that is difficult to audit.

Defensive Measures

To mitigate these risks, organizations should not abandon LLMs but instead revise their trust model regarding policy generation. Key recommendations include:

  • Validation layers: Introduce checks between policy generation and enforcement to ensure all necessary components are present and correct.
  • Testing policies: Verify semantic correctness, not just successful compilation, to catch potential flaws before deployment.
  • Enforce deny-by-default principles: Ensure that policies explicitly restrict access unless specified otherwise.
  • Treat authorization logic as high-risk: Recognize the potential for flaws and apply rigorous scrutiny to generated policies.
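A minimal validation layer along these lines might, for example, reject policies that reference unknown attributes or omit an explicit default-deny declaration. This sketch uses an invented policy schema; a real deployment would lint the actual Rego or Cedar source instead:

```python
# Hypothetical validation layer between policy generation and enforcement.
# KNOWN_ATTRIBUTES and the policy dict schema are invented for illustration.

KNOWN_ATTRIBUTES = {
    "user.department", "user.region", "user.role",
    "action", "resource.type",
}

def validate_policy(policy):
    """Flag hallucinated attributes and a missing deny-by-default posture."""
    errors = []
    for rule in policy.get("rules", []):
        for attr in rule["conditions"]:
            if attr not in KNOWN_ATTRIBUTES:
                errors.append(f"unknown attribute: {attr}")
    if not policy.get("default_deny", False):
        errors.append("policy does not declare deny-by-default")
    return errors

generated = {
    "default_deny": False,
    "rules": [
        {"effect": "allow",
         "conditions": {
             "user.department": "engineering",
             "user.clearance_tier": "gold",  # attribute that does not exist
         }},
    ],
}

errors = validate_policy(generated)
assert errors == [
    "unknown attribute: user.clearance_tier",
    "policy does not declare deny-by-default",
]
```

Gating deployment on an empty error list turns "it compiled" into "it was validated", which is the trust-model shift the recommendations describe.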

As organizations embrace AI-assisted security engineering, the focus should be on achieving correctness, auditability, and trust. In the realm of authorization, being 'almost correct' is simply not sufficient.

🔒 Pro insight: The reliance on LLMs for policy generation necessitates robust validation mechanisms to prevent systemic security flaws in access control.

