AI & Security · HIGH

AI Security - Relyance AI Launches Lyo for Data Protection

Help Net Security · Reporting by Industry News
Summary by CyberPings Editorial · AI-assisted · Reviewed by Rohit Rana
🎯 Basically, Relyance AI created Lyo to help protect data used by AI systems.

Quick Summary

Relyance AI has launched Lyo, a tool designed to secure data interactions for AI agents. The launch targets security gaps that open up as AI agents spread across enterprises; with AI-related data breaches on the rise, Relyance positions Lyo as a way to protect sensitive information and support compliance.

What Happened

Relyance AI has unveiled Lyo, an innovative autonomous data defense engineer designed to secure how AI agents interact with enterprise data. This launch comes at a crucial moment as more organizations adopt AI technologies, which can access sensitive data and automate workflows at unprecedented speeds. Traditional security tools often struggle to keep pace, focusing primarily on locating data rather than understanding its usage in real-time.

As AI agents become more prevalent, the challenge shifts from merely finding sensitive information to comprehending how it is utilized. Gartner predicts that by 2027, over 40% of AI-related data breaches will result from improper use of generative AI. Lyo aims to bridge this gap, providing organizations with the necessary tools to monitor and secure their data effectively.

Who's Affected

The introduction of Lyo is significant for businesses leveraging AI technologies across various sectors. Organizations that rely on AI for data processing, workflow automation, or infrastructure provisioning will find Lyo particularly beneficial. Security teams are increasingly tasked with safeguarding systems that operate beyond their visibility, making Lyo's capabilities essential for maintaining data integrity and compliance.

Moreover, as AI agents can introduce vulnerabilities such as overprivileged access and unpredictable data behavior, companies need robust solutions like Lyo to mitigate risks associated with these technologies. The tool's ability to provide continuous monitoring and context around data usage is crucial for organizations navigating this complex landscape.

What Data Is Covered

Lyo is designed to manage and protect sensitive data across various environments, including cloud infrastructures, SaaS applications, and third-party integrations. It enhances data visibility by mapping relationships between AI agents and data assets, identifying instances where AI agents may have excessive access to sensitive information. This proactive approach helps organizations understand potential risks associated with data exposure and unauthorized access.

Additionally, Lyo's contextual data classification capabilities allow it to categorize data sensitivity levels and track how data flows within the organization. This ensures that security teams are aware of which assets house critical information and how AI agents interact with that data, providing a comprehensive view of the data ecosystem.
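The agent-to-asset mapping described above can be made concrete with a toy example. The sketch below is purely illustrative (the agent names, asset names, sensitivity labels, and detection rule are assumptions, not Lyo's actual model or API): it flags agents that hold grants to sensitive assets they never actually use, the kind of overprivileged access the article warns about.

```python
# Hypothetical sketch: detect AI agents whose granted access to sensitive
# data assets exceeds their observed usage (least-privilege review).
# All names and labels here are invented for illustration.

SENSITIVITY = {
    "customer_pii": "high",
    "billing_records": "high",
    "public_docs": "low",
}

# Access the agents have been granted vs. what they were observed using.
granted = {
    "support-agent": {"customer_pii", "public_docs"},
    "provisioning-agent": {"billing_records", "customer_pii", "public_docs"},
}
observed_usage = {
    "support-agent": {"customer_pii", "public_docs"},
    "provisioning-agent": {"public_docs"},
}

def find_overprivileged(granted, used, sensitivity):
    """Return {agent: unused sensitive assets} worth a least-privilege review."""
    findings = {}
    for agent, assets in granted.items():
        unused_sensitive = {
            asset for asset in assets - used.get(agent, set())
            if sensitivity.get(asset) == "high"
        }
        if unused_sensitive:
            findings[agent] = unused_sensitive
    return findings

findings = find_overprivileged(granted, observed_usage, SENSITIVITY)
print(findings)
```

In this toy run, the provisioning agent is flagged because it holds grants to high-sensitivity assets it never touched, while the support agent's access matches its usage.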

What You Should Do

Organizations looking to enhance their data security posture in the age of AI should consider implementing Lyo. This tool offers several key features that can significantly improve data protection, including:

  • Unified AI and data visibility: Gain a complete understanding of how AI systems and data assets interact.
  • 24/7 monitoring and policy alerts: Stay informed of potential security policy violations in real-time.
  • Conversational investigation: Use natural language queries to prioritize security issues effectively.
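To make the monitoring-and-alerting idea concrete, here is a minimal, hypothetical sketch of rule-based policy alerting over a stream of agent data-access events. Every name, policy, and event in it is invented for illustration; it does not reflect Lyo's real implementation or API.

```python
# Hypothetical sketch: evaluate agent data-access events against simple
# policies and emit alerts on violations, the way a continuous monitor might.
from dataclasses import dataclass

@dataclass
class AccessEvent:
    agent: str
    asset: str
    action: str          # e.g. "read", "export"
    destination: str     # e.g. "internal", "third_party"

# (description, predicate) pairs; a matching predicate means a violation.
POLICIES = [
    ("sensitive export to third party",
     lambda e: e.action == "export" and e.destination == "third_party"),
    ("untracked agent",
     lambda e: e.agent not in {"support-agent", "provisioning-agent"}),
]

def evaluate(events):
    """Yield (policy description, event) for every policy violation."""
    for event in events:
        for description, violates in POLICIES:
            if violates(event):
                yield description, event

events = [
    AccessEvent("support-agent", "customer_pii", "read", "internal"),
    AccessEvent("provisioning-agent", "billing_records", "export", "third_party"),
]

alerts = list(evaluate(events))
for description, event in alerts:
    print(f"ALERT [{description}]: {event.agent} -> {event.asset}")
```

A production system would, of course, draw events from real telemetry and use far richer, context-aware policies; the point here is only the shape of the loop: classify the event, check it against policy, alert on violations.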

To maximize the benefits of Lyo, businesses should integrate it into their existing security frameworks and ensure that their teams are trained to leverage its capabilities. As AI continues to evolve, maintaining a robust security posture will require tools that can adapt to new challenges and provide the necessary context for informed decision-making.

🔒 Pro insight: Lyo's contextual monitoring could redefine how organizations manage AI-related security risks, particularly as generative AI usage escalates.

Original article from

Help Net Security · Industry News

Related Pings

MEDIUM · AI & Security

Cybersecurity Veteran Mikko Hyppönen Now Hacking Drones

Mikko Hyppönen, a cybersecurity pioneer, is now tackling the threats posed by drones. His shift from fighting malware to drone defense highlights the evolving landscape of cybersecurity. With increasing drone use in conflicts, understanding these threats is crucial for safety.

TechCrunch Security
HIGH · AI & Security

Anthropic Ends Claude Subscriptions for Third-Party Tools

Anthropic has halted third-party access to Claude subscriptions, significantly affecting users of tools like OpenClaw. This shift raises costs and limits integration options, leading to dissatisfaction among developers. Users must now adapt to new billing structures or seek refunds.

Cyber Security News
MEDIUM · AI & Security

Intent-Based AI Security - Sumit Dhawan Explains Importance

Sumit Dhawan highlights the importance of intent-based AI security in modern cybersecurity. This approach enhances threat detection and response, helping organizations stay ahead of cyber threats. Understanding user intent could redefine security strategies in the future.

Proofpoint Threat Insight
MEDIUM · AI & Security

XR Headset Authentication - Skull Vibrations Explained

Emerging research shows that skull vibrations can be used for authenticating users on XR headsets. This could enhance security and user experience significantly. As XR technology evolves, expect more innovations in biometric authentication methods.

Dark Reading
HIGH · AI & Security

APERION Launches SmartFlow SDK for Secure AI Governance

APERION has launched the SmartFlow SDK, providing a secure on-premises solution for AI governance. This comes after the LiteLLM supply chain attack raised concerns among enterprises. As organizations reassess their AI infrastructures, SmartFlow offers a reliable alternative to cloud dependencies.

Help Net Security
MEDIUM · AI & Security

Microsoft's Open-Source Toolkit for Autonomous AI Governance

Microsoft has released the Agent Governance Toolkit, an open-source solution for managing autonomous AI agents. This toolkit enhances governance and compliance, ensuring responsible AI use. It's designed to integrate with popular frameworks, making it easier for developers to adopt.

Help Net Security