Perplexity browser flaw exposed local files — Arabian Post

Security researchers have uncovered a vulnerability in the AI-powered Comet browser developed by Perplexity that could allow attackers to extract sensitive files and credentials from a user’s computer through a malicious calendar invitation, highlighting growing security risks tied to emerging “agentic” web browsers.

The flaw centres on the way Comet’s built-in artificial intelligence assistant processes instructions embedded within everyday digital content. According to cybersecurity analysts who disclosed the issue, attackers could embed hidden prompts in a calendar invitation that appear harmless to users but are interpreted as legitimate commands by the browser’s AI agent. Once triggered, the agent could access the user’s local file system and transmit data to an external server without the victim’s awareness.

Researchers demonstrated that the attack could occur with minimal interaction from the target. In some scenarios, simply adding the invitation to a digital calendar and asking the browser’s AI assistant to summarise the meeting details or manage the event would activate the hidden instructions. The AI would then execute those commands autonomously, potentially reading files stored on the device and sending their contents to the attacker while still presenting the user with an apparently normal response.

The vulnerability forms part of a broader group of weaknesses described by researchers as the “PleaseFix” family, which affects emerging AI-driven browsers capable of performing tasks on behalf of users. Unlike traditional web browsers that primarily display content, these platforms integrate intelligent agents that can read information, interact with services and automate actions such as managing calendars, sending emails or summarising web pages.

Cybersecurity specialists say this new architecture expands the attack surface by allowing AI agents to interpret instructions embedded in external data sources. In the case of Comet, attackers were able to hide malicious instructions within the formatting of a calendar entry, making them appear similar to internal system prompts used by the AI assistant. When the agent processed the calendar event, it treated the concealed commands as part of its legitimate workflow.

Researchers described the exploit as a form of “indirect prompt injection”, a technique in which malicious instructions are embedded within seemingly benign content processed by a generative AI system. Because the instructions are not directly entered by the user but are embedded in trusted sources such as emails, web pages or calendar entries, the AI may execute them without recognising them as hostile.

Testing showed that the exploit could enable the browser to browse directories on the victim’s computer, locate files containing sensitive information and transmit their contents to an attacker-controlled website. Some scenarios also demonstrated the potential to extract credentials stored in password management tools if those extensions were active within the browser session.

The incident underscores a growing concern among cybersecurity researchers that AI-enabled browsers may bypass many of the security assumptions underpinning traditional web technology. Conventional browser security models rely heavily on strict boundaries between websites and local computer files, often enforced through cross-origin restrictions that prevent web pages from accessing data stored on a user’s device.

Agentic browsers, however, operate under a different paradigm. Their embedded assistants are designed to perform complex tasks by interacting with multiple applications and services simultaneously. This means the AI agent may legitimately have access to the user’s local files, browser extensions and online accounts as part of completing routine requests, potentially allowing malicious prompts to exploit that access.

Security analysts say the vulnerability illustrates what they describe as an “intent collision” between the user’s request and hidden instructions embedded by an attacker. The AI agent attempts to fulfil both sets of instructions simultaneously, interpreting them as part of the same task even though one originates from malicious content.

Perplexity confirmed that it addressed the issue following responsible disclosure from the researchers. The company implemented safeguards preventing the AI agent from autonomously accessing file paths on a user’s computer, effectively blocking attempts to retrieve data directly from the local file system. Under the updated design, only explicit user actions can access files stored on the device.

Experts say the episode reflects a broader security challenge as generative AI capabilities become embedded in everyday software. Browsers incorporating AI assistants promise greater productivity by automating tasks such as research, scheduling and data analysis. At the same time, their ability to interpret natural language instructions introduces new forms of risk that conventional cybersecurity defences were not built to detect.

Industry analysts note that the race to integrate AI features into browsers has intensified over the past year, with technology firms seeking to redefine how users interact with the internet. Perplexity’s Comet browser, launched in 2025, forms part of this trend, offering a conversational interface that can navigate websites, summarise documents and perform actions across online services.

Read Previous

Apple broadens lineup with budget MacBook — Arabian Post

Read Next

Tycoon phishing network dismantled in global crackdown — Arabian Post

Leave a Reply

Your email address will not be published. Required fields are marked *

Most Popular