Final week, OpenAI unveiled ChatGPT Atlas, an internet browser that guarantees to revolutionise how we work together with the web.
The corporate’s CEO, Sam Altman, described it as a “once-a-decade opportunity” to rethink how we browse the online.
The promise is compelling: think about a man-made intelligence (AI) assistant that follows you throughout each web site, remembers your preferences, summarises articles, and handles tedious duties comparable to reserving flights or ordering groceries in your behalf.
However beneath the shiny advertising lies a extra troubling actuality. Atlas is designed to be “agentic”, capable of autonomously navigate web sites and take actions in your logged-in accounts. This introduces safety and privateness vulnerabilities that the majority customers are unprepared to handle.
Whereas OpenAI touts innovation, it’s quietly shifting the burden of security onto unsuspecting customers who’re being requested to belief an AI with their most delicate digital selections.
What makes agent mode completely different
On the coronary heart of Atlas’s attraction is “agent mode”.
Not like conventional internet browsers the place you manually navigate the web, agent mode permits ChatGPT to function your browser semi-autonomously. For instance, when prompted to “find a cocktail bar near you and book a table”, it’ll search, consider choices, and try to make a reservation.
The expertise works by giving ChatGPT entry to your shopping context. It might probably see each open tab, work together with kinds, click on buttons and navigate between pages simply as you’ll.
Mixed with Atlas’s “browser memories” characteristic, which logs web sites you go to and your actions on them, the AI builds an more and more detailed understanding of your digital life.
This contextual consciousness is what allows agent mode to work. Nevertheless it’s additionally what makes it dangerously weak.
An ideal storm of safety dangers
The dangers inherent on this design transcend standard browser safety issues.
Take into account immediate injection assaults, the place malicious web sites embed hidden instructions that manipulate the AI’s behaviour.
Think about visiting what seems to be a professional buying website. The web page, nonetheless, comprises invisible directions directing ChatGPT to scrape private knowledge from all open tabs, comparable to an lively medical portal or a draft e-mail, after which extract the delicate particulars with out ever needing to entry a password.
Equally, malicious code on one web site may probably affect the AI’s behaviour throughout a number of tabs. For instance, a script on a buying website may trick the AI agent into switching to your open banking tab and submitting a switch kind.
Atlas’s autofill capabilities and kind interplay options can turn out to be assault vectors. That is particularly the case when an AI is making split-second selections about what info to enter and the place to submit it.
The personalisation options compound these dangers. Atlas’s browser recollections create complete profiles of your conduct: web sites you go to, what you seek for, what you buy, and content material you learn.
Whereas OpenAI guarantees this knowledge gained’t practice its fashions by default, Atlas continues to be storing extra extremely private knowledge in a single place. This consolidated trove of data represents a honeypot for hackers.
Ought to OpenAI’s enterprise mannequin evolve, it may additionally turn out to be a gold mine for extremely focused promoting.
OpenAI says it has tried to guard customers’ safety and has run hundreds of hours of centered simulated assaults. It additionally says it has “added safeguards to address new risks that can come from access to logged-in sites and browsing history while taking actions on your behalf”.
Nevertheless, the corporate nonetheless acknowledges “agents are susceptible to hidden malicious instructions, [which] could lead to stealing data from sites you’re logged into or taking actions you didn’t intend”.
A downgrade in browser safety
This marks a significant escalation in browser safety dangers.
For instance, sandboxing is a safety method designed to maintain web sites remoted and forestall malicious code from accessing knowledge from different tabs. The trendy internet relies on this separation.
However in Atlas, the AI agent isn’t malicious code – it’s a trusted consumer with permission to see and act throughout all websites. This undermines the core precept of browser isolation.
And whereas most AI security issues have centered on the expertise producing inaccurate info, immediate injection is extra harmful. It’s not the AI making a mistake; it’s the AI following a hostile command hidden within the atmosphere.
Atlas is particularly weak as a result of it offers human-level management to an intelligence layer that may be manipulated by studying a single malicious line of textual content on an untrusted website.
Assume twice earlier than utilizing
Earlier than agentic shopping turns into mainstream, we’d like rigorous third-party safety audits from impartial researchers who can stress-test Atlas’s defenses in opposition to these dangers. We want clearer regulatory frameworks that outline legal responsibility when AI brokers make errors or get manipulated. And we’d like OpenAI to show, not merely promise, that its safeguards can face up to decided attackers.
For people who find themselves contemplating downloading Atlas, the recommendation is simple: excessive warning.
If you happen to do use Atlas, assume twice earlier than you allow agent mode on web sites the place you deal with delicate info. Deal with browser recollections as a safety legal responsibility and disable them except you may have a compelling motive to share your full shopping historical past with an AI. Use Atlas’s incognito mode as your default, and keep in mind that each comfort characteristic is concurrently a possible vulnerability.
The way forward for AI-powered shopping could certainly be inevitable, nevertheless it shouldn’t arrive on the expense of consumer safety.
OpenAI’s Atlas asks us to belief that innovation will outpace exploitation. Historical past suggests we shouldn’t be so optimistic.
Uri Gal, Professor in Enterprise Data Programs, College of Sydney
This text is republished from The Dialog beneath a Inventive Commons license. Learn the unique article.