Researchers in the US have reportedly used OpenAI’s voice API to create AI-powered phone scam agents that could be used to drain victims’ crypto wallets and bank accounts.
As reported by The Register, computer scientists at the University of Illinois Urbana-Champaign (UIUC) used OpenAI’s GPT-4o model, in tandem with various other freely available tools, to build an agent they say “can indeed autonomously execute the actions necessary for various phone-based scams.”
According to UIUC assistant professor Daniel Kang, phone scams in which perpetrators pretend to be from a business or government organization target around 18 million Americans every year and cost somewhere in the region of $40 billion.
GPT-4o lets users send it text or audio and have it respond in kind. What’s more, according to Kang, it’s not expensive to do, which breaks down a significant barrier to entry for scammers looking to steal personal information such as bank details or social security numbers.
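Under the hood, that real-time exchange amounts to streaming JSON events over a WebSocket. The sketch below shows a minimal text-in/text-out session in Python; the endpoint, headers, and event names follow OpenAI’s publicly documented Realtime API beta and should be read as assumptions about that interface, not as the researchers’ actual code.

```python
# Minimal sketch: a text-in/text-out session with OpenAI's Realtime API,
# the real-time voice interface referenced above. The endpoint and event
# names are taken from OpenAI's Realtime API beta docs and are assumptions.
import asyncio
import json
import os

import websockets  # pip install websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}

async def main():
    # Note: websockets < 14 calls this parameter `extra_headers`.
    async with websockets.connect(URL, additional_headers=HEADERS) as ws:
        # Queue a user message in the conversation...
        await ws.send(json.dumps({
            "type": "conversation.item.create",
            "item": {
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Hello there."}],
            },
        }))
        # ...then ask the model to generate a reply (text only, for brevity;
        # a voice agent would request the audio modality instead).
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {"modalities": ["text"]},
        }))
        # Stream response deltas until the server signals completion.
        async for raw in ws:
            event = json.loads(raw)
            if event["type"] == "response.text.delta":
                print(event["delta"], end="", flush=True)
            elif event["type"] == "response.done":
                print()
                break

asyncio.run(main())
```

A voice agent of the kind described in the paper would swap the text modality for streamed audio and bolt on telephony, which is reportedly where most of the researchers’ code went.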
Indeed, according to the paper co-authored by Kang, the average cost of a successful scam is just $0.75.
In the course of their research, the team carried out a number of different experiments, including crypto transfers, gift card scams, and the theft of user credentials. The overall average success rate across the different scams was 36%, with most failures due to AI transcription errors.
“Our agent design is not complicated,” said Kang. “We implemented it in just 1,051 lines of code, with most of the code devoted to handling the real-time voice API.
“This simplicity aligns with prior work showing the ease of creating dual-use AI agents for tasks like cybersecurity attacks.”
He added, “Voice scams already cause billions in damage and we need comprehensive solutions to reduce the impact of such scams. This includes at the phone provider level (e.g., authenticated phone calls), the AI provider level (e.g., OpenAI), and at the policy/regulatory level.”
The Register reports that OpenAI’s detection systems did indeed alert it to UIUC’s experiments, and the company moved to reassure users that it “uses multiple layers of safety protections to mitigate the risk of API abuse.”
It also warned, “It is against our usage policies to repurpose or distribute output from our services to spam, mislead, or otherwise harm others — and we actively monitor for potential abuse.”