Insurance fraud increased by 19% from synthetic voice attacks in 2024

Insurance fraud is more prevalent than fraud in other industries, and it is growing, a new report from fraud detection company Pindrop found.

After analyzing more than 1.2 billion customer calls in 2024, the company discovered there was a 475% increase in synthetic voice fraud attacks at insurance companies. Insurance fraud rose by 19% year over year, with a fraud rate of 0.02%.

Pindrop protects insurers, banks and retailers from voice fraud schemes by authenticating customers. The company is backed by Andreessen Horowitz and Citi Ventures.

Co-founder and CEO Vijay Balasubramaniyan said voice fraud is “scaling at a rate that no one could have predicted.” Across all industries, the company projected only a 4% increase in fraud attempts.

“Deepfakes, synthetic voice tech and AI-driven scams are reshaping the fraud landscape,” he said in a statement. “The numbers are staggering, and the tactics are growing more sophisticated by the day.”

Fraud exposure in the insurance industry is 20 times higher than in banking, the company says. Fraud involving personally identifiable information and bank account data also rose 61%. While the report spans health, life and property and casualty insurance, payers are "uniquely exposed," Balasubramaniyan clarified in an email to Fierce Healthcare.

Digital claims processes give scammers easier entry points, and insurance fraud relies on photos, videos and voice recordings, which are easier to misrepresent than the records used in banking.

"And because insurance payouts can be significantly higher than banking transactions, each fraudulent claim carries a greater financial impact," he explained.

Slightly fewer than 1 in 4 insurance fraud cases involve phishing or reconnaissance, because scammers are instead relying on social engineering and procedural familiarity. Another 7% of insurance fraud cases are classified as "familiar fraud," in which scammers use personal connections and known information to infiltrate accounts and earn trust.

The report highlighted a West Coast insurer bombarded with calls from “foreign actors providing Social Security numbers” for active policies. If someone picks up, the scammer completes knowledge-based authentication by sharing personal information, a level of protection Pindrop considers outdated.

From there, they ask targeted questions to have funds disbursed to them.

“Strengthening authentication protocols, implementing real-time risk analysis and continuously training contact center representatives to recognize evolving fraud tactics remain critical defenses against these increasingly skilled adversaries,” the report explained.

According to the Federal Trade Commission, fraud accounted for $12.5 billion in losses last year. In the insurance sector, that trend is continuing, Pindrop found, predicting the fraud rate will grow by 8% year over year. Deepfake fraud could increase by 162%, spearheaded by automated bots, “emotional-sounding AI” and accurate voice dubbing.

Generative artificial intelligence fraud losses in the U.S. could exceed $40 billion in 2027, Pindrop estimated.

“What began as isolated incidents of synthetic voice fraud in early 2023 has snowballed into a full-scale wave of AI-powered deception,” the report said. “And with every new model released, the gap between what’s real and what’s generated continues to shrink.”

The authors expect the rise of agentic AI to create more confusion. Agentic AI combines large language models with machine learning and other automation. As more people and corporations use AI agents to complete tasks, that behavior will become normalized.

In the wrong hands, it could lead to more cyberattacks and smarter malware.

“What was once viewed as suspicious may soon be legitimate Agentic AI behavior, blurring the line between trusted AI agents acting on behalf of a customer and agents acting with malicious intent,” the report added.

Updated: June 12 at 12:13 p.m.