By mid-2025, the United States was on pace to hit a new annual high in reported data breaches, according to figures compiled by the Identity Theft Resource Center. Reporters at NPR’s The Indicator traced a key driver: artificial intelligence tools that make digital crime faster, cheaper, and easier to scale. Their reporting, led by Cooper Katz McKim, examines how AI is changing the economics of hacking, the reach of criminal groups, and the risks to households and markets.
A record surge, explained
The Identity Theft Resource Center has tracked a steady climb in breach reports across sectors, and its midpoint snapshot of 2025 pointed to a record-setting year. The reporting highlights how readily available AI models lower the barrier to entry for newcomers and speed up operations for seasoned criminals.
“The U.S. was on track to set a new yearly record in the number of reported data breaches.”
That trend matters for consumers and small firms. Each breach can expose login credentials and personal data that fuel identity fraud and follow-on scams. The sheer volume of incidents raises the odds that any one person or business will be affected.
AI as a force multiplier for crime
The Indicator’s team describes AI as a multiplier for cybercrime. Off-the-shelf tools can write phishing emails, spin up fake websites, and automate social engineering at scale. That makes mass campaigns more efficient and adaptive.
AI has “made the work of criminal hackers easier, cheaper and scalable.”
Criminals can also probe for weak points faster, using code assistants and synthetic data to test attacks. This shift reduces costs and increases the number of attempts, which helps explain the surge in successful breaches.
Fighting AI with AI
The reporting also explores defensive uses of AI. Companies are turning to machine learning to spot unusual patterns, flag suspicious logins, and filter phishing attempts in real time. The goal is to match automation with automation. While no system is perfect, early detection can limit the spread and cost of a breach.
Cooper Katz McKim’s conversations suggest a growing race between attackers and defenders. As tools improve on both sides, the advantage swings with speed, data quality, and user behavior.
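The defensive pattern described above can be sketched as a toy rule-based login filter. Real systems use trained models and far richer signals; every field name, rule, and threshold here is an assumption for illustration, not anything from the reporting:

```python
from collections import defaultdict

# Toy sketch of flagging suspicious logins: learn each account's past
# countries and login hours, then flag anything outside that history.
# Illustrative only; production systems rely on trained models.

def build_baseline(history):
    """Record the countries and login hours previously seen per account."""
    baseline = defaultdict(lambda: {"countries": set(), "hours": set()})
    for event in history:
        profile = baseline[event["user"]]
        profile["countries"].add(event["country"])
        profile["hours"].add(event["hour"])
    return dict(baseline)

def is_suspicious(event, baseline):
    """Flag a login whose country or hour falls outside the account's history."""
    profile = baseline.get(event["user"])
    if profile is None:
        return True  # no history for this account: treat as risky
    return (event["country"] not in profile["countries"]
            or event["hour"] not in profile["hours"])

history = [
    {"user": "alice", "country": "US", "hour": 9},
    {"user": "alice", "country": "US", "hour": 14},
]
baseline = build_baseline(history)
print(is_suspicious({"user": "alice", "country": "RU", "hour": 3}, baseline))  # True
print(is_suspicious({"user": "alice", "country": "US", "hour": 9}, baseline))  # False
```

The design choice mirrors the "automation versus automation" point: even a crude baseline check runs on every login at machine speed, which is the scale at which AI-driven attacks arrive.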
Crime diversification and market risk
The series raises a wider concern: organized crime networks may be branching out as AI lowers skill barriers. The phrase “when cartels start to diversify” points to groups that see digital fraud as a new revenue stream with lower physical risk.
The team also examines how AI could disrupt financial markets. Synthetic media and automated bots can spread rumors or market-moving headlines in seconds, creating sharp swings before facts catch up. Even small distortions can rattle thinly traded assets.
Scams, patterns, and the human factor
Episodes focus on the mechanics of scams and how they scale. From “sewing patterns” to “stolen dimes,” the reporting shows that even tiny, repeated thefts can add up when automated. The human layer still matters. People click links, reuse passwords, and trust what looks real. AI exploits those habits at volume.
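The arithmetic behind "stolen dimes" at scale is simple. The figures below are purely hypothetical, chosen only to show how automation turns pocket change into real money:

```python
# Hypothetical figures, not from the reporting: what a dime-sized theft
# becomes when an automated campaign repeats it at scale.
per_theft_cents = 10        # one stolen dime
attempts_per_day = 50_000   # assumed automated attempt volume
days = 30

total_cents = per_theft_cents * attempts_per_day * days
print(f"${total_cents / 100:,.2f}")  # prints $150,000.00
```

At these assumed rates, a theft too small for any one victim to notice compounds into six figures a month, which is why automated micro-fraud scales where manual crime cannot.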
What you can do now
- Use a password manager and turn on multi-factor authentication for key accounts.
- Update software and devices promptly to close known flaws.
- Be cautious with links and attachments, even if the message looks polished.
- Monitor bank and credit statements for unfamiliar charges.
- Freeze your credit if you are not seeking new loans.
The Indicator’s reporting, with production by Connor Donevan and edits by Kate Concannon and Patrick Jarenwattananon, argues that AI has changed the cost curve of cybercrime. As 2025 breach figures climb, the stakes are rising for individuals, firms, and regulators. Expect faster attacks, smarter defenses, and more spillover into finance and organized crime. The key question now is whether security practices, policy, and user habits can adapt as quickly as the tools fueling this surge.