Amid a surge in spending on artificial intelligence, a new survey points to a sober reality: many companies are not seeing a payoff yet. The findings suggest that organizations across sectors invested heavily over the past year, only to face higher costs and stalled benefits. The signal comes as boardrooms ask for proof of value and technology teams race to contain expenses.
Despite heavy investment, the survey shows AI is costing many companies money rather than paying off.
The tension reflects a familiar pattern with new technology. Early adopters move fast, but practical hurdles slow results. Integration work, training data issues, and change management can overwhelm initial plans. For many firms, the spend hits the books now, while gains may take longer to materialize.
Why the bill is rising
Companies often underestimate the setup work. Models need clean data, stable pipelines, and careful monitoring. Cloud costs can jump with experiments that run longer than planned. Skills are scarce, which pushes salaries and consulting fees higher. Security reviews add time and expense, especially in regulated industries.
- Data preparation is time-consuming and costly.
- Model training and inference can drive up cloud bills.
- Specialist talent is expensive and in short supply.
- Integration with legacy systems takes longer than expected.
- Governance and compliance require new processes and tools.
These costs compound when teams run many pilots at once. Some executives describe “pilot sprawl,” where dozens of small tests never move to production but keep consuming money and attention. In that environment, even modest wins can be hard to see.
What counts as value
Return on AI often shows up in process improvements: faster response times, fewer manual steps, or better forecasts. Those benefits can be real yet diffuse. Finance leaders ask for clear metrics, while business units may feel the gains but struggle to quantify them. Without shared definitions, projects face scrutiny.
Experts advise setting a baseline before any build. That includes current cycle times, error rates, and costs per task. Teams should compare pilots against that baseline, not a vague future state. If an idea cannot show progress within a few sprints, it may be the wrong fit or the wrong time.
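As a rough illustration of that baseline-first approach, the short Python sketch below compares a pilot's measured cycle time, error rate, and cost per task against a recorded baseline. The metric names and figures are hypothetical, not drawn from the survey.

```python
# Minimal sketch: compare a pilot's metrics against a pre-build baseline.
# All numbers and field names are illustrative, not survey data.

baseline = {"cycle_time_min": 18.0, "error_rate": 0.06, "cost_per_task": 4.20}
pilot    = {"cycle_time_min": 12.5, "error_rate": 0.05, "cost_per_task": 5.10}

def pct_change(before: float, after: float) -> float:
    """Percent change from the baseline; negative is an improvement for time, error, and cost metrics."""
    return (after - before) / before * 100

for metric, before in baseline.items():
    after = pilot[metric]
    print(f"{metric}: {before:g} -> {after:g} ({pct_change(before, after):+.1f}%)")
```

Even a comparison this simple makes the trade-off visible: in this hypothetical, the pilot saves time but raises cost per task, which is exactly the kind of result finance leaders want spelled out before a project scales.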
Risks that erode returns
Quality issues can turn into direct costs. If a model produces errors, employees must review or redo work. That reduces any time savings. Legal and brand risks add further pressure. Misuse of data or unclear outputs can trigger regulatory reviews. Firms then spend on audits and controls after the fact.
Shadow projects also play a role. Staff may test public tools without guardrails, leading to inconsistent outcomes and hidden spend. Central governance can help, but heavy-handed bans can push activity out of sight. The balance is hard to strike.
Where companies are finding wins
Despite current headwinds, some targeted uses continue to deliver. Document summarization, code assistance for developers, and customer support triage often show measurable gains when scoped tightly. The common thread is a clear task, a reliable data source, and a workflow that captures savings.
Leaders who report progress describe smaller bets with faster feedback. They choose one use case, define success, and align incentives. They also track the full cost, including people time, data work, and ongoing monitoring. Transparency builds trust and keeps expectations grounded.
A path to better outcomes
Companies can improve their odds by focusing on basics before scale. Several steps recur across successful efforts:
- Start with high-friction tasks that have clean data and clear owners.
- Set a baseline and target metrics for cost, time, and quality.
- Cap pilot spending and time, with explicit go/no-go criteria.
- Design human review steps where errors carry real risk.
- Track cloud and tooling costs at the project level (see the sketch after this list).
- Invest in data quality and reusable pipelines early.
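On project-level cost tracking, a minimal Python sketch follows. It rolls up cloud spend by project tag from an exported billing CSV; the file name and the column names (project_tag, cost_usd) are assumptions for illustration, not any specific provider's export format.

```python
# Minimal sketch: roll up cloud spend by project tag from a billing CSV export.
# The file name and column names are assumptions, not a specific provider's format.
import csv
from collections import defaultdict

def spend_by_project(path: str) -> dict[str, float]:
    totals: defaultdict[str, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            project = row.get("project_tag") or "untagged"
            totals[project] += float(row.get("cost_usd") or 0)
    return dict(totals)

if __name__ == "__main__":
    ranked = sorted(spend_by_project("billing_export.csv").items(),
                    key=lambda kv: kv[1], reverse=True)
    for project, total in ranked:
        print(f"{project}: ${total:,.2f}")
```

A roll-up like this is no substitute for a finance-grade dashboard, but it can surface pilot sprawl quickly when dozens of tagged experiments keep accruing charges.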
Vendors are responding, too. More tools now offer usage controls, audit logs, and cost dashboards. Those features can help teams spot waste sooner and adjust. Still, tools cannot replace disciplined project planning.
The latest survey adds caution to a crowded field of promises. It suggests that many firms are paying now for gains they have not yet seen. The near-term task is to narrow focus, measure results, and cut projects that do not meet the bar. Over the next year, watch for fewer pilots, more targeted deployments, and clearer reporting to finance committees. That shift could turn rising spend into tangible value—or confirm that some ideas are not worth the cost today.