The Real Cost and Risk of Replacing Humans with AI


There is a story being told in boardrooms and earnings calls right now, and it goes something like this: artificial intelligence will replace expensive human labour, reduce headcount, and drive efficiency at scale. It is a compelling narrative, and you would be a fool not to be swayed by it. Yet, at this moment, it is also largely fiction, and the organisations treating it as a settled strategy are accumulating risks they have not yet learned to name.

I want to be clear about where I stand. I am a firm believer in AI. Not as a replacement for human intelligence, but as a force multiplier for us. The organisations that will genuinely benefit from AI are those that use it to help their people think faster, deliver better, and focus on the work that actually requires human judgement. That is a very different proposition from the one currently driving headlines. The fact is, as I see it, AI is being pitched to a world that reacts to short-term numbers, not long-term risk.

What prompted me to write this piece, with a little help from my own AI stack, was a comment, shared with candour, from an unexpected source: Bryan Catanzaro, Vice President of Applied Deep Learning at Nvidia, the company that has been one of the biggest financial beneficiaries of the global AI spending surge. Catanzaro told Axios in April 2026 that for his own team, "the cost of compute is far beyond the costs of the employees." That is not a sceptic speaking. That is an admission from inside the machine. And it should give every board, every leadership team, and every government department currently dismantling its human capital in pursuit of AI efficiency a reason to pause.

The Numbers Tell a More Complicated Story

The scale of AI investment right now is extraordinary. According to Morgan Stanley, Big Tech firms have announced $740 billion in capital expenditure on AI in 2026 alone, a 69% increase on 2025. McKinsey recently projected that AI expenditures could reach $5.2 trillion by 2030, rising to $7.9 trillion at an accelerated pace, with $1.6 trillion coming from data centre spending and $3.3 trillion from associated IT equipment.

Meanwhile, the cost of AI tools is rising, not falling. Spending management firm Tropic noted in December 2025 that AI software fees have increased by 20% to 37% over the past year. Uber's Chief Technology Officer told The Information that the budget he thought he would need for AI coding tools had been "blown away already." And according to the Yale Budget Lab, there is no widespread data to support the idea that AI is displacing jobs at scale, yet.

The productivity case is equally fragile. A 2024 MIT CSAIL study found that AI automation would be economically viable in only 23% of roles where vision is a primary part of the work. In the remaining 77% of cases, it is still cheaper for a human being to do the job. Layered on top of this, the US Federal Reserve reports that just 18% of US companies had adopted AI tools by end-2025, even as adoption grew 68% between September and December of last year.

We are not witnessing a productivity revolution. What we are living through is a spending race with an uncertain destination, a consequence of how AI has been positioned.

Let me state clearly that I am not an AI cynic. Far from it. But looking at the data as things stand, the organisations running fastest are often the ones with the most to lose.

The Productivity Paradox — History Rhyming

This pattern has a name and a precedent. The economist Robert Solow observed in 1987 that "you can see the computer age everywhere except in the productivity statistics." That paradox, massive technology investment, no measurable productivity gain, persisted through the 1980s and 1990s until organisations finally learned how to redesign work around computers, rather than simply layering computers on top of existing processes.

We are in an analogous moment. The technology, as I said, is genuinely powerful. The economic case for replacing human labour with it, right now, in most contexts, is not yet proven. The Federal Reserve notes no clear evidence of AI improving productivity at scale. The organisations that win this decade will be those that use the current window, before the economics fully shift, to build genuine human-AI collaboration capability. Those that hollow out their human capital prematurely will find themselves exposed when the edge cases arrive, and they always arrive.

Keith Lee, an AI and finance professor at the Swiss Institute of Artificial Intelligence's Gordon School of Business, put it plainly when he told Fortune: "It's not just about AI becoming cheaper than humans. It's about becoming both cheaper and more predictable at scale." Predictable at scale. That caveat is doing enormous work, and most boards are skipping over it.

The Risks Boards Are Not Pricing

From my own recent advisory work with organisations at the intersection of reputation, trust, and strategic risk, I see five specific risk categories being systematically underpriced in the current AI adoption wave.

1. Substitution risk: The replacement of human judgement in areas where accountability cannot be delegated. Regulated decisions, stakeholder communications, complaints handling, crisis response: these are domains where the cost of an error is not a bad quarter, it is a regulatory sanction, a reputational crisis, or both. When an AI system fails in these contexts, the board cannot point to the machine. The accountability still sits with the people who chose to deploy it.

2. Capability erosion risk: Dismantling institutional knowledge before AI can replicate it. Organisations that reduce headcount rapidly lose tacit knowledge: the informal networks, the contextual judgement, the client relationships that cannot be documented or trained into a model. Once gone, this capacity is extraordinarily difficult to rebuild. The human infrastructure of a business is not a cost line. It is often the competitive moat.

3. Vendor dependency risk: Building operating models around cost structures that are about to change. AI companies are currently, in many cases, subsidising usage to drive adoption. As Gartner projects, inference costs will fall by more than 90% over the next four years, but pricing models will shift in parallel. Flat subscription fees are likely to give way to usage-based pricing. Organisations that have built their operating model around current cost assumptions are building on sand.

4. Regulatory and governance risk: Running ahead of the frameworks that are still forming. In financial services, healthcare, and government procurement, AI governance frameworks are actively being developed. Organisations that move fast and accumulate AI-driven processes before those frameworks are settled will face retrospective compliance requirements, often at the worst possible moment. In the UK, both the FCA and the ICO have signalled that accountability for AI-driven decisions rests with the deploying organisation, not the technology provider.

5. Reputational risk: The trust gap between what is promised and what is delivered. This is the one that is hardest to recover from. When a company announces AI-driven efficiency gains, lays off a significant portion of its workforce, and then reports a service failure, a hallucination, a biased output, a cascading system error, the narrative writes itself. The organisation has traded human reliability for machine efficiency, and the machine has let it down publicly. That story does not stay in the technology press. It reaches regulators, clients, and investors.

A recent and striking example: one engineer described an AI coding agent destroying his database and network as a result of what he called 'overuse.' On an individual scale, that is an expensive lesson. At the scale of a bank, a hospital, or a government department, it is a governance failure with consequences for citizens and shareholders alike.

The Public Sector Is a Special Case

I want to spend some time on this, because it receives insufficient attention. The pressure on government departments and public agencies to demonstrate AI-driven efficiency is real and growing, particularly in a fiscal environment where every departmental budget is under scrutiny. The logic of replacing expensive human advisers and caseworkers with AI systems is superficially attractive.

But public sector AI deployment carries risks that private sector deployment does not. When a commercial AI system makes an error, the consequence is typically financial or reputational. When a public sector AI system makes an error, whether in benefits assessment, in planning decisions, or in immigration case management, the consequence is a citizen's access to services, their legal rights, or their safety. The accountability gap here is not theoretical. It is live and largely unresolved.

Governments and agencies that are delegating material decisions to AI systems without robust human oversight are not just taking on operational risk. They are taking on a democratic accountability risk. Who is responsible when the system is wrong? That question needs an answer before deployment, not after the first judicial review.

What Good Looks Like

The contrast I find most instructive at present is Goldman Sachs. Their Chief Information Officer, Marco Argenti, was reported this week as saying that tracking individual AI usage across his 12,000-strong engineering team is not what he focuses on. What he watches is how fast his engineers move from idea to production. That is a human-centred productivity metric. It treats AI as an accelerant for human capability, not a substitute for it.

That framing, of AI as an accelerant, is the one that minimises risk and maximises genuine value. It keeps human accountability in the loop. It preserves institutional knowledge. It allows organisations to move fast without dismantling the judgment infrastructure that makes speed sustainable.

The organisations I most respect in this space are building what I think of as a human-AI collaboration architecture: clear protocols for which decisions AI informs, which it assists, and which remain firmly with humans. They are investing in training their people to work alongside AI effectively. And they are being honest, with their boards, their clients, and their regulators, about where the technology is and is not performing.

Five Recommendations for Leaders Navigating This Now

These are not abstract principles. They are things that organisations can and should be doing right now.

  • Conduct an honest audit of where AI is actually deployed in your organisation — not where it is planned or piloted, but where it is live and operational. Map the accountability for every decision that AI is influencing. If you cannot name the human accountable for each output, that is a governance gap.

  • Do not reduce human capacity before AI capability is proven — in your specific context, for your specific tasks, against your specific standards. Generic benchmark data is not evidence that the technology will perform in your environment.

  • Model your vendor dependency — understand what your AI cost structure looks like if subscription pricing shifts to usage-based. Build contingency into your operating model. Do not assume current pricing is permanent.

  • Engage your regulator proactively — particularly in financial services, healthcare, and public procurement. Do not wait for frameworks to be imposed. Organisations that shape the conversation are better positioned than those that receive the guidance after the fact.

  • Invest in the reputation of your AI governance — not just the governance itself. How you communicate your AI practices to clients, employees, and the public is a strategic asset. Transparency builds trust. Opacity, when things go wrong, destroys it.

The Tipping Point Is Coming — But It Is Not Here Yet

Gartner predicts that the cost of running a large language model with one trillion parameters will fall by more than 90% over the next four years. When that happens — when AI becomes both cheaper than humans and demonstrably more reliable across a wider range of tasks — the economics will shift significantly. The organisations that are preparing for that moment sensibly, building capability without over-extending, are the ones that will be best positioned.

But that tipping point is not here yet. The current period is one of high investment, rising costs, unproven productivity returns, and rapidly evolving governance frameworks. It is precisely the kind of environment in which reputational risk is highest — where the gap between what is promised and what is delivered is widest, and where a single high-profile failure can undo years of carefully built trust.

The organisations that navigate this well will be those that hold the line between ambition and accountability. That use AI to make their people better, not to make their people redundant. That invest in the governance structures and human oversight mechanisms that allow them to deploy AI responsibly and to say, credibly, when something goes wrong: "We knew the risk. We managed it. Here is what we are doing."

That is what reputation as strategic capital looks like in the AI age. Not a press release. Not a policy statement. A demonstrated, consistent commitment to the standards that trust is built on.

Julio Romo

Independent and international communications consultant and digital innovation strategist with over 20 years' experience in markets around the world.

https://www.twofourseven.co.uk/