
Journalism, Trust & Strategy in the AI Era: A 2025 Playbook for Leaders

As AI reshapes journalism, the media’s role in building—or breaking—reputation is evolving. From the printing press to generative tools like ChatGPT, I explore how trust, strategy, and media engagement must adapt to a future where truth competes with machine-generated content.

The arrival of generative artificial intelligence has accelerated every process inside a modern newsroom.

In January, the Reuters Institute canvassed 326 editors, product chiefs, and CEOs in 51 countries; 87 per cent said that generative AI is already transforming their organisations, from automated transcription to story drafts and personalised audio feeds.

Associated Press, The Financial Times, and USA Today’s parent, Gannett, now treat automation as infrastructure, letting machines handle routine earnings calls and match reports while redeploying reporters to investigations.

AI literacy is fast becoming a core newsroom skill. AP’s grant‑funded training offers webinars, tip‑sheets and conference workshops; the BBC, Guardian and Reuters run internal “prompt‑engineering clinics”. Journalism schools at Columbia and City, University of London, now teach students to audit AI outputs for bias and hallucination, alongside classic source-checking. Taken together, these experiments point to a permanent shift: stories will be broken, checked and packaged by hybrid teams of journalists and models.

Leaders who fail to supply machine‑readable facts—clear timelines, structured data, provenance‑rich images—risk being mis‑summarised by search‑generative experiences before a human reporter even calls.
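What “machine‑readable facts” can look like in practice is simply key claims published in a structured format alongside the press release, so a summarising model has labelled, dated, attributable statements to draw on. A minimal sketch in Python; the field names and the organisation are illustrative assumptions, not an industry standard:

```python
import json

def build_fact_sheet(org, facts):
    """Package key claims with dates and sources so an LLM's first pass
    sees labelled, attributable statements rather than loose prose.
    The schema here is illustrative, not a recognised standard."""
    return json.dumps(
        {
            "organisation": org,
            "facts": [
                {"claim": claim, "as_of": as_of, "source": source}
                for claim, as_of, source in facts
            ],
        },
        indent=2,
    )

# Hypothetical organisation and claim, for illustration only.
sheet = build_fact_sheet(
    "Example Corp",
    [("Headcount is 1,200", "2025-01-31", "https://example.com/annual-report")],
)
print(sheet)
```

In a real deployment this kind of structure would more likely be expressed as schema.org/JSON‑LD markup embedded in the web page, where search and answer engines already look for it.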


Join Our Expert Webinar – Wednesday, 23 April 2025

Because the newsroom and journalism are changing, I am hosting a free 45-minute webinar and live discussion this Wednesday, 23 April at 15:00 (UK Time) with freelance journalist, newsroom consultant and journalism trainer Laura Oliver and IBM’s Business Executive for AI in EMEA, Hans-Petter (HP) Dalen.

There is still time to register to hear about the impact of AI, not just in business or politics, but importantly in newsrooms around the world. How AI is being adopted in newsrooms impacts how those in strategy, communications and reputation management need to operate.


The State of Public Trust in News and Media 2025

Trust remains the critical scarce resource. This year’s Edelman Trust Barometer places media at 48 per cent global trust, the lowest of the four institutions Edelman tracks, and five points behind business. In the United Kingdom, overall trust in news stands at 36 per cent, while in the United States, it sits at 32 per cent—barely one in three Americans.

Yet consumption has not dipped: DataReportal’s Digital 2025 Global Overview counts 5.24 billion active social-media user identities, representing 63.9 per cent of humanity, up 206 million from last year.

News—verified or fabricated—now travels at the speed of the scroll. “Without facts, you can’t have truth. Without truth, you can’t have trust,” Nobel laureate Maria Ressa reminds us.

Why Trust Is Harder to Earn: Four Interlocking Pressures

Journalists, strategists and corporate communicators agree on four structural drags:

  • Political and regulatory churn. Patchwork legislation, including the EU AI Act, the UK Digital Markets Bill, and state-level deep-fake laws in the US, produces conflicting disclosure rules.

  • Behavioural fatigue. Bullish influencers and doom‑scrolling create “news avoidance”—a phenomenon the Financial Times has charted across Britain and the US.

  • Technological disorientation. Deep-fake video, voice cloning, and AI-generated “slop” flood timelines faster than fact-checkers can respond.

  • Commercial fragility. Local news deserts widen; platform referral traffic declines as Google’s AI Overviews answer queries without clicks.

Each pressure alone dents credibility. Combined, they fuel what Edelman calls a “grievance-based society,” where six in ten respondents believe that business and government “make their lives harder.”

Strategic Communications in a Synthetic Landscape

Against this backdrop, what leaders in communications and reputation management, and the business and government leaders they advise, need is a three‑part operating system:

  1. Monitoring and resilience. Invest in provenance‑tracking. Tools that read C2PA or Verify‑IPTC metadata can flag manipulated assets within minutes, not news cycles.

  2. Narrative design. Assume an LLM will summarise your next policy paper before anyone opens the PDF. Offer well‑labelled fact‑sheets, Q&As and slide decks so the first‑pass AI reads what you want it to read.

  3. Governance and ethics. Map every internal AI use case against the EU AI Act’s disclosure clauses. Voluntary watermarking today prevents forced disclosures tomorrow.
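On the first pillar, a provenance check ultimately means parsing the C2PA manifest embedded in a media file (stored in a JUMBF box labelled “c2pa”). The sketch below is only a crude byte‑level triage heuristic, under the assumption that you simply want to flag assets carrying no trace of a manifest; it is not a validator, and production tools use the C2PA SDK, which also verifies cryptographic signatures:

```python
def appears_to_carry_c2pa(data: bytes) -> bool:
    """Crude triage heuristic: look for the "c2pa" label that the C2PA
    spec uses for its embedded manifest store. This does NOT parse the
    JUMBF box structure or verify signatures; it only flags files with
    no trace of a manifest for human review."""
    return b"c2pa" in data

# Hypothetical byte strings standing in for real image files.
bare_jpeg = b"\xff\xd8\xff\xe0" + b"\x00" * 32             # no manifest marker
signed_jpeg = b"\xff\xd8\xff\xe0" + b"jumbc2pa" + b"\x00" * 32

assert not appears_to_carry_c2pa(bare_jpeg)   # flag for human review
assert appears_to_carry_c2pa(signed_jpeg)     # manifest marker present
```

A substring match can obviously be fooled in both directions, which is exactly why flagged assets should be handed to proper C2PA verification tooling rather than judged by a heuristic like this.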

Failure on any pillar escalates reputational risk “at the speed of computation”, as DeepMind’s Demis Hassabis put it.

Global Nuances: One Story, Many Audiences

In Europe, privacy law and the forthcoming AI Act impose rigorous compliance burdens, but they also provide the clearest rule‑book. In the United States, First Amendment protections curtail regulation, so accountability often arises through shareholder lawsuits or advertiser boycotts. Across the Asia-Pacific, a hybrid model prevails: Singapore’s strict online-falsehoods law coexists with India’s high-trust public broadcasters, even as closed WhatsApp groups disseminate rumours at scale.

A disclosure that satisfies Ofcom in London may not placate the SEC in New York; a tongue‑in‑cheek TikTok that delights Gen Z in Manchester could misfire in Jakarta. Translation double‑checks and local counsel are no longer “nice to have”—they are release criteria.

Law firms, meanwhile, are positioning themselves to play a crucial role: protecting not only intellectual property but also reputations.

Scenario Planning for Leaders

The three draft essays from ChatGPT, Stanford’s STORM and Google Gemini all recommend forward scenarios; the synthesis here highlights four that merit board‑level drills:

  • Search without clicks. AI answers hoover up traffic; publishers push for licensing or block crawlers.

  • Verified provenance. C2PA-style watermarking becomes default; early adopters bank a “trust dividend”.

  • Synthetic saturation. It is predicted that by 2026, 90 per cent of online content will be machine-generated, making human-crafted journalism a premium tier.

  • Regulator as a platform. Governments release official data via APIs, shrinking misquote risk but concentrating information power. The UAE plans to become the first nation to draft laws using AI.

Each scenario changes how reputation crises ignite and spread; rehearsing them in simulation now saves real‑world cost later.

Six Practical Moves for Boards and C‑Suites

  1. Audit the content pipeline. Which statistics, images or voice clips could be convincingly faked today?

  2. Publish an AI‑use charter. Transparency about your models earns goodwill with regulators and journalists alike.

  3. Expand the human edit desk. Automation multiplies scale; only editors reduce errors.

  4. Back media‑literacy training. Your employees are the first line of defence against disinformation.

  5. Re‑engage professional newsrooms. Exclusive briefings, access to domain experts, and rapid fact packs help journalists get the story right—fast.

  6. Fund quality journalism. Whether through sponsorship, subscriptions, or philanthropy, supporting independent reporting is a risk mitigation strategy, not a form of CSR embellishment.

Rebuilding the Facts Together

History suggests that every communications revolution eventually settles around new norms of credibility.

The movable‑type press gave way to libel law; broadcast radio learnt balance after the Fairness Doctrine; satellite and cable news now wrestle with subscription fatigue and algorithmic referral loss. Generative AI will be no different—unless leaders opt out of the conversation.

Those who invest today in transparent data, ethical automation and genuine dialogue will not merely survive the synthetic era; they will define the public square that follows. Or, as Maria Ressa reminds us, “Without facts, you can’t have truth. Without truth, you can’t have trust… and democracy as we know it is dead.”

Join us on Wednesday for a free live webinar to learn more.


Perugia Journalism Insights: 2025 Trends for Business Leaders

The 2025 Perugia Journalism Festival wasn’t just about media—it was a wake-up call for global leaders. From AI disruption to collapsing public trust, the challenges facing journalism mirror those in business and government. Here’s what leaders need to know to stay credible, connected, and ahead.

Every year in April, Perugia, Italy, becomes the newsroom of global journalism. Thousands of journalists, editors, media thinkers and technologists gather for the International Journalism Festival (IJF), one of the most influential events in the media calendar.

This year, the conversations in Perugia weren’t just about journalism but about the future of the media, information, leadership, and public trust. And if you’re in a position of influence in business, government, or public affairs, what’s happening in journalism should be on your radar.

In 2025, journalism’s most significant challenges are becoming universal leadership issues for businesses and government alike:

  • How do you build trust in a skeptical world?

  • How do you harness AI without losing credibility?

  • How do you communicate effectively when your audience is scattered, distracted, or tuned out?

Here’s what every senior leader needs to know.

1. AI Is Rewriting the Rules—And Leadership Needs to Catch Up

Artificial Intelligence and GenAI were dominant themes in Perugia. Not in a theoretical, someday-soon sense, but in the here-and-now of daily newsroom operations.

As an example, newsrooms around the world are today using AI to:

  • Generate article summaries

  • Translate content in real time

  • Tag and archive video/audio content

  • Analyse audience behavior

  • Even draft story templates.

The upside is speed, scale, and personalisation. The downside? Misinformation, hallucinations, and the risk of losing the human touch, or omitting the context that people need to make informed decisions and develop trust.

How AI is impacting and disrupting media and journalism affects those of us in strategy and communications, and it is a subject I’ll be debating with freelance journalist, newsroom consultant and journalism trainer Laura Oliver (who was in Perugia) and Hans-Petter (HP) Dalen, IBM’s Business Executive for AI in EMEA, as part of Folgate Advisors’ AI Month.

This webinar will take place on Wednesday, 23 April at 15.00 (UK Time). To sign up, click my LinkedIn post below and complete the online form.

2. Trust Is the Most Valuable—and Fragile—Asset

Trust, or the lack of it, is an issue that affects us all. The 2025 Edelman Trust Barometer painted a sobering picture: 61% of people worldwide feel a sense of grievance toward major institutions. Trust in media? Hovering at 50% and falling in many international markets.

Even more concerning:

63% say they struggle to distinguish real journalism from content designed to mislead.

That confusion is happening at the intersection of social media, AI, and information overload—and it affects more than news organisations. It’s hitting businesses, governments, and NGOs alike.

I remember the days in the early 2000s when media outlets invested in user-generated-content teams who spent time finding real stories, verifying them, and publishing them with context. For the last eight to ten years, though, content online has been challenging to verify, an issue that is becoming even more difficult as AI is used by bad actors to negatively influence perceptions and opinions, not just of governments but of businesses and individuals.

How businesses and their communications teams and advisers react will become even more critical as we continue to move into uncharted media territory.

3. The Collapse of Traditional Traffic Is a Signal for All Sectors

Another major headline from IJF 2025: social media no longer reliably drives traffic to news sites, an issue the Reuters Institute for the Study of Journalism has raised in its annual reports. Platforms like Facebook, Instagram, and X (formerly Twitter) have reduced external linking. AI assistants now answer questions without sending users to source material.

That collapse in referral traffic has forced newsrooms to reinvent how they reach people. Some have invested in:

  • Direct relationships (newsletters, apps, SMS updates)

  • Community-building tools

  • Compelling video/audio content

  • Platform-specific storytelling (e.g., TikTok, Reels)

The parallel is clear. Businesses and governments can’t rely on a single channel, whether social media, SEO, or third-party apps. Like media organisations, they must build deeper, more direct relationships with their audiences.

The platforms that they choose need to be able to support fact over fiction.

4. Storytelling Is Now a Strategic Skillset

I’ve said this for many years, but how organisations, businesses or governments are perceived is down to the real-life experience of audiences, the quality of their storytelling, and how relatable this is to the audience.

How human, engaging, transparent and trustworthy an organisation’s storytelling is will shape how it is perceived, and the trust and reputation its audiences grant it.

For media, long-form storytelling is becoming less common, primarily because it secures less engagement and retention. Instead, newsrooms are getting creative with:

  • Short-form vertical videos

  • Podcasts and voice notes

  • Interactive explainers

  • Live Q&As and comment blocks

  • Multilingual, multi-platform storytelling

This is more than a format change—it’s a mindset shift to reach audiences whose attention span has steadily dropped.

PRs and strategic communicators need to invest more time in thinking like storytellers, not just spokespeople: building simple, engaging narratives. Content should be clear, visual, and made for the platforms where your stakeholders spend time. The message may be profound, but the delivery must meet modern expectations.

5. Audience Behavior Has Changed—for Good

One concern raised at IJF 2025: News avoidance is rising. People are overwhelmed, anxious, and distrustful. Many opt out of news or stick to platforms that confirm their views. We’ve known this for many years, and it is an issue that needs to be addressed. We need to see how AI is leveraged to retain audiences.

This matters for public engagement across the board. Whether rolling out a national campaign or managing an organisational shift, your audience might not be listening as they used to. Media outlets have known about this for quite some time, and communicators need to learn more from journalists and their media outlets.

Some organisations and their communications advisers already shape their comms around an understanding of audience fatigue: they simplify their messages and use a trusted messenger who is not the CEO.

Equally, they engage with audiences not just on email but on more personable broadcast platforms like WhatsApp, YouTube or even Instagram if the platform is relevant to the brand and they can maintain control and trust.

6. Collaboration Is the New Competitive Advantage

Faced with shrinking budgets and massive complexity, media outlets have been partnering more than ever. They’re co-funding investigations, sharing tools, and forming alliances with NGOs and tech companies.

They are also becoming adept at hosting events and private dinners that convene decision-makers, occasions for which consultancies once charged a hefty fee. Outlets like the Financial Times now run consulting arms of their own; FT Strategies delivers counsel to other media organisations.

Thinking collaboratively unlocks value and can enhance trust. In a fractured-attention economy, sharing the stage can amplify your impact.

The Bottom Line: Journalism’s Struggle Is a Mirror

The Perugia Journalism Festival showed us that the media industry is grappling with the very same pressures that businesses and governments now face: the disruption that AI is enabling and the trust deficit among today’s audiences. This is forcing many organisations to reassess how they communicate and engage with their audiences and stakeholders.

What’s happening in media is bigger than journalism. It reflects the information economy in which we all live and lead. The leaders who adapt to these shifts—who lead with clarity, transparency, and a sense of responsibility—will not only survive this transformation. They’ll lead it.


What You Can Do Now

Join our webinar and learn how AI is transforming the newsroom and business of news, changing the business of communications.


Beyond the Noise: Why Impact Capital Still Wins

Despite political headwinds, impact investing is evolving—not retreating. From China to the Gulf, capital is flowing to ESG ventures that align long-term financial returns with strategic reputation and trust.

A few days ago, I was reading Global Corporate Venturing’s article “Impact funds aren’t disappearing” by Robert Lavine, and a critical point stood out: While ESG start-ups and their investors are quietly and confidently delivering impact, they could do more if they better strategically communicated this.

Yet, as we know, ESG businesses have become the target of political backlash—especially in the United States under a second Trump administration—even though impact funds are continuing to grow, diversify, and deliver value.

Populist narratives dominate the airwaves, but the story is different in boardrooms and sovereign wealth offices outside the US: the market is rewarding reality. Four years in politics is a lifetime; in finance, it’s barely a moment.

As Global Corporate Venturing points out, impact funds are becoming essential to de-risk portfolios and build long-term value in the volatile global economy we find ourselves in.

Why? Because ESG-aligned investing is no longer a fringe movement or corporate window-dressing. It’s a calculated response to macroeconomic shifts, climate risk, technological transition, and societal expectation. In a world facing systemic shocks—from climate and biodiversity loss to geopolitical instability—impact funds are proving more resilient than carbon-heavy investments weighed down by reputational risk and stranded assets.

The Long-Term Horizon: Political Cycles vs. Financial Reality

Political leaders in Western democracies often operate in short-term cycles: four or five years of headlines, electoral strategy, and ideological posturing. But asset managers, sovereign funds, family offices, and corporations take a far longer view—often 10, 20, or even 50 years. And the financial data is clear: ESG investing is not only here to stay but outperforming legacy models.

Despite headlines suggesting waning interest, ESG investing remains a major force—though the nature of investor engagement is evolving. According to Morningstar’s Global ESG Q4 2024 Flow Report, global sustainable funds experienced net outflows of $88 billion in 2024, reversing the modest $4.3 billion in inflows recorded in 2023. These outflows were mainly driven by Europe, which accounted for $86.5 billion in redemptions, reflecting regulatory pressures and repositioning among a handful of large managers.

However, the underlying picture is more nuanced—and telling. Passive ESG funds attracted $47.8 billion in new capital globally, while active ESG funds saw $135.9 billion in withdrawals. This shift suggests investors remain committed to sustainability but increasingly favour transparent, low-cost, index-based strategies over more opaque active approaches. In the US, ESG sentiment remains politically polarised, yet even there, sustainable strategies retain a foothold.

What this signals is not an ESG retreat but a recalibration. Investors are demanding more clarity, better performance metrics, and strategic alignment. The capital is still very much available for those able to communicate value, resilience, and measurable impact.

Strategic Moves from Global Powers

China: Outspending, Outbuilding, Outperforming

China’s green finance strategy is a masterclass in long-term positioning. By the end of 2024, green loans in China had reached ¥36.6 trillion (approx. $5.1 trillion), a 36% annual increase, now accounting for over 13% of total lending in the country.

China isn’t just decarbonising—it’s capitalising. It leads the world in solar PV, wind turbine, battery and electric vehicle manufacturing. In fact, 60% of all new renewable energy capacity between now and 2030 is expected to come from China.

This isn’t greenwashing—it’s geopolitics. While some Western governments politicise ESG, China is quietly securing its leadership in the global green industrial revolution.

The Middle East: Sovereign Funds Go Sustainable

In 2024, the Gulf Cooperation Council’s sovereign wealth funds—including Saudi Arabia’s Public Investment Fund (PIF), the UAE’s Mubadala and ADQ, Qatar Investment Authority, and Oman Investment Authority—deployed $55 billion across 126 deals in sectors including green hydrogen, renewables, smart cities and advanced manufacturing.

Equally, as a strategic example, look how the UAE has positioned itself with the new US administration through its UAE USA United programme. Here’s a post from the UAE Embassy in Washington DC that shows how economic power works.

This isn’t a reputational offset. It’s a recognition that long-term returns lie in diversified, sustainable portfolios—not fossil-fuel dependence.

Saudi Arabia’s PIF is now targeting 70% of the Kingdom’s renewable energy goals by 2030. UAE’s Masdar is investing in renewables across Africa and Asia—activity that I have personally seen.

ESG principles are increasingly embedded across these funds’ mandates—not for PR and reputation management but for Return on Investment.

ESG and Impact Investing: Financial Returns Speak Loudest

Sustainable investing is delivering.

According to Morgan Stanley, in the first half of 2024, ESG-focused equity funds outperformed traditional funds by 60 basis points, with ESG-aligned fixed-income strategies showing even greater resilience during market volatility.

Meanwhile, the International Energy Agency reported that investment in clean energy globally was expected to hit $2 trillion by the end of 2024—doubling fossil fuel investment.

Strategic Recommendations for Investors and Businesses

To realise the full potential of ESG and impact capital, decision-makers need to rethink how they position themselves and their ventures and act strategically—shaping perception, protecting reputation, and creating trusted financial narratives.

Here are some strategic recommendations:

1. Reframe ESG as Risk Management, Not Morality

Make the case for impact investing in language boardrooms and investors understand: resilience, long-term value, regulatory alignment, and reputational capital. As I have written before, in the capitalist, ‘America First’ environment in which we now find ourselves, frame ESG in terms of innovation that delivers financial returns.

Action: Equip LPs, clients and stakeholders with sector-specific data and clear performance comparisons that show how ESG-linked assets de-risk portfolios. This is not about what you want to say, but about what you want them to understand, and the framing needed to win their support.

2. Align Capital with Policy Certainty, Not Political Noise

Ignore political noise. Track where capital is going, not where it’s being criticised. Most major economies—from the EU and China to the UAE and Singapore—are aligning with sustainable growth policies.

Action: Use policy foresight and regulatory trend analysis to identify sectors where early-stage impact investment will yield long-term first-mover advantage.

3. Prioritise Trust and Transparency in Communications

Investors want clarity. Policymakers need consistency. Stakeholders value honesty. Reputation is built on these principles—and increasingly priced into valuations, as I’ve written about before.

Action: Integrate ESG and impact narratives into annual reports, investor relations, public positioning and stakeholder communications.

4. Own the Reputation Advantage

In a world of misinformation and distrust, reputation becomes a premium. Impact capital—when properly explained and backed by data—builds authority, credibility and public trust. It is an investment in stability and the future, not the past.

Action: Appoint strategic communications and reputation advisors at fund level to shape the narrative, influence stakeholders, and unlock new partnerships and markets.

The Market Will Reward Strategic Patience

The backlash against ESG may dominate short-term headlines, but the long-term financial case is unshakable.

China and the nations in the GCC are proving that with the right strategic vision, sustainable capital allocation is not only the right thing to do—it’s the most profitable.

In the battle between politics and money, trust the money. And trust those who build, communicate and protect reputations for the long haul.


How To Rethink Data Culture in Government

Brent Hoberman’s question—“How good is the data that governments use to make choices?”—spotlights a deeper issue: it’s not just about the data, but the people and culture behind it. In this article I explore how risk aversion, silos, and bias hinder policy impact—and what leaders can do to fix it.

Rethinking Data Culture in Government: Why People Matter More Than Platforms

Brent Hoberman, Co-Founder and Chairman of Founders Forum Group, recently posed a pointed question on LinkedIn: “How good is the data that governments use to make choices?” In a world where data drives nearly every sector, his query lands at the heart of policymaking—and exposes a much bigger issue.

The problem isn’t just data quality. It’s what happens after the data is gathered: how it’s interpreted, challenged, applied, and communicated. This is where people and culture play an outsized role. And it’s why leaders—in government, investment, and business—must stop thinking about data as a technical asset alone and start treating it as a strategic one that depends on human judgment and institutional design.

I want to unpack these issues, looking not only at data quality, which is important, but at the people interpreting the data and the culture of the environment in which they must question it and test their assumptions.

In my view, GenAI is not there to deliver shortcuts but to present different perspectives for our critical thinking to weigh, unlocking the productivity and growth gains that AI can deliver.

Good Data, Bad Decisions: Where Things Go Wrong

We often treat data as inherently objective, but how it is interpreted—who interprets it, through what lens, and under what constraints—matters as much as the data itself.

Take the UK’s experience during COVID-19. Despite access to extensive health and economic datasets, inconsistent interpretations led to wavering policy responses. The Office for Budget Responsibility later admitted significant forecasting errors, shaking public confidence in data-driven decisions.

Similar issues surfaced in significant infrastructure projects like HS2. Initial economic modelling drastically underestimated costs, leading to public mistrust and reactive policy changes. These aren’t failures of data—they’re failures of how it was applied.

Even Stripe CEO Patrick Collison, a voice from the private sector, noted the danger of false confidence in large datasets—his point: insufficient data isn’t the only problem. Misapplied data—interpreted without critical thinking or contextual understanding—can be just as damaging.

In the eight years I have worked as a specialist within the UK Government, across the Digital, Data and Technology, Policy, Trade and Internal Audit professions, I’ve had the pleasure of working with some great people and civil servants. However, what I have noticed in my time is that the culture is what has held true innovation back.

The Human Layer: Risk Aversion, Bias, and Bureaucracy

Looking at the examples that Brent highlighted, let’s look at why these policy-making issues and outcomes keep repeating themselves. In my view, it comes down to three interlocking human and cultural challenges:

1. Risk Aversion Is Rational—But Limiting

Civil servants operate in a high-stakes environment. Their decisions are under constant scrutiny by media, politicians, and the public. In this context, risk-taking is often viewed not as innovation but as liability.

Sarah Munby, Permanent Secretary at the UK Department for Science, Innovation & Technology, has acknowledged that it’s often rational for civil servants to avoid risk. But logical or not, it breeds inertia.

When failure is penalised more than success is rewarded, the safest decision is to do nothing new.

2. Cognitive Bias Distorts Data Use

Confirmation bias, anchoring, and availability heuristics aren’t abstract psychological concepts. They shape how policies are made.

A policymaker invested in a particular narrative may unconsciously seek data confirming their view and discount contradictory evidence.

Over-interpretation is also a significant issue. Data stretched to fit political needs loses its integrity—and can lead to flawed, even dangerous, decisions.

3. Bureaucratic Silos Kill Momentum

Government departments often operate in silos. Data is hoarded, not shared. Systems don’t talk to each other. This report by the UK National Audit Office confirms that departments need to ‘work together more effectively on industrial strategy.’

And insights that could drive better outcomes get lost in translation—or trapped in incompatible formats.

The UK's approach remains fragmented and outdated compared with digitally integrated governments like South Korea, Singapore or Japan, the latter two of which I have got to know quite well.

Global Lessons: What Innovative Nations Get Right

While the UK wrestles with entrenched bureaucracy, other countries show what’s possible when data, leadership, and culture align.

  • Singapore: Through its 'Smart Nation' initiative, Singapore integrates real-time data platforms to make public services seamless. Government, academia, and private industry collaborate deeply, removing institutional silos.

  • Japan's ‘Society 5.0’ vision blends AI and big data to plan more intelligent, sustainable urban environments. In fact, this strategy is central to the Osaka (Kansai) 2025 Global Expo.

  • South Korea: Its ‘Digital New Deal’ enabled swift data-led responses during COVID-19, powered by strong partnerships between government and tech companies.

  • United Arab Emirates: The UAE—particularly Abu Dhabi—has taken a bold, strategic lead in AI adoption by integrating it into national policy, investing billions through sovereign wealth funds like ADIA, Mubadala, and MGX, and forming global partnerships to position itself as a hub for innovation and digital governance. Its national strategy is the UAE National Strategy for Artificial Intelligence 2031.

  • The U.S.: While federal agencies vary widely, collaborations with private firms like Palantir and Microsoft have produced some of the world’s most advanced public data systems.

In all these examples, one thing is clear: technology alone doesn’t drive transformation. It’s the willingness to take calculated risks, to experiment, and to bring in diverse expertise that makes the difference.

Lessons from Business: Why the Private Sector Moves Faster

The cultural divide between government and business regarding data is stark. The public sector can innovate at pace when it must, but in the private sector, data decisions are tied directly to customer feedback, revenue impact, and competitive pressure; that forces action and rewards adaptation.

Silicon Valley startups and tech companies in Shenzhen and Tokyo iterate constantly. They make small bets, test them, learn fast, and scale what works. In these cultures, risk is something to learn from, and that is what unlocks innovation.

This mindset—rapid experimentation over exhaustive analysis—is still rare in government, where long policy cycles and political accountability inhibit quick movement. Yet, this agile approach is exactly what data requires.

A Smarter Path Forward: Strategic Recommendations

Governments must recalibrate how they think about data to close the gap between aspiration and impact.

This means shifting from a tech-first mindset to a people-and-culture-first strategy, an approach that McKinsey, one of my clients, promotes with confidence through its ‘Never Just Tech’ way of working and communications campaign.

Here’s how government needs to rewire itself:

1. Build a Culture of Informed Risk-Taking

  • Encourage pilot projects or 'policy sandboxes' that allow for low-risk testing of new approaches.

  • Create internal protections for innovative civil servants so failures are treated as learning opportunities, not career risks.

2. Strengthen Cross-Departmental Collaboration

  • Mandate interoperability standards for data systems across departments.

  • Fund cross-agency task forces to address complex, multi-dimensional challenges like climate, health, and housing.

3. Invest in Data Literacy and Critical Thinking

  • Embed data literacy into civil service training—not just technical skills but also bias awareness, ethical interpretation, and critical evaluation.

  • Include diverse expertise on policymaking teams: behavioural scientists, data analysts, domain experts, and communicators.

4. Prioritise Ethical Data Use and Public Trust

  • Develop and publicise transparent guidelines for collecting, storing, and applying data.

  • Engage citizens in how their data is used—building understanding and trust through plain-language communication.

5. Benchmark Globally, Act Locally

  • Use international models not as copy-paste solutions but as inspiration tailored to local political and institutional realities.

  • Create a structured approach to learning from countries with more substantial digital infrastructure and integrated policy systems.

6. Communicate with Honesty and Clarity

  • Communications and positioning are critical. Be upfront about uncertainties and trade-offs in data-led policy. Voters are more likely to support change when they understand the rationale.

  • Use storytelling to humanise data—show how real people benefit when better insights inform policies.

The Bottom Line: It’s Not Just About Data

If the UK and other governments want to deliver better services, smarter spending, and more substantial outcomes, they need more than dashboards and datasets. They need cultural change. They need institutions that reward learning, not just control. They need to empower people who can ask hard questions about the data—not just accept it at face value.

This doesn’t mean we abandon analytics or modelling. It means we ground them in a human-centred strategy supported by ethics, collaboration, and a willingness to evolve.

As Brent Hoberman’s question rightly implied, 'How good is the data?' isn’t the only thing we should be asking. We also need to ask:

  • Who is interpreting the data?

  • What assumptions are they bringing?

  • And are we creating the right conditions for the best insights to surface—and stick?

The future of effective policy isn’t just data-driven. It’s people-powered, not just about the technology.

Read More
opinion, how to Julio Romo

If 'Signal Gate' Happened in a Bank

What if the recent ‘Signal Gate’ leak had happened inside a global bank? In this blog, I explore how a similar breach in financial services would trigger regulatory action, reputational fallout, and investor backlash—and what leaders must learn to safeguard trust in uncertain times.

The Signal Chat That Shook National Security

Imagine this: The CEO of a global investment bank opens a private Signal group chat to coordinate a confidential strategic acquisition. Senior partners, a regulator liaison, and the head of compliance are all included. But by accident, so is a journalist. Within minutes, the journalist has read—and screen-grabbed—market-sensitive, insider information. The story hits the press the next day.

Chaos ensues. Regulators investigate. Fines are issued. Careers are over. Trust evaporates.

This scenario is not fiction—it’s a corporate parallel to what just happened when The Atlantic's Editor in Chief Jeffrey Goldberg was ‘accidentally’ added to a Signal group chat involving senior U.S. government officials—including the Vice President—who were discussing imminent military action in Yemen. This leak, now referred to by some as ‘Signal Gate,’ raises profound questions about information security, governance, and trust.

In business, particularly in financial services, this kind of breach wouldn’t just spark headlines—it would trigger an avalanche of regulatory, legal, and reputational consequences.

So what would happen if such a breach occurred inside a regulated financial institution, and what lessons must leaders take from this?

The Reality of Regulation: Financial Firms Live in a Compliance Minefield

Financial institutions in the US, UK, EU, and across Asia-Pacific operate under stringent rules defining how sensitive, confidential, and market-moving information is handled. These regulations exist for good reason: the financial system runs on trust, and even the perception of misconduct or poor governance can shake markets, trigger withdrawals, or destroy brands.

Key frameworks include:

  • US: SEC, FINRA, and the Sarbanes-Oxley Act mandate strict control over electronic communications, insider trading, and recordkeeping.

  • UK: FCA SYSC rules require senior managers to take responsibility for controls, while MiFID II mandates secure recordkeeping and reporting.

  • EU: GDPR and the Market Abuse Regulation (MAR) require tight access controls and whistleblowing channels.

  • Asia-Pacific: MAS (Singapore), ASIC (Australia), Financial Services Agency (Japan) and others require compliance teams to monitor, log, and protect market-sensitive communications.

In this context, if a CEO or trader used Signal, WhatsApp, or Telegram to discuss confidential deals or non-public material, and an unauthorised party was added—intentionally or not—the consequences would be immediate.

What Would Happen in a Financial Firm? A Breakdown of the Fallout

Internal Governance Crisis

An immediate breach of internal communications policy would occur. Most institutions prohibit the use of unauthorised communication apps for business. The incident would trigger a forensic investigation by internal audit, compliance, and legal teams.

Regulatory Enforcement

In the US, the SEC and FINRA would begin parallel investigations. Recent fines against banks for using WhatsApp and Signal for business communications have exceeded $2 billion, with JPMorgan, Barclays, and Goldman Sachs all sanctioned. The FCA and EU regulators would likely act similarly.

Criminal and Civil Liability

Depending on the content, the leaders involved could face civil lawsuits (for breach of fiduciary duty or negligence), insider trading allegations, or even criminal charges if material non-public information was mishandled.

Reputational Crisis

Media coverage would be fierce. Headlines would focus on leadership recklessness, board failings, and lapses in compliance. In a sector where trust underpins everything, the damage could be long-lasting.

Investor Fallout

Public companies could see share price declines as investors question governance standards. Private equity firms, venture capital and corporate venture capital firms would likely face LP pressure, potential fund withdrawals, and damage to future fundraising rounds.

The Trust Factor: Why It’s Bigger Than Just Cybersecurity

The fallout from an incident like this is not just technical—it’s reputational.

Trust and reputation are strategic and intangible assets. Banks, asset managers, and insurers compete not just on performance but on predictability, discretion, and professionalism. Reputational risk is now treated by many boards as equal to credit and market risk.

Whether the new leadership and administration believe it or not, in the case of Signal Gate, U.S. military and diplomatic credibility was undermined globally. If the same thing happened in finance, the brand equity built over decades could unravel in days.

Lessons for Business and Government Leaders

So, what do leaders need to be aware of to ensure that the organisation’s reputation and financial well-being are protected? Thankfully, most financial institutions have a cyber team focused not just on the technology but on human weaknesses, and, together with the leadership, they would focus on the following:

Never Use Informal Tools for Formal Business 

Even if an app offers encryption, it should be off-limits for regulated or sensitive discussions if it's not approved for enterprise use. Organisations should invest in auditable, enterprise-grade communications platforms.

Build Governance Around People, Not Just Tech

Most breaches are not caused by technology failures but by people. Leaders must model proper behaviour and ensure policies are actively enforced.

Assume Everything Will Be Made Public

Today’s environment demands radical transparency. Assume that anything said or written can be leaked or misdirected. Would your organisation be comfortable with what’s said in private being on the front page tomorrow?

Crisis Plans Must Include Reputational Risk from Communication Breaches

Organisations need detailed incident response plans covering internal comms, media engagement, regulatory notifications, and stakeholder management. This risk underlines the need for strategic communications to work alongside the General Counsel, so that public and stakeholder perception is managed and supported while regulatory matters are dealt with.

Regulators Are Watching Closely

This isn’t theoretical. Regulators around the world are actively cracking down on the use of informal channels. The bar is rising.

How Financial Services Firms Are Responding

Many firms today are implementing:

  • Zero Trust architectures with identity-based access controls

  • Automated surveillance of communications across email, Slack, Teams, and Zoom

  • Bring Your Own Device (BYOD) restrictions or approved corporate device policies

  • Executive training and attestations around information handling and digital conduct

  • Chief Trust Officer roles that merge cybersecurity, legal, and reputational oversight

These aren't optional. They are becoming central to protecting stakeholder confidence, with insurers and reinsurers now examining the reputation-management measures firms have in place to address non-regulatory risks.

What Government Can Learn From Finance

Ironically, while governments regulate banks tightly, many don’t apply the same discipline to themselves. The Signal Gate episode reveals a governance, recordkeeping, and operational discipline gap among elected officials.

Governments could benefit from adopting practices such as:

  • Auditable communication tools for national security discussions

  • Regular ministerial and official training in operational security and cyber hygiene

  • Independent audits and reviews of digital communications policy compliance

  • Reputation scenario planning at the Cabinet or department level

While governments have protocols and safeguards governing who can access what information, the gap is always human: a lack of awareness of what is and isn’t allowed based on one’s grade and the sensitivity of the information one can access. And in this geopolitical climate, security is becoming even more of a necessity.

Trust, Governance, and the Cost of Informality

If the Signal Gate incident had occurred in a major investment bank or asset manager, the consequences would have been devastating: regulatory sanctions, lawsuits, firings, and the collapse of hard-earned trust.

In today’s connected world, leaders must treat information security and communications discipline as core to strategy—not just compliance. Whether you run a financial institution, a multinational company, or a government department, how you handle sensitive information defines your reputation.

Signal Gate isn’t just a political embarrassment. It’s a warning to every leader: in an age of instant leaks and global scrutiny, there is no room for informality when trust is on the line.


I work with and advise leaders on how to protect and enhance trust, reputation, and perception—especially when it matters most.

Let’s talk about how your organisation manages sensitive information and the reputational risks linked to communication and governance failures.

Please comment, share or subscribe to my LinkedIn Reputation Matters newsletter. Or connect with me on LinkedIn.

Read More
Julio Romo

Media Literacy in the AI Era: Protecting Trust, Reputation

In a House of Lords inquiry, Dr Mhairi Aitken (Alan Turing Institute) and Professor Sander van der Linden (University of Cambridge) warned that AI-driven misinformation—seen by 40% of UK adults—undermines trust and reputation. They called for leaders to strengthen regulation, media literacy, and partnerships with tech platforms to uphold credibility.

An Urgent Call for Media Literacy

Media literacy is no longer optional but a crucial means of safeguarding public trust, institutional reputation, and social cohesion in a fast-evolving information landscape.

This call powerfully stood out during yesterday’s House of Lords Communications and Digital Committee session, which convened to hear evidence about the challenges and threats posed by online misinformation and disinformation.

The committee called on two expert witnesses to share their insight and experience. They were:

  • Dr Mhairi Aitken, Senior Ethics Fellow at The Alan Turing Institute

  • Professor Sander van der Linden, Professor of Social Psychology in Society at the University of Cambridge

Drawing on their distinct but complementary areas of expertise, they painted a picture of how artificial intelligence (AI), social media platforms, and deeply ingrained psychological biases intersect to undermine trust in digital content. Taken together, their testimonies suggest that UK citizens—of all demographics—face a complex and growing set of online risks.

According to Professor van der Linden, a key metric from Ofcom indicates that “40% of people say that in the preceding month they’ve seen misinformation in the UK, 90% say that they’re very concerned about the impacts of misinformation, and about 20% say that they’ve seen deepfakes”.

While online falsehoods are not new, the recent explosion of generative AI has made fabricated images, videos, and text more difficult to detect. Dr Aitken explained that a ‘particularly pressing threat’ is the cumulative erosion of trust, warning that “people might increasingly see or hear something fake and believe that it’s real” while also beginning to “lose trust in all content online”. This dual threat—the difficulty of identifying fake content and a growing reflex to doubt everything—sits at the heart of an urgent policy conversation.

Having watched the session, I set out below the issues raised at the select committee hearing: the ramifications for society, perception, and reputation, and the experts’ proposals for governments, businesses, and the broader public.

Key Threats: Generative AI and Misinformation

The greatest challenge underscored by both witnesses is the combination of misinformation with generative AI, a technology category that can create new audio, video, imagery, and text with minimal human oversight.

Just a few years ago, misleading social media posts might be produced by so-called ‘troll farms’ or individual bad actors. Now, AI-driven systems can produce and distribute fabricated narratives at incredible speed and scale.

The Proliferation of Deepfakes

Deepfakes—manipulated videos in which a person’s face or voice is digitally forged—present a tangible example of how AI erodes traditional authenticity indicators.

Professor van der Linden noted that roughly 20% of people surveyed in the UK had encountered deepfake material. Moreover, large-scale foreign or domestic actors can easily automate their production. Instead of relying on teams of people to craft convincing fake videos, AI can churn out hundreds of variants with minimal effort.

AI-Driven Micro-Targeting

Other examples of AI-aided manipulation include micro-targeting and ‘nano-targeting.’ By analysing vast quantities of user data—web browsing history, social media interactions, demographic information—AI systems can pinpoint individuals most susceptible to particular narratives.

As the professor observed, while micro-targeting is already a ‘significant concern,’ it may pale compared to what AI-driven nano-targeting can achieve, zeroing in on single individuals with hyper-personalised messages.

Burdens on the Public

As Dr Aitken highlighted, a further complication is the expectation that individuals should be able to spot every AI-generated or manipulated piece of content.

People often view low-resolution images on mobile devices, scrolling at speed through a feed of rapidly updating posts.

Even the best ‘tips and tricks’ for identifying AI content—such as looking for distortions in background objects—are moot when technology evolves or when images are compressed, cropped, or quickly shared on ephemeral channels. Asking average users to maintain a constant, high-level vigilance leads to what she termed ‘over-scepticism,’ a corrosive distrust of all media, genuine or otherwise.

Erosion of Trust: Societal and Reputational Implications

The consequences of rampant misinformation and advanced AI tools go beyond a few embarrassing mix-ups on social media. Both witnesses stressed how digital manipulation poses serious, long-term threats to trust, social harmony, and reputation at multiple levels.

  1. Public Health: Misinformation about medical treatments or vaccine safety can undermine public compliance with health guidance, especially when disguised as authoritative.

  2. Democratic Processes: Elections can be swayed if certain voter groups are deliberately targeted with misleading claims. Repeated exposure to conflicting information sows confusion, making it easy to discredit genuine journalism and verified facts.

  3. Incitement of Violence: Professor van der Linden invoked the concept of ‘stochastic terrorism,’ wherein misinformation repeatedly circulates, amplifies societal tensions, and eventually sparks public disorder or violence.

  4. Reputational Harm: At the personal level, deepfake technology can ruin individual reputations by forging compromising images or videos. At the institutional level, businesses and government agencies can lose public goodwill if they are linked—accurately or not—to a scandal or false claim.

  5. Widening Inequality: Evidence shows that minority groups are targeted explicitly with false narratives, intensifying distrust towards mainstream platforms or public agencies and further polarising society.

Much of this erodes trust in news outlets, democratic institutions, and official communications.

Dr Aitken warned that, as public scepticism grows, audiences might respond to legitimate media stories with the reflex: “How do I know that’s not fake?”. The constant drip of dubious content can make all news unreliable, with serious repercussions for policy-making, governance, and business credibility.

Policy Gaps: Current Regulatory Shortcomings

Many nations struggle to regulate digital platforms effectively. In the United Kingdom, there is an ongoing debate about balancing freedom of speech with the urgent need to protect users—especially children and vulnerable populations—from harm.

Online Safety Act Limitations

Witnesses and committee members mentioned the Online Safety Act, which addresses various forms of online harm. However, both Dr Aitken and Professor van der Linden emphasised that, in its present form, the Online Safety Act does not comprehensively tackle misinformation or disinformation. It focuses on issues such as child safety, terrorism, and illegal content but does not give Ofcom or other regulators explicit powers to rein in widespread false narratives unless they meet a stringent legal threshold—for instance, deliberate falsehoods shared to cause harm.

Moreover, the act appears ill-prepared to keep pace with AI-driven developments, leaving significant scope for malicious actors to exploit the technology in ways not subject to enforcement. As one committee member observed, crafting legal definitions for broad terms like ‘misinformation’ or ‘fake’ without risking overreach or conflating legitimate debate with manipulative content is extremely difficult.

Regulatory Coordination and Accountability

Neither Dr Aitken nor Professor van der Linden suggested that government agencies should become arbiters of truth. Instead, they see an ‘accountability gap’ between platforms and the public.

Social media companies often set community standards that ostensibly prohibit hate speech or deliberate misinformation but are rolling back enforcement. Regulators and researchers frequently lack access to the data needed to understand how content is being promoted, and there is inadequate government coordination across various departments (for instance, the Home Office, DCMS, education, and foreign affairs).

Professor van der Linden cited other jurisdictions—like the EU’s Digital Services Act—as a model for improving transparency, setting risk assessments, and imposing fines when companies fail to address harmful misinformation systematically.

Action Agenda: Recommendations for Government

Reflecting on the hearing’s evidence, it is clear that tackling misinformation demands concerted action by government and public bodies, with an emphasis on regulation, education, and coordination.

Expand the Regulator’s Remit

Witnesses proposed strengthening Ofcom’s powers to investigate misinformation. This involves not policing everyday opinions but ensuring accountability when platforms allow the systematic spread of demonstrably false content that can incite harm.

Consider requiring large tech platforms to label AI-generated content more reliably, through digital watermarking. Although Dr Aitken noted that “malicious actors can fairly easily evade” watermarks, systematic labelling would be necessary.

Invest in National Media Literacy Programs

Both experts recommended embedding ‘prebunking’ or ‘inoculation’ approaches into the national curriculum, an idea borrowed from the success of Nordic countries like Finland. Teaching children—at an early age—how to identify common tactics of propaganda and conspiracies can pay dividends in adulthood.

This instruction should be repeated yearly (so-called ‘booster shots’) to reinforce critical thinking and adapt to the evolving media landscape.

In fact, the Finnish model, developed in response to Russian misinformation campaigns, is one I remember teaching during my communications training in South East Asian markets such as Malaysia, Singapore and Indonesia.

Establish Cross-Government Coordination

Various government branches face overlapping challenges: foreign disinformation campaigns, domestic extremist content, health conspiracies, and election integrity. A more structured approach could unify intelligence-sharing and policy interventions.

A central point of contact or cross-department council could help standardise definitions, guidelines, and escalation procedures when misinformation spikes around national events.

Support Trusted Community Organisations

Dr Aitken stressed that local institutions and community groups, already trusted within specific demographics, are prime vehicles for meaningful engagement around misinformation. Government funds or grants could expand their capacity to hold workshops and discussions, addressing the distinct concerns of each community, from public health guidance to political processes.

By pursuing these strategies, government authorities can restore control and resilience to the information environment without impeding fundamental freedoms.

Business Imperatives: Corporate Responsibility

It is not only government agencies that have a responsibility to act. Companies—particularly those that operate online platforms or depend on user-generated content—must shoulder a share of the burden.

What is needed is a collaborative strategy in which stakeholders work together towards an aim that benefits all: establishing and rebuilding trust.

Platform Accountability and Transparency

Social media giants can and should do more to highlight suspicious content, verify legitimate sources, and demote material flagged as misleading.

Platforms must share data and cooperate with independent researchers to evaluate the efficacy of algorithms, especially recommendation systems that can amplify polarising material.

Consistency in policy enforcement is crucial. One hearing participant observed that some companies currently have rules, “but they’re not enforcing their own rules.” This rollback undermines trust in the platforms themselves.

Corporate Risk Management

Beyond social media, most businesses face reputational threats if they become the subject of AI-fuelled smear campaigns or manipulated leaks. Implementing robust fact-checking, crisis communication plans, and staff training can guard against these risks.

Larger firms might coordinate with regulators and law enforcement to address repeated attempts to slander brand images or defraud customers through imposter AI chatbots.

Ethical Innovation

AI startups and established tech firms alike should consider it a design principle to embed watermarking or labelling features in generative AI systems by default.

Taking the lead in developing reliable detection tools or in refining watermarking standards can help companies demonstrate leadership in corporate social responsibility.

Public Engagement: Building a Culture of Inquiry

A better-informed and critically engaged public is the best bulwark against manipulative narratives. Dr Aitken and Professor van der Linden recognised the importance of giving individuals the skills to interpret the onslaught of online content while avoiding the trap of ‘over-scepticism.’

Critical Literacy from a Young Age

School-based programmes can enhance pupils’ capacity to question sources, use fact-checking tools, and discuss manipulative tactics. Age-appropriate lessons can demystify how AI can forge realistic text or images.

Encouraging healthy scepticism rather than pervasive cynicism is the goal. Young people should learn how to differentiate credible data from speculation or factual reports from memes designed to provoke strong emotional responses.

Adult and Lifelong Learning

Outside formal education, libraries, community centres, and adult learning institutes could integrate short workshops or modules on digital verification.

Employers could also offer in-house seminars, particularly in businesses prone to reputational risks. In doing so, adults who missed out on formal digital literacy education can catch up and adapt.

Grassroots Awareness Campaigns

Sustained and well-funded public information campaigns can publicise known ‘red flag’ signals of misinformation. They can also direct citizens to reliable fact-checking services or official clarifications on viral claims.

Dr Aitken noted that promoting dialogue within communities encourages a nuanced understanding of AI’s capabilities and dangers. This approach fosters trust, as the information comes from local figures already known to residents.

Securing Our Information Future

The House of Lords Communications and Digital Committee hearing was an urgent reminder that the UK—and every modern democracy—faces a rapidly evolving fight against misinformation.

Generative AI is accelerating the creation of false or distorted content, undermining trust in genuine sources and posing unique challenges for policymakers, businesses, and the public.

Yet, despite the severity of the threats described by Dr Aitken and Professor van der Linden, their testimonies also sketched out a constructive path forward:

  • Regulation: Expand Ofcom’s remit, or develop new frameworks, so that the willful spread of false content can be scrutinised and platforms compelled to act.

  • Education: Implement a national media literacy strategy, teaching children from an early age how to detect propaganda and manipulative tactics. Support adult-focused programmes to ensure no segment of the population is left behind.

  • Coordination: Improve cross-department government collaboration and data-sharing. Recognise that misinformation is not just a digital communications problem; it cuts across security, health, education, and social welfare.

  • Platform Responsibility: Urge companies to enforce their community standards consistently, label AI-generated content, and partner with external researchers so that harmful content can be identified and demoted swiftly.

  • Community and Trust: Fund and partner with local organisations to promote engagement on AI, content verification, and resilience-building. Leverage already-trusted voices and institutions to reach different demographics effectively.

Given the complexity of the modern information environment, no single initiative—whether a piece of legislation, a fact-checking partnership, or an educational policy—will suffice. However, by distributing responsibility across governments, businesses, and the public, society can begin to reassert standards of authenticity.

Dr Aitken encapsulated the challenge: “The deeper threat here is that increasingly, as there is exposure and awareness of AI-generated content, people begin to lose trust in all content online.” The goal is to prevent that sweeping crisis of faith in legitimate information.

Resisting the lure of cynicism will take a thoughtful balance of regulation, community engagement, corporate accountability, and personal awareness. By investing in robust media literacy for all, the UK can empower citizens to question manipulative claims while still recognising—and trusting—fact-based reporting and expert opinion.

As organisations and individuals adapt to a world where falsehoods may look as convincing as truth, the stakes have never been higher.

Society stands at a crossroads: either we accept a downward spiral of suspicion, or we collectively commit to equipping each new generation with the knowledge, tools, critical thinking and regulations necessary to maintain a healthy, informed democracy.

The vision that emerged from the select committee hearing points toward the latter. By acting decisively, government bodies, corporate leaders, and citizens can protect credibility and reputations in the era of AI, ensuring that open, evidence-based discourse continues to flourish in the UK’s public sphere.

Read More
opinion, how to Julio Romo

Corporate Diplomacy: The New Global Power Shift

In today’s multipolar world, diplomacy is no longer the sole domain of governments. Multinational corporations are stepping into roles once reserved for diplomats—navigating geopolitical risks, engaging regulators, and shaping public policy. Corporate diplomacy is now essential for business resilience, reputation, and global growth.

For centuries, diplomacy was the exclusive domain of governments. Treaties, alliances, and negotiations were the purview of ambassadors and ministers. Today, this traditional order is undergoing a profound transformation.

Multinational corporations (MNCs) are increasingly brokering deals, influencing policies, and intervening in crises once considered solely governmental responsibilities.

This emergence of "corporate diplomacy" has been accelerated by globalisation, rapid technological advancements, and a fragmented geopolitical landscape. Businesses today must learn to navigate this complex new reality.

Drivers of Change: Globalisation, Technology, and Geopolitical Shifts

The interplay between government-led international relations and the global expansion of MNCs has changed significantly since the late 20th century. Globalisation and trade digitisation have empowered some companies to wield economic influence comparable to mid-sized nations.

American big tech companies, for example, demonstrate how businesses can influence policy and regulation internationally, as seen in their engagement with the EU. Nation-states compete for investment, jobs, and growth, altering the balance between state and business. Where government-led negotiations have been slow or ineffective, corporations have stepped in, becoming "transnational actors in their own right," as Harvard Business Review and Foreign Affairs have noted. Large firms now set global standards in areas like data privacy, energy policy, and environmental protection. A 2019 Brookings Institution study highlighted multinational corporations' influence on foreign policy through lobbying.

Heightened political polarisation and rising global conflicts, particularly in Ukraine, have made this shift even more visible. Businesses must reassess supply chains, navigate sanctions, and even take on diplomatic-like roles. New companies and startups with international supply chains also grapple with these challenges.

According to a Chatham House analysis of the Ukraine conflict, we are witnessing broader geopolitical fragmentation, revealing a multipolar world where economic and military power is widely dispersed. MNCs must now navigate market forces, shifting regulations, social responsibility expectations, and local political realities. Corporate leaders find themselves adopting roles once exclusive to professional diplomats, requiring geopolitical and geoeconomic awareness.

Shifting from Market Strategy to Non-Market Influence

Success now depends on more than just products, services, and balance sheets. Corporate diplomacy involves non-market strategies, including stakeholder engagement, lobbying, and alliance-building. Companies rely on networking, corporate reputation, and competitive intelligence to shape their public image and impact policy outcomes.

The "business diplomacy" concept emphasises long-term, trust-building exercises with governments and civil society, addressing issues from climate policy to cybersecurity.

(Embedded Financial Times video: Apple, whose manufacturing fortunes are closely tied to China, now has a small but growing footprint in India — but analysts warn the iPhone maker must navigate geopolitical tensions as it seeks to reduce its reliance on Beijing.)

Trust Gaps and Public Expectations

Businesses enjoy higher public trust (51%) than governments (37%), according to Edelman’s 2025 Trust Barometer. Corporations that showcase ethical conduct and social impact are leveraging this trust. However, this trust is fragile. Controversies around data privacy, labour standards, or environmental harm can trigger public backlash. As governments struggle to maintain consensus, companies have an opportunity and responsibility to step into diplomatic-like roles but must do so carefully to avoid accusations of overreach.

Moving Towards a Multipolar World: Geopolitical Fragmentation

The conflict in Ukraine exemplifies the move toward a multipolar global environment. While Western governments imposed sanctions on Russia, Russia pivoted towards China and the global south. Chatham House analysts emphasise how such fractures splinter global alliances, forcing businesses to adapt supply chains and evaluate new risks.

The lines between commerce and national security have blurred. Firms must navigate volatile markets, stricter export controls, and the risk of reputational damage.

Cyber-Diplomacy: Cybersecurity as a Diplomatic Domain

The Ukraine conflict also highlighted cybersecurity’s critical role. Technology companies are the guardians of the digital ecosystem. Cybersecurity is now a first-order corporate diplomacy issue, as Microsoft’s President Brad Smith emphasised in a lecture at the Paris Institute of Political Studies (Sciences Po).

Mechanisms of Corporate Diplomacy

  • Lobbying 2.0: Modern lobbying addresses digital regulation, data governance, and sustainability issues. Companies engage multiple stakeholders to shape policies.

  • Networking and Multi-Stakeholder Engagement: Networking is vital for advancing corporate interests. High-level government contacts and grassroots ties help influence policy and maintain brand credibility.

  • Competitive Intelligence and Scenario Planning: Tracking geopolitical, regulatory, and social developments is essential. Firms must adapt quickly to events like sanctions and tariffs.

  • Reputation Management and CSR: Corporate image is crucial. CSR initiatives must be aligned with corporate strategy and deliver a return on investment.

Recommendations for Leaders

In a world where MNCs rival states in influence, leaders must embrace the mindset and tools of diplomacy. They need to:

  • Manage Reputation as a Strategic Asset: Balance public pressure and ensure social programs are authentic.

  • Invest in Strategic Communications and Crisis Simulations: Share ESG commitments proactively and run scenario-based drills.

  • Gather and Leverage Geo-Economic Insights: Partner with specialist firms and diversify supply chains.

  • Strengthen Stakeholder Engagement: Form multi-stakeholder alliances and localise diplomacy.

  • Build In-House Diplomatic Expertise: Hire former diplomats and trade officials and provide geopolitical training.

  • Embed ESG in Core Strategy: Set genuine targets and be transparent.

Why Strategic Communications and International Engagement Are Critical

Reputation is paramount in the age of instant global communication. Companies must map political risks across scenarios, much as they already do in financial risk scenario planning.

Diplomatic collaboration and thought leadership, through partnerships with think tanks and policy forums, are also crucial.

Looking Ahead: Corporate Diplomacy in a Fragmenting World

Several trends will define the next phase:

  • Heightened Vulnerability to Geopolitical Risks: Flexible, scenario-based planning is essential.

  • More Assertive Social Responsibility: Companies must address social issues to avoid alienating the public and stakeholders.

  • Digital Diplomacy and Cyber Challenges: Cybersecurity will be intertwined with global politics.

  • Evolving Role of Middle Powers: These nations have outsized influence.

  • Expansion of Partnerships and Coalitions: Multi-stakeholder coalitions will proliferate.

Embracing a Diplomatic Mindset

The age of corporate diplomacy has arrived. Businesses operate at the heart of policy debates and crisis response. Leaders must adopt a diplomatic lens, meshing profit objectives with local sensitivities, global partnerships, and ethical governance.

The next generation of corporate strategists will require fluency in finance, marketing, international relations, and risk analysis. Corporate leaders are blending commerce with diplomacy.

Corporate diplomacy is no longer an optional add-on but an existential requirement. Organisations must think and act like politicians, bridging cultural divides and resolving complex challenges.

By cultivating trust and demonstrating genuine social impact, companies will protect themselves from political turbulence and unlock lasting value.


I work with leaders to integrate strategic communications and international stakeholder engagement into their decision-making processes. Let’s discuss how strategic geopolitical advisory can help your business and/or investments navigate uncertain environments.

Please comment, share or subscribe to my LinkedIn Reputation Matters newsletter. Or connect with me on LinkedIn.

Read More
how to Julio Romo

Heathrow Closure and the Case for UK Infrastructure Resilience

The recent fire at Heathrow Airport exposed more than a power outage—it revealed critical gaps in the UK’s infrastructure resilience and crisis readiness. For leaders in government and business, the incident is a stark reminder: resilience, reputation, and rapid response must be built into strategic planning now.

Late on Thursday night, 20 March, a fire at an electrical substation in Hayes triggered a full power outage at Heathrow Airport, leading to the cancellation of over 1,200 flights and severe disruption for more than 200,000 passengers.

As one of the world’s busiest airports and a major cargo hub, the incident brought to light systemic vulnerabilities in the United Kingdom's national infrastructure, supply chains, and emergency preparedness.

The incident caught many observers and experts by surprise. It highlights not only how critical infrastructure must be protected, but also why, in today’s tense geopolitical climate, the public and private sectors in the UK and overseas need to invest more in security and resilience, in managing risk, and in the trust and reputation placed in them by the public and wider stakeholder communities.

Geopolitical instability and hybrid threats are sadly becoming part of our everyday life. As Benjamin Franklin said, “By failing to prepare, you are preparing to fail.”

Today, spare a thought for the communications professionals, not just at Heathrow but across the supply chain in the UK and overseas, who are also having to manage this situation.

Heathrow: A National Asset Under Pressure

Heathrow is more than just a passenger terminal—it is a strategic national asset, handling 80 million passengers and 1.7 million tonnes of cargo annually, accounting for 30% of UK air freight.

In 2023, Heathrow managed a staggering £198.5 billion worth of goods, surpassing the combined cargo throughput of all other UK airports.  This dominance in air freight underscores the airport’s critical position in the UK’s logistics and global trade networks.

Yet the airport is controlled by a consortium of international investors, each with distinct risk appetites, strategic priorities, and governance standards. While this diversified ownership brings capital and global expertise, it also complicates decision-making during crises: delays in unified communication, inconsistent protocols, and governance silos can all hinder a swift response.

UK exports to non-EU nations via Heathrow alone were worth over £100 billion last year, with the airport supporting around 180,000 jobs. It functions as a primary entry and exit point for goods and people, making its operational continuity critical to the UK's economic and reputational standing.

The fire and ensuing shutdown revealed how reliant the UK is on a single infrastructure node for air travel and logistics. The temporary loss of this gateway created ripple effects throughout the economy, impacting businesses large and small that depend on just-in-time delivery models, particularly in sectors such as pharmaceuticals, food, and high-value technology.

Hybrid Threats and National Security

Though no foul play has been confirmed, counter-terrorism police were involved in the investigation, reflecting growing concern about infrastructure as a target for state and non-state actors. MI5’s 2024 Annual Update had already warned of increased attempts by foreign actors to disrupt UK energy and transport systems.

The Heathrow incident exemplifies the evolving threat landscape, where traditional risk management models fall short.

In February, I wrote about why CEOs must invest in geopolitical risk strategy. Now we see confirmation not just of that, but of the need for resilience structures and the communications infrastructures that can support and reassure stakeholders and the wider public.

Reputational Fallout and Stakeholder Confidence

The reputational damage from the incident was immediate and far-reaching. News outlets, including The Times, Financial Times, and Reuters, as well as many international outlets, covered the event extensively, reporting widespread delays and the failure of backup systems.

Sir John Holland-Kaye, former Heathrow CEO, once described the airport as “the front door to the British economy.” The fire highlights how the UK needs to work collaboratively to manage risk, perception and reputation, especially given how interconnected our economy is.

Key reputational risks included:

  • Public Confidence: Travellers and cargo clients experienced significant delays and confusion, compounded by inconsistent communication.

  • Investor Perception: Heathrow Airport Holdings, owned by a consortium including Qatar Investment Authority and other global investors, saw short-term impacts on its financial instruments.

  • UK's Global Image: International observers questioned the UK’s capacity to secure and manage critical infrastructure, particularly in a post-Brexit landscape where global competitiveness is vital.

Crisis Communications: Lessons in Transparency and Empathy

Initial responses to the incident were marked by technical jargon and slow updates, an issue noted by PR Week, which spoke to crisis communication industry colleagues, including Rod Cartwright, principal at Rod Cartwright Consulting and special advisor to the CIPR’s Crisis Communications Network.

Key principles of effective crisis communication include:

  • Speed and Clarity: Timely, accurate information prevents speculation.

  • Consistency: Unified messaging from all stakeholders avoids confusion.

  • Empathy: Acknowledging the human impact builds trust.

  • Media Engagement: Using trusted outlets to shape the narrative supports market confidence.

Strategic Communications Recommendations

To improve crisis communication preparedness, leaders across government and business in the UK and overseas must adopt a proactive, strategic approach. Key recommendations include:

Communicate for Confidence

Embed crisis communications into infrastructure planning. Leaders must be ready to communicate swiftly, credibly, and empathetically during crises to preserve institutional and market trust. Timely and transparent updates can reinforce confidence among investors, supply chain partners, regulators, and the public. This should also include scenario-based communication protocols that are regularly reviewed and rehearsed.

Strategic and Crisis Communications Investments for Business Leaders

Business leaders must scale up their investment in strategic communications capabilities as part of their risk and reputation strategy. Consider:

  • Dedicated Crisis Communications Teams: Establish in-house or retained external teams that can be activated instantly when an issue happens. These teams should be trained in handling high-risk scenarios with accuracy, empathy, and speed, and be able to engage at the highest level internally and externally.

  • Stakeholder Perception Mapping: Regularly assess how customers, investors, regulators, and partners perceive the business and its resilience. Use qualitative and quantitative tools to track changes in sentiment before, during, and after a crisis.

  • Crisis Response Playbooks: Develop and rehearse communication scenarios with templates, messaging trees, and designated spokespersons. These playbooks should also include escalation protocols and guidance for communicating across different markets and legal jurisdictions.

  • Leadership Visibility: Train executives to show presence, calm, and decisiveness during crises—an essential part of public and market reassurance. Visible leadership builds confidence among internal teams and external stakeholders alike.

  • Digital Listening and Monitoring Tools: Use real-time monitoring tools to track sentiment, media coverage, and misinformation across traditional and digital channels. This will help you to develop quick corrective messaging and targeted responses at pace.

  • Internal Communications Readiness: Ensure staff are informed and empowered to share accurate updates externally when appropriate. Internal alignment is critical to prevent conflicting messages and reinforce unity.

  • Stakeholder Trust Building: Develop long-term communications strategies that go beyond reactive messaging. Build and maintain trust by showing transparency, competence, and responsiveness consistently over time—not just in moments of crisis.

  • Cross-Sector Media Training: Provide media training to executives and senior spokespeople who may face scrutiny during crises. Messages must be adapted for diverse audiences including regulators, customers, investors, and the media.

From Vulnerability to Vision

The Heathrow fire was more than a logistical disruption. It was a systemic warning about the fragility of national infrastructure in the current volatile world in which we live and work.

In an era where hybrid threats, climate shocks, and geopolitical competition are converging, resilience is not a regulatory checkbox—it is a strategic imperative.

Leaders must embed resilience into infrastructure design, ownership structures, communications strategies, and cross-border cooperation.

The call to action is clear: transform fragmented governance, modernise outdated systems, and create a unified, intelligence-led approach to protecting the lifeblood of the British economy. The time to act is not during the next crisis but now.

Read More
how to Julio Romo

AI for PR Leaders: Automate Tactics, Lead Strategically

Artificial Intelligence (AI) is transforming PR by dramatically improving tactical tasks like content creation and analytics. However, strategic advisory—rooted in human judgment, emotional intelligence, and cultural insight—remains essential. Leaders must blend AI efficiency with human expertise to achieve meaningful, trusted outcomes.

Artificial Intelligence (AI) is rapidly transforming the landscape of public relations, communications and strategic advisory professions, reshaping workflows, amplifying message reach, and redefining the speed at which professionals operate. However, amidst this technological transformation, a critical distinction remains: AI excels at tactical implementation, but strategic advisory and high-level stakeholder engagement must continue to be guided by experienced human professionals.

Senior leaders, boards, and decision-makers in businesses, investment firms, and government entities must understand this nuance to leverage AI effectively while preserving human-led strategic governance.

The Tactical Power of AI in Communications

AI’s influence on PR and communications has been profound, primarily enhancing efficiency, accuracy, and scale. According to a report by Gartner (2024), ‘40% of businesses have deployed generative AI in multiple units, especially marketing and customer service functions.’

Equally, a McKinsey report identified that ‘companies implementing AI writing solutions report productivity increases of up to 40% in content creation tasks. These technologies enhance business writing capabilities by providing advanced language processing, content optimization, and creative suggestions that align perfectly with brand voice and industry standards.’

For example, AI tools can rapidly generate press releases, social media posts, and briefings based on predefined criteria, enabling communicators to respond swiftly to fast-moving events.
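To make “predefined criteria” concrete, here is a minimal, purely illustrative Python sketch — the brief fields and function names are hypothetical, not drawn from any specific PR tool. It shows the pattern such tools rely on: structured, human-approved criteria are assembled into a drafting prompt that a generative model (or a human editor) then works from.

```python
from dataclasses import dataclass

# Hypothetical structured brief: the "predefined criteria" a drafting
# tool might accept. All field names here are illustrative only.
@dataclass
class ReleaseBrief:
    company: str
    headline: str
    key_fact: str
    quote_attribution: str

def build_prompt(brief: ReleaseBrief) -> str:
    """Turn a structured, reviewed brief into a drafting prompt."""
    return (
        f"Draft a press release for {brief.company}.\n"
        f"Headline: {brief.headline}\n"
        f"Must include this fact verbatim: {brief.key_fact}\n"
        f"Include one quote attributed to {brief.quote_attribution}.\n"
        "Tone: factual, no superlatives. Length: under 300 words."
    )

# Example brief with made-up values.
prompt = build_prompt(ReleaseBrief(
    company="Example Ltd",
    headline="Example Ltd opens new London office",
    key_fact="50 new roles will be created by 2026",
    quote_attribution="the CEO",
))
```

The governance point of a pattern like this is that every claim the model is permitted to make is pinned to a reviewed brief, keeping the human communicator, not the model, as the source of facts.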

McKinsey’s analysis from 2022 also noted that companies using AI-powered analytics improved their messaging targeting, significantly enhancing campaign effectiveness and reach. Research from the University of Southern California (USC) Annenberg Center for Public Relations presents insight into the ‘rising application of AI across communications by the public relations industry’.

However, as tactical capabilities increase through AI, senior executives must recognise the limits of automation.

While tactical execution is streamlined, the critical components of planning and strategic decision-making, particularly in reputation management and stakeholder engagement, demand nuanced human insight and intervention.

Strategic Advisory: The Human Domain

Strategic communication involves more than disseminating messages—it requires thoughtful consideration of context, empathy, cultural nuances, and long-term impacts. It requires the ability to identify stakeholders and their specific interests and be able, in essence, to connect the dots.

Despite its sophisticated analytics and predictive capabilities, AI currently lacks the depth of emotional intelligence and cultural sensitivity necessary for high-stakes advisory roles. And even when it can overcome this, the one thing that it will not be able to replace is the human interaction that people rely on.

AI and GenAI deliver improved productivity, giving leaders, especially those who engage at a peer-to-peer level, added insight that they can use in their advisory work.

Research published by Harvard Business Review (2023) underscores that 75% of executives consider trust and human relationships integral to successful strategic communication, especially during crises or complex negotiations. By its nature, senior-level advisory demands direct human interaction, trust-building, and judgement based on experience, ethics, and emotional intelligence—qualities that AI cannot fully replicate.

When considering crises, an organisation’s general counsel usually leads in engagement with the Board or C-suite. AI can be a great asset, especially when working alongside a strategic communications advisor.

Equally, the power and influence of experts are critical. Leaders make decisions not just based on the data they are presented with but also on the trust and perception of those presenting them with strategic options. Peer-to-peer advisory happens because of the expertise that an individual brings to a situation that needs solving.

What experts bring to the table is contextual awareness, nuance, and strategic alignment with organisational objectives, market realities, and geopolitical or geoeconomic situations.

AI-generated options can overlook critical cultural, political, reputational, or human considerations, potentially misleading decision-makers into misguided strategic choices or unintended consequences—which is why expertise and human engagement remain essential. Yes, better prompting can help, but the model still lacks the human ability to judge its own output and the data behind it against the outcome that actually matters.

Relationships Are Crucial, Especially Across International Cultures

Understanding cultural nuances is indispensable in international strategic communications. A 2019 article from Harvard Business Review highlights how, globally, trust in institutions is deeply intertwined with cultural perceptions, and messages that resonate in one region can significantly differ in another.

Human communications experts bring the ability to navigate these cultural complexities. They possess the experience to interpret subtle cultural signals, body language, and unspoken expectations—critical for international business and diplomatic communications. By contrast, AI tools, though capable of identifying patterns, data and sentiments across large datasets, lack the innate ability to genuinely engage and build trusted personal relationships across diverse cultures, which are critical in international business and trade.

Peer-to-peer relationships are fundamental in business, particularly when operating across different markets and cultures because they establish trust, mutual respect, and a deeper understanding that transcends transactional interactions.

Effective peer relationships foster open dialogue, enable nuanced decision-making, and help navigate cultural complexities that technology alone cannot decipher. Negotiating with a leader gives you the confidence that whatever is ultimately agreed, the authority of the person you’ve negotiated with will enable them, most of the time, to action what’s been agreed. 

Recognising and adapting to cultural differences not only improves communication clarity but also strengthens partnerships, facilitating smoother negotiations and more resilient business outcomes. Ultimately, businesses that invest in cultivating meaningful, culturally aware peer-to-peer interactions are better positioned to succeed in international markets.

In my 15 years of working internationally, across Europe, the Middle East, Asia, South East Asia and the US, I have seen the differences that make us unique. Training and advisory insight in these markets has had to be adapted to ensure that the messages and insight I share are received, and for this, an understanding of culture and of the unique situation of each client, whether junior or senior, has been critical.

Clients I have worked with directly or indirectly expect an understanding of them and the environment in which they live or work. An understanding of culture helps to open doors.

AI-Assisted Design and Activation of Campaigns

While strategic decision-making and relationship management remain human-led, AI can significantly benefit the design, activation, and monitoring of public or private influence campaigns.

AI can analyse vast datasets, segment audiences precisely, and optimise message dissemination at scale. For example, during a public health campaign, AI can rapidly adapt messaging based on real-time feedback loops, increasing campaign responsiveness and effectiveness. Similarly, in private influence campaigns, such as those aimed at regulators, investor communities or internal corporate stakeholders, AI-driven analytics can precisely measure and predict audience reactions and engagement levels.

However, while AI can effectively design and activate these campaigns, and automate them, governance and risk management must remain human-driven.

Equally, it is worth remembering that AI algorithms are prone to biases from historical datasets, which can inadvertently amplify misinformation or cultural insensitivity. Human governance and oversight ensure that ethical standards, inclusivity, and appropriateness in messaging remain paramount.

Integrating Human Expertise and AI: A Model for the Future

Senior leaders in business, investment and in governments must strategically redesign their communications functions to integrate AI’s tactical capabilities alongside human strategic oversight.

In essence, AI can support and unlock value for strategists and communicators by improving the productivity of leaders and the efficiency of those handling tactical activation.

Here are some recommendations for effectively combining these capabilities:

1. Clear Delineation of Roles

Executives should clearly delineate between tasks suitable for AI automation and those requiring human oversight.

Tactical tasks such as media monitoring, basic content creation, and routine campaign management should leverage AI.

Conversely, strategic roles involving stakeholder engagement and management, crisis communications, high-level messaging, and ethical considerations must remain human-led.

2. Human-Centric Advisory Framework

Organisations should establish or enhance senior advisory councils comprising experienced communication strategists and General Counsel, an issue that I have written about in the past.

These councils ensure that AI-generated strategies align with corporate values, ethics, and long-term stakeholder interests. Reporting into the Board and C-suite, they serve as a governance mechanism to mitigate AI-driven risks.

3. Invest in Human Skills

Companies should invest in continuous human capital development, emphasising skills that AI cannot replicate, such as emotional intelligence, critical thinking, ethics, and cross-cultural competency.

According to Deloitte’s 2023 Future of Work report, organisations investing in these skills see significant improvements in strategic agility and employee engagement.

4. Establish Robust Risk Management Practices

Leaders should embed robust risk management protocols that allow rapid human intervention if AI-driven campaigns diverge from desired outcomes or inadvertently propagate misinformation. Regular audits and oversight frameworks led by human teams ensure accountability and alignment with strategic objectives.

5. Foster Cross-Functional Collaboration

Encourage close collaboration between communication professionals, data scientists, legal experts, and senior leadership. This cross-functional approach ensures comprehensive understanding and alignment, maximising AI’s tactical benefits while retaining strategic integrity and mitigating risks.

Building the Future: Human-Led Strategic Excellence

AI’s transformative potential in PR and communications is undeniable. Yet, in my opinion, it remains a complementary rather than substitutive force. Organisations that excel will be those that judiciously leverage AI for tactical excellence while maintaining human-led strategic governance.

In a profession fundamentally grounded in human connection, influence, and trust-building—especially within complex international contexts—the future of successful PR and communications lies in a balanced model. One where AI enhances capabilities without replacing the essential human judgment, cultural awareness, and strategic foresight necessary for genuine engagement at the highest levels.

By clearly defining roles, enhancing human advisory capabilities, and embedding rigorous governance frameworks, leaders can ensure their organisations are perfectly positioned to harness AI’s full potential without losing the irreplaceable human element that remains the foundation of effective strategic communications.


Need strategy and advisory support to help your agency or in-house team better use AI for strategic advisory or tactical campaign development?

I work with leaders to modernise and integrate strategic communications into their decision-making processes. Let’s discuss how your company can better use AI workflows in its communications and corporate strategy for long-term success.

Please comment, share or subscribe to my LinkedIn Reputation Matters newsletter. Or connect with me on LinkedIn.

Read More