How Deepfake Scams Threaten Financial Institutions

Earlier this year, a sophisticated deepfake video began circulating online, purporting to show Goldman Sachs’ Chief U.S. Equity Strategist, David Kostin, endorsing a fraudulent investment scheme. The video, seemingly authentic and convincingly delivered, claimed returns of 48%, 66%, and even 108% within a week. It replicated Kostin’s speech patterns and delivery style with unsettling precision, making it indistinguishable from authentic corporate media.

The reputational hit was immediate and serious for a figure like Kostin, whose analysis guides institutional investors and whose commentary moves markets. Though Goldman Sachs swiftly issued a rebuttal and triggered takedown requests, the damage had already spread. The clip was re-uploaded across Telegram and WhatsApp groups, and even embedded in online investment scams targeted at retirees and young retail investors.

This wasn’t just a technical manipulation. It was a personal violation with significant implications for investor confidence, media trust, and Goldman Sachs’ hard-earned reputation.

Kostin and Goldman Sachs are far from the only figures and financial institutions to have been targeted and exploited by online scammers. With increasingly capable GenAI tools, clever social engineering and limited public awareness of what these tools can do, scammers are targeting the people most vulnerable to the fake opportunities they peddle.

A Timeline: From Novelty to National Security Risk

The evolution of generative AI (GenAI) and deepfakes has moved rapidly from experimental novelty to serious institutional risk.

Between 2018 and 2020, AI’s potential became increasingly evident. OpenAI’s release of GPT-2 in 2019 marked a turning point: the model was considered so powerful that its full version was initially withheld due to concerns over ‘misuse potential.’ Around the same time, deepfakes emerged on platforms like Reddit and YouTube, primarily as tools for political satire and non-consensual pornography. In fact, the main risk from deepfakes was initially thought to fall on those working in politics and government.

While these early public uses were seen as fringe, financial institutions quietly began exploring AI to enhance fraud detection and Know Your Customer (KYC) processes.

From 2021 to 2023, GenAI entered the mainstream. OpenAI’s GPT-3, alongside image generators like DALL-E and Midjourney, triggered a wave of enterprise adoption. Banks began integrating GenAI into customer service (via chatbots), regulatory compliance, and document processing. Yet as these tools gained traction, security agencies began raising the alarm.

In 2023, Europol warned that synthetic media could dominate digital content by 2026, flagging significant misinformation and identity fraud risks.

By 2024 and the start of 2025, the threats became a reality. Deloitte reported a 700% surge in deepfake incidents targeting the financial sector. A high-profile case in Hong Kong saw scammers use a deepfake CFO in a video call to steal $25 million. Regulatory bodies reacted quickly. FINRA included AI-generated deception as a core compliance risk in its 2025 oversight report, while the U.S. Treasury and the UK’s Financial Conduct Authority (FCA) issued specific guidance on AI impersonation threats to financial market integrity.

As the world becomes more multipolar, with greater conflict and economic insecurity, hostile states and individual actors are turning to GenAI to target not just financial institutions, but also citizens.

The Trust Deficit: AI and the Erosion of Perception

Executive Voices Are Now Vulnerabilities

The attack on David Kostin is part of a broader trend. According to Deloitte, deepfake incidents in Europe’s financial sector increased by over 780% in 2023, with the UK accounting for 13.5% of total cases.

The World Economic Forum’s Global Cybersecurity Outlook 2025, which was published in January, revealed that ‘cybercrime grew in both frequency and sophistication, marked by ransomware attacks, AI-enhanced tactics – such as phishing, vishing and deepfakes – and a notable increase in supply chain attacks.’

The report highlights how GenAI ‘supports attackers in developing credible social engineering attacks in a wider range of languages, which helps threat actors target a greater number of people in more countries at a lower cost,’ and ‘when augmented with GenAI, threat actors can create convincing impersonations of the voice, video, images and writing styles of senior leaders. When these deepfakes are maintained over prolonged interactions with targeted staff, they can be used to defraud organisations or help attackers gain access to their IT systems.’

Accenture’s research has noted a 223% rise in the trade of deepfake-related tools on dark web forums between Q1 2023 and Q1 2024.

In this climate, every executive voice, every onscreen briefing, becomes a potential liability, one that can mislead millions, move markets, or trigger regulatory inquiry.

Declining Consumer Confidence in Digital Authenticity

In the LNRS 2025 Trust Index, 55% of consumers said they no longer trust financial video content without verification. Younger audiences, especially Gen Z and Millennials, were the most sceptical, citing AI-generated scams seen on Instagram, YouTube, and TikTok.

Similarly, Accenture’s ‘Banking Consumer Trends 2025’ report found that only 26% of respondents trust banks to use AI ethically, down from 41% just two years prior.

As AI accelerates, trust decays, and without trust, perception becomes volatile.

Deepfakes as Financial Weapons: Case Studies and Impact

Arup Deepfake Scam – Asia, 2024

In one of the earliest high-profile corporate deepfake attacks, UK-headquartered engineering firm Arup fell victim to a sophisticated AI-driven fraud. In early 2024, scammers targeted Arup’s Hong Kong office staff with a deepfake video call that convincingly impersonated the company’s Chief Financial Officer and senior leaders.

The attackers recreated the CFO’s voice using audio from publicly available recordings while leveraging generative AI to simulate multiple known participants in a virtual meeting. During the call, an employee was instructed to process a series of fund transfers, resulting in a loss of US$25 million. The scam was only discovered after the fact, when inconsistencies were identified internally. The funds were unrecoverable.

Elon Musk Crypto Deepfake – USA, 2024

In the consumer space, a deepfake video of Elon Musk promoting a cryptocurrency investment circulated widely on X (formerly Twitter) in early 2024. Despite visible disclaimers, the convincingly edited clip was shared over 150,000 times, misleading viewers into believing Musk endorsed a fraudulent scheme.

One U.S. retiree reportedly lost $690,000 after acting on the video’s investment pitch. Though flagged by moderators and removed, the scam’s virality underscored how high-profile impersonations can lead to serious financial loss.

How Can Financial Institutions Strengthen Their Strategy Against AI Threats?

So how do we mitigate the rising threat of AI-driven fraud and protect institutional reputation and investor trust? And what can financial institutions adopt to make their strategy proactive and resilient?

Above all, strategists and strategic communicators need full sight and understanding of an organisation's governance and technology before they advise, engage and communicate, first privately and then publicly, with stakeholders and the wider public.

Internal Governance and Preparedness

1. Executive Authentication Protocols - Strengthening Identity in a Synthetic Age

As generative AI and deepfake technologies become more accessible and sophisticated, ensuring the authenticity of executive communications is no longer optional; it is a strategic imperative.

Financial institutions must implement robust executive authentication protocols to safeguard against impersonation, fraud, and reputational damage.

One of the most effective first lines of defence is the use of biometric logins for executive access to sensitive platforms, including internal communications tools, trading systems, and board portals. Biometric identifiers, such as facial recognition, fingerprint scanning, or voice biometrics, provide far greater security than traditional passwords, which are increasingly vulnerable to phishing or brute-force attacks. Some banks and investment firms are already exploring multi-modal authentication, combining biometrics with behavioural data, such as typing patterns or location, to confirm identity in real-time.
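
To make this concrete, below is a minimal sketch, written in Python, of how several authentication signals might be combined into a single access decision. The weights, thresholds and field names are illustrative assumptions, not a prescribed implementation.

```python
# Illustrative sketch only: combining several authentication signals into one
# access decision for executive platforms. All weights, thresholds and helper
# names are hypothetical and would come from the institution's own policy.
from dataclasses import dataclass


@dataclass
class AuthSignals:
    face_match: float         # 0.0-1.0 score from facial recognition
    voice_match: float        # 0.0-1.0 score from voice biometrics
    typing_similarity: float  # 0.0-1.0 behavioural match against historical profile
    known_location: bool      # device/IP previously seen for this executive


def authenticate_executive(signals: AuthSignals) -> str:
    """Return 'allow', 'step_up' (extra verification) or 'deny'."""
    score = (
        0.4 * signals.face_match
        + 0.3 * signals.voice_match
        + 0.2 * signals.typing_similarity
        + (0.1 if signals.known_location else 0.0)
    )
    if score >= 0.85:
        return "allow"
    if score >= 0.6:
        return "step_up"  # e.g. call-back on a pre-registered number
    return "deny"


# Example: strong biometrics from an unrecognised location triggers step-up checks.
print(authenticate_executive(AuthSignals(0.95, 0.9, 0.5, known_location=False)))
```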

2. Crisis Simulation and Tabletop Exercises - Preparing for AI-Driven Disinformation Scenarios

In today’s evolving threat environment, where AI-generated content can mimic trusted voices and visual identities, traditional crisis planning is no longer sufficient. Financial institutions must integrate AI-specific tabletop exercises and simulation drills into their governance and communications frameworks. These exercises should be designed to test not only operational resilience, but also how executives, communications teams, compliance, and legal counsel respond under pressure to reputational and market-moving threats.

Quarterly cross-functional drills are a best practice. These simulations should reflect real-world scenarios tailored to the institution’s risk profile. For example, a deepfake video of the CFO announcing a dividend cut might be released during earnings season. Or, a fabricated briefing from a regulatory body might circulate on encrypted messaging apps, falsely suggesting that the firm is under investigation. In another case, an impersonation email, complete with a synthetic voice message, might instruct staff to execute a high-value fund transfer. Each scenario should test executive decision-making and communications teams’ response timing, as well as coordination with IT/security teams and internal escalation protocols.
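
To make such drills repeatable and auditable, the scenarios above can be codified. The Python sketch below is one hedged way of doing so; the scenario names, functions tested and success criteria are illustrative assumptions rather than a standard.

```python
# Hypothetical sketch: codifying quarterly drill scenarios so that exercises are
# repeatable and cover the functions named above. All values are illustrative.
from dataclasses import dataclass, field


@dataclass
class DrillScenario:
    name: str
    trigger: str  # the synthetic event injected into the drill
    functions_tested: list[str] = field(default_factory=list)
    success_criteria: list[str] = field(default_factory=list)


SCENARIOS = [
    DrillScenario(
        name="deepfake-cfo-dividend-cut",
        trigger="Deepfake video of the CFO announcing a dividend cut in earnings season",
        functions_tested=["executive team", "investor relations", "communications", "IT/security"],
        success_criteria=["holding statement issued within 60 minutes", "escalation chain completed"],
    ),
    DrillScenario(
        name="fake-regulator-briefing",
        trigger="Fabricated regulatory briefing circulating on encrypted messaging apps",
        functions_tested=["legal", "compliance", "communications"],
        success_criteria=["regulator contacted", "consistent external messaging"],
    ),
    DrillScenario(
        name="synthetic-voice-transfer-request",
        trigger="Impersonation email plus synthetic voice message requesting a high-value transfer",
        functions_tested=["treasury", "IT/security", "internal escalation"],
        success_criteria=["transfer blocked", "call-back verification performed"],
    ),
]

# A drill facilitator could rotate through these each quarter and log the outcomes.
for scenario in SCENARIOS:
    print(scenario.name, "->", ", ".join(scenario.functions_tested))
```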

Importantly, institutions should look to penetration testing (pen-testing) methodologies as a model for stress-testing their communications infrastructure. Just as cyber teams routinely simulate phishing and intrusion attempts to test systems and staff, communications and risk leaders should commission controlled disinformation campaigns or synthetic media drops, crafted by vetted external consultants or internal red teams.

These exercises allow executive teams to experience the shock, confusion, and urgency of a real deepfake event, and to practise delivering accurate, calm, and credible responses under pressure.

To be effective, these simulations must involve the C-suite, corporate affairs, investor relations, legal, compliance, and cybersecurity functions. After-action reviews should assess response speed, message consistency, legal risk exposure, and public impact. Findings should be used to refine crisis communications playbooks, update contact chains, and pre-approve holding statements.

In a world where a synthetic message can erode billions in market value or trigger regulatory intervention, rehearsing for the worst is now a mark of strategic foresight and not paranoia. At the same time, it can reassure markets, as well as the insurers and reinsurers whose cover financial institutions are required to hold.

3. AI Model Governance Councils: Embedding Oversight into AI Deployment

As generative AI becomes embedded in customer service, investment analysis, and operational decision-making, financial institutions must establish robust oversight structures to mitigate systemic risk. A best-practice approach is the formation of a cross-functional AI Model Governance Council, a formal body responsible for evaluating, approving, and continuously monitoring all high-impact AI deployments.

Every significant AI system should undergo a mandatory risk assessment prior to launch. This includes evaluating data provenance, accuracy, model explainability, and potential for bias or misinformation. Use cases such as client-facing chatbots or automated investment insights require particular scrutiny, given their potential to influence decisions and damage reputation if they “hallucinate” false outputs.

Effective governance requires more than technical expertise. Councils should include representatives from legal, compliance, communications, risk, and data science teams, ensuring that regulatory, ethical, and reputational considerations are built into decision-making. This cross-functional lens ensures that potential crises, such as an AI issuing misleading financial guidance, are anticipated and contained.

Critically, all AI systems must be designed with an embedded ‘kill switch’ or deactivation protocol. This allows institutions to immediately pause or withdraw an AI tool from use if it begins generating harmful, incorrect, or non-compliant content, protecting customers, markets, and institutional trust in real time.
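
As a simple illustration of this principle, the Python sketch below routes every model response through a centrally controlled flag, so a governance team can pause the tool immediately. The class, function and message names are hypothetical.

```python
# Minimal illustrative sketch of an embedded 'kill switch': every response from
# a deployed AI tool passes through a centrally controlled flag, so the tool can
# be withdrawn the moment harmful output is detected. Names are hypothetical.
import threading


class KillSwitch:
    """Central flag that a governance council can flip to pause an AI system."""

    def __init__(self) -> None:
        self._active = True
        self._lock = threading.Lock()

    def deactivate(self, reason: str) -> None:
        with self._lock:
            self._active = False
            # In practice: log the reason, alert the governance council and
            # notify downstream channels that the tool has been paused.
            print(f"AI system deactivated: {reason}")

    def is_active(self) -> bool:
        with self._lock:
            return self._active


FALLBACK_MESSAGE = "This service is temporarily unavailable. Please contact your adviser."


def respond(kill_switch: KillSwitch, generate_reply) -> str:
    """Only call the model if the kill switch has not been triggered."""
    if not kill_switch.is_active():
        return FALLBACK_MESSAGE
    return generate_reply()


# Example: governance deactivates a chatbot after non-compliant output is detected.
switch = KillSwitch()
switch.deactivate("Chatbot produced non-compliant investment guidance")
print(respond(switch, lambda: "model output"))
```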

4. Data Hygiene and Provenance: Building Trustworthy AI from the Ground Up

High-integrity AI begins with high-quality data. Financial institutions must prioritise data hygiene and provenance to ensure that AI systems produce accurate, fair, and compliant outputs. This starts with auditing all training datasets for accuracy, recency, and bias, particularly when models are used in regulated contexts such as customer service, credit scoring, or investment analysis.

Institutions should establish clear protocols to source data only from verified, auditable, and purpose-appropriate origins. Integrating unstructured or unverifiable data—such as Reddit threads or open web forums—into training pipelines can contaminate models with misinformation or culturally biased assumptions. Any external data must be rigorously validated and documented.

Regular data reviews and model retraining cycles should be scheduled, particularly after major market events or regulatory updates. Transparency logs and version control systems can also support accountability. Ultimately, strong data governance is not only about performance—it’s a foundation for building responsible, explainable, and legally defensible AI.
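
By way of illustration, the Python sketch below attaches a simple provenance record to each dataset and gates entry to a training pipeline on verification, documentation and review recency. The approved sources, field names and 180-day review window are illustrative assumptions.

```python
# Hedged sketch: a provenance record for every training dataset, plus a gate
# that blocks unverified, undocumented or stale sources from the pipeline.
from dataclasses import dataclass
from datetime import date

# Illustrative allow-list; a real one would come from the data governance team.
APPROVED_SOURCES = {"internal-crm", "regulatory-filings", "licensed-market-data"}


@dataclass
class DatasetProvenance:
    dataset_id: str
    source: str          # where the data originates
    last_reviewed: date  # most recent accuracy/bias review
    bias_review_passed: bool
    documented: bool     # lineage recorded in the transparency log


def approved_for_training(record: DatasetProvenance, today: date) -> bool:
    """Admit a dataset only if it is verified, documented and recently reviewed."""
    recently_reviewed = (today - record.last_reviewed).days <= 180
    return (
        record.source in APPROVED_SOURCES
        and record.bias_review_passed
        and record.documented
        and recently_reviewed
    )


# Example: an undocumented scrape of open web forums is rejected.
forum_scrape = DatasetProvenance(
    dataset_id="open-forum-2024-q1",
    source="open-web-forums",
    last_reviewed=date(2024, 1, 15),
    bias_review_passed=False,
    documented=False,
)
print(approved_for_training(forum_scrape, today=date(2025, 6, 1)))  # False
```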

Public-Facing Trust and Reputation Management

1. Transparent AI Ethics Policy: A Signal of Trust

Financial institutions should publish a clear, detailed stance on their use of artificial intelligence, demonstrating transparency, accountability, and leadership in a rapidly evolving landscape. A comprehensive AI policy should outline how the organisation governs AI oversight, ensuring that all systems are reviewed for accuracy, compliance, and risk. This is not just important for the business or consumer base, but also for regulators, shareholders and other critical stakeholders.

Equally important is a firm commitment to data privacy. By clarifying how customer and institutional data is collected, stored, and used in AI models, institutions can reassure stakeholders and regulators that ethical standards are being upheld, and give consumers grounds to trust both the institution and its regulators.

Institutions must also establish and disclose anti-disinformation protocols, including how they identify and respond to AI-generated misinformation, especially when it involves executive impersonation or market-sensitive content.

Finally, highlighting and supporting ethical AI development partnerships with academic, regulatory, and technology organisations adds credibility. A well-communicated AI policy is not just a risk mitigation tool, it’s a proactive strategy to build long-term reputation, stakeholder trust, and regulatory confidence.

HSBC, for example, published its “Responsible AI Principles” in 2023 and updated them with external audits in 2024.

2. Investor and Customer Education: Strengthening Digital Resilience

Educating investors and customers is essential to building trust in an AI-driven environment. Institutions should create accessible online hubs that explain emerging fraud risks such as deepfakes and offer guidance on how to verify official communications.

Sharing anonymised case studies of detected or thwarted scams demonstrates transparency and preparedness, reinforcing confidence in the organisation’s ability to respond to digital threats.

How you communicate, and the tone of voice you use, shapes how you are perceived.

3. Real-Time Monitoring and Response Cells: Staying Ahead of AI Threats

To protect reputation in a high-speed information environment, financial institutions must invest in and support real-time monitoring and response capabilities.

Using specialist platforms such as Blackbird AI, Reality Defender, and Cyabra, firms can continuously scan digital channels for deepfakes, impersonations, or coordinated disinformation campaigns. This proactive surveillance enables early detection and containment of threats before they escalate.

Equally important is the development of clear escalation pathways, involving communications, legal, compliance, and executive teams.

Pre-approved legal statements and PR responses should be prepared in advance to ensure a swift, consistent reply when reputational risks emerge.
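
To show how these pieces might fit together in practice, the Python sketch below routes a suspected deepfake detection to different teams depending on a confidence score. It is a generic illustration only; the fields and thresholds are assumptions and it does not reflect the API of any platform named above.

```python
# Generic illustration of an escalation path for suspected deepfakes. The
# Detection fields, thresholds and team names are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class Detection:
    url: str
    confidence: float  # likelihood the content is synthetic, 0.0-1.0
    impersonated_person: str


def escalate(detection: Detection) -> None:
    """Route a suspected deepfake to the appropriate response cell."""
    if detection.confidence >= 0.9:
        notify = ["communications", "legal", "cybersecurity", "executive office"]
    elif detection.confidence >= 0.6:
        notify = ["communications", "cybersecurity"]
    else:
        notify = ["monitoring analyst"]  # log and keep watching
    print(
        f"{detection.url}: impersonation of {detection.impersonated_person} "
        f"(confidence {detection.confidence:.0%}) -> notify {', '.join(notify)}"
    )


# Example feed entries; in practice these would arrive from a monitoring platform.
for item in [
    Detection("https://example.com/clip-1", 0.95, "Chief Executive"),
    Detection("https://example.com/clip-2", 0.45, "Head of Research"),
]:
    escalate(item)
```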

By combining technology with operational readiness, institutions can maintain control of their narrative and reinforce public trust.

4. Build Industry Alliances: Collaborating to Counter AI Threats

Fighting AI-enabled fraud requires collective action. Financial institutions should actively join initiatives such as the World Economic Forum’s Global Coalition for Digital Safety, which promotes best practices for detecting and mitigating harmful content.

Collaborating with regulators, peer institutions, and cybersecurity firms enables the sharing of deepfake signatures, threat intelligence, and takedown protocols.

These alliances not only enhance early warning capabilities but also demonstrate industry-wide accountability and leadership. By working together, institutions can create a stronger defence against AI-driven threats while reinforcing trust across the financial ecosystem.

What Boards, General Counsel, and Strategy Teams Must Do

For Legal Teams

Boards, legal teams, and strategy leaders must take a proactive stance on AI-related risk. General Counsel should review and update IP and image rights policies to cover synthetic likenesses, ensuring legal protection in the event of executive impersonation. They should also prepare templates for swift DMCA takedowns and explore AI-specific indemnity clauses with insurers.

For Investor Relations

Investor Relations teams can strengthen transparency by including AI governance frameworks and incident logs—such as deepfake detections—within ESG disclosures and shareholder communications.

For Boards and Risk Committees

Boards and Risk Committees must recognise AI-enabled reputational harm as a material risk. This includes allocating dedicated resources to reputation intelligence platforms and establishing clear crisis response protocols. Directors should be regularly briefed on synthetic media threats and integrated escalation plans, reinforcing their governance responsibilities in a high-risk digital environment. Taking action now will protect reputation, maintain market confidence, and meet rising regulatory expectations around responsible AI use.

What the David Kostin Deepfake Teaches Us

Kostin’s case is not the first and it won’t be the last. But it is a wake-up call: GenAI has become powerful enough to replicate expert voices, deceive investors, and manipulate entire segments of the market.

A senior figure’s reputation, built over decades, can be compromised in minutes by an AI-generated clip. And the effects aren’t limited to one firm: they ripple through the sector, shake confidence, and attract regulators’ scrutiny.

Global Regulation Is Catching Up

Governments are moving to regulate AI and deepfakes, setting minimum expectations that financial institutions must meet and ideally exceed. In the UK, the Online Safety Act 2023 enables Ofcom to order takedowns of fraudulent synthetic media, with the FCA and PRA set to issue AI governance guidance for the sector in late 2025.

The EU’s AI Act imposes strict labelling requirements and limits deepfake use in high-risk domains, with fines reaching €30 million or 6% of annual turnover.

In the U.S., the SEC is pushing for “technology-agnostic” regulation, while Congress debates mandatory watermarking of synthetic content.

Singapore’s MAS has already mandated ‘Explainable AI’ for systems used in financial compliance. These evolving regulations should serve as a baseline.

Leading institutions must go further—embedding governance, transparency, and verification into every stage of AI deployment to safeguard reputation and maintain public trust.

Defending Trust in the Age of Synthetic Reality

The financial services sector is built on confidence. But confidence cannot survive if perception is corrupted by deception.

Deepfakes are not just a cybersecurity issue—they are a strategic risk, a communications crisis, and a boardroom priority.

What’s needed now is not just technical mitigation, but full-spectrum resilience:

  • Strategic foresight to predict vulnerabilities.

  • Ethical leadership to set the tone from the top.

  • Cross-sector collaboration to protect shared reputation.

As AI tools continue to evolve, so must our approach to trust.

David Kostin’s experience is not an anomaly, it’s a warning. The question for financial institutions, boards, and advisors is simple: Will you wait until a deepfake crisis strikes, or will you lead the defence now?


I advise a wide range of organisations, including governments and investors, on how to position themselves, sharpen messaging, and build resilient reputational capital that supports long-term value creation and stakeholder trust.

If you’re looking to modernise your communications team so that it is ready to tackle the growing threat of deepfakes and other reputation-challenging issues, I would welcome a conversation.

To stay informed, subscribe to my LinkedIn newsletter, Reputation Matters, where I share insight and practical guidance at the intersection of investment, innovation, and trust.

Please feel free to share this with anyone in your network who may benefit, or connect with me on LinkedIn.
