Deepfakes in Finance: Spotting the Signs of Deception

By Hugh Scantlebury, CEO and Founder of Aqilla

Just when you thought reality couldn’t get any more complicated or distorted – along come deepfakes. Deepfakes, for the uninitiated, are digitally altered images, videos, and audio that present a false reality. Usually, they depict something that didn’t happen or subtly alter actual events. On one hand, this can be entertaining, used for light-hearted pranks or comedy. But deepfakes also have darker implications and can empower bad actors. In recent years, we have seen AI-generated robocalls mimicking Joe Biden’s voice, advising US citizens to stay home during the primary elections in an attempt to influence the vote. There have also been instances of TV stars appearing to endorse competitions that were later confirmed as scams.
 
Risk to all
Contrary to popular belief, deepfake scams are not exclusively aimed at the wealthy or famous. Finance departments – of all sizes – are particularly appealing targets due to their access to sensitive information and substantial funds.
 
Deepfakes can trick victims in lots of different ways. You may think you’ll be able to identify an AI-generated call or voice message from your boss, but the technology is very advanced. Its ability to replicate voices saw a CEO scammed out of almost £200,000, when a senior executive at his parent company “requested” an emergency transfer to a third-party bank account. He was confident it was legitimate, as he recognised the person’s subtle German accent, which the AI had perfectly mimicked, along with their tone, pitch, and enunciation.
 
Deepfake voice or video messages are hard to identify. But deepfake-generated written content is arguably even harder to spot. Some scam messages are obvious: sent from suspicious email addresses, asking you to click a link with little other context. I think we’re all clued up enough to avoid those by now! However, as our education on phishing attacks has improved, so have the tactics of bad actors.
 
AI can also learn an individual’s writing patterns, style and quirks to generate messages and emails that appear to come from them. You may know that your boss always sends short, to-the-point emails and that your colleague’s messages are longer and littered with exclamation marks. But AI can replicate these patterns in a very convincing way. The boom in generative AI over the past couple of years has paved the way for large language models to be made available to all for free. This provides widespread access to powerful capabilities that, incidentally, were created with the genuine aim of helping us automate email generation and deal with a growing number of daily messages.
 
Facing the threat
Fortunately, we can take steps to improve our chances of identifying even the best-disguised deepfake threats:
 
1. Question everything – Most finance teams already implement Know Your Customer (KYC) and Know Your Customer’s Customer (KYCC) principles in all transactions as part of anti-fraud, tax evasion, and money laundering measures. Try applying these principles throughout your organisation as well. Don’t accept an email from your boss as gospel or assume it’s an urgent request that must be fulfilled immediately. Verify the request through another method – if they email you, call them to double-check. Similarly, if you receive a call, follow up with an email to confirm the task assigned to you. Seeking reassurance and giving the actual person a chance to intervene if it wasn’t from them creates a safety net like real-life two-factor authentication!
2. Create a supportive culture – There’s no point implementing a ‘question everything’ approach if the leadership team gets frustrated with requests to double-check validity. It requires everyone within the organisation to be on board, and the principles should be championed from the top down.
3. Give your team the skills they need – The whole team should stay well-informed about the latest threats to maintain a successful, consistent approach to deepfake threat mitigation throughout the organisation. Run training sessions on spotting deepfakes. Keep everyone up to date on the latest developments in technology and advances in cybercriminals’ tactics. This will ensure they can spot the tell-tale signs, such as:
– Facial inconsistencies: including unnatural eye movements
– Skin texture anomalies: unnatural blurring or smoothing of wrinkles or blemishes. A lack of detail is always a red flag!
– Lip-sync mismatches: audio that doesn’t quite line up with mouth movements
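The out-of-band verification in step 1 boils down to one rule: a request is only trusted once it has been confirmed over a channel different from the one it arrived on. As a minimal sketch of that rule (all names and the data model here are hypothetical, for illustration only):

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    """A hypothetical model of an incoming transfer request."""
    requester: str
    amount: float
    received_via: str  # channel the request arrived on, e.g. "email"
    confirmations: set = field(default_factory=set)  # channels used to verify

    def confirm(self, channel: str) -> None:
        """Record a confirmation obtained over the given channel."""
        self.confirmations.add(channel)

    def is_verified(self) -> bool:
        """Trust the request only if at least one confirmation came over a
        *different* channel than the original request (out-of-band)."""
        return any(c != self.received_via for c in self.confirmations)

req = PaymentRequest("CFO", 200_000, received_via="email")
req.confirm("email")   # replying on the same channel proves nothing
assert not req.is_verified()
req.confirm("phone")   # calling back on a known number closes the loop
assert req.is_verified()
```

The point of the design is that a reply on the compromised channel never counts as verification – only a second, independent channel does, mirroring the email-then-call routine described above.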
 
Stay alert and sceptical
Sadly, we live in a world where we have to question everything we see and read. A daily feed of false news on social media has become an unwelcome part of our lives. This critical mindset should extend into the workplace too. It’s easy to assume that smaller organisations fly under cybercriminal radars. But with AI accelerating both the speed and sophistication of attacks, no business is too small to be a target, especially when it comes to finance and accounting functions. A sceptical mindset is one of the strongest tools against the threat of deepfakes, so we must stay alert and critical. By reinforcing internal protocols, staying informed, and trusting our instincts, we can make it significantly harder for deepfakes to deceive us – and for attackers to succeed.