
Picture the scene... you get a video call from your boss or your financial manager asking you to transfer funds or open that email attachment they've just sent you. You'd do it, right? They've just told you to. However, a new cyber-threat is emerging, and this time, it's visual.


At this point, most of us are savvy to email phishing. We've covered it many times over the years, from scams that prey on panic about newsworthy events to scams that try to blackmail you into a response, and "whaling" scams in which you receive a spoofed email from a manager asking you to do something for them. There are various defences against these sorts of threats, although the most sophisticated of them can be pretty convincing, even to people like us who know what to look for to determine whether a message is genuine.

As if that wasn't enough to worry about, according to an article in The Register, criminals are now turning to new technologies to try to convince you to do their bidding. Freely available "deepfake" tools may allow them to create a convincing approximation of your boss, who could then video call you with instructions to follow. As the article covers, this technology is also being used to create "fake" interviewees to try to get jobs with access to company data.

I can't believe my eyes

You may have watched The Capture on the BBC recently, in which a politician literally has words put in their mouth by deepfake technology, leaving it impossible for the public to know whether what they're seeing is real. While the tech is not quite at the stage suggested in that show yet, the points it raises are certainly ones we'll need to consider in the fairly near future. Not only in the direction suggested in that series, but in the other direction too: once people are widely aware of deepfake technology and what it can do, what's to stop someone who genuinely said something that's gone down like a lead balloon from later claiming they never said it and that they're the target of a deepfake?

In a work context, the threat is obvious, as touched on above. If your boss asks you to do something on a video call, even if it seems out of character, you're far more likely to do it than if you got an email asking the same thing. Short of going to their office to verify it in person (which may be out of the question if you're not in the same building), how can you be sure a request has really come from them?

Don't Panic... Yet

Now, it's important to point out here that this is very unlikely to be a threat that you, personally, need to worry about just yet. It's still a fair amount of effort to pull off an attack like this, and deepfake technology is still in its infancy. The chances of the average person being targeted in this way are currently very low.

Nevertheless, it is something we will all need to become more savvy to in the coming years, and we need to start discussing the implications, both generally and in specific cases like this one. With all forms of virtual communication slowly becoming unverifiable, robot technology only needs to come on a bit before physical confirmation is ruled out too by the possibility that your CFO has been replaced by an android. No genuine money transfers will ever happen again, because there will be no way to verify anyone is real! This is a joke. Probably...