A U.S. federal court request filed on October 13, 2021, revealed that cybercriminals stole $35 million from a company based in the United Arab Emirates using a combination of email forgery and deepfake audio.

About the attack

According to a report by DarkReading, the attackers impersonated a director within the organization and pretended to be requesting company funds “as part of an acquisition of another organization.” By utilizing AI to imitate the director’s voice, the attackers convinced an unsuspecting employee to transfer money into numerous international bank accounts.

“In January 2020, funds were transferred from the Victim Company to several bank accounts in other countries in a complex scheme involving at least 17 known and unknown defendants… Emirati authorities traced the movement of the money through numerous accounts and identified two transactions to the United States…

The request asked the courts to designate a DoJ lawyer to be the point of contact in the U.S. for the investigation.”

Combating rising deepfakes

While this is only the second known deepfake attack involving an unauthorized transfer of funds, such deceptions are likely to become more common as voice-cloning technology grows more accessible. The best defense against these attacks is a multi-layer verification process for authorizing company transactions, according to Etay Maor, senior director of security strategy at the network security firm Cato Networks.

“If there is money to be made, you can be sure that attackers will adopt new techniques… It’s not super-sophisticated to use such tools. When it comes to a voice, it is even easier…

We are going to have to adopt some of the principles of zero trust into this world of relationships… It does not have to be a technological solution… At the end of the day, a simple phone call to verify the request could have prevented this.”
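The multi-channel, zero-trust verification Maor describes can be sketched as a simple policy check: a transfer is released only after it has been confirmed through several independent channels, so a forged email or cloned voice alone is never sufficient. This is a minimal, hypothetical illustration — the names (`TransferRequest`, `REQUIRED_CHANNELS`) and the two-channel policy are our assumptions, not a description of any real system:

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    amount: float
    destination: str
    # Channels that have independently confirmed this request,
    # e.g. "email", "voice_callback", "in_person".
    confirmations: set = field(default_factory=set)

# Policy: two independent channels must confirm before funds move.
REQUIRED_CHANNELS = {"email", "voice_callback"}

def confirm(req: TransferRequest, channel: str) -> None:
    req.confirmations.add(channel)

def can_release(req: TransferRequest) -> bool:
    # Release only when every required channel has confirmed.
    return REQUIRED_CHANNELS.issubset(req.confirmations)

req = TransferRequest(amount=35_000_000, destination="acct-XXXX")
confirm(req, "email")           # a forged email alone is not enough
print(can_release(req))         # False
confirm(req, "voice_callback")  # employee calls back on a known-good number
print(can_release(req))         # True
```

The point is procedural rather than technological: because the callback goes out to a number the employee already trusts, an attacker who controls the inbound email and voice cannot complete the loop.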

For more information about deepfakes, check out our post “The spread of the deepfake.”