A new report from VMware shows that cybersecurity experts are seeing more deepfakes used in cyberattacks, as the technology's threat moves from hypothetical to real-world harm.
According to VMware’s annual Global Incident Response Threat Report, released Monday, attacks using face- and voice-modifying technology rose 13% from last year. In addition, 66% of the cybersecurity professionals surveyed for this year’s report said they had seen at least one deepfake used as part of a cyberattack in the past year.
“Deepfakes in cyberattacks aren’t coming. They’re already here,” Rick McElroy, principal cybersecurity strategist at VMware, said in a statement.
How Are Deepfakes Created?
Deepfakes use artificial intelligence to make it appear that a person is doing or saying things they never did. The technology entered the mainstream in 2019, fueling fears that it could convincingly recreate other people’s faces and voices. Experts warn that victims’ likenesses can be used to create artificial pornography, and that the technique can also serve as a tool for political ends.
While the first deepfakes were largely easy to spot, the technology has since evolved and become far more believable. In a video posted on social media in March, Ukrainian President Volodymyr Zelensky appeared to order his soldiers to surrender to Russian forces. Although Zelensky swiftly denounced the fake, it demonstrated the damage deepfakes can cause.