The Number of Deepfake Videos Increases By 900 Percent Each Year


According to the World Economic Forum (WEF), the number of deepfake videos online is increasing by 900% each year. Several notable deepfake scams have made headlines, including reports of harassment, revenge, and cryptocurrency fraud. Kaspersky researchers shed light on the top three deepfake scam schemes that users should beware of.

Deepfake techniques built on artificial neural networks and deep learning allow users around the world to digitally alter their faces or bodies, producing realistic images, video, and audio in which anyone can appear to be someone else. These manipulated videos and images are often used to spread misinformation and serve other malicious purposes.

Financial fraud

Deepfakes are used in social engineering schemes in which criminals impersonate celebrities to lure victims into their traps. For example, an artificially created video of Elon Musk promising high returns from a dubious cryptocurrency investment scheme spread quickly last year, causing users to lose money. To create fake videos like this one, scammers stitch together old footage of celebrities and launch live streams on social media platforms, promising to double any cryptocurrency payment sent to them.

Pornographic deepfakes

Another use of deepfakes is to violate an individual's privacy. Deepfake videos can be created by superimposing a person's face onto a pornographic video, causing great harm and distress. In one case, deepfake videos appeared online in which the faces of several celebrities were superimposed onto the bodies of pornographic actresses in explicit scenes. In such cases, the victims' reputations are damaged and their rights are violated.

Business risks

Deepfakes are often used to target businesses for crimes such as extortion of company executives, blackmail, and industrial espionage. For example, using a voice deepfake, cybercriminals managed to deceive a bank manager in the UAE and steal $35 million. In that case, a short recording of his boss's voice was all it took to create a convincing deepfake. In another case, scammers tried to deceive Binance, the largest cryptocurrency platform. A Binance executive was surprised when he began receiving "Thank you!" messages about a Zoom meeting he had never attended. The attackers had built a deepfake from public images of the executive and used it to speak on his behalf in an online meeting.

FBI warns human resources managers!

In general, scammers use deepfakes for disinformation and public manipulation, blackmail, and espionage. According to an FBI alert, human resources executives are already on the lookout for candidates who use deepfakes when applying for remote work. In the Binance case, the attackers used images of real people from the internet to create deepfakes and even added those photos to resumes. If they manage to deceive HR managers in this way and then receive an offer, they can subsequently steal employer data.

Deepfakes remain an expensive form of scam that requires a large budget, yet their number keeps growing. An earlier Kaspersky study revealed the cost of deepfakes on the darknet. If an ordinary user finds software on the internet and tries to create a deepfake with it, the result will be unrealistic and the fraud will be obvious. Few people fall for a low-quality deepfake: viewers immediately notice delays in facial expressions or a blurry chin outline.

That is why cybercriminals need large amounts of data in preparation for an attack: photos, videos, and voice recordings of the person they want to impersonate. Different angles, lighting conditions, and facial expressions all play a big role in the final quality. Up-to-date computing power and software are also required for a realistic result. All this demands substantial resources that only a small number of cybercriminals have access to. Despite the dangers it can present, the deepfake therefore remains an extremely rare threat that only a small number of buyers can afford. As a result, the price of a one-minute deepfake starts at $20.

“Sometimes reputational risks can have very serious consequences”

Dmitry Anikin, Senior Security Specialist at Kaspersky, says: "One of the most serious threats deepfakes pose to businesses is not always the theft of corporate data. Sometimes reputational risks can have very serious consequences. Imagine a video airing in which your manager makes polarizing remarks on (apparently) sensitive topics. For the company, this could lead to a rapid decline in share prices. However, although the risks of such a threat are extremely high, the chances of being attacked this way are extremely low because of the cost of creating a deepfake, and very few attackers can produce a high-quality one. What you can do is learn the key signs of deepfake videos and stay skeptical of voicemails and videos you receive. Also, make sure your employees understand what a deepfake is and how to spot one. Telltale signs include jerky movement, shifts in skin tone, and strange blinking or no blinking at all."

Continuous monitoring of darknet resources provides valuable insights into the deepfake industry, allowing researchers to follow the latest trends and activities of threat actors in this space. By monitoring the darknet, researchers can uncover new tools, services, and marketplaces used for the creation and distribution of deepfakes. This type of monitoring is a critical component of deepfake research and helps us improve our understanding of the evolving threat landscape. The Kaspersky Digital Footprint Intelligence service includes this type of monitoring to help its customers stay one step ahead when it comes to deepfake-related threats.