At the beginning of the year, Kaspersky experts warned that cybercriminals would continue to exploit Artificial Intelligence (AI) tools to create more convincing and accessible deceptive content, increasing the risks associated with deepfakes. Deepfakes are images, videos, audio clips, or texts manipulated to misrepresent the original — for example, to make one person appear to be another. Recently, deepfake videos of celebrities have circulated in several countries advertising fake gambling games with the aim of stealing money from victims. In other countries, such as Mexico, AI-manipulated videos promoting “miracle products” that are actually scams have also spread.

Since the majority of Latin Americans do not know what a deepfake is (69%) and do not know how to recognize this type of content (67%), and since this threat is expected to persist throughout the year, Kaspersky explains how deepfakes work and shares recommendations to help users identify false content.

As the use of AI becomes more widespread and accessible, the risk of falling victim to deepfakes is increasing: supported by this technology, fraudsters can easily extract videos and images of their potential targets and alter them. Kaspersky experts have found that those seeking to create fake media often turn to the Darknet, where there is significant demand for this type of content. Users actively seek out people who can create fake videos for them, often requesting content that targets specific individuals, such as celebrities or political figures.

This content is then used for malicious purposes such as blackmail, revenge, harassment, or financial fraud. Recent deepfakes spread on social networks, for example, aim to steal users’ money by convincing them that they are investing with well-known institutions for high returns, or buying products that will improve their health — exploiting people’s economic needs and their desire for a better quality of life, which puts them at even greater risk.

“We are facing a scenario of advanced digital manipulation that compromises the truth and requires us all to be aware of this phenomenon. In a world where the line between reality and digital fiction is increasingly blurred, we must consume digital content responsibly, being careful about what we read and listen to, and avoiding the spread of misinformation by not sharing false news, videos, and audio,” comments Isabel Manjarrez, Security Researcher at Kaspersky’s Global Research and Analysis Team. “In today’s world, it is very important not to believe everything you read and hear on the Internet.”

Recognizing a deepfake can be complicated, as digital manipulation techniques are constantly evolving. However, Kaspersky recommends some tips to detect this type of content:

  • Facial and body anomalies. Look for inconsistencies in facial expressions or body movements that seem strange or unnatural. Watch for misalignments in the eyes, mouth, or other parts of the face.
  • Lip sync and expressions. Check that the words correspond correctly to the lip movements and that the facial expressions are appropriate for the context of the video.
  • Lighting and shading. Evaluate whether the lighting on the face corresponds to the environment. Inconsistencies in lighting may indicate tampering.
  • Awareness and education. Be aware of the existence of deepfakes and educate yourself about digital manipulation technologies. The more aware you are, the better you will be able to detect potential fraud.
  • Beware of fake websites. If you have doubts about a website or app, do not enter your details or make payments. Visit the official sites and social networks of institutions or public figures directly to verify the information.