Deepfakes in the Trilogy of Social Engineering, Artificial Intelligence and Cybersecurity

The relationship between social engineering, artificial intelligence and cybersecurity has always existed and always will, for better and for worse. And tightly knotted between them is human weakness, human vulnerability: the human factor.
Around this relationship there are multiple risks and threats in which all or some of these factors intervene and which, combined like a dangerous Molotov cocktail, give rise to cyber incidents that, incredible as it may seem, are in most cases “supported”, “sponsored” and “carried out” by the attacked people, the victims themselves.
Yes, people are one of the main attack vectors. It is enough for one person to perform a certain action for a cyberattack, however complex or simple, to succeed. The key is deception and the ability to convince.
It is true that there are very complex, powerful and robust technologies and mechanisms that cybercriminals can and do use daily in their cyberattacks. But, without any doubt, the human factor is decisive in most cases.
Obviously, we are talking about social engineering and how, through it, a certain “communication” or “request” can be sent to a person, crafted to be as convincing as possible, so that the recipient does exactly what the attacker wants them to do.
Being able to “sensitize” someone, persuade them, get them “on your side” and convince them is vitally important in the attack chain. The more trustworthy the communication appears, the greater its degree of success and, therefore, the greater the impact and the number of objectives achieved.
Writing just any email, from just any sender, asking a recipient to do something is not the same as sending an email that impersonates the company's managing director (a BEC, Business Email Compromise, attack, also known as CEO fraud), with content that is recognizable, reasonable and credible, to an employee likely to fall for the deception and perform the requested action or task. A defensive sketch for this scenario follows below.
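By way of illustration of the defensive side of this scenario, here is a minimal sketch (Python, standard library only) of the kind of header check a mail filter can apply to flag a possible BEC attempt. The heuristics and policy are simplified assumptions for the example, not a production rule set.

```python
# Minimal sketch: flag a possible BEC/spoofing attempt by comparing the
# visible From: header with the envelope sender and the reported SPF/DKIM
# results. Header names are standard (RFC 5322 / RFC 8601); the policy below
# is illustrative only.
import email
from email.utils import parseaddr

def looks_spoofed(raw_message: str) -> bool:
    msg = email.message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, return_path = parseaddr(msg.get("Return-Path", ""))
    auth_results = (msg.get("Authentication-Results") or "").lower()

    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    rp_domain = return_path.rsplit("@", 1)[-1].lower()

    # Heuristics: mismatched sender domains or failed SPF/DKIM are red flags.
    domain_mismatch = bool(rp_domain) and rp_domain != from_domain
    auth_failed = "spf=fail" in auth_results or "dkim=fail" in auth_results
    return domain_mismatch or auth_failed
```

In practice such a check runs at the mail gateway alongside DMARC policy enforcement; here it simply shows that a “CEO email” can often be unmasked by its headers even when its text is perfectly convincing.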
This is actually a very “simple” example. But what if we also season this social engineering attack model with artificial intelligence and other technologies that make the message far more convincing and harder to dismiss? What if, for example, we accompany it with an audio or video clip (fake, of course, but of high quality) in which the person who appears is the managing director himself requesting that the action be carried out (even though, in reality, he never did)?
We are already in the era of the cyberattack that can use artificial intelligence to strengthen social engineering messages, achieving an excellent, indisputable impersonation and thus defeating any cybersecurity measure we may have established: the use of deepfakes (video and/or audio)!
A large part of the solution and the defense therefore lies in awareness, education, training and critical thinking, which allow us to properly analyze and manage the intricate balance between social engineering, artificial intelligence and cybersecurity in the era of deepfakes.
At the intersection of social engineering, AI and cybersecurity, a complex trilogy is taking shape, one that poses both challenges and opportunities in today's digital age. As these disciplines converge, new threats emerge, with deepfakes (video or audio) among the most worrying manifestations.
Social engineering is an ancient art of manipulation that has found fertile ground in the digital age. Its objective is to take advantage of the human factor, our sensitivity, our psychology. To achieve this, cybercriminals use all kinds of techniques and tactics to deceive, obtain confidential information or reach other objectives. Online persuasion, manipulation of opinions, steering of trends and beliefs, conviction and bias are all children of social engineering.
While social engineering is based on an understanding of human emotions and behaviors, enabling more effective persuasion strategies (known as deep human understanding), overconfidence and lack of awareness can cause people to fall into traps designed to exploit their weaknesses (what we know as human vulnerabilities).
Artificial intelligence now has advanced learning capabilities, replicating patterns and generating content that did not exist before (generative artificial intelligence). This has been a milestone of enormous relevance in the creation of digital content. From generating text to creating images and videos, generative AI has exponentially expanded the possibilities for manipulation.
This is a great step in terms of innovation and efficiency, since AI enables significant advances across industries and sectors. However, it also enables manipulation that can be used to create false and misleading content.
Information is power. With information being as valuable an asset as it is, cybersecurity has become a crucial line of defense. Protecting data and digital infrastructure is essential to ensure integrity and confidentiality.
In this sense, we can adopt an active protection or defense model, using proactive measures to identify and mitigate threats before they act and cause the damage and impact they seek.
But threats evolve too: as technology advances, so do the tactics of cybercriminals, constantly challenging defense strategies.
Deepfakes, as we said, can be the product of the conjunction of these three disciplines: social engineering, generative artificial intelligence and the existence of cybersecurity measures (technological or otherwise). From this perspective, far from representing a competitive advantage, they represent a significant threat. The ability to create fake content so convincingly, whether by imitating voices or by recreating images, faces, animations and videos, puts trust in information and its veracity at risk.
On the one hand, the creative potential of generative AI shows its most positive side in areas such as training and entertainment, documentary, creative and artistic work, automation, and so on. However, disinformation and post-truth are the other side of the coin, with deepfakes used maliciously to create fake news and undermine trust in information.
So how can we detect a deepfake and protect ourselves from it? Through intuition, specific knowledge, experience and human analysis, and even with dedicated tools that may become the mainstay of the future fight against this type of activity from the field of cybersecurity:
- Training and awareness about deepfakes, educating people to recognize this type of content and the possibility of its manipulation, empowering them to correctly identify suspicious material and ignore it.
- Verification of sources and content which, perhaps as part of the previous point, allows people to confirm or refute the authenticity of the sources and content of a suspected deepfake, contrasting its apparent origin, and even the suspicious file itself, against files we know with certainty to be authentic; this is how distinctions and discrepancies can be found (the first sketch after this list illustrates the idea).
- Facial and movement analysis of a deepfake video, carefully examining the synchronization between the lips and the voice, as well as the coherence of facial movements with the context, can reveal inconsistencies, although this gets harder as the technology advances and improves; it is an option for which specific analysis tools and software are advisable (second sketch below).
- Likewise, for audio analysis, specialized tools that analyze the frequency and intonation patterns of the voice make it possible to detect anomalies in sound recordings (third sketch below).
- Another important aspect is the identification, analysis and verification of the video file's metadata, which can provide clues about its authenticity and origin, although metadata can also be forged in increasingly credible ways (fourth sketch below).
- Regulation and legislation for content platforms that prevent (both technologically and from a legal and regulatory compliance standpoint) the publication and spread of deepfakes and misleading content.
- Development of deepfake detection technologies that, through research and development of software and tools, can identify deepfakes automatically and as effectively as possible.
- Defensive artificial intelligence which, as part of the previous point, uses machine learning technologies and algorithms to detect manipulation patterns in images, sound and video (final sketch below).
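To make the verification point concrete, a first minimal sketch: comparing the cryptographic hash of a suspicious file against a copy known to be authentic. Any difference proves the two files are not identical, although not which one was altered; the file names are hypothetical.

```python
# Minimal sketch of content verification: hash a suspicious file and a
# known-authentic copy, then compare. Identical hashes mean identical bytes;
# different hashes mean the files differ somewhere.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage:
# sha256_of("statement_received.mp4") == sha256_of("statement_official.mp4")
```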
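For the facial and movement analysis point, a second minimal sketch, assuming the third-party opencv-python and mediapipe packages are installed: it extracts a per-frame mouth-opening signal that could then be compared against the audio envelope for lip-sync coherence. Real lip-sync forensics is considerably more involved than this.

```python
# Minimal sketch: per-frame "mouth opening" signal from a video using
# MediaPipe Face Mesh. A flat signal while the soundtrack contains speech
# would be one possible inconsistency to investigate.
import cv2
import mediapipe as mp

def mouth_opening_series(video_path: str) -> list[float]:
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    series = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            lm = results.multi_face_landmarks[0].landmark
            # Indices 13/14 are commonly used as the inner upper/lower lip
            # points in MediaPipe's face mesh (an assumption of this sketch).
            series.append(abs(lm[13].y - lm[14].y))
        else:
            series.append(0.0)  # no face detected in this frame
    cap.release()
    return series
```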
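For the audio analysis point, a third minimal sketch, assuming the librosa package: it estimates the pitch contour of a recording and reports simple intonation statistics. An unnaturally flat or jittery contour can be one signal, among many, of synthetic speech; real detectors combine far more features.

```python
# Minimal sketch: pitch (F0) contour statistics for a voice recording,
# as a crude proxy for "intonation patterns".
import numpy as np
import librosa

def intonation_stats(audio_path: str) -> dict:
    y, sr = librosa.load(audio_path, sr=16000, mono=True)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    voiced = f0[voiced_flag & ~np.isnan(f0)]
    if voiced.size == 0:
        return {"mean_f0_hz": 0.0, "f0_std_hz": 0.0}  # no voiced speech found
    return {"mean_f0_hz": float(np.mean(voiced)),
            "f0_std_hz": float(np.std(voiced))}
```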
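For the metadata point, a fourth minimal sketch that shells out to ffprobe (part of FFmpeg, assumed to be installed) to dump a video container's tags, such as encoder and creation_time. As noted in the list, metadata can be forged, so this yields clues, never proof.

```python
# Minimal sketch: read a video file's container-level metadata tags
# via ffprobe's JSON output.
import json
import subprocess

def video_metadata_tags(path: str) -> dict:
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(out.stdout)
    # Per-stream tags also exist under info["streams"]; format tags suffice here.
    return info.get("format", {}).get("tags", {})
```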
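Finally, for the defensive AI point, a minimal sketch in PyTorch of a binary frame classifier (real vs. manipulated). The architecture is purely illustrative; production deepfake detectors are far larger and are trained on curated datasets of genuine and synthetic media.

```python
# Minimal sketch: a tiny CNN that maps a video frame to two logits
# (real vs. manipulated). Training data, loss and optimization are omitted.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2)
        )

    def forward(self, x):  # x: (batch, 3, H, W) normalized frames
        return self.head(self.features(x))

# Usage: logits = FrameClassifier()(torch.randn(1, 3, 224, 224))
```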
In the fight against deepfakes, education is vital. Furthermore, research to improve detection tactics, techniques and algorithms, together with the implementation of defensive technologies, will be essential to maintain the integrity of information, guaranteeing its veracity, avoiding manipulation and achieving a high level of digital trust.
Have you ever knowingly received a deepfake in your company? At Zerolyx we are specialists and we offer professional intelligence, cyber intelligence and cybersecurity services with which we can help you: Cybersecurity Services. If you prefer, contact us and let's talk.