Resilience against social engineering in the age of AI
Social engineering is a common method cybercriminals use to target people. There is nothing new about this form of manipulation – but the advent of generative AI has made it significantly more dangerous.
Simply advising people to look out for poor grammar and bad logos does little to protect against phishing e-mails. Nowadays, attackers can, with minimal effort, convincingly imitate the communication style, voice and face of individuals from the target’s personal or professional circle.
Organisations urgently need to rethink their defence strategies – moving away from traditional security awareness training and towards strengthening employees’ ability to recognise and resist manipulation.
Guesswork: what’s real and what’s not?
The rise of generative AI has made targeted social engineering attacks more precise, more automated, and significantly harder to detect. Welcome to the era of ‘deep doubt’, as WIRED put it in a headline – an era in which it is becoming increasingly difficult to distinguish between what’s real and what’s fake.
Cybercriminals are now using AI-generated deepfakes and synthetic voices to produce highly convincing impersonations. What’s more, generative AI can be used to create faces that are indistinguishable from real faces – and, even worse, are perceived as more trustworthy than real ones.
This makes familiar deception methods far more effective and harder to detect: in Hong Kong, an employee was duped into transferring around £20 million to fraudsters who posed as senior executives of her company on a deepfake video conference call.
Why traditional training fails
Conventional security awareness training usually focuses on known social engineering attack patterns – in particular, typical e-mail phishing attempts. Employees are often required to follow rigid rules and to pay close attention to detail when applying them.
Even if we, as employees, are capable of correctly identifying the domain of a URL, we have neither the innate ability nor the spare attention to spot every single typosquat – especially when our job is to answer 300 (external) e-mails as quickly as possible every day.
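To make the scale of the problem concrete, here is a minimal Python sketch of the kind of check an automated mail filter might run: it flags sender domains that sit within one character edit, or one common homoglyph swap, of a trusted domain. The domain names, the homoglyph table and the one-edit threshold are illustrative assumptions chosen for this article, not a description of any real filtering product.

# Minimal sketch (illustrative only): flag sender domains that are
# suspiciously close to a trusted domain. The homoglyph table and the
# one-edit threshold are assumptions chosen for this example.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Visually near-identical substitutions that attackers commonly use.
HOMOGLYPHS = {"l": "1", "o": "0", "i": "l", "m": "rn"}

def is_suspicious(sender: str, trusted: str) -> bool:
    """Flag a domain within one edit, or one homoglyph swap, of a trusted one."""
    if sender == trusted:
        return False
    if edit_distance(sender, trusted) <= 1:
        return True
    # Undo a single homoglyph substitution and compare again.
    for original, lookalike in HOMOGLYPHS.items():
        if sender.replace(lookalike, original, 1) == trusted:
            return True
    return False

if __name__ == "__main__":
    trusted = "example.com"
    for candidate in ["example.com", "examp1e.com",
                      "exarnple.com", "examplee.com", "unrelated.org"]:
        print(f"{candidate:16} suspicious = {is_suspicious(candidate, trusted)}")

Even this toy check misses transpositions, inserted hyphens and Unicode lookalike characters, and attackers can generate thousands of variants cheaply. If a purpose-built filter already struggles, expecting a busy employee to catch every variant by eye is plainly unrealistic.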
At the same time, many security awareness measures are based on the outdated assumption that people can always rely on their analytical thinking skills at decisive moments.
In fact, many decisions – especially quick ones that lead to concrete everyday actions – are determined far more often by emotions and intuitive thinking than by rational analysis. Behavioural economics, shaped by researchers such as Daniel Kahneman, Richard Thaler and Cass Sunstein, investigates precisely this tension between gut feeling and intellect. Until now, behaviour change interventions in cybersecurity have largely been designed without taking this insight into account. In future, we should focus less on teaching new technical detection methods and more on raising awareness of the – essentially non-technical – attack method as a whole and of how we can protect ourselves against it.
Recognising attack techniques and strengthening trigger resilience
Social engineering attacks generally follow a clear attack cycle that can be taught. Attackers first comb through publicly available information (open-source intelligence, OSINT) about organisations, roles and individuals. Based on this, they develop credible scenarios to build trust with the target or to apply pressure. Once we understand how detailed personal profiles can be assembled automatically, we are more likely to question urgent enquiries from supposed colleagues.
Attackers intentionally exploit classic principles of persuasion as described by best-selling author Robert Cialdini – such as authority (“The CEO wants this paid immediately!”), social proof (“Everyone is investing in this cryptocurrency right now!”), and liking (“I loved your presentation!”). While these techniques are common in marketing and sales, they’re also featured in darknet forums as tools for refining social engineering tactics. By learning to recognise these psychological strategies, we can become more alert to manipulation attempts and better protect ourselves.
This is where effective defences come in. One key protective strategy is becoming aware of our own emotional responses – especially in high-pressure situations that demand rapid action. We should be particularly alert when we feel emotional pressure building. Helpful techniques include pausing, reflecting on the situation (“How do I feel right now? Is this a credible request?”), seeking a second opinion, asking someone to double-check the request, and verifying it through an alternative communication channel. In the field of cyberpsychology, current research is exploring the concept of cyber-mindfulness – the ability to direct attention more deliberately in digital environments. This practice promotes thoughtful decision-making over impulsive reactions and helps build greater resilience against social engineering attacks.
Employee resilience: what now?
Understanding manipulation and deception doesn’t require technical expertise – these tactics have existed for as long as humans have communicated. But in the age of generative AI, social engineering has become more scalable, more convincing, and far harder to detect. In organisations, it’s no longer enough to spot obvious phishing e-mails; we must also be able to recognise sophisticated manipulation in the form of realistic voices, faces or messages. Traditional security awareness training, which often focuses on technical details and rigid rules, is no longer sufficient. Instead, we need a deeper, cross-channel understanding of how attacks work – at a psychological or ‘meta’ level – and training in cyber-mindfulness: learning to detect and interrupt emotional triggers before they lead to impulsive actions. This approach is already the focus of current research and promises to build more resilient behaviour in the face of modern threats.
About the author
Cornelia Puhze is a security awareness and communications expert at Switch with a human-centred approach to security. She advises various communities on the human element in information security, from both a strategic and practical perspective. Cornelia is educated to postgraduate level in corporate and political communications and has a background in teaching.