Much the same as stepping out on a cold and dreary winter's day, creating an effective defence against social engineering attacks requires layering up! Advanced social engineering has become one of the most reliable intrusion techniques for cyber criminals, not because organisations lack perimeter controls, but because attackers are increasingly exploiting the human and identity layers that sit between those controls. Today's campaigns rarely rely on a single deceptive email. Instead, threat actors orchestrate multi‑vector assaults that blend email, voice, collaboration platforms, and compromised accounts to manipulate trust and bypass defences. In turn, creating an effective defence has become increasingly challenging.
KnowBe4 reports that between September 2024 and February 2025, phishing emails increased by 17.3%, with 57.9% sent from compromised accounts, making them harder to block through traditional filters. AI‑generated content was present in 82.6% of campaigns, showing how automation is enhancing deception at scale. This level of automation will accelerate the speed and credibility of these communications in the months to come, further exacerbating the challenge.
Moreover, social engineering tactics are evolving beyond email. CrowdStrike’s 2025 Global Threat Report highlighted a 442% surge in phone‑based social engineering (vishing) between early and late 2024, with attackers impersonating IT support or leadership to extract credentials or push actions that enable deeper compromise.
Simultaneously, convincing impersonation has become commonplace. KnowBe4 analysis shows that 89% of phishing emails involve impersonation of brands or individuals, and nearly half originate from legitimate accounts within supply chains, making them extremely difficult to detect and dismiss.
For defenders, this evolution is not a sign of control failure but a call to rationalise defensive design. Layering identity‑centric controls such as Dark Web Monitoring and Multi‑Factor Authentication has been shown to block over 99% of automated attacks, while user behaviour analytics can flag anomalous use of otherwise valid credentials before damage occurs. Training remains important, but resilience now comes from designing environments that anticipate human error and contain impact quickly when manipulation inevitably succeeds.
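To make the idea of user behaviour analytics more concrete, the sketch below shows, in simplified form, how a login that uses perfectly valid credentials can still be flagged when it deviates from a user's established pattern. It is illustrative only: the event fields, baseline values, and user name are assumptions for the example, not a description of any specific product or vendor implementation.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical login event, as might be drawn from an identity provider's audit log.
@dataclass
class LoginEvent:
    user: str
    country: str        # ISO country code of the source IP
    hour: int           # local hour of day, 0-23
    timestamp: datetime

# Simplified per-user baseline built from historical, known-good activity.
@dataclass
class UserBaseline:
    usual_countries: set
    usual_hours: range  # e.g. typical working hours

def anomaly_reasons(event: LoginEvent, baseline: UserBaseline) -> list[str]:
    """Return the reasons (if any) this otherwise-valid login looks unusual."""
    reasons = []
    if event.country not in baseline.usual_countries:
        reasons.append(f"login from unfamiliar country: {event.country}")
    if event.hour not in baseline.usual_hours:
        reasons.append(f"login outside typical hours: {event.hour:02d}:00")
    return reasons

if __name__ == "__main__":
    baseline = UserBaseline(usual_countries={"GB"}, usual_hours=range(7, 20))
    event = LoginEvent("j.smith", "RO", 3, datetime(2025, 11, 2, 3, 14))

    if reasons := anomaly_reasons(event, baseline):
        # In practice this would raise an alert for the SOC or trigger a
        # step-up authentication challenge rather than simply printing.
        print(f"Flag {event.user}: " + "; ".join(reasons))
```

The point of the sketch is the design principle rather than the thresholds: even when credentials are stolen through phishing or vishing, behavioural signals give defenders a chance to detect and contain the misuse before damage is done.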
For cyber security practitioners, these trends underscore a powerful truth: social engineering cannot be eliminated, only out‑designed. A strong combination of interconnected controls spanning people, process and technology must be employed to stand the best chance of mitigating this threat. In 2026, resilience will be defined not by preventing human error but by anticipating it, and by ensuring that, when deception succeeds, your environment is protected by technology and backed by rehearsed procedures to detect, contain, and recover with confidence.
Written by Tom Exelby, Head of Cyber at Red Helix and OxCyber Ambassador