Abstract
Cybersecurity awareness training improves knowledge, yet human error continues to drive breaches. AI-enabled attacks such as deepfakes, voice-cloned vishing, and automated spear phishing magnify these risks. This review of 26 studies (2008–2025) introduces a residual-risk framework that evaluates training outcomes beyond average effectiveness. Residual Insecure Behavior (RIB) captures risky practices that persist after training, while Residual Knowledge Gap (RKG) captures the knowledge deficits that remain. Across studies, residual risks were substantial: phishing susceptibility often exceeded 10%, and knowledge gaps exceeded 30%. By applying RIB and RKG, future cybersecurity researchers can shift focus from statistical gains to reducing real-world exposure in an AI-driven threat landscape.
Open Access License Notice:
This article is © its author(s) and licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0), regardless of any copyright or pricing statements appearing in the PDF. The PDF reflects the formatting used for the print edition, not the current open access licensing policy.
