Abstract
This paper explores the evolving threat of deepfakes in the context of insider threats, particularly how advanced persistent threats (APTs) leverage AI-generated audio and video to impersonate job applicants and gain access to sensitive systems. While deepfakes have legitimate applications in entertainment, education, and business, they are increasingly weaponized for deception and cyber intrusion. The paper outlines recent incidents, assesses technical vulnerabilities, and evaluates current risk management frameworks such as the NIST Risk Management Framework (RMF). It concludes with policy and technology recommendations to strengthen detection and prevention, especially during remote hiring and onboarding.
Open Access License Notice:
This article is © its author(s) and licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0), regardless of any copyright or pricing statements appearing in the PDF. The PDF reflects the formatting used for the print edition, not the current open access licensing policy.
