Abstract
Large Language Models (LLMs) are increasingly explored as tools for software development and could also serve as a supplementary source of varied examples for pedagogical use. While they can improve productivity, their ability to produce code that is both secure and compliant with Secure Software Development (SSD) practices remains uncertain, raising concerns about their role in cybersecurity education. If LLMs are to be integrated effectively, students must be trained to critically evaluate generated code for correctness and vulnerabilities, raising an important question: How can LLM-generated code be effectively and securely incorporated into cybersecurity education for teaching vulnerability analysis? This paper introduces CodeWars, a novel teaching methodology that combines LLM-generated and human-written code to examine how students engage with vulnerability detection tasks. CodeWars was implemented as a pilot study with a total of 32 students at Cardiff University and the University of Waikato, where students analyzed flawed, secure, and mixed-origin code samples. By comparing student approaches, analyses, and perceptions, the study provides insights into how vulnerabilities are detected, how code origins are distinguished, and how SSD practices are applied. Our analysis of student feedback and interviews indicates that CodeWars produced structured and accessible code, simplifying vulnerability identification and offering educators a means to efficiently develop varied SSD teaching materials. These findings illuminate both the advantages and constraints of employing LLMs in secure coding and position this study as a foundational step toward the responsible adoption of AI in cybersecurity education.
Open Access License Notice:
This article is © its author(s) and licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0), regardless of any copyright or pricing statements appearing in the PDF. The PDF reflects the formatting used for the print edition, not the current open access licensing policy.
