Abstract
This study explores the performance of several Large Language Models (LLMs) across different facets of cybersecurity education modules. Using prompt engineering, this work evaluates publicly available LLMs on their ability to assess the suitability of secure coding topics based on learning outcomes, categorize these topics according to OWASP standards, and generate up-to-date examples for curriculum use. The findings highlight the transformative role that LLMs could play in future advancements in cybersecurity education.